The .j namespace¶

JSON serialization

The .j namespace contains functions for converting between JSON and q dictionaries.

The .j namespace is reserved for use by KX, as are all single-letter namespaces. Consider all undocumented functions in the namespace as its private API – and do not use them.

Prior to V3.2, JSON parsing was provided by the script KxSystems/kdb/e/json.k.

.j.j (serialize)¶

.j.j x

Where x is a K object, returns a string representing it in JSON.

.j.jd (serialize infinity)¶

.j.jd (x;d)

Where x is a K object and d is a dictionary, returns the result of .j.j unless d[`null0w] is 1b, in which case 0w and -0w are mapped to "null". (Since V3.6 2018.12.06.)

q).j.j -0w 0 1 2 3 0w
"[-inf,0,1,2,3,inf]"
q).j.jd(-0w 0 1 2 3 0w;()!())
"[-inf,0,1,2,3,inf]"
q).j.jd(-0w 0 1 2 3 0w;(enlist`null0w)!enlist 1b)
"[null,0,1,2,3,null]"

.j.k (deserialize)¶

.j.k x

Where x is a string containing JSON, returns a K object.

q).j.k 0N!.j.j `a`b!(0 1;("hello";"world"))   / dictionary
"{\"a\":[0,1],\"b\":[\"hello\",\"world\"]}"
a| 0 1
b| "hello" "world"
q).j.k 0N!.j.j ([]a:1 2;b:`Greetings`Earthlings)   / table
"[{\"a\":1,\"b\":\"Greetings\"},{\"a\":2,\"b\":\"Earthlings\"}]"
a b
--------------
1 "Greetings"
2 "Earthlings"

Note: serialization and deserialization to and from JSON may not preserve q datatypes.

If your JSON data is spread over multiple lines, reduce those to a single char vector with raze.

$ cat t1.json
{
  "code" : 3,
  "message" : "This request requires authorization"
}

q).j.k raze read0 `:t1.json
code   | 3f
message| "This request requires authorization"

The .m namespace¶

Since V4.0 2020.03.17

Memory can be backed by a filesystem, allowing use of DAX-enabled filesystems (e.g. AppDirect) as a non-persistent memory extension for kdb+.

Command-line option -m path directs kdb+ to use the filesystem path specified as a separate memory domain. This splits every thread’s heap into two:

| domain | description |
|---|---|
| 0 | regular anonymous memory, active and used for all allocs by default |
| 1 | filesystem-backed memory |

The .m namespace is reserved for objects in memory domain 1; however, names from other namespaces can reference them too, e.g. a:.m.a:1 2 3.

\d .m changes the current memory domain to 1, causing it to be used by all further allocs. \d .anyotherns sets it back to 0.

.m.x:x ensures the entirety of .m.x is in memory domain 1, performing a deep copy of x as needed. (Objects of types 100h-103h, 112h are not copied and remain in memory domain 0.)

Lambdas defined in .m set the current memory domain to 1 during execution. This will nest, since other lambdas don’t change memory domains:

q)\d .myns
q)g:{til x}
q)\d .m
q)w:{system"w"};f:{.myns.g x}
q)\d .
q)x:.m.f 1000000;.m.w`   / x allocated in domain 1

Internal function -120!x returns x’s memory domain, currently 0 or 1.

q)-120!'(1 2 3;.m.x:1 2 3)
0 1

System command \w returns memory info for the current memory domain only.

q)value each ("\\d .m";"\\w";"\\d .";"\\w")
::
353968 67108864 67108864 0 0 8589934592
::
354032 67108864 67108864 0 0 8589934592

Command-line option -w limit (M1/m2) is no longer thread-local, but memory domain-local. Command-line option -w and system command \w set the limit for memory domain 0.

Mapped is a single global counter, the same in every thread’s \w.
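To make the datatype caveat above concrete, here is a small illustration (not part of the original page): integers round-trip as floats and symbols round-trip as strings, so a lossless round trip should not be assumed.

/ illustration only: a JSON round trip does not preserve q types
q)x:`a`b!(1 2i;`foo`bar)
q)y:.j.k .j.j x
q)(type x`a;type y`a)   / 6h (int) vs 9h (float)
q)(type x`b;type y`b)   / 11h (symbol) vs 0h (list of char vectors)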
// @kind function // @category preprocessing // @desc Remove columns/keys with zero variance // @param data {table|dictionary} Data in various formats // @return {table|dictionary} All columns/keys with zero variance are removed dropConstant:{[data] typeData:type data; if[not typeData in 98 99h; '"Data must be simple table or dictionary" ]; if[99h=typeData; if[98h~type value data; '"Data cannot be a keyed table" ] ]; // Find keys/cols that contain non-numeric data findFunc:$[typeData=99h;i.findKey;i.findCols]; findKeys:findFunc .(data;"csg ",upper .Q.t); // Store instructions to flip table and execute this flipData:$[99=typeData;;flip]; dataDict:flipData data; // Drop constant numeric and non numeric cols/keys dropNum:i.dropConstant.num[findKeys _ dataDict]; dropOther:i.dropConstant.other findKeys#dataDict; flipData dropNum,dropOther } // @kind function // @category preprocessing // @desc Fit min max scaling model // @param data {table|dictionary|number[]} Numerical data // @return {dictionary} Contains the following information: // modelInfo - The min/max value of the fitted data // transform - A projection allowing for transformation on new input data minMaxScaler.fit:{[data] typData:type[data] in 0 99h; minData:$[typData;min each;min]data; maxData:$[typData;max each;max]data; scalingInfo:`minData`maxData!(minData;maxData); returnInfo:enlist[`modelInfo]!enlist scalingInfo; transform:i.apUpd minMaxScaler.transform returnInfo; returnInfo,enlist[`transform]!enlist transform } // @kind function // @category preprocessing // @desc Scale data between 0-1 based on fitted model // @params config {dictionary} Information returned from `ml.minMaxScaler.fit` // including: // modelInfo - The min/max value of the fitted data // transform - A projection allowing for transformation on new input data // @param data {table|dictionary|number[]} Numerical data // @return {table|dictionary|number[]} A min-max scaled representation with // values scaled between 0 and 1f minMaxScaler.transform:{[config;data] minData:config[`modelInfo;`minData]; maxData:config[`modelInfo;`maxData]; (data-minData)%maxData-minData } // @kind function // @category preprocessing // @desc Scale data between 0-1 // @param data {table|dictionary|number[]} Numerical data // @return {table|dictionary|number[]} A min-max scaled representation with // values scaled between 0 and 1f minMaxScaler.fitTransform:{[data] scaler:minMaxScaler.fit data; scaler[`transform]data } // @kind function // @category preprocessing // @desc Fit standard scaler model // @param data {table|dictionary|number[]} Numerical data // @return {dictionary} Contains the following information: // modelInfo - The avg/dev value of the fitted data // transform - A projection allowing for transformation on new input data stdScaler.fit:{[data] typData:type[data]; if[typData=98;data:flip data]; avgData:$[typData in 0 98 99h;avg each;avg]data; devData:$[typData in 0 98 99h;dev each;dev]data; scalingInfo:`avgData`devData!(avgData;devData); returnInfo:enlist[`modelInfo]!enlist scalingInfo; transform:i.apUpd stdScaler.transform returnInfo; returnInfo,enlist[`transform]!enlist transform } // @kind function // @category preprocessing // @desc Standard scaler transform-based representation of data // using a fitted model // @params config {dictionary} Information returned from `ml.stdScaler.fit` // including: // modelInfo - The avg/dev value of the fitted data // transform - A projection allowing for transformation on new input data // @param data {table|dictionary|number[]} 
Numerical data // @return {table|dictionary|number[]} All data has undergone standard scaling stdScaler.transform:{[config;data] avgData:config[`modelInfo;`avgData]; devData:config[`modelInfo;`devData]; (data-avgData)%devData } // @kind function // @category preprocessing // @desc Standard scaler transform-based representation of data // @param data {table|dictionary|number[]} Numerical data // @return {table|dictionary|number[]} All data has undergone standard scaling stdScaler.fitTransform:{[data] scaler:stdScaler.fit data; scaler[`transform]data } // @kind function // @category preprocessing // @desc Replace +/- infinities with data min/max // @param data {table|dictionary|number[]} Numerical data // @return {table|dictionary|number[]} Data with positive/negative // infinities are replaced by max/min values infReplace:i.ap{[data;inf;func] t:.Q.t abs type first first data; if[not t in "hijefpnuv";:data]; i:$[t;]@/:(inf;0n); @[data;i;:;func@[data;i:where data=i 0;:;i 1]] }/[;-0w 0w;min,max] // @kind function // @category preprocessing // @desc Tunable polynomial features from an input table // @param tab {table} Numerical data // @param n {int} Order of the polynomial feature being created // @return {table} The polynomial derived features of degree n polyTab:{[tab;n] colsTab:cols tab; colsTab@:combs[count colsTab;n]; updCols:`$"_"sv'string colsTab; updVals:prd each tab colsTab; flip updCols!updVals } // @kind function // @category preprocessing // @desc Tunable filling of null data for a simple table // @param tab {table} Numerical and non numerical data // @param groupCol {symbol} A grouping column for the fill // @param timeCol {symbol} A time column in the data // @param dict {dictionary} Defines fill behavior, setting this to (::) will // result in forward followed by reverse filling // @return {table} Columns filled according to assignment of keys in the // dictionary dict, the null values are also encoded within a new column // to maintain knowledge of the null positions fillTab:{[tab;groupCol;timeCol;dict] dict:$[0=count dict; :tab; (::)~dict; [fillCols:i.findCols[tab;"ghijefcspmdznuvt"]except groupCol,timeCol; fillCols!(count fillCols)#`forward ]; dict ]; keyDict:key dict; nullKeys:`$string[keyDict],\:"_null"; nullVals:null tab keyDict; tab:flip flip[tab],nullKeys!nullVals; grouping:$[count groupCol,:();groupCol!groupCol;0b]; ![tab;();grouping;@[i.fillMap;`linear;,';timeCol][dict],'keyDict] } // @kind function // @category preprocessing // @desc Fit one-hot encoding model to categorical data // @param tab {table} Numerical and non numerical data // @param symCols {symbol[]} Columns to apply encoding to // @return {dictionary} Contains the following information: // modelInfo - The mapping information // transform - A projection allowing for transformation on new input data oneHot.fit:{[tab;symCols] if[(::)~symCols;symCols:i.findCols[tab;"s"]]; mapVals:asc each distinct each tab symCols,:(); mapDict:symCols!mapVals; returnInfo:enlist[`modelInfo]!enlist mapDict; transform:oneHot.transform returnInfo; returnInfo,enlist[`transform]!enlist transform } // @kind function // @category preprocessing // @desc Encode categorical features using one-hot encoded fitted model // @params config {dictionary} Information returned from `ml.oneHot.fit` // including: // modelInfo - The mapping information // transform - A projection allowing for transformation on new input data // @param tab {table} Numerical and non numerical data // @param symDict {dictionary} Keys indicate the columns in the table 
to be // encoded, values indicate what mapping to use when encoding // @return {table} One-hot encoded representation of categorical data oneHot.transform:{[config;tab;symDict] mapDict:config`modelInfo; symDict:i.mappingCheck[tab;symDict;mapDict]; oneHotVal:mapDict value symDict; oneHotData:key symDict; updDict:i.oneHotCols[tab]'[oneHotData;oneHotVal]; flip(oneHotData _ flip tab),raze updDict } // @kind function // @category preprocessing // @desc Encode categorical features using one-hot encoding // @param tab {table} Numerical and non numerical data // @param symCols {symbol[]} Columns to apply encoding to // @return {table} One-hot encoded representation of categorical data oneHot.fitTransform:{[tab;symCols] encode:oneHot.fit[tab;symCols]; map:raze key encode`modelInfo; symDict:map!map; encode[`transform][tab;symDict] } // @kind function // @category preprocessing // @desc Encode categorical features with frequency of // category occurrence // @param tab {table} Numerical data // @param symCols {symbol[]} Columns to apply encoding to // @return {table} Frequency of occurrance of individual symbols // within a column freqEncode:{[tab;symCols] if[(::)~symCols;symCols:i.findCols[tab;"s"]]; updCols:`$string[symCols],\:"_freq"; updVals:i.freqEncode each tab symCols,:(); updDict:updCols!updVals; flip(symCols _ flip tab),updDict } // @kind function // @category preprocessing // @desc Fit lexigraphical ordering model to categorical data // @param tab {table} Numerical and categorical data // @param symCols {symbol[]} Columns to apply encoding to // @return {dictionary} Contains the following information: // modelInfo - The mapping information // transform - A projection allowing for transformation on new input data lexiEncode.fit:{[tab;symCols] if[(::)~symCols;symCols:i.findCols[tab;"s"]]; mapping:labelEncode.fit each tab symCols,:(); mapVals:exec modelInfo from mapping; mapDict:symCols!mapVals; returnInfo:enlist[`modelInfo]!enlist mapDict; transform:lexiEncode.transform returnInfo; returnInfo,enlist[`transform]!enlist transform } // @kind function // @category preprocessing // @desc Lexicode encode data based on previously fitted model // @params config {dictionary} Information returned from `ml.lexiEncode.fit` // including: // modelInfo - The mapping information // transform - A projection allowing for transformation on new input data // @param tab {table} Numerical and categorical data // @param symDict {dictionary} Keys indicate the columns in the table to be // encoded, values indicate what mapping to use when encoding // @return {table} Addition of lexigraphical order of symbol column lexiEncode.transform:{[config;tab;symDict] mapDict:config`modelInfo; symDict:i.mappingCheck[tab;symDict;mapDict]; tabCols:key symDict; mapCols:value symDict; updCols:`$string[tabCols],\:"_lexi"; modelInfo:enlist[`modelInfo]!/:enlist each mapDict mapCols; updVals:labelEncode.transform'[modelInfo;tab tabCols]; updDict:updCols!updVals; flip(tabCols _ flip tab),updDict } // @kind function // @category preprocessing // @desc Encode categorical features based on lexigraphical order // @param tab {table} Numerical data // @param symCols {symbol[]} Columns to apply encoding to // @return {table} Addition of lexigraphical order of symbol column lexiEncode.fitTransform:{[tab;symCols] encode:lexiEncode.fit[tab;symCols]; map:raze key encode`modelInfo; symDict:map!map; encode[`transform][tab;symDict] } // @kind function // @category preprocessing // @desc Fit a label encoder model // @param data {any[]} Data to encode // 
@return {dictionary} Contains the following information: // modelInfo - The schema mapping values // transform - A projection allowing for transformation on new input data labelEncode.fit:{[data] uniqueData:asc distinct data; map:uniqueData!til count uniqueData; returnInfo:enlist[`modelInfo]!enlist map; transform:labelEncode.transform returnInfo; returnInfo,enlist[`transform]!enlist transform } // @kind function // @category preprocessing // @desc Encode categorical data to an integer value representation // @param config {dictionary} Information returned from `ml.labelEncode.fit` // including: // modelInfo - The schema mapping values // transform - A projection allowing for transformation on new input data // @param data {any[]} Data to encode using the fitted mapping // @return {int[]} List transformed to an integer representation labelEncode.transform:{[config;data] map:config`modelInfo; -1^map data } // @kind function // @category preprocessing // @desc Encode categorical data to an integer value representation // @param data {any[]} Data to encode // @return {int[]} List is encoded to an integer representation labelEncode.fitTransform:{[data] encoder:labelEncode.fit data; encoder[`transform]data }
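For orientation, a hypothetical usage sketch of the fit/transform pattern defined above; it assumes the toolkit has been loaded and that these functions sit under the .ml namespace, and the data is illustrative only.

/ hypothetical usage of the fit/transform pattern
q)tab:([]a:1 2 3f;b:10 20 40f)
q)mdl:.ml.minMaxScaler.fit tab           / returns modelInfo and a transform projection
q)mdl[`modelInfo]                        / per-column min and max of the fitted data
q)mdl[`transform]tab                     / columns rescaled to the 0-1 range
q).ml.labelEncode.fitTransform`b`a`c`a   / encodes to 1 0 2 0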
runners:()!() runners[`perf]:{[expec];} runners[`test]:{[expec]; expec[`code][]; / We use just a dab of state to communicate with the assertions expec[`failures]:.tst.assertState.failures; expec[`assertsRun]:.tst.assertState.assertsRun; expec[`result]: $[count expec`failures;`testFail;`pass]; expec } ================================================================================ FILE: qspec_lib_tests_fuzz.q SIZE: 2,700 characters ================================================================================ \d .tst fuzzListMaxLength:100 typeNames: `boolean`guid`byte`short`int`long`real`float`char`symbol`timestamp`month`date`datetime`timespan`minute`second`time typeCodes: 1 2 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19h typeDefaults:(0b;0Ng;0x0;0h;0;0j;10000e;1000000f;" ";`7;value (string `year$.z.D),".12.31D23:59:59.999999999";2000.01m;2000.01.01;value (string `year$.z.D),".12.31T23:59:59.999";0D0;00:00;00:00:00;00:00:00.000) typeFuzzN: typeNames!typeDefaults typeFuzzC: typeCodes!typeDefaults pickFuzz:{[x;runs] $[-11h ~ t:type x; / [`type] form. Use the default fuzz for the type mnemonic (`symbol/`int/etc) runs ? typeFuzzN[x]; 100h ~ type x; / [{...}] form. function type, x is a fuzz generator x each til runs; 99h ~ type x; / [`name1`name2...`nameN!...] form. Wants multiple fuzzes flip pickFuzz[;runs] each x; $[(type x) > 0; / Any list form. Fuzz should be a fuzzy list of fuzz pickListFuzz[x;runs]; runs ? x / General list/atom value form. ]] } pickListFuzz:{[x;runs]; $[(count x) = 0; / [`type$()] form. Use default fuzz by type, but create variable length lists { y ? typeFuzzC[x]}[abs type x] each runs ? fuzzListMaxLength; null[first distinct x] and 1 = count distinct x; / [`type$n#0N] form. Use default fuzz by type with user specified max list length { y ? typeFuzzC[x]}[abs type x] each runs ? count x; / Type safe comparison needed (symbol list) 1 = count distinct x; / [`type$n#val] form. Use provided value for fuzz generator with specified max length { y ? x }[first x] each runs ? count x; runs ? 
x / [`type$(val1;val2;val3)] General uniform list form ] } runners[`fuzz]:{[expec]; fuzzResults:fuzzRunCollecter[expec`code] each pickFuzz[expec`vars;expec`runs]; expec,:exec failedFuzz, fuzzFailureMessages:fuzzFailures from fuzzResults where 0 < count each failedFuzz; assertsRun:$[not count fuzzResults;0;max fuzzResults[`assertsRun]]; $[(expec[`failRate]:(count expec`failedFuzz)%expec`runs) > expec`maxFailRate; expec[`failures`result`assertsRun]:(enlist "Over max failure rate";`fuzzFail;assertsRun); expec[`failures`result`assertsRun]:(();`pass;assertsRun)]; expec } fuzzRunCollecter:{[code;fuzz]; .tst.assertState:.tst.defaultAssertState; code[fuzz]; $[count .tst.assertState.failures; `failedFuzz`fuzzFailures`assertsRun!(fuzz;.tst.assertState.failures;.tst.assertState.assertsRun); `failedFuzz`fuzzFailures`assertsRun!(();();.tst.assertState.assertsRun)] } ================================================================================ FILE: qspec_lib_tests_internals.q SIZE: 679 characters ================================================================================ \d .tst .tst.defaultAssertState:.tst.assertState:``failures`assertsRun!(::;();0); .tst.tstPath: `; halt:0b internals:()!() internals[`]:()!() internals[`specObj]:`result`title`failHard!(`didNotRun;"";0b) internals[`defaultExpecObj]:`result`errorText!(`didNotRun;()) internals[`testObj]: internals[`defaultExpecObj], ((),`type)!(),`test internals[`fuzzObj]: internals[`defaultExpecObj], `type`runs`vars`maxFailRate!(`fuzz;100;`int;0f) internals[`perfObj]: internals[`defaultExpecObj], ((),`type)!(),`perf if[not `callbacks in key .tst; / Avoid callback overwriting issue when dogfooding callbacks:((),`)!(),(::); callbacks[`descLoaded]:{}; callbacks[`expecRan]:{[x;y]}; ]; ================================================================================ FILE: qspec_lib_tests_spec.q SIZE: 345 characters ================================================================================ \d .tst .tst.context:`. 
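To make the fuzz-variable forms documented in pickFuzz and pickListFuzz above concrete, here is an illustrative specification using the holds form defined in the UI layer later in this file set; the description, property dictionary and variable names are hypothetical, not taken from the test suites.

/ illustrative only: a fuzz expectation exercising the dictionary `vars form
.tst.desc["integer addition"]{
  holds["is commutative";`vars`runs!(`x`y!`int`int;50)]{[fz]
    (fz[`x]+fz`y) mustmatch fz[`y]+fz`x
    };
  };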
runSpec:{ oldContext: .tst.context; .tst.context: x[`context]; .tst.tstPath: x[`tstPath]; x:@[x;`expectations;{[s;e]if[.tst.halt;:()];runExpec[s;e]}[x] each]; if[.tst.halt;:()]; .tst.restoreDir[]; .tst.context: oldContext; .tst.tstPath: `; x[`result]:$[all `pass = x[`expectations;;`result];`pass;`fail]; x } ================================================================================ FILE: qspec_lib_tests_ui.q SIZE: 2,759 characters ================================================================================ \d .tst uiSet:{.[`.tst;(),x;:;y]} resetExpecList:{uiSet[`expecList;enlist ()!()]} / Asserts are built up into this variable resetExpecList[]; currentBefore:{} currentAfter:{} before:{[code]; uiSet[`currentBefore;code] } after:{[code]; uiSet[`currentAfter;code] } // Before and After values can be set after the expectation (under the specification) fillExpecBA:{ x:@[x;1 _ where {not `before in key x} each x;{x,enlist[`before]!enlist currentBefore}]; 1 _ @[x;1 _ where {not `after in key x} each x;{x,enlist[`after]!enlist currentAfter}] } alt:{[code]; / Alt blocks allow different before/after behavior to be defined oldBefore: currentBefore; oldAfter: currentAfter; oldExpecList: expecList; resetExpecList[]; code[]; el:fillExpecBA expecList; / Reset environment `expecList`currentBefore`currentAfter uiSet' (oldExpecList;oldBefore;oldAfter); expecList,:el; } should:{[des;code]; expecList,: enlist .tst.internals.testObj, (`desc`code!(des;code)) } holds:{[des;props;code]; expecList,: enlist .tst.internals.fuzzObj, (`desc`code!(des;code)), props } perf:{[des;props;code]; expecList,: enlist .tst.internals.perfObj, (`desc`code!(des;code)), props } uiRuntimeNames:`fixture`fixtureAs`mock uiRuntimeCode: (.tst.fixture;.tst.fixtureAs;.tst.mock) uiNames:`before`after`should`holds`perf`alt uiCode:(before;after;should;holds;perf;alt) / Note on Global References: / Because of the way Q handles global references, we cannot use the code object of the expectations parameter / Instead We take the value string of the object and re-evaluate it to execute a new code object with the / .q assertions functions in place. (You may see this by taking an expectation function definition and / examining the list of globals "(value expectations) 3" without a custom .q function defined and with one / defined. E.g: / (value {2 musteq 2}) 3 / .q.musteq: {x+y} / (value {2 musteq 2}) 3 .tst.desc:{[title;expectations]; oldBefore: currentBefore; oldAfter: currentAfter; oldExpecList: expecList; resetExpecList[]; specObj: .tst.internals.specObj; specObj[`title]:title; / set up the UI for the expectation call / mock isn't exactly the right name for this usage. 
Think of it more like "substitute" ((` sv `.q,) each uiRuntimeNames,uiNames,key asserts) .tst.mock' uiRuntimeCode,uiCode,value asserts; / See Note on Global References (value string expectations)[]; / See Note on Global References specObj[`context]: system "d"; specObj[`tstPath]: .utl.FILELOADING; specObj[`expectations]:fillExpecBA expecList; / Reset environment `expecList`currentBefore`currentAfter uiSet' (oldExpecList;oldBefore;oldAfter); .tst.restore[]; .tst.callbacks.descLoaded specObj; specObj } ================================================================================ FILE: qspec_test_fixture_tests_test_directory_fixture.q SIZE: 1,700 characters ================================================================================ .tst.desc["Loading Directory Fixtures"]{ before{ `notAFixture mock 1 _ string ` sv (` vs .tst.tstPath)[0],`fixtures`not_a_fixture; `emptyDir mock 1 _ string ` sv (` vs .tst.tstPath)[0],`fixtures`emptyDir; }; should["clear any loaded partition"]{ system "l ",notAFixture; `mytable mustin tables `; fixture `a_fixture; `mytable mustnin tables `; }; should["only load one directory fixture at a time"]{ fixture `a_fixture; `sometable mustin tables `; fixture `other_fixture; `sometable mustnin tables `; `othertable mustin tables `; .tst.restoreDir[]; / This is a limitation of the fixture system: If a directory is loaded through the normal manner without first restoring (IE: Cleaningup the fixture) the fixture that was loaded will left hanging around. Moral of the story: Don't load directories in Specification objects that use fixtures }; should["allow you to restore to the previously loaded partition"]{ system "l ", notAFixture; fixture `a_fixture; .tst.restoreDir[]; `mytable mustin tables `; }; should["leave loaded partitions untouched if restore is called twice after loading a directory fixture"]{ system "l ", notAFixture; fixture `a_fixture; .tst.restoreDir[]; .tst.restoreDir[]; `mytable mustin tables `; `othertable mustnin tables `; }; should["load directory fixtures not containing partitions"]{ // Newer Q versions will load hidden files hdel ep:` sv (hsym `$emptyDir;`.empty); // Q doesn't clean up all internal variables between each file load. 
Simulate no previous db's having been loaded system "l ", emptyDir; .Q:`pv`pt`pf _ .Q; mustnotthrow[();{fixture `no_part_fixture}]; ep set () }; }; ================================================================================ FILE: qspec_test_fixture_tests_test_file_fixture.q SIZE: 585 characters ================================================================================ .tst.desc["Loading File Fixtures"]{ before{fixture[`myFixture]}; should["load the fixture specified"]{ mustnotthrow[()] { myFixture; }; }; should["set the fixture to the contents of the file"]{ 1 2 3 4 mustmatch myFixture; }; should["load a fixture with a different name if required"]{ fixtureAs[`myFixture;`someVariable]; mustnotthrow[()] { someVariable; }; myFixture mustmatch someVariable; }; alt{ before{fixture `splayFixture}; should["load a splayed directory as a file fixture"]{ ([]a:1 2 3;b:4 5 6) mustmatch splayFixture; }; }; }; ================================================================================ FILE: qspec_test_fixture_tests_test_text_fixture.q SIZE: 579 characters ================================================================================ .tst.desc["Loading Text Fixtures"]{ before{ fix: ` sv (` vs .tst.tstPath)[0],`fixtures`all_types.csv; `typeLine mock ssr[(read0 fix) 0;",";""]; }; should["load text based fixtures with different path separators"]{ fixture[`fixtureCommas]; fixture[`fixturePipes]; fixture[`fixtureCarets]; fixtureCommas mustmatch fixturePipes; fixtureCommas mustmatch fixtureCarets; }; should["determine the types of the fixture's columns from the type-line"]{ fixtureAs[`all_types;`allTypes]; nullTypes: typeLine$" "; nullTypes mustmatch value first allTypes; }; }; ================================================================================ FILE: qspec_test_test_assertions.q SIZE: 2,440 characters ================================================================================
// set.q - Callable functions for the publishing of items to local file system // Copyright (c) 2021 Kx Systems Inc // // @overview // Publish items to local file system // // @category Model-Registry // @subcategory Functionality // // @end \d .ml // @kind function // @category local // @subcategory set // // @overview // Set a model within local file-system storage // // @param experimentName {string} The name of the experiment to which a model // being added to the registry is associated // @param model {any} `(<|dict|fn|proj)` The model to be saved to the registry. // @param modelName {string} The name to be associated with the model // @param modelType {string} The type of model that is being saved, namely // "q"|"sklearn"|"keras"|"python" // @param config {dict} Any additional configuration needed for // setting the model // // @return {null} registry.local.set.model:{[experimentName;model;modelName;modelType;config] config:registry.util.check.registry config; $[experimentName in ("undefined";""); config[`experimentPath]:config[`registryPath],"/unnamedExperiments"; config:registry.new.experiment[config`folderPath;experimentName;config] ]; config:(enlist[`major]!enlist 0b),config; config:registry.util.update.config[modelName;modelType;config]; function:registry.util.set.model; arguments:(model;modelType;config); registry.util.protect[function;arguments;config] } // @kind function // @category local // @subcategory set // // @overview // Set parameter information associated with a model locally // // @param experimentName {string|null} The name of an experiment from which // to retrieve a model, if no modelName is provided the newest model // within this experiment will be used. If neither modelName or // experimentName are defined the newest model within the // "unnamedExperiments" section is chosen // @param modelName {string|null} The name of the model to be retrieved // in the case this is null, the newest model associated with the // experiment is retrieved // @param version {long[]|null} The specific version of a named model to retrieve // in the case that this is null the newest model is retrieved (major;minor) // @param paramName {string} The name of the parameter to be saved // @param params {dict|table|string} The parameters to save to file // // @return {null} registry.local.set.parameters:{[experimentName;modelName;version;paramName;params;config] config:registry.util.check.registry config; // Retrieve the model from the store meeting the user specified conditions modelDetails:registry.util.search.model[experimentName;modelName;version;config]; if[not count modelDetails; logging.error"No model meeting your provided conditions was available" ]; // Construct the path to model folder containing the model to be retrieved config,:flip modelDetails; paramPath:registry.util.path.modelFolder[config`registryPath;config;`params]; paramPath:paramPath,paramName,".json"; registry.util.set.params[paramPath;params] } ================================================================================ FILE: ml_ml_registry_q_local_update.q SIZE: 1,912 characters ================================================================================ // update.q - Callable functions for updating information related to a model // on local file-sytem // Copyright (c) 2021 Kx Systems Inc // // @overview // Update local model information // // @category Model-Registry // @subcategory Functionality // // @end \d .ml // @kind function // @category local // @subcategory update // // @overview // Prepare 
information for local updates // // @param folderPath {string|null} A folder path indicating the location // of the registry or generic null if in the current directory // @param experimentName {string|null} The name of an experiment within which // the model having additional information added is located. // @param modelName {string|null} The name of the model to which additional // information is being added. In the case this is null, the newest model // associated with the experiment is retrieved // @param version {long[]|null} The specific version of a named model to add the // new parameters to. In the case that this is null the newest model is retrieved // generaly expressed as a duple (major;minor) // @param config {dict} Any additional configuration needed for updating // the parameter information associated with a model // // @return {dict} All information required for setting new configuration/ // requirements information associated with a model registry.local.update.prep:{[folderPath;experimentName;modelName;version;config] config:registry.util.check.registry config; modelDetails:registry.util.search.model[experimentName;modelName;version;config]; if[not count modelDetails; logging.error"No model meeting your provided conditions was available" ]; // Construct the path to model folder containing the model to be retrieved config,:flip modelDetails; config[`versionPath]:registry.util.path.modelFolder[config`registryPath;config;::]; config:registry.config.model,config; config } ================================================================================ FILE: ml_ml_registry_q_local_utils_check.q SIZE: 1,059 characters ================================================================================ // check.q - Utilities relating to checking of suitability of registry items // Copyright (c) 2021 Kx Systems Inc // // @overview // Utilities for checking items locally // // @category Model-Registry // @subcategory Utilities // // @end \d .ml // @private // // @overview // Check if the registry which is being manipulated exists, if it does not // generate the registry at the sprcified location // // @param config {dict|null} Any additional configuration needed for // initialising the registry // // @return {dict} Updated config dictionary containing registry path registry.local.util.check.registry:{[config] registryPath:config[`folderPath],"/KX_ML_REGISTRY"; config:$[()~key hsym`$registryPath; [logging.info"Registry does not exist at: '",registryPath, "'. 
Creating registry in that location."; registry.new.registry[config`folderPath;config] ]; [modelStorePath:hsym`$registryPath,"/modelStore"; paths:`registryPath`modelStorePath!(registryPath;modelStorePath); config,paths ] ]; config } ================================================================================ FILE: ml_ml_registry_q_local_utils_init.q SIZE: 312 characters ================================================================================ // init.q - Initialise Utilities for local FS interactions // Copyright (c) 2021 Kx Systems Inc // // Utilties relating to all interactions with local file // system storage \d .ml if[not @[get;".ml.registry.q.local.util.init";0b]; loadfile`:registry/q/local/utils/check.q ] registry.q.local.util.init:1b ================================================================================ FILE: ml_ml_registry_q_main_delete.q SIZE: 12,128 characters ================================================================================ // delete.q - Main callable functions for deleting items from the model registry // Copyright (c) 2021 Kx Systems Inc // // @overview // Delete items from the registry // // @category Model-Registry // @subcategory Functionality // // @end \d .ml // @kind function // @category main // @subcategory delete // // @overview // Delete a registry and the entirety of its contents // // @param folderPath {dict|string|null} Registry location, can be: // 1. A dictionary containing the vendor and location as a string, e.g. // ```enlist[`local]!enlist"myReg"``` or // ```enlist[`aws]!enlist"s3://ml-reg-test"``` etc; // 2. A string indicating the local path; // 3. A generic null to use the current .ml.registry.location pulled from CLI/JSON. // @param config {dict} Information relating to registry being deleted // // @return {null} registry.delete.registry:{[folderPath;config] config:registry.util.check.config[folderPath;config]; if[`local<>storage:config`storage;storage:`cloud]; registry[storage;`delete;`registry][folderPath;config] } // @kind function // @category main // @subcategory delete // // @overview // Delete an experiment and its associated models from the registry // // @param folderPath {dict|string|null} Registry location, can be: // 1. A dictionary containing the vendor and location as a string, e.g. // ```enlist[`local]!enlist"myReg"``` or // ```enlist[`aws]!enlist"s3://ml-reg-test"``` etc; // 2. A string indicating the local path; // 3. A generic null to use the current .ml.registry.location pulled from CLI/JSON. // @param experimentName {string} Name of the experiment to be deleted // // @return {null} registry.delete.experiment:{[folderPath;experimentName] config:registry.util.check.config[folderPath;()!()]; $[`local<>config`storage; registry.cloud.delete.experiment[config`folderPath;experimentName;config]; [config:`folderPath`experimentName!(config`folderPath;experimentName); registry.util.delete.object[config;`experiment]; ] ]; } // @kind function // @category main // @subcategory delete // // @overview // Delete a version of a model/all models associated with a name // from the registry and modelStore table // // @param folderPath {dict|string|null} Registry location, can be: // 1. A dictionary containing the vendor and location as a string, e.g. // ```enlist[`local]!enlist"myReg"``` or // ```enlist[`aws]!enlist"s3://ml-reg-test"``` etc; // 2. A string indicating the local path; // 3. A generic null to use the current .ml.registry.location pulled from CLI/JSON. 
// @param experimentName {string} Name of the experiment to be deleted // @param modelName {string|null} The name of the model to retrieve // @param version {long[]|null} The version of the model to retrieve (major;minor) // // @return {null} registry.delete.model:{[folderPath;experimentName;modelName;version] config:registry.util.check.config[folderPath;()!()]; if[not`local~storage:config`storage;storage:`cloud]; // Locate/retrieve the registry locally or from the cloud config:$[storage~`local; registry.local.util.check.registry config; [checkFunction:registry.cloud.util.check.model; checkFunction[experimentName;modelName;version;config`folderPath;config] ] ]; modelDetails:registry.util.search.model[experimentName;modelName;version;config]; modelName:first modelDetails `modelName; config:registry.util.check.config[folderPath;()!()]; if[not count modelDetails; logging.error"No model meeting your provided conditions was available" ]; $[`local<>config`storage; registry.cloud.delete.model[config;experimentName;modelName;version]; [configKeys:`folderPath`experimentName`modelName`version; configVals:(config`folderPath;experimentName;modelName;version); config:configKeys!configVals; objectType:$[(::)~version;`allModels;`modelVersion]; registry.util.delete.object[config;objectType] ] ]; } // @kind function // @category main // @subcategory delete // // @overview // Delete a parameter file associated with a name // from the registry // // @param folderPath {dict|string|null} Registry location, can be: // 1. A dictionary containing the vendor and location as a string, e.g. // ```enlist[`local]!enlist"myReg"``` or // ```enlist[`aws]!enlist"s3://ml-reg-test"``` etc; // 2. A string indicating the local path; // 3. A generic null to use the current .ml.registry.location pulled from CLI/JSON. // @param experimentName {string} Name of the experiment to be deleted // @param modelName {string|null} The name of the model to retrieve // @param version {long[]} The version of the model to retrieve (major;minor) // @param paramFile {string} Name of the parameter file to delete // // @return {null} registry.delete.parameters:{[folderPath;experimentName;modelName;version;paramFile] config:registry.util.check.config[folderPath;()!()]; if[not`local~storage:config`storage;storage:`cloud]; // Locate/retrieve the registry locally or from the cloud config:$[storage~`local; registry.local.util.check.registry config; [checkFunction:registry.cloud.util.check.model; checkFunction[experimentName;modelName;version;config`folderPath;config] ] ]; modelDetails:registry.util.search.model[experimentName;modelName;version;config]; modelName:first modelDetails `modelName; version:first modelDetails `version; config:registry.util.check.config[folderPath;()!()]; $[`local<>config`storage; [function:registry.cloud.delete.parameters; params:(config;experimentName;modelName;version;paramFile); function . params; ]; [function:registry.util.getFilePath; params:(config`folderPath;experimentName;modelName;version;`params;enlist[`paramFile]!enlist paramFile); location:function . params; if[()~key location;logging.error"No parameter files exists with the given name, unable to delete."]; hdel location; ] ]; }
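For orientation, a hedged sketch of how these delete entry points might be invoked against a local registry; the folder, experiment, model and parameter-file names below are illustrative only.

/ illustrative calls against a hypothetical local registry
q).ml.registry.delete.parameters["myReg";"myExperiment";"myModel";1 0;"hyperParams"]
q).ml.registry.delete.model["myReg";"myExperiment";"myModel";::]   / (::) deletes all versions
q).ml.registry.delete.experiment["myReg";"myExperiment"]
q).ml.registry.delete.registry["myReg";()!()]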
// @private // @kind function // @category utilitiesUtility // @desc Apply function to data of various types // @param func {fn} Function to apply to data // @param data {any} Data of various types // @return {fn} function to apply to data i.ap:{[func;data] $[0=type data; func each data; 98=type data; flip func each flip data; 99<>type data; func data; 98=type key data; key[data]!.z.s[func] value data; func each data ] } // @private // @kind function // @category utilitiesUtility // @desc Apply function to data of various types // @param func {fn} Function to apply to data // @param data {any} Data of various types // @return {fn} function to apply to data i.apUpd:{[func;data] $[0=type data; func data; 98=type data; func each data; 99<>type data; func data; 98=type key data; key[data]!.z.s value data; func data ] } // @private // @kind function // @category utilitiesUtility // @desc Find columns of certain types // @param tab {table} Data in tabular format // @param char {char[]} Type of column to find // @return {symbol[]} Columns containing the type being searched i.findCols:{[tab;char] metaTab:0!meta tab; metaTab[`c]where metaTab[`t]in char } // @private // @kind function // @category utilitiesUtility // @desc Checks if object is of a specified type i.isInstance:.p.import[`builtins][`:isinstance;<] // @private // @kind function // @category utilitiesUtility // @desc Python datetime module i.dateTime:.p.import`datetime // @private // @kind function // @category utilitiesUtility // @desc Python pandas dataframe module i.pandasDF:.p.import[`pandas]`:DataFrame // @private // @kind function // @category utilitiesUtility // @desc Numpy array function i.npArray:.p.import[`numpy]`:array // @private // @kind function // @category utilitiesUtility // @desc Check that the length of the endog and another parameter // are equal // @param endog {float[]} The endogenous variable // @param param {number[][]|number[]} A parameter to compare the length of // @param paramName {string} The name of the parameter // @returns {::|err} Return an error if they aren't equal i.checkLen:{[endog;param;paramName] if[not count[endog]=count param; '"The length of the endog variable and ",paramName," must be equal" ] } // Metric utility functions // @private // @kind function // @category metricUtility // @desc Exclude collinear points // @param x {number[]} X coordinate of true positives and false negatives // @param y {number[]} Y coorfinate of true positives and false negatives // @returns {number[]} any colinear points are excluded i.curvePts:{[x;y] (x;y)@\:where(1b,2_differ deltas[y]%deltas x),1b } // @private // @kind function // @category metricUtility // @desc Calculate the area under an ROC curve // @param x {number[]} X coordinate of true positives and false negatives // @param y {number[]} Y coorfinate of true positives and false negatives // @returns {number[]} Area under the curve i.auc:{[x;y] sum 1_deltas[x]*y-.5*deltas y } // @private // @kind function // @category metricUtility // @desc Calculate the correlation of a matrix // @param matrix {number[]} A sample from a distribution // @returns {number[]} The covariance matrix i.corrMatrix:{[matrix] devMatrix:dev each matrix; covMatrix[matrix]%devMatrix*/:devMatrix } // Preproc utility functions // @private // @kind function // @category preprocessingUtility // @desc Drop any constant numeric values // @param data {dictionary} Numerical data // @return {dictionary} All keys with zero variance are removed i.dropConstant.num:{[num] (where 0=0^var each 
num)_num } // @private // @kind function // @category preprocessingUtility // @desc All non numeric values with zero variance are removed // @param data {dictionary} Non-numerical data // @return {dictionary} All keys with zero variance are removed i.dropConstant.other:{[data] (where{all 1_(~':)x}each data)_data } // @private // @kind function // @category preprocessingUtility // @desc Find keys of certain types // @param dict {dictionary} Data stored as a dictionary // @param char {char[]} Type of key to find // @return {symbol[]} Keys containing the type being searched i.findKey:{[dict;char] where({.Q.t abs type x}each dict)in char } // @private // @kind function // @category preprocessingUtility // @desc Fill nulls with 0 // @param data {table|number[]} Numerical data // @return {table|number[]} Nulls filled with 0 i.fillMap.zero:{[data] 0^data } // @private // @kind function // @category preprocessingUtility // @desc Fill nulls with the median value // @param data {table|number[]} Numerical data // @return {table|number[]} Nulls filled with the median value i.fillMap.median:{[data] med[data]^data } // @private // @kind function // @category preprocessingUtility // @desc Fill nulls with the average value // @param data {table|number[]} Numerical data // @return {table|number[]} Nulls filled with the average value i.fillMap.mean:{[data] avg[data]^data } // @private // @kind function // @category preprocessingUtility // @desc Fill nulls forward // @param data {table|number[]} Numerical data // @return {table|number[]} Nulls filled foward i.fillMap.forward:{[data] "f"$(data first where not null data)^fills data } // @private // @kind function // @category preprocessingUtility // @desc Fill nulls depending on timestamp component // @param time {time[]} Data containing a time component // @param nulls {any[]} Contains null values // @return {table|number[]} Nulls filled in respect to time component i.fillMap.linear:{[time;vals] nullVal:null vals; i:where not nullVal; if[2>count i;:vals]; diffs:1_deltas[vals i]%deltas time i; nullVal:where nullVal; iBin:0|(i:-1_i)bin nullVal; "f"$@[vals;nullVal;:;vals[i][iBin]+diffs[iBin]*time[nullVal]-time[i]iBin] } // @private // @kind function // @category preprocessingUtility // @desc Encode categorical features using one-hot encoding // @param data {symbol[]} Data to encode // @return {dictionary} One-hot encoded representation i.oneHot:{[data] vals:asc distinct data; vals!"f"$data=/:vals } // @private // @kind function // @category preprocessingUtility // @desc Encode categorical features with frequency of // category occurrence // @param data {symbol[]} Data to encode // @return {number[]} Frequency of occurrance of individual symbols within // a column i.freqEncode:{[data] (groupVals%sum groupVals:count each group data)data } // @private // @kind function // @category preprocessingUtility // @desc Break date column into constituent components // @param date {date} Data containing a date component // @return {dictionary} A date broken into its constituent components i.timeSplit.d:{[date] dateDict:`dayOfWeek`year`month`day!`date`year`mm`dd$/:\:date; update weekday:1<dayOfWeek from update dayOfWeek:dayOfWeek mod 7, quarter:1+(month-1)div 3 from dateDict } // @private // @kind function // @category preprocessingUtility // @desc Break month column into constituent components // @param month {month} Data containing a monthly component // @return {dictionary} A month broken into its constituent components i.timeSplit.m:{[month] 
monthDict:monthKey!(monthKey:`year`mm)$/:\:month; update quarter:1+(mm-1)div 3 from monthDict } // @private // @kind function // @category preprocessingUtility // @desc Break time column into constituent components // @param time {time} Data containing a time component // @return {dictionary} A time broken into its constituent components i.timeSplit[`n`t`v]:{[time] `hour`minute`second!`hh`uu`ss$/:\:time } // @private // @kind function // @category preprocessingUtility // @desc Break minute columns into constituent components // @param time {minute} Data containing a minute component // @return {dictionary} A minute broken into its constituent components i.timeSplit.u:{[time] `hour`minute!`hh`uu$/:\:time } // @private // @kind function // @category preprocessingUtility // @desc Break datetime and timestamp columns into constituent // components // @param time {datetime|timestamp} Data containing a datetime or // datetime component // @return {dictionary} A datetime or timestamp broken into its constituent // components i.timeSplit[`p`z]:{[time]raze i.timeSplit[`d`n]@\:time} // @private // @kind function // @category preprocessingUtility // @desc Break time endog columns into constituent components // @param data {any} Data containing a time endog component // @return {dictionary} Time or date types broken into their constituent // components i.timeSplit1:{[data] i.timeSplit[`$.Q.t type data]data:raze data } // @private // @kind function // @category preprocessingUtility // @desc Break time endog columns into constituent components // @param tab {table} Contains time endog columns // @param timeCols {symbol[]} Columns to apply encoding to, if set to :: all // columns with date/time types will be encoded // @return {dictionary} All time or date types broken into labeled versions of // their constituent components i.timeDict:{[tab;timeCol] timeVals:i.timeSplit1 tab timeCol; timeKeys:`$"_"sv'string timeCol,'key timeVals; timeKeys!value timeVals } // @private // @kind function // @category preprocessingUtility // @desc Ensure that keys in the mapping dictionary matches values in // the sym dictionary // @param tab {table} Numerical and categorical data // @param symDict {dictionary} Keys indicate columns in the table to be // encoded, values indicate what mapping to use when encoding // @params mapDict {dictionary} Map cateogorical values to their encoded values // @return {err;dictionary} Error if mapping keys don't match sym values or // update symDict if null is passed i.mappingCheck:{[tab;symDict;mapDict] map:key mapDict; if[(::)~symDict; symCols:i.findCols[tab;"s"]; symDict:@[symCols!;map;{'"Length of mapping and sym keys don't match"}] ]; if[not all value[symDict]in map; '"Mapping keys do not match mapping dictionary" ]; symDict } // @private // @kind function // @category preprocessingUtility // @desc Create one hot encoded columns // @param tab {table} Numerical and categorical data // @param colName {symbol[]} Name of columns in the table to apply encoding to // @params val {symbol[]} One hot encoded values // @return {dictionary} Columns in tab transformed to one hot encoded // representation i.oneHotCols:{[tab;colName;val] updCols:`$"_"sv'string colName,'val; updVals:"f"$tab[colName]='/:val; updCols!updVals } // General utility functions // @private // @kind function // @category utility // @desc Save a model locally // @param modelName {string|symbol} Name of the model to be saved // @param path {string|symbol} The path in which to save the model. 
// If ()/(::) is used then saves to the current directory // @return {::;err} Saves locally or returns an error i.saveModel:{[modelName;path] savePath:i.constructPath[modelName;path]; save savePath } // @private // @kind function // @category utility // @desc Load a model // @param modelName {string|symbol} Name of the model to be loaded // @param path {string|symbol} The path in which to load the model from. // If ()/(::) is used then saves to the current directory // @return {::;err} Loads a model or returns an error i.loadModel:{[modelName;path] loadPath:i.constructPath[modelName;path]; load loadPath } // @private // @kind function // @category utility // @desc Construct a path to save/load a model // @param modelName {string|symbol} Name of the model to be saved/loaded // @param path {string|symbol} The path in which to save/load the model. // If ()/(::) is used then saves to the current directory // @return {symbol|err} Constructs a path or returns an error i.constructPath:{[modelName;path] pathType:abs type path; modelType:abs type modelName; if[not modelType in 10 11h;i.inputError"modelName"]; if[11h=abs modelType;modelName:string modelName]; joinPath:$[(path~())|path~(::); ; pathType=10h; path,"/",; pathType=11h; string[path],"/",; i.inputError"path" ]modelName; hsym`$joinPath } // @private // @kind function // @category utility // @desc Return an error for the wrong input type // @param input {string} Name of the input parameter // @return {err} Error for the wrong input typr i.inputError:{[input] '`$input," must be a string or a symbol" } // @private // @kind function // @category deprecation // @desc Mapping between old names and new names - can read from file i.versionMap:.j.k raze read0 hsym`$path,"/util/functionMapping.json"
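As an illustration of how the path construction above resolves its arguments (shown for explanation only – the i.* helpers are private and the paths are hypothetical): a string or symbol path is prefixed with a trailing slash, while an empty or generic-null path resolves to the current directory.

/ for explanation only: expected behaviour of the private path helper
q).ml.i.constructPath[`mymodel;"/tmp/models"]   / `:/tmp/models/mymodel
q).ml.i.constructPath["mymodel";::]             / `:mymodel (current directory)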
The application of foreign keys and linked columns in kdb+¶ Tables in a database define a relationship between different types of data, whether that relationship is static, dynamic (i.e. fluctuating as part of a time series) or a mixture of both. In general, it is regularly the case that database queries will require data from multiple tables for enrichment and aggregation purposes and so a key aspect of database design is developing ways in which data from several tables is mapped together quickly and efficiently. Although kdb+ contains a very rich set of functions for joining tables in real time, if permanent and well-defined relationships between different tables can be established in advance then data-retrieval latency and related memory usage may be significantly reduced. This white paper will discuss foreign keys and linked columns in a kdb+ context, two ways whereby table structure and organization can be optimized to successfully retrieve and store data in large-scale time-series databases. Tests performed using kdb+ 3.0 (2013.04.05). Foreign keys¶ Simple foreign keys¶ The concept behind a foreign key is analogous to that of an enumerated list. While enumerating a list involves separating it into its distinct elements and their associated indexes within the list, here we take an arbitrary table column and enumerate it across a keyed column, which may be either in the same table as itself or a different table in the same database. Behind the scenes, a pointer to the associated key column replaces the enumerated column’s values essentially creating a parent-child relationship if they are in the same table or a data link if they are in different tables. Reference: Enumerate $ , Enumeration ! , Enum Extend ? Basics: Enumerations In the case of a single key enumeration, creating a foreign key is very straightforward and is specified either within the initial table definition or on the fly through an update statement: //Keyed table that will be used for enumeration; values in the keyed //column must completely encapsulate the values in the column being enumerated financials:( [sym:`A`B`C] earningsPerShare:1.2 2.3 1.5; bookValPerShare:2.1 2.5 3.2 ) //Schema of trade table prior to enumeration trade:([]time:`time$();sym:`$();price:`float$()) //Use the ‘$’ operator to create an enumerated list within the trade //table by casting to the keyed table ‘financials’ update sym:`financials$sym from `trade //Alternatively the foreign key may be defined at the outset q)trade:([]time:`time$();sym:`financials$();price:`float$()) q)`trade insert (.z.T;`A;20.3) ,0 We can see that the sym column in the trade table is indeed an enumerated list with respect to the keyed table: q)exec sym from trade `financials$,`A Technical note The first enumeration defined in a database will have type 20h , with each additional enumeration incrementing this value by 1 up to a maximum of 76h . In effect, 57 is the maximum number of enumerations/foreign keys that may exist in a single database, else the session will throw an elim error. When we inserted a row into the trade table above, kdb+ performed a lookup on the keyed table to see what row of the table this entry will map to (in this case row 0) and rather than placing the sym into the table directly, a pointer to this key value was inserted instead. Given that this entry is a reference and not a value, any alterations to the referenced key column will directly influence the trade table and indeed any other tables referencing this key. 
It is therefore vitally important that great care be taken when managing data that is being referenced elsewhere since modifying, rearranging or deleting this data will have unwanted knock-on effects: q)delete from `financials where sym=`A `financials //trade table still has a reference to the sym entry in row 0 which is now 'B' q)trade time sym price ---------------------- 09:06:24.849 B 20.3 On the other hand – as is the case with a regular enumerated list – if an enumeration attempt is made where the referencing column value does not exist then the lookup fails and a cast error is returned: q)`trade insert (.z.T;`D;12.1) 'cast Inserting the relevant mapping data into the keyed table will fix this problem: q)`financials insert (`D;1.3;4.0) ,2 q)`trade insert (.z.T;`D;12.1) ,1 The above example demonstrates a key benefit of using foreign keys, it ensures that trade data will always have relevant referential data available for queries and lookups, thus identifying missing or corrupt data and improving data integrity. Another benefit is that since we have created a link from the sym column in the trade table to various rows in the financials table, we can use this mapping to reference other rows as well using dot notation, just as if the referenced table columns were in the original: //Display the Price-Earnings and Price-Book ratios by sym q)select priceEarningsRatio:last price%sym.earningsPerShare,priceBookRatio:last price%sym.bookValPerShare by sym from trade sym| priceEarningsRatio priceBookRatio ---| --------------------------------- B | 8.826087 8.12 D | 9.307692 3.025 Similar to the case of kdb+ column attributes only a single foreign key can be referenced by a column at any one time; establishing a second foreign key will automatically delete the link to the first. Links to more than one table using a single column may be created by linking table 1 to table 2, table 2 to table 3 and so forth: //Create a new table holding exchange information q)exchange:([id:101 102 103 104];ex:`LSE`NDQ`NYSE`AMEX) q)update exchangeID:`exchange$101 101 102 from `financials `financials //Compound dot notation q)select time,sym,sym.exchangeID.ex from trade time sym ex -------------------- 09:06:24.849 B LSE 09:07:44.282 D NDQ Compound foreign keys¶ A foreign key link across two or more columns is possible in kdb+. In this case to allow the usage of dot notation an extra column is appended to the referencing table storing the index link of the table being referenced: q)t1:([sym:`A`B`C;ex:`NYSE`NYSE`NDQ];sharesInIssue:3?1000) q)t2:([]time:2?.z.T;sym:`A`B;exchange:`NYSE`NYSE;price:2?10.) //Append columns together using Each q)update t1fkey:`t1$(t2[`sym],'t2[`exchange]) from `t2 q)t2 time sym exchange price t1fkey ----------------------------------------- 02:31:39.330 A NYSE 7.043314 0 04:25:17.604 B NYSE 9.441671 1 q)select sym, marketCap:price*t1fkey.sharesInIssue from t2 sym marketCap ------------- A 401.0 B 880.5 All future inserts into t2 must enumerate across t1 as below to avoid an error: q)`t2 insert (.z.T;`C;`NDQ;4.05;`t1$`C`NDQ) ,2 q)t2 time sym exchange price t1fkey ----------------------------------------- 02:31:39.330 A NYSE 7.043314 0 04:25:17.604 B NYSE 9.441671 1 20:08:25.689 C NDQ 4.05 2 Alternatively, a complex foreign key may be initialized along with the table itself. 
The following notation is required:

q)t2:([]time:`time$();sym:`$();exchange:`$();price:`float$();t1fkey:`t1$())
q)`t2 insert (.z.T;`C;`NDQ;4.05;`t1$`C`NDQ)
,0

Note that since the enumerated column stores the row-index lookup value rather than the actual value, the column type is an integer and not a symbol list. Once again all inserts must enumerate the foreign key values:

q)meta t2
c     | t f  a
------| ------
time  | t
t1fkey| i t1
price | f
q)t2
time         sym exchange price t1fkey
--------------------------------------
09:17:22.771 C   NDQ      4.05  2

Removing foreign keys¶

To remove a simple foreign key from a table, the keyword value is used:

q)update sym:value sym from `trade
`trade

If a table has a large number of foreign keys, the following function may be used, which finds each column containing a foreign key and applies the value function to it:

removeKeys:{[x]
 v[i]:value each (v:value flip x)i:where not null(0!meta x)`f;
 flip (cols x)!v
 }

q)meta removeKeys t2
c     | t f a
------| -----
time  | t
t1fkey| i
price | f

Calling the value function on a complex foreign key column will remove the table mapping but will leave the previously enumerated column intact as a list of integers.

Contrasting foreign keys and joins¶

We start with a fresh trade table and another table, exInfo, which maps each symbol to its traded exchange:

trade:([]time:`time$();sym:`$();price:`float$();size:`int$());
exInfo:([sym:`$()]exID:`int$();exSym:`$();location:`$())

//Ten million row entries
n:10000000

//Start and end time
st:08:00:00.000
et:17:00:00.000

//100 random syms of length 3
syms:-100?`3

//Exchange information
exdata:( syms; count[syms]#101 102 103 104; count[syms]#`LSE`NDQ`HKSE`TSE; count[syms]#`GB`US`HK`JP )
insert[`exInfo;exdata]

//Trade data
tdata:(asc st+n?et-st;n?syms;n?100f;n?1000)
insert[`trade;tdata]

Simple select statements from the database take much longer using a left join and require more memory, since the table mappings must be built up from scratch and the entire lookup table must be expanded to match the length of the source table before the output columns are specified:

q)\ts select time,sym,exSym from trade lj exInfo
68 469762928
q)update sym:`exInfo$sym from `trade
q)\ts select time,sym,sym.exSym from trade
35 268436016

The difference above is even more evident in higher dimensions:

q)//Remove the existing foreign key from the trade table and add the
q)//exchange ID for joining across two columns
q)update sym:value sym from `trade
q)update exID:exInfo[;`exID] each sym from `trade
q)//Re-key exInfo to key on exchange ID as well as sym
q)exInfo:`sym`exID xkey 0!exInfo
q)//Left join on two columns, takes almost twenty times longer
q)\ts select time,sym,exSym from trade lj exInfo
1324 402654032
q)//Now create a complex foreign key
q)update exfKey:`exInfo$(trade[`sym],'trade[`exID]) from `trade
q)//Same results as above in simple case
q)\ts select time,sym,exfKey.exSym from trade
36 268436016

Linked columns¶

In the previous section we saw how foreign keys are established by enumerating across a key column; in kdb+ it is also possible to avoid key columns and enumerations altogether and instead link two or more columns together directly, allowing all tables involved to be readily splayed to disk if desired. In general, links may be applied to two or more tables whether the tables are in memory, splayed on disk or even in different kdb+ databases. We will consider all three scenarios here.
Simple linked columns¶ Taking the table of financial data as before, and a table of equity position data, with the financials table remaining unkeyed, we can create a mapping similar to that in the case of complex foreign keys, that is by creating an index of integers that are used as a lookup. We use the Enumeration operator ! to establish the connection once we have mapped each row in the referencing table equityPositions to the corresponding row number in the referenced table financials : q)equityPositions:([]sym:5#`A`B`C`D`E;size:5?10000;mtm:5?2.) q)//Look up where the entries in the symbol column correspond to the q)//rows in the now unkeyed ‘financials’ table, then store the q)//references in the column ‘finLink’ q)financials:0!financials q)update finLink:`financials!financials.sym?sym from `equityPositions Much as before, the finLink column in the equityPositions table is identified as a foreign key to the financials table within the table metadata (even though it is not strictly a foreign key as before) and select , exec , update , and delete statements incorporating dot notation may again be used. Appending additional rows to the equityPositions table must maintain the link to the financials table by providing the finLink column with the row index in the financials table that will be mapped to in each case. In contrast to a foreign-key mapping, no enumeration is present and there are therefore no restrictions on what row numbers are inserted: q)`equityPositions insert (`A;200;2.;`financials!0) ,5 q)equityPositions sym size mtm finLink --------------------------- A 6927 1.266082 0 B 3700 1.150539 1 C 5588 0.01802349 2 D 5607 0.2896114 3 E 1666 1.541226 3 A 200 2 0 q)//Insert a sym not in the financials table, link to column 0 q)`equityPositions insert (`S;200;2.;`financials!0) ,6 q)//Insert the same sym again, link to an index not in the financials table q)`equityPositions insert (`S;200;2.;`financials!6) ,7 q)equityPositions sym size mtm finLink ------------------------------ A 6927 1.266082 0 B 3700 1.150539 1 C 5588 0.01802349 2 D 5607 0.2896114 3 E 1666 1.541226 3 A 200 2 0 S 200 2 0 S 200 2 6 q)//Unmapped data results in missing entries q)select sym,finLink.earningsPerShare from equityPositions sym earningsPerShare -------------------- A B 2.3 C 1.5 D 1.3 E A 2.3 S 2.3 S Simple linked columns on disk¶ It is possible to create linked columns on tables that have already been splayed to disk. //Create two tables and splay to disk companyInfo:([] sym:`a`b`c`d; exchange:`NYSE`NDQ`NYSE`TSE; sector:4?("Banking";"Retail";"Food Producers"); MarketCap:30000000+4?1000000 ) q)`:db/companyInfo/ set .Q.en[`:db] companyInfo `:db/companyInfo/ q)t:([]sym:`a`b`c`a`b; ex:`NYSE`NDQ`LSE`NYSE`NDQ; price:5?100.) q)`:db/t/ set .Q.en[`:db] t `:db/t/ q)//Create a new column in ‘t’ linking to ‘companyInfo’ via the sym column q)`:db/t/cLink set `companyInfo!(companyInfo`sym)?(t`sym) `:db/t/cLink //Update the .d file on disk so that it picks up the new column q).[`:db/t/.d;();,;`cLink] `:db/t/.d q)get `:db/t/.d `sym`ex`price`cLink q)//Load the table to update the changes in memory q)\l db/t //Sample query q)select sym,cLink.sector,cLink.MarketCap from t sym sector MarketCap ----------------------- a "Retail" 30886470 b "Banking" 30230906 c "Retail" 30352036 a "Retail" 30886470 Compound linked columns on disk¶ Only a small adjustment to the single-column case is required to link tables together based on multiple columns. 
We demonstrate this by continuing with the above tables:

q)//We need to reload companyInfo otherwise the link example below will
q)//not execute properly since the enumerated sym columns from the
q)//splayed table 't' have type 20h rather than 11h
q)\l db/companyInfo
q)//Initiate the mapping by flipping the columns to lists and searching
q)//on each sym/exchange combination
q)`:db/t/cLink2 set `companyInfo!(flip companyInfo`sym`exchange)?flip t`sym`ex
`:db/t/cLink2

//Update the .d file and reload the table once again
q).[`:db/t/.d;();,;`cLink2]
q)\l db/t

//Sample query, the double link means sym 'c' does not map this time
q)select sym,cLink2.sector,cLink2.MarketCap from t
sym sector    MarketCap
-----------------------
a   "Banking" 30450974
b   "Retail"  30909716
c   ""
a   "Banking" 30450974
b   "Retail"  30909716

Linking across multiple kdb+ databases¶

For practical purposes kdb+ allows only one on-disk database to be memory-mapped to each process at any one time. Occasionally, however, it may be necessary to perform analytics on data in several databases simultaneously, and although it is possible to aggregate data from many locations to one centralized location via IPC, if the datasets in question are very large and span multiple days and weeks then this becomes impractical. In Unix-based operating systems such as Linux and macOS an alternative is to use symbolic links in conjunction with kdb+ linked columns, allowing us to retrieve and analyze vast amounts of data while keeping RAM usage at an acceptable level.

The following section will outline how to construct a link from a trade table in one partitioned database to a quote table in a separate database existing on the same file network. The method may easily be generalized to link an arbitrary number of tables across an arbitrary number of databases. There are three main steps:

- Initially we create a partitioned database containing the tables we will be working with across multiple dates, along with some utility functions for creating the database links.
- We then map the entries in the trade table to those in the quote table for each date using a standard as-of join across time and sym.
- Lastly, we append a column linking the trade table to the quote table based on this mapping and save this to disk.

We use slightly modified versions of the standard .Q.en and .Q.dpft functions to ensure that no sym file clashes occur across the two databases. We will place these functions, and the other utility functions we will define, into the linked-column namespace .lc . The following code defines the trade and quote tables and writes them to disk in databases db1 and db2 respectively:

//Table schemas, begin with fresh trade and quote schemas
trade:([] time:`time$(); sym:`$(); price:`float$(); size:`int$() )
quote:([] time:`time$(); sym:`$(); bid:`float$(); bsize:`int$(); ask:`float$(); asize:`int$() )

//Number of entries in trade table
n:10000

//Start and end of day
st:08:00:00.000
et:17:00:00.000
syms:`A`B`C`D
tdata:(asc st+n?et-st;n?syms;n?100f;n?1000)
insert[`trade;tdata]

//Generate 10x number of quotes
n*:10
qdata:(asc st+n?et-st;n?syms;n?100f;n?1000;n?100f;n?1000)
insert[`quote;qdata]

//Historical database builder function
buildHDB:{[dir;dt;t] .Q.dpft[hsym `$dir;dt;`sym;t];}

//Partition the tables to disk
buildHDB["/root/db1";;`trade] each .z.D-til 3
buildHDB["/root/db2";;`quote] each .z.D-til 3

The first utility function creates a symlink in a directory basePath to a table rTab that exists in a remote directory remotePath.
The symlink name will be the same as rTab. For our purposes
- basePath is the path to each trade table
- remotePath is the path to each quote table
- rTab is the quote table name
All arguments are passed as strings.

//Check if the symlink exists and use the Unix command 'ln -s' to create
//the symlink if not
.lc.createSymLink:{[basePath;remotePath;rTab]
 remoteTablePath:remotePath,"/",rTab;
 baseTablePath:basePath,"/",rTab;
 if[not(`$rTab) in key hsym `$basePath;
  system "ln -s ",remoteTablePath," ",baseTablePath];
 }

In order to avoid sym file clashes when loading tables from different kdb+ databases into memory we must save the trade table using an alternative sym file name. The following are custom versions of .Q.en and .Q.dpft that take an additional argument for saving splayed tables to disk using a bespoke sym file name instead of the default sym name:

//d is the database directory the table will be saved to
//a is the name of the alternative sym file name used to avoid clashes
//p is the database partition slice
//f is the table partition field
//t is the table name
.lc.dpft:{[d;a;p;f;t]
 if[not all .Q.qm each r:flip .lc.en[d;a]`. t;'`unmappable];
 {[d;t;i;x] @[d;x;:;t[x]i]}[d:.Q.par[d;p;t];r;iasc r f] each key r;
 @[;f;`p#]@[d;`.d;:;f,(r:key r) except f];
 }

//The following function is called above when splaying the table
.lc.en:{[d;a;x]
 if[not -11h=type a;'`$"expected symbol parameter type for a"];
 @[x; cs@where 11h=type each x cs:key flip x; (` sv (hsym d),a)?]
 }

The next utility function maps the entries in the trade table to those in the quote table using an as-of join, constructs a link between the two tables and saves to disk using the modified .lc.dpft function defined above. The as-of join columns (usually time and sym) are passed in as a symbol list whereas the file path and table names are passed in as strings.

.lc.joinSaveTables:{[ajCols;basePath;dt;baseTable;remoteTable]
 //Cast table names to syms for convenience
 remoteTable:`$remoteTable;
 baseTable:`$baseTable;
 //Force-load the as-of join columns from the remote table into memory
 remoteFileHandle:` sv (hsym `$basePath),(`$string dt),remoteTable;
 remoteTable set select sym,time from (get remoteFileHandle);
 //Re-apply attributes of original table to the in-memory copy
 ![remoteTable;();0b;a[`c]!{(#;enlist x;y)} .' flip value a:exec a,c from meta get remoteFileHandle where c in ajCols];
 //Join tables and set the link column to be the point at which the
 //tables map together
 baseTable set aj[
  ajCols;
  value baseTable;
  ?[value remoteTable; (); 0b; (ajCols!ajCols),(enlist `id)!enlist `i]
  ];
 update link:remoteTable! (exec i from select i from value remoteTable)?id from baseTable;
 // Splay base table to disk but use different, independent sym file `tsym
 .lc.dpft[hsym `$basePath;`tsym;dt;`sym;baseTable];
 }

Now that we have all the prerequisite utility functions defined, the following master function creates a symlink from the trade table directory to the quote table for each partition slice in question:

.lc.createPart:{[basePath;baseTable;remotePath;remoteTable;ajCols;dt]
 .lc.createSymLink[raze basePath,"/",string dt;
  raze remotePath,"/",string dt;remoteTable];
 .lc.joinSaveTables[ajCols;basePath;dt;baseTable;remoteTable];
 }

If we create the partitioned databases and run something like the following, a link will be created across all database partitions. For simplicity, the absolute directory paths are passed to the function as arguments and hence used when defining the symlink.
The example above may accommodate relative directory paths with a little manipulation. q).lc.createPart["/root/db1";"trade";"/root/db2";"quote";`sym`time;] each .z.D-til 3 Loading the database from memory, we achieve a trade table with an embedded link to the remote quote table. q)\l db1 q)tables[] `quote`trade q)meta trade c | t f a -----| --------- date | d sym | s p time | t price| f size | i id | j link | i quote Aggregations may be carried out using a single table; queries will be very efficient especially if repeated due to caching: // First run q)\ts res:select size wavg price,bsize wavg bid,asize wavg ask by sym,10 xbar time.minute from select time,sym,size,price,link.ask,link.asize,link.bid,link.bsize from trade where date=max date 3030 1117552 // Second run q)\ts res:select size wavg price,bsize wavg bid,asize wavg ask by sym,10 xbar time.minute from select time,sym,size,price,link.ask,link.asize,link.bid,link.bsize from trade where date=max date 9 1116656 q)res sym minute| price bid ask ----------| -------------------------- A 08:00 | 50.87719 49.76909 48.63532 A 08:10 | 48.9346 49.99889 52.01281 A 08:20 | 47.48985 49.68657 53.01129 A 08:30 | 50.03407 51.82779 47.96814 Conclusion¶ This white paper introduced how foreign keys and linked columns may be established and applied in kdb+ databases. As we observed, the benefits of using foreign keys are numerous; firstly, establishing a permanent link between tables is considerably more efficient than building up a relationship in real time through the use of a join, particularly if queries will be repeated regularly. Furthermore, enumerating columns ensures data integrity and helps identify missing or corrupt referential data. Database normalization is much easier to achieve allowing greater data consistency, reduction of redundant data and more flexible database design. A drawback to using foreign keys is that keyed tables cannot be splayed to disk. This is circumvented using linked columns that can establish permanent mappings between tables whether they are both in memory or on disk. Although not featuring the benefits of enumeration, linked columns are useful for establishing mappings between tables in large-scale historical databases, allowing users to either map data within partition slices in a single database or map each table in a particular partition slice in one database to the corresponding partition slice in another. Author¶ Kevin Smyth has worked as a consultant for some of the world's leading financial institutions. Based in London, Kevin has implemented data capture and high-frequency data analysis projects across a large number of mainstream and alternative asset classes.
hdbtypes:@[value;`hdbtypes;`hdb]; //list of hdb types to look for and call in hdb reload hdbnames:@[value;`hdbnames;()]; //list of hdb names to search for and call in hdb reload tickerplanttypes:@[value;`tickerplanttypes;`tickerplant]; //list of tickerplant types to try and make a connection to gatewaytypes:@[value;`gatewaytypes;`gateway] //list of gateway types connectonstart:@[value;`connectonstart;1b]; //rdb connects to tickerplant as soon as it is started replaylog:@[value;`replaylog;1b]; //replay the tickerplant log file schema:@[value;`schema;1b]; //retrieve the schema from the tickerplant subscribeto:@[value;`subscribeto;`]; //a list of tables to subscribe to, default (`) means all tables ignorelist:@[value;`ignorelist;`heartbeat`logmsg]; //list of tables to ignore when saving to disk subscribesyms:@[value;`subscribesyms;`]; //a list of syms to subscribe for, (`) means all syms tpconnsleepintv:@[value;`tpconnsleepintv;10]; //number of seconds between attempts to connect to the tp onlyclearsaved:@[value;`onlyclearsaved;0b]; //if true, eod writedown will only clear tables which have been successfully saved to disk savetables:@[value;`savetables;1b]; //if true tables will be saved at end of day, if false tables wil not be saved, only wiped gc:@[value;`gc;1b]; //if true .Q.gc will be called after each writedown - tradeoff: latency vs memory usage hdbdir:@[value;`hdbdir;`:hdb]; //the location of the hdb directory sortcsv:@[value;`sortcsv;`:config/sort.csv] //location of csv file reloadenabled:@[value;`reloadenabled;0b]; //if true, the RDB will not save when .u.end is called but //will clear it's data using reload function (called by the WDB) parvaluesrc:@[value;`parvaluesrc;`log]; //where to source the rdb partition value, can be log (from tp log file name), //tab (from the the first value in the time column of the table that is subscribed for) //anything else will return a null date which is will be filled by pardefault subfiltered:@[value;`subfiltered;0b]; //allows subscription filters to be loaded and applied in the rdb pardefault:@[value;`pardefault;.z.D]; //if the src defined in parvaluesrc returns null, use this default date instead tpcheckcycles:@[value;`tpcheckcycles;0W]; //specify the number of times the process will check for an available tickerplant / - if the timer is not enabled, then exit with error if[not .timer.enabled;.lg.e[`rdbinit;"the timer must be enabled to run the rdb process"]]; / - settings for the common save code (see code/common/dbwriteutils.q) .save.savedownmanipulation:@[value;`.save.savedownmanipulation;()!()] //a dict of table!function used to manipulate tables at EOD save .save.postreplay:@[value;`.save.postreplay;{{[d;p] }}] //post EOD function, invoked after all the tables have been written down /- end of default parameters cleartable:{[t].lg.o[`writedown;"clearing table ",string t]; @[`.;t;0#]} savetable:{[d;p;t] /-flag to indicate if save was successful - must be set to true first incase .rdb.savetables is set to false c:1b; /-save the tables if[savetables; @[.sort.sorttab;t;{[t;e] .lg.e[`savetable;"Failed to sort ",string[t]," due to the follwoing error: ",e]}[t]]; .lg.o[`savetable;"attempting to save ",(string count value t)," rows of table ",(string t)," to ",string d]; c:.[{[d;p;t] (` sv .Q.par[d;p;t],`) set .Q.en[d;.save.manipulate[t;value t]]; (1b;`)};(d;p;t);{(0b;x)}]; /-print the result of saving the table $[first c;.lg.o[`savetable;"successfully saved table ",string t]; .lg.e[`savetable;"failed to save table ",(string t),", error was: ", c 
1]]]; /-clear tables based on flags provided earlier $[onlyclearsaved; $[first c;cleartable[t]; .lg.o[`savetable;"table "(string t)," was not saved correctly and will not be wiped"]]; cleartable[t]]; /-garbage collection if specified if[gc;.gc.run[]] } /-historical write down process writedown:{[directory;partition] /-get all tables in to namespace except the ones you want to ignore t:t iasc count each value each t:tables[`.] except ignorelist; savetable[directory;partition] each t; } /-extendable function to pass to all connected hdbs at the end of day routine hdbmessage:{[d] (`reload;d)} /-function to reload an hdb notifyhdb:{[h;d] /-if you can connect to the hdb - call the reload function @[h;hdbmessage[d];{.lg.e[`notifyhdb;"failed to send reload message to hdb on handle: ",x]}]; }; endofday:{[date;processdata] /-add date+1 to the rdbpartition global rdbpartition,:: date +1; .lg.o[`rdbpartition;"rdbpartition contains - ","," sv string rdbpartition]; / Need to download sym file to scratch directory if this is Finspace application if[.finspace.enabled; .lg.o[`createchangeset;"downloading sym file to scratch directory for ",.finspace.database]; .aws.get_latest_sym_file[.finspace.database;getenv[`KDBSCRATCH]]; ]; /-if reloadenabled is true, then set a global with the current table counts and then escape if[reloadenabled; eodtabcount:: tables[`.] ! count each value each tables[`.]; .lg.o[`endofday;"reload is enabled - storing counts of tables at EOD : ",.Q.s1 eodtabcount]; /-set eod attributes on gateway for rdb gateh:exec w from .servers.getservers[`proctype;.rdb.gatewaytypes;()!();0b;0b]; .async.send[0b;;(`setattributes;.proc.procname;.proc.proctype;.proc.getattributes[])] each neg[gateh]; .lg.o[`endofday;"Escaping end of day function"];:()]; t:tables[`.] 
except ignorelist; /-get a list of pairs (tablename;columnname!attributes) a:{(x;raze exec {(enlist x)!enlist((#);enlist y;x)}'[c;a] from meta x where not null a)}each tables`.; /-save and wipe the tables writedown[hdbdir;date]; /-creates new changeset if this is a finspace application if[.finspace.enabled; changeset:.finspace.createchangeset[.finspace.database]; ]; /-reset timeout to original timeout restoretimeout[]; /-reapply the attributes /-functional update is equivalent of {update col:`att#col from tab}each tables (![;();0b;].)each a where 0<count each a[;1]; rmdtfromgetpar[date]; /-invoke any user defined post replay function .save.postreplay[hdbdir;date]; /-notify all hdbs hdbs:distinct raze {exec w from .servers.getservers[x;y;()!();1b;0b]}'[`proctype`procname;(hdbtypes;hdbnames)]; $[.finspace.enabled; .finspace.notifyhdb[;changeset] each .finspace.hdbclusters; notifyhdb[;date] each hdbs ]; if[.finspace.enabled;.os.hdeldir[getenv[`KDBSCRATCH];0b]] }; reload:{[date] .lg.o[`reload;"reload command has been called remotely"]; /-get all attributes from all tables before they are wiped /-get a list of pairs (tablename;columnname!attributes) a:{(x;raze exec {(enlist x)!enlist((#);enlist y;x)}'[c;a] from meta x where not null a)}each tabs:subtables except ignorelist; /-drop off the first eodtabcount[tab] for each of the tables dropfirstnrows each tabs; rmdtfromgetpar[date]; /-reapply the attributes /-functional update is equivalent of {update col:`att#col from tab}each tables (![;();0b;].)each a where 0<count each a[;1]; /-garbage collection if enabled if[gc;.gc.run[]]; /-reset eodtabcount back to zero for each table (in case this is called more than once) eodtabcount[tabs]:0; /-restore original timeout back to rdb restoretimeout[]; .lg.o[`reload;"Finished reloading RDB"]; }; /-drop date from rdbpartition rmdtfromgetpar:{[date] rdbpartition:: rdbpartition except date; .lg.o[`rdbpartition;"rdbpartition contains - ","," sv string rdbpartition]; } dropfirstnrows:{[t] /-drop the first n rows from a table n: 0^ eodtabcount[t]; .lg.o[`dropfirstnrows;"Dropping first ",(sn:string[n])," rows from ",(st:string t),". Current table count is : ", string count value t]; .[@;(`.;t;n _);{[st;sn;e].lg.e[`dropfirstnrows;"Failed to drop first ",sn," row from ",st,". The error was : ",e]}[st;sn]]; .lg.o[`dropfirstnrows;st," now has ",string[count value t]," rows."]; }; subscribe:{[] if[count s:.sub.getsubscriptionhandles[tickerplanttypes;();()!()];; .lg.o[`subscribe;"found available tickerplant, attempting to subscribe"]; if[subfiltered; @[loadsubfilters;();{.lg.e[`rdb;"failed to load subscription filters"]}];]; /-set the date that was returned by the subscription code i.e. the date for the tickerplant log file /-and a list of the tables that the process is now subscribing for subinfo:.sub.subscribe[subscribeto;subscribesyms;schema;replaylog;first s]; /-setting subtables and tplogdate globals @[`.rdb;;:;]'[`subtables`tplogdate;subinfo`subtables`tplogdate]; /-update metainfo table for the dataaccessapi if[`dataaccess in key .proc.params;.dataaccess.metainfo:.dataaccess.metainfo upsert .checkinputs.getmetainfo[]]; /-apply subscription filters to replayed data if[subfiltered&replaylog; applyfilters[;subscribesyms]each subtables];];}
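/- Example (not part of the original file): because every setting above uses the
/- @[value;`name;default] idiom, a variable already defined when this script loads
/- (in TorQ, typically via the process settings file) takes precedence over the default.
/- The commented lines below sketch two such overrides, matching the shapes described
/- in the comments above; the values are illustrative only.
/ ignorelist:`heartbeat`logmsg`audit                               / also ignore the audit table at EOD
/ .save.savedownmanipulation:enlist[`trade]!enlist {`sym xasc x}   / dict of table!function applied at EOD save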
' Signal¶ Signal an error 'x where x is a symbol atom or string, aborts evaluation and passes x to the interpreter as a string. q)0N!0;'`err;0N!1 0 'err Signal is part of q syntax. It is not an operator and cannot be iterated or projected. The only way to detect a signal is to use Trap. q)f:{@[{'x};x;{"trap:",x}]} q)f`err "trap:err" Trap always receives a string regardless of the type of x . Restrictions¶ q)f 1 / signals a type error indicating ' will not signal a number "trap:stype" q)f"a" /q will not signal a char "trap:stype" Using an undefined word signals the word as an error: q)'word 'word which is indistinguishable from q)word 'word Error-trap modes¶ At any point during execution, the behavior of signal (' ) is determined by the internal error-trap mode: 0 abort execution (set by Trap or Trap At) 1 suspend execution and run the debugger 2 collect stack trace and abort (set by .Q.trp) During abort, the stack is unwound as far as the nearest trap at (@ or . or .Q.trp ). The error-trap mode is always initially set to 1 for console input 0 for sync message processing \e sets the mode applied before async and HTTP callbacks run. Thus, \e 1 will cause the relevant handlers to break into the debugger, while \e 2 will dump the backtrace either to the server console (for async), or into the socket (for HTTP). q)\e 2 q)'type / incoming async msg signals 'type [2] f@:{x*y} ^ [1] f:{{x*y}[x;3#x]} ^ [0] f `a ^ q)\e 1 q)'type [2] f@:{x*y} ^ q)) / the server is suspended in a debug session Trap, Trap At Controlling evaluation, Debugging, Error handling Q for Mortals §10.1.7 Return and Signal signum ¶ signum x signum[x] Where x (or its underlying value for temporals) is - null or negative, returns -1i - zero, returns 0i - positive, returns 1i q)signum -2 0 1 3 -1 0 1 1i q)signum (0n;0N;0Nt;0Nd;0Nz;0Nu;0Nv;0Nm;0Nh;0Nj;0Ne) -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1i q)signum 1999.12.31 -1i Find counts of price movements by direction: select count i by signum deltas price from trade signum is a multithreaded primitive. Implicit iteration¶ signum is an atomic function. q)signum(10;-20 30) 1i -1 1i q)k:`k xkey update k:`abc`def`ghi from t:flip d:`a`b!(10 -21 3;4 5 -6) q)signum d a| 1 -1 1 b| 1 1 -1 q)signum t a b ----- 1 1 -1 1 1 -1 q)signum k k | a b ---| ----- abc| 1 1 def| -1 1 ghi| 1 -1 Domain and range¶ domain b g x h i j e f c s p m d z n u v t range i . i i i i i i i . 
i i i i i i i i

Range: i
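A small worked version of the price-movement query above, using a toy table with illustrative values only:

q)trade:([]price:10 10.5 10.5 10.2 10.9)
q)select count i by signum deltas price from trade
/ groups to -1, 0 and 1: one down move, one unchanged and three up
/ (the first delta is the first price itself, which here counts as up)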
Simple Exec¶

For functional Simple Exec, see Basics: Functional qSQL

sin, asin¶

Sine, arcsine

sin x    sin[x]
asin x   asin[x]

Where x is a numeric, returns
- sin - the sine of x, taken to be in radians. The result is between -1 and 1, or null if the argument is null or infinity.
- asin - the arcsine of x; that is, the value whose sine is x. The result is in radians and lies between \(-\frac{\pi}{2}\) and \(\frac{\pi}{2}\). (The range is approximate due to rounding errors.) Null is returned if the argument is not between -1 and 1.
q)sin 0.5 / sine 0.4794255 q)sin 1%0 0n q)asin 0.8 / arcsine 0.9272952 sin and asin are multithreaded primitives. Implicit iteration¶ sin and asin are atomic functions. q)sin (.2;.3 .4) 0.1986693 0.2955202 0.3894183 q)asin (.2;.3 .4) 0.2013579 0.3046927 0.4115168 q)sin `x`y`z!3 4#til[12]%10 x| 0 0.09983342 0.1986693 0.2955202 y| 0.3894183 0.4794255 0.5646425 0.6442177 z| 0.7173561 0.7833269 0.841471 0.8912074 Domain and range¶ domain: b g x h i j e f c s p m d z n u v t range: f . f f f f f f f . f f f z f f f f sqrt ¶ Square root sqrt x sqrt[x] Returns as a float where x is numeric and - non-negative, the square root of x - negative or null, null - real or float infinity, 0w - any other infinity, the square root of the largest value for the datatype q)sqrt -1 0n 0 25 50 0n 0n 0 5 7.071068 q)sqrt 12:00:00.000000000 6572671f q)sqrt 0Wh 181.0166 q)sqrt 101b 1 0 1f sqrt is a multithreaded primitive. Implicit iteration¶ sqrt is an atomic function. q)sqrt (10;20 30) 3.162278 4.472136 5.477226 q)k:`k xkey update k:`abc`def`ghi from t:flip d:`a`b!(10 21 3;4 5 6) q)sqrt d a| 3.162278 4.582576 1.732051 b| 2 2.236068 2.44949 q)sqrt t a b ----------------- 3.162278 2 4.582576 2.236068 1.732051 2.44949 q)sqrt k k | a b ---| ----------------- abc| 3.162278 2 def| 4.582576 2.236068 ghi| 1.732051 2.44949 Domain and range¶ domain b g x h i j e f c s p m d z n u v t range f . f f f f f f f . f f f z f f f f Range: fz exp , log , xexp , xlog Mathematics
Mass ingestion through data loaders¶ Receiving a large amount of data in various file formats and from multiple different sources that need to be ingested, processed, persisted, and reported upon within a certain time period is a challenging task. One way of dealing with this is to use a batch-processing model, the requirements and considerations of which will differ from a vanilla tick architecture commonly used for mass data ingestion. The biggest difference being a simple but sometimes difficult problem to solve; how a system can ingest a huge amount of batch data in a short period of time – data that is arriving in a number of files, multiple file formats, varying sizes and at varying times throughout the day. All of this needs to be done while maintaining data integrity, sorting and applying attributes, maximizing HDB availability during ingestion and staying within the confines of the kdb+ model for writing to disk (data sorted on disk, use of enumeration meaning single writes to sym file). In this paper we shall discuss how mass ingestion of data can be done efficiently and quickly through kdb+, using a batch-processing model. This approach aims to optimize I/O, reduce time and memory consumption from re-sorting and maintaining on disk attributes. These challenges will be outlined and a simplified framework, which has been deployed in several KX implementations, will be shown as an example. The framework will show how these issues can be navigated leveraging kdb+ functionality, along with suggested further enhancements. Batch processing¶ What is it?¶ Data ingestion is the process of moving, loading and processing data into a database. kdb+ is a well-known market leader in real-time capturing of data, but this is only one form of data ingestion. Batch data processing can also be an efficient (and cheaper) way of processing high volumes of data. In batch processing, data is collected/stored by the upstream source system and a full intraday file is ingested and processed downstream. One example of batch data files are end-of-day financial-market data provided by financial data vendors. Why go batch?¶ Some reasons to consider batch processing over stream processing: - There is no real-time requirement for the business use case i.e. no need for real-time analytics results - It is often cheaper to subscribe to end-of-day market data files rather than streaming - Data is coming from legacy systems which do not have a real-time streaming component - Real-time data feeds may not come in order and time sync issues need to be avoided - Batch processing is the only option available The business use case is the main consideration when it comes to deciding on real-time vs batch processing. Business use cases where batch processing may be best utilized are: - Post-trade analytics - Internal reporting and auditing - Centralized data storage - Regulatory reporting Problem statement¶ Batches containing multiple large files continuously arrive to the system throughout the day. Batches can contain multiple large files that all need to be saved to the same kdb+ tables on disk. In a simple data-loader system, if a batch has 10 files, file 1 would be read, sorted and saved with attributes to a table. File 2 must then be read but the data from file 1 and now file 2 will need to be merged, sorted and saved with attributes to the table and so on. In this system, sorting and applying attributes in memory is both memory-intensive and time-consuming. 
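To make the cost concrete, a minimal sketch of that naive per-file pattern might look as follows. The table, file and column names are illustrative only and are not part of the framework described below: every file triggers a full re-sort and attribute re-application over all rows loaded so far.

/ read one csv of time,sym,price (header assumed)
readFile:{[f] ("TSF";enlist csv) 0: f}
/ fold a new file into the accumulated data: re-sort everything and re-apply the attribute
loadNaive:{[acc;f] update `p#sym from `sym`time xasc acc,readFile f}
/ tab:loadNaive/[([]time:`time$();sym:`$();price:`float$());files]   / files is a hypothetical list of file handles

Each call touches every row ingested so far, so both the sort cost and the peak memory grow with the running total rather than with the size of the incoming file; the framework below avoids this by deferring the sort and attribute application to a single merge per batch.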
The framework outlined below is applicable when volumes and throughput is high and hardware (memory, cores) limitations are present. This approach aims to optimize I/O, reduce time and memory consumption from re-sorting and maintaining on disk attributes. Mass-ingestion framework¶ Framework summary¶ The diagram below illustrates the example batch-processing framework. The main components of this framework are: The orchestrator data loader process¶ The orchestrator has internal tables and variables to track ingestion: - Table of available workers and their statuses - Table of tasks that will be sent to the workers to run and an estimated size of the task - Tables of files that are expected and the associated functions - Variables to track server memory and amount which can be utilized for tasks The orchestrator will ping a staging area for relevant batches of files to ingest and once the orchestrator has recognized that a batch has all its expected files, it will begin to send tasks to its workers asynchronously. Tracking of when all files are available can be done in numerous ways, such as knowing number of files for each specific batch or via naming conventions of files e.g. batchNamefile1of10.csv batchNamefile2of10.csv N number of worker processes¶ The tasks sent from the orchestrator can be broken down to: - read and save task - index task - merge task - move table task Each worker will read, transform, map and upsert data to a relevant schema and to track the index of the file. Each worker will save its own file in a folder in a temporary HDB directory and each file will be enumerated against the same sym file. This enables concurrent writes to disk by the workers. Callbacks will be used so the worker can update the orchestrator with success or failure of its task. In order to maintain sym file integrity each worker returns a distinct list of symbol values, which the orchestrator then aggregates into a distinct list and appends any new syms to the sym file. This ensures a single write occurs to the sym file. Once all files of a batch are loaded, the workers will be tasked with merging a list of specific columns, sorting based on the index from the files saved and if necessary include any existing data from the HDB during the sort and merge. Once the merge is complete the table will be moved to the HDB, during which all queries to the HDB will be temporarily disabled using a lockfile. “Intraday writedown solutions” for similar solutions Workers can be killed after each batch to free up memory rather than each worker running garbage collection, which can be time-consuming. The number of workers to utilize will be use-case specific but some key factors to consider include: - Available memory and cores on the server - Number of expected files per batch and their sizes in memory - Degree of data manipulation and computation required - Size of overall expected datasets - Delivery times of the data batches Methods of task allocation¶ Task allocation of worker processes will depend on system architecture and how dynamic or static the file loading allocations should be, based on the complexity of the use case. Static approach¶ For simple cases, where perhaps more factors are known (exact file sizes and arrival times) a simple static method may be used where workers can be mapped to read specific files. For example, worker 1 will read files 1, 2 and 3, worker 2 will read files 4, 5 and 6, etc. 
This is a very simplistic approach and does not allow for optimal load balancing, inhibits scalability and will be generally less efficient than a dynamic approach. Dynamic approach¶ For use cases where there are known unknowns or variables that cannot be guaranteed, such as the number of files per day, file sizes, etc., a dynamic approach is much more beneficial and will allow for dynamic load balancing, performance and memory management. In this approach, the orchestrator would allocate tasks based on factors including: - Availability of workers to process the next file - managed by tracking the status of each worker - File size - managed by checking file sizes of a batch to ensure the largest files are allocated first and allocated to a worker which has the memory to process it i.e. certain workers may be set to have higher memory limits so will be dedicated to processing the larger files - Server memory - managed by checking available server memory (free –g) before allocating tasks. This is to determine if enough memory is available to process the file and also retain a buffer for other processes on the server - Necessity to throttle ingestion - managed by holding back files from ingestion in scenarios where available free memory is below a certain threshold This dynamic method of allocation allows for each worker to be optimally utilized, improving the speed of ingestion and utilizing the advantage of concurrent writes to its fullest. It will also act as protection for server memory resources e.g. spikes in batch data volumes due to economic events will be accounted for and loading will be throttled if necessary to ensure ingestion does not impact the rest of the system. The following section has an example of a dynamic approach. Ingestion-flow example¶ kxcontrib/massIngestionDataloader The orchestrator process will act as a file watcher. It will check for specific batches of files and add any valid files to the .mi.tasks table along with the relevant tasks to apply. The orchestrator process starts up and pings the staging area for batches and constituent files. It has view of all available workers in the .mi.workers table: q).mi.workers worker hostport handle status ----------------------------------- mi_dl_1_a :tcps::3661 18 free mi_dl_2_a :tcps::3663 20 free mi_dl_3_a :tcps::3665 23 free In order to estimate the memory required for reading each file, hcount is applied to the source file. This will later be utilized when checking how many tasks to distribute based on overall task size and server memory available. q).mi.tasks:update taskSize:7h$%[;1e6]hcount each files from .mi.tasks q)select batchID,files,status,task,readFunction,postread,saveFunction,taskSize from .mi.tasks batchID files status task readFunction postRead saveFunction taskSize -------------------------------------------------------------------------------------------------------- "07d31" marketData1of3.csv queued .mi.readAndSave .mi.read .mi.postRead .mi.saveTableByFile 1800 "07d31" marketData2of3.csv queued .mi.readAndSave .mi.read .mi.postRead .mi.saveTableByFile 900 "07d31" marketData3of3.csv queued .mi.readAndSave .mi.read .mi.postRead .mi.saveTableByFile 900 Once the orchestrator recognizes that a batch has arrived (through either file naming conventions e.g. file1of3.csv , or a trigger file) it updates the .mi.tasks table and begins to process the batch. What functions to apply to specific batches/files can be managed with configuration based on the filename. 
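One possible shape for that filename-driven configuration (the table and the helper configFor below are hypothetical, not taken from the example loader, although .mi.read, .mi.postRead and .mi.saveTableByFile appear in the tasks table above) is a small lookup of filename patterns against handler functions:

/ pattern-driven handler lookup (hypothetical names)
fileConfig:([]
  pattern:("marketData*";"refData*");
  readFunction:`.mi.read`.mi.readRef;
  postRead:`.mi.postRead`.mi.postReadRef;
  saveFunction:2#`.mi.saveTableByFile)
/ pick the first row whose pattern matches the file name
configFor:{[file] first select from fileConfig where (string file) like/: pattern}
/ configFor `marketData2of3.csv

Matching on the file name keeps the loading logic generic: adding a new feed only requires a new configuration row, not new orchestrator code.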
Step 1: Orchestrator distributes read and save task per file¶ Upon the orchestrator recognizing that a batch with its required files is available (files 1, 2, and 3), the following is executed within the .mi.sendToFreeWorker function. It checks if any of the workers are available. if[count workers:0!select from .mi.workers where null task,not null handle; .. Available memory is checked before files are distributed. A check is then done to estimate the memory required for reading the files, assuming all the workers are utilized. Assuming memory is within the server memory limits and memory buffer (variable set on initialization which is then monitored) ingestion is allowed to continue. If memory available is twice that of the required memory buffer the file size limit is increased, if it is less, then the file size limit is reduced. mem:7h$mi.fileSizeLimit * .95 1 1.05 sum(and)scan .mi.freeMemoryFree>.mi.memoryBuffer*1 2 It then proceeds to check how many tasks can be distributed based on memory available. toRest:workerInfo except toLargeWorker:select from workerInfo where not null worker toRest:a neg[n]sublist where mem > (n:count[.mi.workers] - count toLargeWorker) msum (a:reverse toRest)`taskSize toRest:count[workers]sublist toRest The orchestrator then sends an asynchronous call, via the .mi.send function, to its workers to read and save each file. toRest:update worker:count[toRest]# workers`worker from toRest .mi.send each (toLargeWorker,toRest)lj delete taskID,taskSize from .mi.workers The .mi.send function sends async call of .mi.runTask neg[h:x`handle](`.mi.runTask;(`task`args#taskInfo),(1#`taskID)#x)) and updates in memory tables for tracking. .mi.workers:update task:x`task, taskID:x`taskID from .mi.workers where worker=x`worker .mi.tasks:update taskIDstartTime:.z.p, status:`processing from .mi.tasks where taskID=x`taskID Step 2: Worker receives task to read and write a file¶ The worker receives a run task command .mi.runTask from the orchestrator: .mi.runTask:{[taskDic] neg[.z.w]( `.mi.workerResponse; (`taskID`mb!(taskDic`taskID;7h$%[.Q.w[]`heap;1e6])), `success`result!@[{(1b;x[`task]@x`args)};taskDic;{(0b;x)}]); neg[.z.w](::); } .mi.runTask takes a dictionary of parameters: the task to run and the arguments of the assigned task. An example parameter dictionary of .mi.runTask : q)d /args to .mi.runTask task | `.mi.readAndSave args | `file`readFunction`postRead`batchID! (`:/mnt/mts/data/market.. taskID | e552aec7-5c9d-69c6-846b-b4e178dcc042 q)d`args /args to assigned task file | `:/mnt/mts/data/marketData3of3.csv readFunction| `.mi.read postRead | `.mi.postRead batchID | 07d312e0-bd18-092d-06a3-1707ab9cd7f1 The worker then applies the arguments to the assigned task. During this stage, the worker reads, transforms and saves each column to its subdirectory based on the batch ID and filename, e.g. /<batchID>/<filename>. .mi.readAndSave:{[x] file:x`file; data:x[`readFunction]@file; data:x[`postReadFunction]@data; .mi.writeTablesByFile[x;data] } The save function creates the filepath to save to, based on the batch ID and the file name. file:`$last "/" vs 1_string x`file batchID:`$string x`batchID db:` sv .mi.hdbTmp,batchID,file The worker then saves the table splayed but without enumeration and tracks symbol type and non-symbol type columns. symbolCols:where 11h=type each f nonSymCols:(c:key f)except symbolCols,`date colSizes:.mi.noEnumSplay[apath:` sv db,t;c;nonSymCols;symbolCols;tab] During the write, the size of the column written to disk is tracked. 
set'[` sv'path,'nonSymCols;flip nonSymCols#x] colSizes,:nonSymCols!hcount each` sv'path,'nonSymCols] ... colSizes,:hcount each set'[` sv'path,'key f;f:flip symCols#x]] The unique symbol values within the file are also tracked for later use. written:update t:data 0, symCol:`sym, sortCol:`time, symbolCols:count[written]#enlist[symbolCols] from written res:`t`written`uniqueSymbolsAcrossCols!(t;written;distinct raze symbolCols#f) Once the read and save task is complete, the tracked information i.e. the name of table saved, column names of the table, the unique symbol values and the column memory statistics, are returned to the orchestrator via the callback function .mi.workerResponse within .mi.runTask . Column memory statistics The column memory statistics are available so memory required for any further jobs on these columns could be estimated as part of distributing tasks to extend memory management. Step 3: Orchestrator appends to sym and sends next task¶ After a success message is received from a worker via .mi.workerResponse , the orchestrator updates the tasks and workers tables. .mi.tasks:update status:stat, endTime:.z.p, result:enlist x[`result], success:first x[`success] from .mi.tasks where taskID=first x[`taskID] It combines the distinct symbols as they are returned by each worker based on the batch ID. .[ `.mi.uniqueSymbols; (taskInfo`batchID;`uniqueSymbolsAcrossCols); {distinct y,x}raze res[`rvalid;`uniqueSymbolsAcrossCols] ] The orchestrator then checks if all relevant read and save tasks are complete. $[all `complete=exec status from .mi.tasks where batchID=batch; readWrites:0!select from .mi.tasks where task=`.mi.readAndSave, batchID=batch, status=`complete, endTime = (last;endTime) fby args[;`file]; :()] If read and save tasks are complete, the unique symbols for the batch are appended to the sym file and unique symbol cache cleared. if[0<count first us:.mi.uniqueSymbols batch; 0N!"Appending unique syms to the sym file ", string symFile:` sv .mi.hdbDir,`sym; symFile?us`uniqueSymbolsAcrossCols; delete from`.mi.uniqueSymbols where batchID=batch; 0N!"Finished .mi.appendToSymFile"] The orchestrator then creates the required index, merge and move jobs. written:0!select sum colSizes, typ, date, last symCol, last sortCol, allCols:key last colSizes, last symbolCols, paths:path by t from raze result`written 0N!"getting indx tasks" indxJobSizes:{[a] sum a[`colSizes]c where not null c:a`sortCol`symCol} each written 0N!"getting merge tasks" toMerge:(ungroup select t, mergeCol:key each colSizes, colSize:get each colSizes from written) lj 1!select t, symCol, sortCol, typ, paths, date, allCols, symbolCols from written toMerge:b select from toMerge where mergeCol<>symCol, mergeCol<>sortCol toIndx:b select from written toMove:b select t, typ, date from written These tasks are then upserted into the .mi.tasks table and will be distributed and run in sequence. 
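The essence of the per-file write in step 2 can be sketched with a simplified, hypothetical helper (not the loader's own .mi.noEnumSplay): each column is written unenumerated to the file's temporary directory, its on-disk size is recorded, and the distinct symbol values are kept aside for the orchestrator.

/ write a table column-by-column without enumeration, returning sizes and distinct syms
writeColsNoEnum:{[dir;t]
 cnames:cols t;
 sizes:cnames!{[dir;t;c] p:` sv dir,c; p set t c; hcount p}[dir;t] each cnames;
 (` sv dir,`.d) set cnames;                      / record column order for the splay
 symCols:where 11h=type each flip t;
 `colSizes`uniqueSymbols!(sizes;distinct raze value symCols#flip t)
 }
/ e.g. writeColsNoEnum[`:/tmp/hdbTmp/batch1/marketData1of3;data]   / data is a hypothetical in-memory table

The real loader additionally applies the configured post-read transforms, tracks the sort and sym columns per table, and returns this information to the orchestrator via .mi.workerResponse as described above.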
if[count queued:0!select from .mi.tasks where not status=`complete, task in `.mi.index`.mi.merge`.mi.move, i=min i; 0N!"Sending task"; .mi.sendToFreeWorker queued`taskID] if[not count queued;0N!"Nothing to run, all tasks complete";:()] In order to maintain sym file integrity the following method is used to ensure a single write occurs to the sym file: - At the beginning of a new batch, each worker is sent a refresh sym file task to ensure they have the latest sym file - As seen in step 2, during the reading and write down of a file each worker keeps track of the sym columns and their distinct list of values - This symbol information is passed back to the orchestrator by each worker, the orchestrator then aggregates this into a distinct list of symbols for the entire batch - After each worker has finished its individual read/write task, a backup of the current sym is made and the orchestrator then appends the new syms to the sym file in one write .mi.appendToSymFile:{[batch] //checks .mi.uniqueSymbols table for batch and appends to sym file if[0<count first us:.mi.uniqueSymbols batch; 0N!"Appending unique syms to the sym file ", string symFile:` sv .mi.hdbDir,`sym] } Step 4: Indexing¶ The orchestrator sends another .mi.runTask to index the data by the chosen sorting columns (`sym`time in this example) to an available worker. The worker loads the updated sym file. load ` sv .mi.hdbDir,`sym Checks to see if there is existing data in the HDB for the sorting columns, sym and time . //sortCol will be time srt:not null sc:first x`sortCol //grabs sym and time values if exist in the HDB syms:();sorts:() if[ not()~key@ eSymPath:` sv (eroot:.mi.hdbDir,(`$string dt),x`t),x`symCol; syms,:get get eSymPath; if[srt;sorts,:get` sv eroot,sc] ] The worker then gets the values of the sorting columns from disk for each saved file within the batch and combines it with any pre-existing data. syms,:raze get each` sv'(x[`paths]di),'x`symCol sorts,:$[srt;raze get each` sv'(x[`paths]di),'sc;()] The worker then uses iasc , which returns the indexes needed to sort a list. In this case, the list is a table of sym and time , the worker then sets index value to disk. This will be later used to sort during merging. I:iasc $[srt;([]syms;sorts);syms] .mi.getIndexDir[x`batchID;first x[`typ]di;dt;x`t] set .mi.minType[count I]$I To minimize redundancy, as the sym and time values are already in memory and sorted, these columns are now set to disk in a temporary filepath and the parted attribute applied to the sym column. symPath:` sv(mdb:.mi.getMergeDB[x`t;first x[`typ]di;dt]),x`symCol set[symPath;`p#`sym$syms I] if[srt;set[` sv mdb,sc;sorts I]] set[` sv mdb,`.d;key x`colSizes] Step 5: Merge¶ Once the index task is complete, the orchestrator assigns each worker a distinct subset of columns to merge one by one based on the index created in Step 4. During this step the worker checks to see if there is any existing data for the column in the HDB as this also needs to be merged. data:() toPath:` sv .mi.getMergeDB[x`t;first x[`typ]di;dt],mc:x`mergeCol if[not()~key@ epath:` sv .mi.hdbDir,(`$string dt),x[`t],mc; data,:get epath] The worker gets the values for the column for each loaded file within the batch and joins it to any pre-existing data. colData:raze get each ` sv'(x[`paths]di),'x`mergeCol data,:$[mc in x`symbolCols;`sym$colData;colData] The worker then sorts this data utilizing the saved list of indexes from Step 4 and sets it a temporary HDB location. 
dir:` sv .mi.hdbTmp,`indx,x`batchID data@:get ` sv (dir;`active;`$string first dt;first x`t) set[toPath;data] Step 6: Move table/s¶ After receiving a callback message from each worker that the merge has been completed, a worker is assigned via .mi.runTask to move the table/s to the relevant HDB directory. The merged table is moved using system mv (MOVE on Windows) command and during the move, the main HDB can be temporarily locked from queries. Once the orchestrator receives a success message for the move task the batch is considered complete and the processing of the next batch can commence. Post-batch tasks¶ In order to reduce downtime between batches, each worker is killed and restarted by the orchestrator process after the batch. Killing workers and restarting them has been found to free up memory faster rather than each worker running garbage collection, which can be time-consuming. Once the batch successfully completes, any post-ingestion event-driven tasks can be run. These can include any scheduled reporting, regulatory reporting for surveillance, transaction analysis, or ad-hoc queries. Surveillance techniques to effectively monitor algo- and high-frequency trading Transaction-cost analysis using kdb+ Benefits of proposed framework¶ Key elements and benefits of the proposed framework are: Speed¶ - Each worker can concurrently write its own file instead of waiting for a file to be finished (so that can be re-sorted along with the previous file) - Reduces the number of re-sorts – the merge and use of the indexes also avoids the issue of having to re-sort data for each individual file in the batch and instead reduces it to one sort per batch (This is due to the pre-emptive sorting using iasc and thesym andtime columns). - Reduced down-time between batch ingestions by restarting worker processes instead of running garbage collection Memory¶ - Only needs to read relevant columns that table sorting is to be based on, allowing for memory usage to stay low while indexing is done - Each individual write occurs one column at a time so it is memory efficient - The eventual merge of columns occurs one column at a time (1 column per worker), also reducing memory consumption Efficiency¶ - Concurrent reads of multiple files improve efficiency - Takes advantage of the fact that a column from a splayed table on disk in kdb+ is an individual file - Moving of a merged table means there is a minimum amount of time where the main HDB is not queryable - Maintains parted attribute for the sym column - Maximizes I/O - Capable of ingesting batches that have historical data which cross over multiple dates and which will be saved to the relevant HDB date partition Scalable¶ - Easily scalable with the addition of more workers and memory - Post-ingestion actions can be added – e.g. trigger the running regulatory reports, benchmarks or alerts Author¶ Enda Gildea is a senior kdb+ consultant for KX who has implemented several eFX post-trade analytics and cross-asset surveillance solutions in Singapore and Sydney.
// add to cache add:{[function;id;status] // Don't trap the error here - if it throws an error, we want it to be propagated out res:value function; $[(maxindividual*MB)>size:-22!res; // check if we need more space to store this item [now:.proc.cp[]; if[0>requiredsize:(maxsize*MB) - size+sum exec size from cache; evict[neg requiredsize;now]]; // Insert to the cache table `.cache.cache upsert (id;now;now;size); // and insert to the function and results dictionary funcs[id]:enlist function; results[id]:enlist res; // Update the performance trackperf[id;status;now]]; // Otherwise just log it as an addfail - the result set is too big trackperf[id;`fail;.proc.cp[]]]; // Return the result res} // Drop some ids from the cache drop:{[ids] ids,:(); delete from `.cache.cache where id in ids; .cache.results : ids _ .cache.results; } // evict some items from the cache - need to clear enough space for the new item // evict the least recently accessed items which make up the total size // feel free to write a more intelligent cache eviction policy ! evict:{[reqsize;currenttime] r:select from (update totalsize:sums size from `lastaccess xasc select lastaccess,id,size from cache) where prev[totalsize]<reqsize; drop[r`id]; trackperf[r`id;`evict;currenttime]; } trackperf:{[id;status;currenttime] `.cache.perf insert ((count id)#currenttime;id;(count id)#status)} // check the cache to see if a function exists with a young enough result set execute:{[func;age] // check for a value in the cache which we can use $[count r:select id,lastrun from .cache.cache where .cache.funcs[id]~\:enlist func; // There is a value in the cache. [r:first r; // We need to check the age - if the specified age is greater than the actual age, return it // else delete it $[age > (now:.proc.cp[]) - r`lastrun; // update the cache stats, return the cached result [update lastaccess:now from `.cache.cache where id=r`id; trackperf[r`id;`hit;now]; first results[r`id]]; // value found, but too old - re-run it under the same id [drop[r`id]; add[func;r`id;`rerun]]]]; // it's not in the cache, so add it add[func;getid[];`add]]} // get the cache performance getperf:{update function:.cache.funcs[id] from .cache.perf} \ // examples \d . f:{system"sleep 2";20+x} g:{til x} // first time should be slow -1"calling f ",(-3!f)," first time should be slow"; \t .cache.execute[(`f;2);0D00:01] -1"\nsecond time fast, provided the result value isn't too old (i.e. 
older than 0D00:01)"; \t .cache.execute[(`f;2);0D00:01] -1"\nNote the access time for f has been updated"; show .cache.cache -1"\nCall g a few times - can cause big result sets"; .cache.execute[(`g;5000000);0D00:01]; .cache.execute[(`g;4000000);0D00:01]; .cache.execute[(`f;2);0D00:01]; -1"\nCalling g with different params causes old results to be removed - need to clear out space"; -1"The results will be cleared out in the order corresponding to their last access time"; -1"\nBefore:"; show .cache.cache .cache.execute[(`g;5100000);0D00:01]; -1"\nAfter:"; show .cache.cache -1"\nCalling f with a very short cache age causes the result to be refreshed"; \t .cache.execute[(`f;2);0D00:00:00.000000001] show .cache.cache -1"\nCan execute strings and adhoc functions"; .cache.execute["20+35";0D00:30]; .cache.execute[({x+y};20;30);1D]; show .cache.cache -1"\nCan track the performance of the cache - see what is sticking for a long time, what gets evicted quickly etc"; show .cache.getperf[] ================================================================================ FILE: TorQ_code_common_checkinputs.q SIZE: 10,131 characters ================================================================================ \d .checkinputs // checkinputs is the main function called when running a query - it checks: // (i) input format // (ii) whether any parameter pairs clash // (iii) parameter specific checks // The input dictionary accumulates some additional table information/inferred info checkinputs:{[dict] dict:isdictionary dict; if[in[`sqlquery;key dict];:isstring[dict;`sqlquery]]; dict:checkdictionary dict; dict:checkinvalidcombinations dict; dict:checkrepeatparams dict; dict:checkeachparam[dict;1b]; dict:checkeachparam[dict;0b]; :@[dict;`checksperformed;:;1b]; }; checkdictionary:{[dict] if[not checkkeytype dict;'`$.schema.errors[`checkkeytype;`errormessage]]; if[not checkrequiredparams dict;'`$.checkinputs.formatstring[.schema.errors[`checkrequiredparams;`errormessage];.checkinputs.getrequiredparams[]except key dict]]; if[not checkparamnames dict;'`$.checkinputs.formatstring[.schema.errors[`checkparamnames;`errormessage];key[dict]except .checkinputs.getvalidparams[]]]; :dict; }; isdictionary:{[dict]$[99h~type dict;:dict;'`$.schema.errors[`isdictionary;`errormessage]]}; checkkeytype:{[dict]11h~type key dict}; checkrequiredparams:{[dict]all .checkinputs.getrequiredparams[]in key dict}; getrequiredparams:{[]exec parameter from .checkinputs.checkinputsconfig where required} checkparamnames:{[dict]all key[dict]in .checkinputs.getvalidparams[]}; getvalidparams:{[]exec parameter from .checkinputs.checkinputsconfig}; checkinvalidcombinations:{[dict] parameters:key dict; xinvalidpairs:select parameter,invalidpairs:invalidpairs inter\:parameters from .checkinputs.checkinputsconfig where parameter in parameters; xinvalidpairs:select from xinvalidpairs where 0<>count'[invalidpairs]; if[0=count xinvalidpairs;:dict]; :checkeachpair[raze each flip xinvalidpairs]; }; checkeachpair:{[invalidpair]'`$.checkinputs.formatstring[.schema.errors[`checkeachpair;`errormessage];invalidpair]}; // function to check if any parameters are repeated checkrepeatparams:{[dict] if[any repeats:1<count each group key dict; '`$.checkinputs.formatstring[.schema.errors[`checkrepeatparams;`errormessage];where repeats]]; :dict;}; // loop thorugh input parameters to execute parameter specific checks checkeachparam:{[dict;isrequired] config:select from .checkinputs.checkinputsconfig where parameter in key dict,required=isrequired; 
:checkparam/[dict;config]; }; // extract parameter specific function from config to check the input checkparam:{[dict;config] (first config[`checkfunction])[dict;first config`parameter]}; // check tablename parameter is of type symbol checktable:{[dict;parameter]:checktype[-11h;dict;parameter];}; // check that endtime is of temporal type and that it is greater than or equal to starttime checkendtime:{[dict;parameter] dict:checktimetype[dict;parameter]; :checktimeorder dict}; // check that inputted value is of valid type: -12 -14 -15h checktimetype:{[dict;parameter]:checktype[-12 -14 -15h;dict;parameter];}; // check timecolumn is of type symbol // check starttime <= endtime checktimecolumn:{[dict;parameter]:checktype[-11h;dict;parameter];}; // check starttime <= endtime checktimeorder:{[dict] if[dict[`starttime] > dict`endtime;'`$.schema.errors[`checktimeorder;`errormessage]]; :dict;}; // check parameter is of type symbol checksyminput:{[dict;parameter] :checktype[-11 11h;dict;parameter];}; // check parameter is of type checksublist:{[dict;parameter] :checktype[-5 -6 -7h;dict;parameter];}; // check aggregations are of type dictionary, that the dictionary has symbol keys, that // the dictionary has symbol values checkaggregations:{[dict;parameter] dict:checktype[99h;dict;parameter]; input:dict parameter; if[not 11h~abs type key input; '`$.schema.errors[`checkaggregationkey;`errormessage],.schema.examples[`aggregations1;`example]]; if[not all 11h~/:abs raze type''[get input]; '`$.schema.errors[`checkaggregationparameter;`errormessage],.schema.examples[`aggregations1;`example]]; :dict; }; // check that timebar parameter has three elements of respective types: numeric, symbol, // symbol. checktimebar:{[dict;parameter] input:dict parameter; if[not(3=count input)&0h~type input; '`$.schema.errors[`timebarlength;`errormessage]]; input:`size`bucket`timecol!input; if[not any -6 -7h~\:type input`size; '`$.schema.errors[`firsttimebar;`errormessage]]; if[not -11h~type input`bucket; '`$.schema.errors[`secondtimebar;`errormessage]]; if[not -11h~type input`timecol; '`$.schema.errors[`thirdtimebar;`errormessage]]; :dict; }; // check that filters parameter is of type dictionary, has symbol keys, the values are // in (where function;value(s)) pairs and (not)within filtering functions have two values // associated with it. 
checkfilters:{[dict;parameter] dict:checktype[99h;dict;parameter]; dict[parameter]:@[(dict parameter);where {not all 0h=type each x}each (dict parameter);enlist]; input:dict parameter; if[not 11h~abs type key input; '`$.schema.errors[`filterkey;`errormessage],.schema.examples[`filters1;`example]]; (input`nottest):enlist(not;in;10 30); filterpairs:raze value input; if[any not in[count each filterpairs;2 3]; '`$.schema.errors[`filterpair;`errormessage],.schema.examples[`filters1;`example]]; if[not 15 in value each first each filterpairs where 3=count each filterpairs; '`$.schema.errors[`filternot;`errormessage],.schema.examples[`filters1;`example]]; nots:where(~:)~/:ops:first each filterpairs; notfilters:@\:[;1]filterpairs nots; if[not all in[ops;.schema.allowedops];'`$.schema.errors[`allowedops;`errormessage]]; if[not all in[notfilters;.schema.allowednot];'`$.schema.errors[`allowednot;`errormessage]]; .checkinputs.withincheck'[filterpairs]; .checkinputs.inequalitycheck'[filterpairs]; :dict; }; withincheck:{[pair] if[(("within"~string first pair)| "within[~:]"~string first pair)& 2<>count last pair; '`$.schema.errors[`withincheck;`errormessage],.schema.examples[`filters3;`example]];}; inequalitycheck:{[pair] errmess:.schema.errors[`inequalities;`errormessage],.schema.examples[`filters3;`example]; errmess2:.schema.errors[`equalities;`errormessage],.schema.examples[`filters3;`example]; if[(("~<"~string first pair)|(enlist"<")~string first pair)& 1<>count last pair; '`$errmess2]; if[(("~>"~string first pair)|(enlist">")~string first pair)& 1<>count last pair; '`$errmess2]; if[(((enlist"=")~string first pair)|(enlist"~")~string first pair)&1<>count last pair; '`$errmess2]; if[("~="~string first pair)&1<>count last pair; '`$errmess];};
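For reference, a filters dictionary of the shape this validation accepts maps symbol column names to (operator;value(s)) pairs. The column names below are hypothetical, and the operators used must appear in .schema.allowedops:

/ hypothetical filters parameter
filters:`price`sym!((within;100 200);(in;`AAPL`MSFT))
/ within must be paired with exactly two values (enforced by withincheck);
/ =, ~, < and > must be paired with exactly one value (enforced by inequalitycheck)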
// find function // s = search string // p = public flag (1b or 0b) // c = context sensitive find:{[s;p;c] // Check the input type if[-11h=type s;$[null s; s:enlist"*";s:"*",(string s),"*"]]; if[not 10h=abs type s:s,(); '"input type must be a symbol or string (character array)"]; // select from the fullapi on the name matches // select by so we only show the table and not the variable (tables appear in both the variable list and the table list) $[c; select from fullapi[] where name like s,public in p,i=(last;i) fby name; select from fullapi[] where lower[name] like lower s,public in p,i=(last;i) fby name]} // find functions // f = find all // p = find public // u = find all public, user defined f:find[;01b;0b] p:find[;1b;0b] u:{[x] delete from p[x] where namespace in `.q`.Q`.h`.o} // search the definition of functions for a specific value. Return a table of ([]function;def) // s = search string // c = context sensitive search:{[s;c] if[not 10h=type s,:(); '"search value must be a string (character array)"]; raze {[f;s;c] res:([]function:enlist f;definition:enlist def:last value value f); $[not 10h=type def; 0#res; $[c; (def like s)#res; (lower[def] like lower[s])#res]]}[;s;c] each raze varnames[;"f";0b] each allns[]} // search function s:.api.search[;0b] // input list of namespaces for exportconfig torqnamespaces:` sv'`,'key[`]except`$'.Q.an; // export the current state of config variables, takes in list of symbols of namespaces (e.g. `.usage`.procs) exportconfig:{ // selects only variables in inpoutted namespaces :?[.api.f`; ((=;`vartype;enlist`variable);(in;`namespace;enlist x)); {x!x}enlist`name; // returns name, value and description `val`descrip!((value';`name);`descrip) ]; } // export all config variables exportallconfig:{exportconfig torqnamespaces} // Approximate memory usage statistics mem:{`size xdesc update sizeMB:`int$size%2 xexp 20 from update size:{-22!value x}each variable from ([]variable:raze varnames[;;0b] .' allns[] cross $[x;"vb";enlist"v"])} m:{mem[1b]} // If in the error trap, find the name of the function you are in // e.g. 
.api.whereami[.z.s] whereami:{funcs first where x~/:value each funcs:raze varnames[;"f";0b] each reverse allns[]} \ / examples \c 23 200 // add some api entries to have some detail add[`.api.f;1b;"Find a function/variable/table/view in the current process";"[string:search string]";"table of matching elements"] add[`.api.fp;1b;"Find a public function/variable/table/view in the current process";"[string:search string]";"table of matching public elements"] add[`.api.add;1b;"Add a function to the api description table";"[symbol:the name of the function; boolean:whether it should be called externally; string:the description; dict or string:the parameters for the function;string: what the function returns]";"null"] add[`.api.fullapi;1b;"Return the full function api table";"[]";"api table"] show .api.f`ad // search for a value show .api.p`ad // search for a public value show .api.p"*ad" // search for a specific pattern show .api.u`ad // search for a public value, exclude standard namespaces (.q, .Q, .h, .o) show .api.s["*api*"] // search the function definitions for the supplied pattern show .api.memusage[] // show the approximate memory usage of each variable in the process ================================================================================ FILE: TorQ_code_common_apidetails.q SIZE: 16,345 characters ================================================================================ // Add to the api functions \d .api if[not`add in key `.api;add:{[name;public;descrip;params;return]}] // Add each of the api calls to the detail table add[`.api.f;1b;"Find a function/variable/table/view in the current process";"[string:search string]";"table of matching elements"] add[`.api.p;1b;"Find a public function/variable/table/view in the current process";"[string:search string]";"table of matching public elements"] add[`.api.u;1b;"Find a non-standard q public function/variable/table/view in the current process. This excludes the .q, .Q, .h, .o namespaces";"[string:search string]";"table of matching public elements"] add[`.api.s;1b;"Search all function definitions for a specific string";"[string: search string]";"table of matching functions and definitions"] add[`.api.find;1b;"Generic method for finding functions/variables/tables/views. f,p and u are based on this";"[string: search string; boolean (list): public flags to include; boolean: whether the search is context senstive";"table of matching elements"] add[`.api.search;1b;"Generic method for searching all function definitions for a specific string. s is based on this";"[string: search string; boolean: whether the search is context senstive";"table of matching functions and definitions"] add[`.api.add;1b;"Add a function to the api description table";"[symbol:the name of the function; boolean:whether it should be called externally; string:the description; dict or string:the parameters for the function;string: what the function returns]";"null"] add[`.api.fullapi;1b;"Return the full function api table";"[]";"api table"] add[`.api.exportconfig;1b;"Return value table of requested torq variables and descriptions";"[symbol:torq namespace(s) as in namespace column in .api.f table]";"keyed table of name, value, description"] add[`.api.exportallconfig;1b;"Return value table of all current torq variables and descriptions";"[]";"keyed table of name, value, description"] add[`.api.m;1b;"Return the ordered approximate memory usage of each variable and view in the process. 
Views will be re-evaluated if required";"[]";"memory usage table"] add[`.api.mem;1b;"Return the ordered approximate memory usage of each variable and view in the process. Views are only returned if view flag is set to true. Views will be re-evaluated if required";"[boolean:return views]";"memory usage table"] add[`.api.whereami;1b;"Get the name of a supplied function definition. Can be used in the debugger e.g. .api.whereami[.z.s]";"function definition";"symbol: the name of the current function"] // Process api add[`.lg.o;1b;"Log to standard out";"[symbol: id of log message; string: message]";"null"] add[`.lg.e;1b;"Log to standard err";"[symbol: id of log message; string: message]";"null"] add[`.lg.l;1b;"Log to either standard error or standard out, depending on the log level";"[symbol: log level; symbol: name of process; symbol: id of log message; string: message; dict: extra parameters, used in the logging extension function]";"null"] add[`.lg.err;1b;"Log to standard err";"[symbol: log level; symbol: name of process; symbol: id of log message; string: message; dict: extra parameters, used in the logging extension function]";"null"] add[`.lg.ext;1b;"Extra function invoked in standard logging function .lg.l. Can be used to do more with the log message, e.g. publish externally";"[symbol: log level; symbol: name of process; symbol: id of log message; string: message; dict: extra parameters]";"null"] add[`.err.ex;1b;"Log to standard err, exit";"[symbol: id of log message; string: message; int: exit code]";"null"] add[`.err.usage;1b;"Throw a usage error and exit";"[]";"null"] add[`.err.param;1b;"Check a dictionary for a set of required parameters. Print an error and exit if not all required are supplied";"[dict: parameters; symbol list: the required param values]";"null"] add[`.err.env;1b;"Check if a list of required environment variables are set. If not, print an error and exit";"[symbol list: list of required environment variables]";"null"] add[`.proc.createlog;1b;"Create the standard out and standard err log files. Redirect to them";"[string: log directory; string: name of the log file;mixed: timestamp suffix for the file (can be null); boolean: suppress the generation of an alias link]";"null"] add[`.proc.rolllogauto;1b;"Roll the standard out/err log files";"[]";"null"] add[`.proc.loadf;1b;"Load the specified file if not already loaded";"[string: filename]";"null"] add[`.proc.reloadf;1b;"Load the specified file even if already laoded";"[string: filename]";"null"] add[`.proc.loaddir;1b;"Load all the .q and .k files in the specified directory. If order.txt is found in the directory, use the ordering found in that file";"[string: name of directory]";"null"] add[`.proc.getattributes;1b;"Called by external processes to retrieve the attributes (advertised functionality) of this process";"[]";"dictionary of attributes"] add[`.proc.override;1b;"Override configuration varibles with command line parameters. For example, if you set -.servers.HOPENTIMEOUT 5000 on the command line and call this function, then the command line value will be used";"[]";"null"] add[`.proc.overrideconfig;1b;"Override configuration varibles with values in supplied parameter dictionary. Generic version of .proc.override";"[dictionary: command line parameters. 
.proc.params should be used]";"null"] // Timer related functions add[`.timer.timer;1b;"The table containing the timer information";"";""]; add[`.timer.repeat;1b;"Add a repeating timer with default next schedule";"[timestamp: start time; timestamp: end time; timespan: period; mixedlist: (function and argument list); string: description string]";"null"]; add[`.timer.once;1b;"Add a one-off timer to fire at a specific time";"[timestamp: execute time; mixedlist: (function and argument list); string: description string]";"null"]; add[`.timer.remove;1b;"Delete a row from the timer schedule";"[int: timer id to delete]";"null"]; add[`.timer.removefunc;1b;"Delete a specific function from the timer schedule";"[mixedlist: (function and argument list)]";"null"]; add[`.timer.rep;1b;"Add a repeating timer - more flexibility than .timer.repeat";"[timestamp: execute time; mixedlist: (function and argument list); short: scheduling algorithm for next timer; string: description string; boolean: whether to check if this new function is already present on the schedule]";"null"]; add[`.timer.one;1b;"Add a one-off timer to fire at a specific time - more flexibility than .timer.once";"[timestamp: execute time; mixedlist: (function and argument list); string: description string; boolean: whether to check if this new function is already present on the schedule]";"null"]; // Caching functions add[`.cache.execute;1b;"Check the cache for a valid result set, return the results if found, execute the function, cache it and return if not";"[mixed: function or string to execute;timespan: maximum allowable age of cache item if found in cache]";"mixed: result of function"] add[`.cache.getperf;1b;"Return the performance statistics of the cache";"[]";"table: cache performance"] add[`.cache.maxsize;1b;"The maximum size in MB of the cache. This is evaluated using -22!, so may be incorrect due to power of 2 memory allocation. To be conservative and ensure it isn't exceeded, set max size to half of the actual max size that you want";"";""] add[`.cache.maxindividual;1b;"The maximum size in MB of an individual item in the cache. This is evaluated using -22!, so may be incorrect due to power of 2 memory allocation. To be conservative and ensure it isn't exceeded, set max size to half of the actual max size that you want";"";""] // timezone add[`.tz.default;1b;"Default timezone";"";""] add[`.tz.t;1b;"Table of timestamp information";"";""] add[`.tz.dg;1b;"default from GMT. Convert a timestamp from GMT to the default timezone";"[timestamp (list): timestamps to convert]";"timestamp atom or list"] add[`.tz.lg;1b;"local from GMT. Convert a timestamp from GMT to the specified local timezone";"[symbol (list): timezone ids;timestamp (list): timestamps to convert]";"timestamp atom or list"] add[`.tz.gd;1b;"GMT from default. Convert a timestamp from the default timezone to GMT";"[timestamp (list): timestamps to convert]";"timestamp atom or list"] add[`.tz.gl;1b;"GMT from local. 
Convert a timestamp from the specified local timezone to GMT";"[symbol (list): timezone ids; timestamp (list): timestamps to convert]";"timestamp atom or list"] add[`.tz.ttz;1b;"Convert a timestamp from a specified timezone to a specified destination timezone";"[symbol (list): destination timezone ids; symbol (list): source timezone ids; timestamp (list): timestamps to convert]";"timestamp atom or list"] // subscriptions add[`.sub.getsubscriptionhandles;1b;"Connect to a list of processes of a specified type";"[symbol: process type to match; symbol: process name to match; dictionary:attributes of process]";"table of process names, types and the handle connected on"] add[`.sub.subscribe;1b;"Subscribe to a table or list of tables and specified instruments";"[symbol (list):table names; symbol (list): instruments; boolean: whether to set the schema from the server; boolean: wether to replay the logfile; dictionary: procname,proctype,handle";""] // pubsub add[`.ps.publish;1b;"Publish a table of data";"[symbol: name of table; table: table of data]";""] add[`.ps.subscribe;1b;"Subscribe to a table and list of instruments";"[symbol(list): table name. ` for all; symbol(list): symbols to subscribe to. ` for all]";"mixed type list of table names and schemas"] add[`.ps.initialise;1b;"Initialise the pubsub routines. Any tables that exist in the top level can be published";"[]";""]
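As a usage sketch of the timer entries above (the function .myapp.housekeep is hypothetical; the argument order follows the .timer.repeat entry):

/ run a hypothetical housekeeping function every five minutes, from now onwards
.timer.repeat[.proc.cp[];0Wp;0D00:05;(`.myapp.housekeep;`);"periodic housekeeping"]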
// @kind function // @category private // @fileoverview Generate boundary marker // @param x {any} Unused // @return {string} Boundary marker gb:{(24#"-"),16?.Q.an} // @kind function // @category private // @fileoverview Build multi-part object // @param b {string} boundary marker // @param d {dict} headers (incl. file to be multiparted) // @return {string} Multipart form mult:{[b;d] "\r\n" sv mkpt[b]'[string key d;value d],enlist"--",b,"--"} //build multipart // @kind function // @category private // @fileoverview Create one part for a multipart form // @param b {string} boundary marker // @param n {string} name for form part // @param v {string} value for form part // @return {string[]} multipart form mkpt:{[b;n;v] f:-11=type v; //check for file t:""; //placeholder for Content-Type if[f;t:"Content-Type: ",$[0<count t:.h.ty last` vs`$.url.sturl v;t;"application/octet-stream"],"\r\n"]; //get content-type for part r :"--",b,"\r\n"; //opening boundary r,:"Content-Disposition: form-data; name=\"",n,"\"",$[f;"; filename=",1_string v;""],"\r\n"; r,:t,"\r\n",$[f;`char$read1 v;v]; //insert file contents or passed value :r; } // @kind function // @category private // @fileoverview Convert a q dictionary to a multipart form // @param d {dict} kdb dictionary to convert to form // @return {(dict;string)} (HTTP headers;body) to give to .req.post multi:{[d] b:gb[]; //get boundary value m:mult[b;d]; //make multipart form from dictionary :((enlist"Content-Type")!enlist"multipart/form-data; boundary=",b;m); //return HTTP header & multipart form } postmulti:{post[x] . @[multi z;0;y,]} //send HTTP POST report with multipart form \d . ================================================================================ FILE: reQ_req_req.q SIZE: 12,484 characters ================================================================================ \d .req // @kind data // @category variable // @fileoverview Flag for verbose mode VERBOSE:@[value;`.req.VERBOSE;0i]; //default to non-verbose output // @kind data // @category variable // @fileoverview Flag for parsing output to q datatypes PARSE:@[value;`.req.PARSE;1b]; //default to parsing output // @kind data // @category variable // @fileoverview Flag for signalling on HTTP errors SIGNAL:@[value;`.req.SIGNAL;1b]; //default to signalling for HTTP errors // @kind data // @category variable // @fileoverview Default headers added to all HTTP requests def:(!/) flip 2 cut ( //default headers "Connection"; "Close"; "User-Agent"; "kdb+/",string .Q.k; "Accept"; "*/*" ) if[.z.K>=3.7;def["Accept-Encoding"]:"gzip"]; //accept gzip compressed responses on 3.7+ query:`method`url`hsym`path`headers`body`bodytype!() //query object template // @kind data // @category variable // @fileoverview Dictionary with Content-Types ty:@[.h.ty;`form;:;"application/x-www-form-urlencoded"] //add type for url encoded form, used for slash commands ty:@[ty;`json;:;"application/json"] //add type for JSON (missing in older versions of q) // @kind data // @category variable // @fileoverview Dictionary with Content-Type encoders tx:@[.h.tx;`form;:;.url.enc] //add encoder for url encoded form tx:@[tx;`json;:;.j.j] //encode with .j.j rather than json lines encoder // @kind data // @category variable // @fileoverview Dictionary with decompress functions for Content-Encoding types decompress:enlist[enlist""]!enlist(::) // use native gzip decompression where available if[.z.K>=3.7;decompress[enlist"gzip"]:-35!]; // @kind function // @category private // @fileoverview Applies proxy if relevant // @param u {dict} 
URL object // @return {dict} Updated URL object proxy:{[u] p:(^/)`$getenv`$(floor\)("HTTP";"NO"),\:"_PROXY"; //check HTTP_PROXY & NO_PROXY env vars, upper & lower case - fill so p[0] is http_, p[1] is no_ t:max(first ":"vs u[`url]`host)like/:{(("."=first x)#"*"),x}each"," vs string p 1; //check if host is in NO_PROXY env var t:not null[first p]|t; //check if HTTP_PROXY is defined & host isn't in NO_PROXY :$[t;@[;`proxy;:;p 0];]u; //add proxy to URL object if required } // @kind function // @category private // @fileoverview Convert headers to strings & add authorization and Content-Length // @param q {dict} query object // @return {dict} Updated query object addheaders:{[q] d:.req.def; if[count q[`url;`auth];d[$[`proxy in key q;"Proxy-";""],"Authorization"]:"Basic ",.b64.enc q[`url;`auth]]; if[count q`body;d["Content-Length"]:string count q`body]; //if payload, add length header d,:$[11=type k:key q`headers;string k;k]!value q`headers; //get headers dict (convert keys to strings if syms), append to defaults :@[q;`headers;:;d]; } // @kind function // @category private // @fileoverview Convert a KDB dictionary into HTTP headers // @param d {dict} dictionary of headers // @return {string} string HTTP headers enchd:{[d] k:2_@[k;where 10<>type each k:(" ";`),key d;string]; //convert non-string keys to strings v:2_@[v;where 10<>type each v:(" ";`),value d;string]; //convert non-string values to strings :("\r\n" sv ": " sv/:flip (k;v)),"\r\n\r\n"; //encode headers dict to HTTP headers } // @kind function // @category private // @fileoverview Construct full HTTP query string from query object // @param q {dict} query object // @return {string} HTTP query string buildquery:{[q] r:string[q`method]," ",q[`url;`path]," HTTP/1.1\r\n", //method & endpoint TODO: fix q[`path] for proxy use case "Host: ",q[`url;`host],$[count q`headers;"\r\n";""], //add host string enchd[q`headers], //add headers $[count q`body;q`body;""]; //add payload if present :r; //return complete query string }
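For illustration, the private header encoder above turns a q dictionary into the header block of the request. A quick sketch of its output, assuming the definitions above are loaded (header values here are arbitrary):

.req.enchd `Host`Accept!("example.com";"*/*")
/ yields "Host: example.com\r\nAccept: */*\r\n\r\n"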
Get started with q and kdb+¶ kdb+ is a database. You can use it through interfaces such as ODBC, or from Python, but its power and performance are best accessed through its own language, q. Q is a general-purpose programming language. You can write programs for anything in q. You do not need prior programming experience to learn it. If you have some experience with mathematics, functional programming or SQL, you will find in q much that is familiar. In this section we offer different routes into the language. Find one that suits your experience and learning style. Other sections: | section | content | |---|---| | Language | formal definition of language elements | | Database | persisting tables in the filesystem | | Architecture | topics in building systems using kdb+ processes | | White papers | extended treatments of topics in q programming and in building kdb+ systems | Books¶ q言語-ゼロから作るTick ArchitectureならびにC言語による拡張¶ Q and kdb+ from installation to building the tick architecture and using the C API by Mattew Kwon kdb+中文教程¶ kdb+ Tutorial in Chinese by Kdbcn Workshop Fun Q¶ A Functional Introduction to Machine Learning in Q by Nick Psaris Whether you are a data scientist looking to learn q, or a kdb+ developer looking to learn machine learning, there is something for everyone. Machine Learning and Big Data with kdb+/q¶ by Jan Novotny, Paul A. Bilokon, Aris Galiotos, and Frederic Deleze Offers quants, programmers and algorithmic traders a practical entry into the powerful but non-intuitive kdb+ database and q programming language. Q for Mortals¶ Version 3 by Jeffry A. Borror Covers up to kdb+ V3.3. If you are a new kdb+ user, this is the book for you! Amazon HTML edition online Q Tips¶ Fast, scalable and maintainable kdb+ by Nick Psaris There is information that if you were learning by yourself, would take years to work out. See the Archive for older documents. Machine learning¶ Machine-learning capabilities are at the heart of future technology development at KX. Our libraries are released under the Apache 2 license, and are free for all use cases, including 64-bit and commercial use. Machine Learning Toolkit¶ The Machine Learning Toolkit is at the core of our machine-learning functionality. This library contains functions that cover the following areas. - Accuracy metrics to test the performance of constructed machine-learning models. - Pre-processing data prior to the application of machine-learning algorithms. - An implementation of the FRESH algorithm for feature extraction and selection on structured time series data. - Utility functions which are useful in many machine-learning applications but do not fall within the other sections of the toolkit. - Cross-Validation functions, used to verify how robust and stable a machine-learning model is to changes in the data being interrogated and the volume of this data. - Clustering algorithms used to group data points and to identify patterns in their distributions. The algorithms make use of a k-dimensional tree to store points and scoring functions to analyze how well they performed. Example notebooks¶ Example notebooks show FRESH and various aspects of toolkit functionality. Natural Language Processing¶ NLP manages the common functions associated with processing unstructured text. Functions for searching, clustering, keyword extraction and sentiment are included in the library. Automated Machine Learning¶ AutoML is a framework to automate the process of machine learning using kdb+. 
This is built largely on the machine learning toolkit and handles the following aspects of a traditional machine-learning pipeline:

- Data preprocessing
- Feature engineering and feature selection
- Model selection
- Hyperparameter tuning
- Report generation and model persistence

embedPy¶

EmbedPy loads Python into kdb+/q, allowing access to a rich ecosystem of libraries such as scikit-learn, tensorflow and pytorch.

- Python variables and objects become q variables – and either language can act upon them.
- Python code and files can be embedded within q code.
- Python functions can be called as q functions.

Example notebooks using embedPy

JupyterQ¶

JupyterQ supports Jupyter notebooks for q, providing

- Syntax highlighting, code completion and help
- Multiline input (script-like execution)
- Inline display of charts

Technical papers¶

- NASA FDL: Analyzing social media data for disaster management Conor McCarthy, 2019.10
- NASA FDL: Predicting floods with q and machine learning Diane O’Donoghue, 2019.10
- An introduction to neural networks with kdb+ James Neill, 2019.07
- NASA FDL: Exoplanets Challenge Esperanza López Aguilera, 2018.12
- NASA FDL: Space Weather Challenge Deanna Morgan, 2018.11
- Using embedPy to apply LASSO regression Samantha Gallagher, 2018.10
- K-Nearest Neighbor classification and pattern recognition with q Emanuele Melis, 2017.07

The KX machine-learning libraries are:

- well documented, with understandable and useful examples
- maintained and supported by KX on a best-efforts basis, at no cost to customers
- released under the Apache 2 license
- free for all use cases, including 64-bit and commercial use

Commercial support is available if required: please email [email protected].
// function to determine the date (in rolltimezone) from UTC timestamp, p getday:{[p] p+:adjtime[p]; // convert date from UTC to rolltimezone "d"$p-rolltimeoffset // adjust day according to rolltimeoffset }; d:getday[.z.p]; // get current date when loading process, store in d nextroll:getroll[.z.p]; // get next roll when loading process, store in nextroll ================================================================================ FILE: TorQ_code_common_finspace.q SIZE: 4,032 characters ================================================================================ // Config for setting Finspace specific parameters \d .finspace enabled:@[value;`enabled;0b]; //whether the application is finspace or on prem - set to false by default database:@[value;`database;"database"]; //name of the finspace database applicable to a certain RDB cluster - Not used if on prem dataview:@[value;`dataview;"finspace-dataview"]; cache:@[value;`cache;()]; hdbreloadmode:@[value;`hdbreloadmode;"ROLLING"]; hdbclusters:@[value;`hdbclusters;enlist `cluster]; //list of clusters to be reloaded during the rdb end of day (and possibly other uses) rdbready:@[value;`rdbready;0b]; //whether or not the rdb is running and ready to take over at the next period- set to false by default // wrapper around the .aws.get_kx_cluster api getcluster:{[cluster] .lg.o[`getcluster;"getting cluster with name ",string[cluster]]; resp:@[.aws.get_kx_cluster;string[cluster];{ msg:"failed to call .aws.get_kx_cluster api due to error: ",-3!x; .lg.e[`getcluster;msg]; `status`msg!("FAILURE";msg)}]; if[`finspace_error_code in key resp; .lg.e[`getcluster;"failed to call .aws.get_kx_cluster api: ",resp[`message]]; :`status`msg!("FAILURE";resp[`message])]; :resp }; / Runs a .aws api until a certain status has been received checkstatus:{[apicall;status;frequency;timeout] res:value apicall; st:.z.t; l:0; while[(timeout>ti:.z.t-st) & not any res[`status] like/: status; if[frequency<=ti-l; l:ti; res:value apicall; .lg.o[`checkstatus;"Status: ", res[`status], " waited: ", string(ti)]; ]; ]; .lg.o[`checkstatus;"Status: ",res[`status]]; :res; }; // Creates a Finspace changeset during the RDB end of day process createchangeset:{[db] .lg.o[`createchangeset;"creating changeset for database: ", db]; details:.aws.create_changeset[db;([]input_path:enlist getenv[`KDBSCRATCH];database_path:enlist "/";change_type:enlist "PUT")]; .lg.o[`createchangeset;("creating changset ",details[`id]," with initial status of ",details[`status])]; :details; }; // Notifies the HDB clusters to repoint to the new changeset once it has finished creating notifyhdb:{[cluster;changeset] .lg.o[`notifyhdb;"Checking status of changeset ",changeset[`id]]; // Ensuring that the changeset has successfully created before doing the HDB reload current:.finspace.checkstatus[(`.aws.get_changeset;.finspace.database;changeset[`id]);("COMPLETED";"FAILED");00:01;0wu]; .lg.o[`notifyhdb;("changeset ",changeset[`id]," ready, bringing up new hdb cluster")]; // TODO - Also need to figure out the ideal logic if a changeset fails to create. 
Possibly recreate and re-run notifyhd } // function to close connection to TP and remove unwanted data in WDB and RDB's eopdatacleanup:{[dict] // close off each subsription by handle to the tickerplant hclose each distinct exec w from .sub.SUBSCRIPTIONS; // function to parse icounts dict and remove all data after a given index for RDB and WDB's {[t;ind]delete from t where i >= ind}'[key dict;first each value dict]; } //set rdbready to true after signal received from the old rdb, that new processes are running and ready to take over at start of new period newrdbup:{[] .lg.o[`newrdbup;"received signal from next period rdb, setting rdbready to true"]; @[`.finspace;`rdbready;:;1b]; }; deletecluster:{[clustername] if[not any (10h;-11h)=fType:type clustername; .lg.e[`deletecluster;"clustername must be of type string or symbol: 10h -11h, got ",-3!fType]; :(::)]; if[-11h~fType; clustername:string clustername]; .lg.o[`deletecluster;"Going to delete ",$[""~clustername;"current cluster";"cluster named: ",clustername]]; .aws.delete_kx_cluster[clustername]; // calling this on an empty string deletes self // TODO ZAN Error trap // Test this with invalid cluster names and catch to show error messages }; ================================================================================ FILE: TorQ_code_common_grafana.q SIZE: 6,618 characters ================================================================================ \d .grafana // user defined column name of time column timecol:@[value;`.grafana.timecol;`time]; // user defined column name of sym column sym:@[value;`.grafana.sym;`sym]; // user defined date range to find syms from timebackdate:@[value;`.grafana.timebackdate;2D]; // user defined number of ticks to return ticks:@[value;`.grafana.ticks;1000]; // user defined query argument deliminator del:@[value;`.grafana.del;"."]; // json types of kdb datatypes types:.Q.t!`array`boolean,(3#`null),(5#`number),11#`string; // milliseconds between 1970 and 2000 epoch:946684800000; // wrapper if user has custom .z.pp .dotz.set[`.z.pp;{[f;x]$[(`$"X-Grafana-Org-Id")in key last x;zpp;f]x}[@[value;.dotz.getcommand[`.z.pp];{{[x]}}]]]; // return alive response for GET requests .dotz.set[`.z.ph;{[f;x] $[(`$"X-Grafana-Org-Id")in key last x;"HTTP/1.1 200 OK\r\nConnection: close\r\n\r\n";f x] }[@[value;.dotz.getcommand[`.z.ph];{{[x]}}]]]; // retrieve and convert Grafana HTTP POST request then process as either timeseries or table zpp:{ // get API url from request // cuts at first whitespace char to avoid splitting function params r:(0;n?" 
")cut n:first x; // convert grafana message to q rqt:.j.k r 1; $["query"~r 0;query[rqt];"search"~r 0;search rqt;`$"Annotation url nyi"] }; query:{[rqt] // retrieve final query and append to table to log rqtype:raze rqt[`targets]`type; :.h.hy[`json]$[rqtype~"timeserie";tsfunc rqt;tbfunc rqt]; }; finddistinctsyms:{?[x;enlist(>;timecol;(-;.z.p;timebackdate));1b;{x!x}enlist sym]sym}; // prefixes string c to each string s, seperated by del prefix:{[c;s] (c,del),/:s}; search:{[rqt] // build drop down case options from tables in port tabs:tables[]; symtabs:tabs where sym in'cols each tabs; timetabs:tabs where timecol in'cols each tabs; rsp:string tabs; if[count timetabs; rsp,:s1:prefix["t";string timetabs]; rsp,:s2:prefix["g";string timetabs]; // suffix names of number columns to graph and other panel options rsp,:raze(s2,'del),/:'c1:string {cols[x] where`number=types (0!meta x)`t}each timetabs; rsp,:raze(prefix["o";string timetabs],'del),/:'c1; if[count symtabs; // suffix distinct syms to timeseries table and other panel options rsp,:raze(s1,'del),/:'c2:string each finddistinctsyms'[timetabs]; rsp,:raze(prefix["o";string timetabs],'del),/:'{x[0] cross del,'string finddistinctsyms x 1}each (enlist each c1),'timetabs; ]; ]; :.h.hy[`json].j.j rsp; }; diskvals:{c:(count[x]-ticks)+til ticks;get'[.Q.ind[x;c]]}; memvals:{get'[?[x;enlist(within;`i;count[x]-ticks,0);0b;()]]}; catchvals:{@[diskvals;x;{[x;y]memvals x}[x]]}; istype:{[targ;char] (char,del)~2#targ}; isfunc:istype[;"f"]; istab:istype[;"t"]; // builds body of table response in Json adaptor schema tabresponse:{[colname;coltype;rqt] .j.j enlist`columns`rows`type!(flip`text`type!(colname;coltype);catchvals rqt;`table)}; // process a table request and return in JSON format tbfunc:{[rqt] rqt: raze rqt[`targets]`target; symname:0b; // if f.t.func, drop first 4 chars rqt:0!value $[isfunc[rqt] & istab 2_rqt; 4_rqt; isfunc rqt; 2_rqt; istab rqt; [rqt: `$del vs rqt; if[2<count rqt; symname: rqt 2]; rqt 1]; rqt]; // get column names and associated types to fit format colname:cols rqt; coltype:types (0!meta rqt)`t; // search rqt for sym if symname was set if[-11h=type symname; rqt:?[rqt;enlist(=;sym;enlist symname);0b;()]]; :tabresponse[colname;coltype;rqt]; }; // process a timeseries request and return in Json format, takes in query and information dictionary tsfunc:{[x] targ: raze x[`targets]`target; / split arguments numargs:count args:$[isfunc targ;(0;1+targ?del)cut targ:2_targ;`$del vs targ]; tyargs:$[10h=abs type args 0;`$1#;]args 0; // manipulate queried table coln:cols rqt:0!value args 1; // function to convert time to milliseconds, takes timestamp mil:{floor epoch+(`long$x)%1000000}; // ensure time column is a timestamp if["p"<>meta[rqt][timecol;`t];rqt:@[rqt;timecol;+;.z.D]]; // get time range from grafana range:"P"$-1_'x[`range]`from`to; // select desired time period only rqt:?[rqt;enlist(within;timecol;range);0b;()]; // form milliseconds since epoch column rqt:@[rqt;`msec;:;mil rqt timecol]; // cases for graph/table and sym arguments $[(2<numargs)and`g~tyargs;graphsym[args 2;rqt]; (2<numargs)and`t~tyargs;tablesym[coln;rqt;args 2]; (2=numargs)and`g~tyargs;graphnosym[coln;rqt]; (2=numargs)and`t~tyargs;tablenosym[coln;rqt]; (4=numargs)and`o~tyargs;othersym[args;rqt]; (3=numargs)and`o~tyargs;othernosym[args 2;rqt]; (2=numargs)and`o~tyargs;othernosym[coln except timecol;rqt]; `$"Wrong input"] }; // build JSON response for graph & other panels with no sym seperation buildnosym:{y,`target`datapoints!(z 0;value each ?[x;();0b;z!z])}; 
nosymresponse:{[rqt;colname] .j.j buildnosym[rqt]\[();colname]}; // timeserie request on non-specific panel w/ no preference on sym seperation othernosym:{[coln;rqt] // return columns with json number type only colname:coln cross`msec; :nosymresponse[rqt;colname]; };
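The local mil helper inside tsfunc converts kdb+ timestamps to the milliseconds-since-Unix-epoch values Grafana expects; a standalone check of the arithmetic:

/ `long$ on a timestamp gives nanoseconds since 2000.01.01
.grafana.epoch+floor(`long$2020.01.01D00:00:00.000000000)%1000000
/ 1577836800000, i.e. 2020.01.01 00:00 UTC in Unix milliseconds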
div ¶ Integer division x div y div[x;y] Returns the greatest whole number that does not exceed x%y . q)7 div 3 2 q)7 div 2 3 4 3 2 1 q)-7 7 div/:\:-2.5 -2 2 2.5 2 3 -4 -3 -3 -4 3 2 Except for char, byte, short, and real, preserves the type of the first argument. q)7f div 2 3f q)6i div 4 1i q)2014.10.13 div 365 2000.01.15 The exceptions are char, byte, short, and real, which get converted to ints. q)7h div 3 2i q)0x80 div 16 8i q)"\023" div 8 2i div is a multithreaded primitive. Implicit iteration¶ div is an atomic function. q)(10;20 30)div(3 4; -5) 3 2 -4 -6 It applies to dictionaries and keyed tables. q)k:`k xkey update k:`abc`def`ghi from t:flip d:`a`b!(10 -21 3;4 5 -6) q)d div 5 a| 2 -5 0 b| 0 1 -2 q)k div 5 k | a b ---| ----- abc| 2 0 def| -5 1 ghi| 0 -2 Domain and range¶ b g x h i j e f c s p m d z n u v t ---------------------------------------- b | i . i i i i i i i . i i i i i i i i g | . . . . . . . . . . . . . . . . . . x | i . i i i i i i i . i i i i i i i i h | i . i i i i i i i . i i i i i i i i i | i . i i i i i i i . i i i i i i i i j | j . j j j j j j j . j j j j j j j j e | f . f f f f f f f . f f f f f f f f f | f . f f f f f f f . f f f f f f f f c | i . i i i i i i i . i i i i i i i i s | . . . . . . . . . . . . . . . . . . p | p . p p p p p p p . p p p p p p p p m | m . m m m m m m m . m m m m m m m m d | d . d d d d d d d . d d d d d d d d z | z . z z z z z z z . z z z z z z z z n | n . n n n n n n n . n n n n n n n n u | u . u u u u u u u . u u u u u u u u v | v . v v v v v v v . v v v v v v v v t | t . t t t t t t t . t t t t t t t t Range: dfijmnptuvz % Divide, div , reciprocal Mathematics Q for Mortals: §4.8.1 Integer Division div and Modulus mod % Divide¶ x%y %[x;y] Returns the ratio of the underlying values of x and y as a float. Note that this is different from some other programming languages, e.g. C++. q)2%3 0.6666667 q)halve:%[;2] /projection q)halve til 5 0 0.5 1 1.5 2 q)"z"%"a" 1.257732 q)1b%0b 0w q)00:00:10.000000000 % 00:00:05.000000000 /ratio of timespans 2f Dates are represented internally as days after 2000.01.01, so the ratio of two dates is the ratio of their respective number of days since 2000.01.01. q)"i"$2010.01.01 2005.01.01 /days since 2000.01.01 3653 1827i q)(%/)"i"$2010.01.01 2005.01.01 1.999453 q)2010.01.01 % 2005.01.01 1.999453 % is a multithreaded primitive. Implicit iteration¶ Divide is an atomic function. q)(10;20 30)%(2;3 4) 5f 6.666667 7.5 It applies to dictionaries and tables. q)k:`k xkey update k:`abc`def`ghi from t:flip d:`a`b!(10 -21 3;4 5 -6) q)d%2 a| 5 -10.5 1.5 b| 2 2.5 -3 q)d%`b`c!(10 20 30;1000*1 2 3) /upsert semantics a| 10 -21 3 b| 0.4 0.25 -0.2 c| 1000 2000 3000 q)t%100 a b ----------- 0.1 0.04 -0.21 0.05 0.03 -0.06 q)k%k k | a b ---| --- abc| 1 1 def| 1 1 ghi| 1 1 Range and domains¶ b g x h i j e f c s p m d z n u v t ---------------------------------------- b | f . f f f f f f f . f f f f f f f f g | . . . . . . . . . . . . . . . . . . x | f . f f f f f f f . f f f f f f f f h | f . f f f f f f f . f f f f f f f f i | f . f f f f f f f . f f f f f f f f j | f . f f f f f f f . f f f f f f f f e | f . f f f f f f f . f f f f f f f f f | f . f f f f f f f . f f f f f f f f c | f . f f f f f f f . f f f f f f f f s | . . . . . . . . . . . . . . . . . . p | f . f f f f f f f . f f f f f f f f m | f . f f f f f f f . f f f f f f f f d | f . f f f f f f f . f f f f f f f f z | f . f f f f f f f . f f f f f f f f n | f . f f f f f f f . f f f f f f f f u | f . f f f f f f f . f f f f f f f f v | f . f f f f f f f . 
f f f f f f f f t | f . f f f f f f f . f f f f f f f f Range: f div , Multiply, ratios Mathematics Q for Mortals §4.4 Basic Arithmetic do ¶ Evaluate expression/s some number of times do[count;e1;e2;e3;…;en] Control construct. Where count is a non-negative integere1 ,e2 , …en are expressions the expressions e1 to en are evaluated, in order, count times. The result of do is always the generic null. Continued fraction for \(\pi\), for 7 steps: q)r:() q)t:2*asin 1 q)do[7;r,:q:floor t;t:reciprocal t-q] q)r 3 7 15 1 292 1 1 do is not a function but a control construct. It cannot be iterated or projected. Name scope¶ The brackets of the expression list do not create lexical scope. Name scope within the brackets is the same as outside them. Accumulators – Do, if , while Controlling evaluation Q for Mortals §10.1.5 do
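Another small example: the first terms of the Fibonacci sequence, built by appending inside a do loop.

q)a:0 1
q)do[10;a,:sum -2#a]
q)a
0 1 1 2 3 5 8 13 21 34 55 89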
// function to correctly reduce two tables to one mapreduceres:{[options;res] // raze the result sets together res:$[all 99h=type each res; (){x,0!y}/res; (),/res]; aggs:options`aggregations; aggs:flip(key[aggs]where count each value aggs;raze aggs); // distinct may be present as only agg, so apply distinct again if[all`distinct=first each aggs;:?[res;();1b;()]]; // collecting the appropriate grouping argument for map-reduce aggs gr:$[all`grouping`timebar in key options; a!a:options[`timebar;2],options`grouping; `grouping in key options; a!a:(),options`grouping; `timebar in key options; a!a:(),options[`timebar;2]; 0b]; // select aggs by gr from res res:?[res;();gr;raze{mapaggregate[x 0;camel x 1]}'[aggs]]; //apply sublist and postprocesing to map reduced results processres[options;res] }; // Dynamic routing finds all processes with relevant data attributesrouting:{[options;procdict] // Get the tablename and timespan timespan:$[7h~tp:type first value procdict;`long$options[`starttime`endtime];`date$options[`starttime`endtime]]; //if int partitioned adjust rdb partition range to cover all periods up to days end to facilitate correct grouping of partitions if[`rdb in key procdict and 7h~tp; procdict[`rdb]:(first[procdict `rdb];-1+`long$01D00 + last procdict `rdb)]; // See if any of the provided partitions are with the requested ones procdict:{[x;timespan] (all x within timespan) or any timespan within x}[;timespan] each procdict; // Only return appropriate partitions types:(key procdict) where value procdict; // If the partitions are out of scope of processes then error if[0=count types; '`$"gateway error - no info found for that table name and time range. Either table does not exist; attributes are incorect in .gw.servers on gateway, or the date range is outside the ones present" ]; :types; }; // Generates a dictionary of `tablename!mindate;maxdate partdict:{[input] tabname:input[`tablename]; // Remove duplicate servertypes from the gw.servers servers:select from .gw.servers where i=(first;i)fby servertype; // extract the procs which have the table defined servers:select from servers where {[x;tabname]tabname in @[x;`tables]}[;tabname] each attributes; // Create a dictionary of the attributes against servertypes procdict:exec servertype!attributes[;`partition] from servers; // If the response is a dictionary index into the tablename procdict:@[procdict;key procdict;{[x;tabname]if[99h=type x;:x[tabname]];:x}[;tabname]]; // returns the dictionary as min date/ max date procdict:asc @[procdict;key procdict;{:(min x; max x)}]; // prevents overlap if more than one process contains a specified date if[1<count procdict; procdict:{:$[y~`date$();x;$[within[x 0;(min y;max y)];(1+max[y];x 1);x]]}':[procdict]]; :procdict; }; // function to adjust the queries being sent to processes to prevent overlap of // time clause and data being queried on more than one process adjustqueries:{[options;part] // if only one process then no need to adjust if[2>count p:options`procs;:options]; // get the tablename tabname:options[`tablename]; // remove duplicate servertypes from the gw.servers servers:select from .gw.servers where i=(first;i)fby servertype; // extract the procs which have the table defined servers:select from servers where {[x;tabname]tabname in @[x;`tables]}[;tabname] each attributes; // create a dictionary of the attributes against servertypes procdict:exec servertype!attributes[;`partition] from servers; // if the response is a dictionary index into the tablename procdict:@[procdict;key 
procdict;{[x;tabname]if[99h=type x;:x[tabname]];:x}[;tabname]]; // create list of all available partitions possparts:raze value procdict; //group partitions to relevant process partitions:group key[part]where each{within[y;]each value x}[part]'[possparts]; partitions:possparts{(min x;max x)}'[partitions]; partitions:`timestamp$partitions; // adjust the times to account for period end time when int partitioned c:first[partitions`hdb],-1+ first[partitions`rdb]; d:first[partitions`rdb],options `endtime; partitions:@[@[partitions;`hdb;:;c];`rdb;:;d]; // if start/end time not a date, then adjust dates parameter for the correct types if[not a:-12h~tp:type start:options`starttime; // converts partitions dictionary to timestamps/datetimes partitions:$[-15h~tp;"z"$;]{(0D+x 0;x[1]+1D-1)}'[partitions]; // convert first and last timestamp to start and end time partitions:@[partitions;f;:;(start;partitions[f:first key partitions;1])]; partitions:@[partitions;l;:;(partitions[l:last key partitions;0];options`endtime)]]; // adjust map reducable aggregations to get correct components if[(1<count partitions)&`aggregations in key options; if[all key[o:options`aggregations]in key aggadjust; aggs:mapreduce[o;$[`grouping in key options;options`grouping;`]]; options:@[options;`aggregations;:;aggs]]]; // create a dictionary of procs and different queries :{@[@[x;`starttime;:;y 0];`endtime;:;y 1]}[options]'[partitions]; }; // function to grab the correct aggregations needed for aggregating over // multiple processes mapreduce:{[aggs;gr] // if there is a date grouping any aggregation is allowed if[`date in gr;:aggs]; // format aggregations into a paired list aggs:flip(key[aggs]where count each value aggs;raze aggs); // if aggregations are not map-reducable and there is no date grouping, // then error if[not all aggs[;0]in key aggadjust; '`$"to perform non-map reducable aggregations automatically over multiple processes there must be a date grouping"]; // aggregations are map reducable (with potential non-date groupings) aggs:distinct raze{$[`~a:.dataaccess.aggadjust x 0;enlist x;a x 1]}'[aggs]; :first'[aggs]!last'[aggs]; }; ================================================================================ FILE: TorQ_code_gateway_gatewaylib.q SIZE: 4,008 characters ================================================================================ //functionality loaded in by gateway //functions include: getserverids, getserveridstype, getserverscross, buildcross \d .gw getserverids:{[att] if[99h<>type att; // its a list of servertypes e.g. `rdb`hdb // check if user attributes are a symbol list if[not 11h=abs type att; '" Servertype should be given as either a dictionary(type 99h) or a symbol list (11h)" ]; servertype:distinct att,(); //list of active servers activeservers:exec distinct servertype from .gw.servers where active; //list of all servers allservers:exec distinct servertype from .gw.servers; activeserversmsg:". 
Available servers include: ",", " sv string activeservers; //check if a null argument is passed if[any null att;'"A null server cannot be passed as an argument",activeserversmsg]; //if any requested servers are missing then: //if requested server does not exist, return error with list of available servers //if requested server exists but is currently inactive, return error with list of available servers if[count servertype except activeservers; '"the following ",$[max not servertype in allservers; "are not valid servers: ",", " sv string servertype except allservers; "requested servers are currently inactive: ",", " sv string servertype except activeservers ],activeserversmsg; ]; :(exec serverid by servertype from .gw.servers where active)[servertype]; ]; serverids:$[`servertype in key att; raze getserveridstype[delete servertype from att] each (),att`servertype; getserveridstype[att;`all]]; if[all 0=count each serverids;'"no servers match requested attributes"]; :serverids; } getserveridstype:{[att;typ] // default values besteffort:1b; attype:`cross; servers:$[typ=`all; exec serverid!attributes from .gw.servers where active; exec serverid!attributes from .gw.servers where active,servertype=typ]; if[`besteffort in key att; if[-1h=type att`besteffort;besteffort:att`besteffort]; att:delete besteffort from att; ]; if[`attributetype in key att; if[-11h=type att`attributetype;attype:att`attributetype]; att:delete attributetype from att; ]; res:$[attype=`independent;getserversindependent[att;servers;besteffort]; getserverscross[att;servers;besteffort]]; serverids:first value flip $[99h=type res; key res; res]; if[all 0=count each serverids;'"no servers match ",string[typ]," requested attributes"]; :serverids; } /- build a cross product from a nested dictionary buildcross:{(cross/){flip (enlist y)#x}[x] each key x} /- given a dictionary of requirements and a list of attribute dictionaries /- work out which servers we need to hit to satisfy each requirement /- we want to satisfy the cross product of requirements - so each attribute has to be available with each other attribute /- e.g. 
each symbol has to be availble within each specified date getserverscross:{[req;att;besteffort] if[0=count req; :([]serverid:enlist key att)]; s:getserversinitial[req;att]; /- build the cross product of requirements reqcross:buildcross[req]; /- calculate the cross product of data contributed by each source /- and drop it from the list of stuff that is required util:flip `remaining`found!flip ({[x;y;z] (y[0] except found; y[0] inter found:$[0=count y[0];y[0];buildcross x@'where each z])}[req]\)[(reqcross;());value s]; /- check if everything is done if[(count last util`remaining) and not besteffort; '"getserverscross: cannot satisfy query as the cross product of all attributes can't be matched"]; /- remove any rows which don't add value s:1!(0!s) w:where not 0=count each util`found; /- return the parameters which should be queried for (key s)!distinct each' flip each util[w]`found } addserversfromconnectiontable:{ {.gw.addserverattr'[x`w;x`proctype;x`attributes]}[select w,proctype,attributes from .servers.SERVERS where ((proctype in x) or x~`ALL),not w in ((0;0Ni),exec handle from .gw.servers where active)];} ================================================================================ FILE: TorQ_code_gateway_kxdash.q SIZE: 1,934 characters ================================================================================ \d .kxdash enabled:@[value;`enabled;{0b}]; // use this to store the additional params that the kx dashboards seem to send in dashparams:`o`w`r`limit!(0;0i;0i;0W) // function to be called from the dashboards dashexec:{[q;s;j] .gw.asyncexecjpt[(dashremote;q;dashparams);(),s;dashjoin[j];();0Wn] } // execute the request // return a dict of status and result, along with the params // add a flag to the start of the list to stop dictionaries collapsing // to tables in the join function dashremote:{[q;dashparams] (`kxdash;dashparams,`status`result!@[{(1b;value x)};q;{(0b;x)}]) } // join function used for dashboard results dashjoin:{[joinfunc;r] $[min r[;1;`status]; (`.dash.rcv_msg;r[0;1;`w];r[0;1;`o];r[0;1;`r];r[0;1;`limit] sublist joinfunc r[;1;`result]); (`.dash.snd_err;r[0;1;`w];r[0;1;`r];r[0;1;`result])] } dashps:{ // check the query coming in meets the format $[@[{`f`w`r`x`u~first 1_ value first x};x;0b]; // pull out the values we need to return to the dashboards [dashparams::`o`w`r`limit!(last value x 1;x 2;x 3;x[4;0]); // execute the query part, which must look something like // .kxdash.dashexec["select from t";`rdb`hdb;raze] value x[4;1]; ]; // value x] }
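Returning to .gw.buildcross, defined in gatewaylib.q above: it expands a requirements dictionary into the cross product of its values, so that, for example, every requested symbol is paired with every requested date. A small illustration (keys and values are invented):

q).gw.buildcross `symbol`date!(`AAPL`MSFT;2020.01.01 2020.01.02)
symbol date
-----------------
AAPL   2020.01.01
AAPL   2020.01.02
MSFT   2020.01.01
MSFT   2020.01.02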
vpw:{[x;y] $[defaultuser x;validhost .z.a;0b]} vpg:{validcmd[.z.u;x]} / vps:{$[0=.z.w;1b;poweruser .z.u;validcmd[.z.u;x];0b]} vps:{$[0=.z.w;1b;validcmd[.z.u;x]]} vpi:{$[0=.z.w;1b;superuser .z.u]} vph:{superuser .z.u} vpp:{superuser .z.u} vws:{defaultuser .z.u} / not clear what/how to restrict yet \d . .lg.o[`access;"access controls are ",("disabled";"enabled").access.enabled] if[.access.enabled; // Read in the permissions .access.readpermissions each string reverse .proc.getconfig["permissions";1]; .dotz.set[`.z.pw;{$[.access.vpw[y;z];x[y;z];0b]}value .dotz.getcommand[`.z.pw]]; / .z.po - untouched, .z.pw does the checking / .z.pc - untouched, close is always allowed if[not .access.openonly; .dotz.set[`.z.pg;{$[.access.vpg[y];.access.validsize[;`pg.size;y]x y;.access.invalidpt y]}value .dotz.getcommand[`.z.pg]]; .dotz.set[`.z.ps;{$[.access.vps[y];x y;.access.invalidpt y]}value .dotz.getcommand[`.z.ps]]; .dotz.set[`.z.ws;{$[.access.vws[y];x y;.access.invalidpt y]}value .dotz.getcommand[`.z.ws]]; .dotz.set[`.z.pi;{$[.access.vpi[y];x y;.access.invalidpt y]}value .dotz.getcommand[`.z.pi]]; .dotz.set[`.z.ph;{$[.access.vph[y];x y;.h.hn["403 Forbidden";`txt;"Forbidden"]]}value .dotz.getcommand[`.z.ph]]; .dotz.set[`.z.pp;{$[.access.vpp[y];x y;.h.hn["403 Forbidden";`txt;"Forbidden"]]}value .dotz.getcommand[`.z.pp]]]]; \ note that you can put global restrictions on the amount of memory used, and the maximum time a single interaction can take by setting command line parameters: -T NNN (where NNN seconds is the maximum duration) - q will signal 'stop -w NNN (where NNN MB is the maximum memory) - q will *EXIT* with wsfull could use .z.po+.z.pc to track clients (.z.a+u+w, .z.z + active) - simplest is to use trackclients.q directly ================================================================================ FILE: TorQ_code_handlers_dotz.q SIZE: 3,054 characters ================================================================================ / taken from http://code.kx.com/wsvn/code/contrib/simon/dotz/ / set state and save the original values in .z.p* so we can <revert> \d .dotz if[not@[value;`SAVED.ORIG;0b]; / onetime save only SAVED.ORIG:1b; IPA:(.z.a,.Q.addr`localhost)!.z.h,`localhost; ipa:{$[`~r:IPA x;IPA[x]:$[`~r:.Q.host x;`$"."sv string"i"$0x0 vs x;r];r]}; livehx:{y in x,key .z.W}; liveh:livehx(); livehn:livehx 0Ni; liveh0:livehx 0i; HOSTPORT:`$":",(string .z.h),":",string system"p"; .access.FILE:@[.:;`.access.FILE;`:invalidaccess.log]; .clients.AUTOCLEAN:@[.:;`.clients.AUTOCLEAN;1b]; / clean out old records when handling a close .clients.INTRUSIVE:@[.:;`.clients.INTRUSIVE;0b]; .clients.RETAIN:@[.:;`.clients.RETAIN; `long$`timespan$00:05:00]; / 5 minutes .clients.MAXIDLE:@[.:;`.clients.MAXIDLE; `long$`timespan$00:15:00]; / 15 minutes .servers.HOPENTIMEOUT:@[.:;`.servers.HOPENTIMEOUT;`long$`time$00:00:00.500]; / half a second timeout .servers.RETRY:@[.:;`.servers.RETRY; `long$`time$00:05:00]; / 5 minutes .servers.RETAIN:@[.:;`.servers.RETAIN; `long$`timespan$00:11:00]; / 11 minutes .servers.AUTOCLEAN:@[.:;`.servers.AUTOCLEAN;1b]; / clean out old records when handling a close .tasks.AUTOCLEAN:@[.:; `.tasks.AUTOCLEAN;1b]; / clean out old records when handling a close .tasks.RETAIN:@[.:;`.tasks.RETAIN; `long$`timespan$00:05:00]; / 5 minutes .usage.FILE:@[.:;`.usage.FILE; `:usage.log]; .usage.LEVEL:@[.:;`.usage.LEVEL;2]; / 0 - nothing; 1 - errors only; 2 - all @[value;"\\l saveorig.custom.q";::]; err:{"dotz: ",x}; txt:{[width;zcmd;arg]t:$[10=abs type arg;arg,();-3!arg];if[zcmd in`ph`pp;t:.h.uh 
t];$[width<count t:t except"\n";(15#t),"..",(17-width)#t;t]}; txtc:txt[neg 60-last system"c"];txtC:txt[neg 60-last system"C"]; pzlist:` sv'`.z,'`pw`po`pc`pg`ps`pi`ph`pp`ws`exit; .dotz.undef:pzlist where not @[{not(::)~value x};;0b] each pzlist; .dotz.set[`.z.pw;.dotz.pw.ORIG:@[.:;.dotz.getcommand[`.z.pw];{{[x;y]1b}}]]; .dotz.set[`.z.po;.dotz.po.ORIG:@[.:;.dotz.getcommand[`.z.po];{;}]]; .dotz.set[`.z.pc;.dotz.pc.ORIG:@[.:;.dotz.getcommand[`.z.pc];{;}]]; .dotz.set[`.z.wo;.dotz.wo.ORIG:@[.:;.dotz.getcommand[`.z.wo];{;}]]; .dotz.set[`.z.wc;.dotz.wc.ORIG:@[.:;.dotz.getcommand[`.z.wc];{;}]]; .dotz.set[`.z.ws;.dotz.ws.ORIG:@[.:;.dotz.getcommand[`.z.ws];{{neg[.z.w]x;}}]]; / default is echo .dotz.set[`.z.pg;.dotz.pg.ORIG:@[.:;.dotz.getcommand[`.z.pg];{.:}]]; .dotz.set[`.z.ps;.dotz.ps.ORIG:@[.:;.dotz.getcommand[`.z.ps];{.:}]]; .dotz.set[`.z.pi;.dotz.pi.ORIG:@[.:;.dotz.getcommand[`.z.pi];{{.Q.s value x}}]]; .dotz.set[`.z.pp;.dotz.pp.ORIG:@[.:;.dotz.getcommand[`.z.pp];{;}]]; / (poststring;postbody) .dotz.set[`.z.exit;.dotz.exit.ORIG:@[.:;.dotz.getcommand[`.z.exit];{;}]]; .dotz.set[`.z.ph;.dotz.ph.ORIG:.z.ph]; / .z.ph is defined in q.k revert:{ .dotz.unset each `.z.pw`.z.po`.z.pc`.z.pg`.z.ps`.z.pi`.z.ph`.z.pp`.z.ws`.z.exit; .dotz.SAVED.ORIG:0b;} ] ================================================================================ FILE: TorQ_code_handlers_finspaceservers.q SIZE: 761 characters ================================================================================ .servers.FINSPACEDISC:@[value; `.servers.FINSPACEDISC; 0b]; .servers.FINSPACECLUSTERSFILE:@[value; `.servers.FINSPACECLUSTERSFILE; hsym `]; .servers.listfinspaceclusters:{ :@[.aws.list_kx_clusters; `; {.lg.e[`listfinspaceclusters; "Failed to get finspace clusters using the finspace API - ",x]}]; }; .servers.getfinspaceconn:{[pname] id:.Q.s1 pname; cluster:first exec `$cluster_name from .servers.listfinspaceclusters[] where status like "RUNNING",(`$cluster_name)=pname; if[null cluster; .lg.w[`finspaceconn; "no available finspace cluster found for ",id]; :`]; conn:@[.aws.get_kx_connection_string; cluster; {[id;e] .lg.e[`finspaceconn; "failed to get connection string for ",id," via aws api - ",e]; :`}[id;]]; :`$conn; }; ================================================================================ FILE: TorQ_code_handlers_ldap.q SIZE: 5,528 characters ================================================================================ // Functionality to authenticate user against LDAP server // User attempts are cached // This is used to allow .z.pw to be integrated with ldap \d .ldap enabled: @[value;`enabled;.z.o~`l64] / whether authentication is enabled lib: `$getenv[`KDBLIB],"/",string[.z.o],"/kdbldap"; / ldap library location debug: @[value;`debug;0i] / debug level for ldap library: 0i = none, 1i=normal, 2i=verbose servers: @[value;`servers; enlist `$"ldap://localhost:0"]; / symbol-list of <schema>://<host>:<port> blocktime: @[value;`blocktime; 0D00:30:00]; / time before blocked user can attempt authentication checklimit: @[value;`checklimit;3]; / number of attempts before user is temporarily blocked checktime: @[value;`checktime;0D00:05]; / period for user to reauthenticate without rechecking LDAP server buildDNsuf: @[value;`buildDNsuf;""]; / suffix used for building bind DN buildDN: @[value;`buildDN;{{"uid=",string[x],",",buildDNsuf}}]; / function to build bind DN version: @[value;`version;3]; / ldap version number out:{if[debug;:.lg.o[`ldap] x]}; err:{if[debug;:.lg.e[`ldap] x]}; initialise:{[lib] / initialise ldap library
.ldap.init:lib 2:(`kdbldap_init;2); .ldap.setOption:lib 2:(`kdbldap_set_option;3); .ldap.bind_s:lib 2:(`kdbldap_bind_s;4); .ldap.err2string:lib 2:(`kdbldap_err2string;1); .ldap.startTLS:lib 2:(`kdbldap_start_tls;1); .ldap.setGlobalOption:lib 2:(`kdbldap_set_global_option;2); .ldap.getOption:lib 2:(`kdbldap_get_option;2); .ldap.getGlobalOption:lib 2:(`kdbldap_get_global_option;1); .ldap.interactive_bind_s:lib 2:(`kdbldap_interactive_bind_s;5); .ldap.search_s:lib 2:(`kdbldap_search_s;8); .ldap.unbind_s:lib 2:(`kdbldap_unbind_s;1); r:.ldap.init[.ldap.sessionID; .ldap.servers]; if[0<>r;.ldap.err "Error initialising LDAP: ",.ldap.err2string[r]]; s:.ldap.setOption[.ldap.sessionID;`LDAP_OPT_PROTOCOL_VERSION;.ldap.version]; if[0<>s;.ldap.err "Error setting LDAP option: ",.ldap.err2string[s]]; }; sessionID:0i cache:([user:`$()]; pass:(); server:`$(); port:`int$(); time:`timestamp$(); attempts:`long$(); success:`boolean$(); blocked:`boolean$()); / create table to store login attempts unblock:{[usr] if[-11h<>type usr; :.ldap.out"username must be passed as a symbol"]; if[.ldap.cache[usr;`blocked]; update attempts:0, success:0b, blocked:0b from `.ldap.cache where user=usr; :.ldap.out "unblocked user ",string usr; ]; };
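For example, a blocked account can be inspected and cleared via the cache table and unblock (the user name below is hypothetical):
/ users currently locked out after exceeding .ldap.checklimit failed binds
select user,attempts,time,blocked from .ldap.cache where blocked
/ reset the attempt count and clear the block for that user
.ldap.unblock`jsmith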
Precision¶ Float precision¶ Precision of floats is a complex issue because floats (known as doubles in other programming languages) are actually binary rational approximations of real numbers. If you are concerned with precision, set \P 0 so that the console displays floats at full precision: you then see the value actually stored, rather than a rounded display. Due to the finite accuracy of the binary representation of floating-point numbers, the last decimal digit of a float is not reliable. This is not peculiar to kdb+. q)\P 0 q)1%3 0.33333333333333331 Efficient algorithms for complex calculations such as log and sine introduce imprecision. Moreover, even basic calculations raise issues of rounding. The IEEE floating-point spec addresses many such issues, but the topic is complex. Q takes this into account in its implementation of the equality operator = , which should actually be read as “tolerantly equal.” Roughly speaking, this means that the difference is relatively small compared to some acceptable representation error. This makes the following hold: q)r7:1%7 q)sum 7#r7 0.99999999999999978 q)1.0=sum 7#r7 1b Only zero is tolerantly equal to zero, and you can test any two numbers for intolerant equality with 0=x-y . Thus, we find: q)0=1.0-sum 7#r7 0b The following example appears inconsistent with this: q)r3:1%3 q)1=r3+r3+r3 1b q)0=1-r3+r3+r3 1b It is not. The quantity r3+r3+r3 is exactly 1.0. This is part of the IEEE spec, not q, and seems to be related to rounding conventions for binary floating point operations. The = operator uses tolerant equality semantics. Not all primitives do. q)96.100000000000009 = 96.099999999999994 1b q)0=96.100000000000009-96.099999999999994 0b q)deltas 96.100000000000009 96.099999999999994 96.100000000000009 -1.4210854715202004e-014 q)differ 96.100000000000009 96.099999999999994 10b q)96.100000000000009 96.099999999999994 ? 96.099999999999994 1 q)group 96.100000000000009 96.099999999999994 96.100000000000009| 0 96.099999999999994| 1 Not transitive Tolerant equality does not obey transitivity: q)a:96.099999999999994 q)b:96.10000000001 q)c:96.10000000002 q)a 96.099999999999994 q)b 96.100000000009999 q)c 96.100000000020003 q)a=b 1b q)b=c 1b q)a=c 0b The moral of this story is that we should think of floats as being “fuzzy” real values and never use them as keys or where precise equality is required – e.g., in group or ? . For those interested in investigating these issues in depth, we recommend the excellent exposition by David Goldberg, “What Every Computer Scientist Should Know About Floating-Point Arithmetic”. Q SIMD sum¶ The l64 builds of kdb+ now have a faster SIMD sum implementation using SSE. With the above paragraph in mind, it is easy to see why the results of the older and newer implementations may not match. Consider the task of calculating the sum of 1e-10*til 10000000 . The SIMD code is equivalent to the following (\P 0 ): q){x+y}over{x+y}over 0N 8#1e-10*til 10000000 4999.9995000000017 While the older, “direct” code yields: q){x+y}over 1e-10*til 10000000 4999.9994999999635 The observed difference is due to the fact that the order of addition is different, and floating-point addition is not associative.
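One way to reduce this sensitivity to the order of addition is compensated (Kahan) summation. The following is a minimal sketch in q, shown only for illustration; it is not a kdb+ built-in:
kahan:{[v] first {[st;x] s:st 0; c:st 1; y:x-c; t:s+y; (t;(t-s)-y)}/[(0f;0f);v]}
kahan 1e-10*til 10000000   / expected to track the exact 4999.9995 more closely than the plain left-to-right sum
The state carried through the fold is the pair (running sum; compensation term).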
Worth noting is that the left-to-right order is not in some way “more correct” than others, seeing as even reversing the order of the elements yields different results: q){x+y}over reverse 1e-10*til 10000000 4999.9995000000026 If you need to sum numbers with the most precision, you can implement a suitable algorithm, such as compensated summation (sketched above) or the methods discussed in “Accurate floating point summation” by Demmel et al. Comparison tolerance¶ Comparison tolerance is the precision with which two numbers are determined to be equal. It applies only where one or the other is a finite floating-point number, i.e. types real, float, and datetime (see Dates below). It allows for the fact that such numbers may be approximations to the exact values. For any other numbers, comparisons are done exactly. Formally, there is a comparison tolerance t such that if x or y is a finite floating-point number, then x=y is 1 if the magnitude of x-y does not exceed t times the larger of the magnitudes of x and y . t is set to \(2^{-43}\), and cannot be changed. In practice, the implementation is an efficient approximation to this test. Note that a non-zero value cannot equal 0, since for any non-zero x , the magnitude of x is greater than t times the magnitude of x . Thus 0=a-b tests for strict equality between a and b . Comparison tolerance is not transitive, and can cause problems for find and distinct . Thus, floats should not be used for database keys. For example: q)t:2 xexp -43 / comparison tolerance q)a:1e12 q)a=a-1 / a is not equal to a-1 0b q)t*a / 1 is greater than t*a 0.1136868 q)a:1e13 q)a=a-1 / a equals a-1 1b q)t*a / 1 is less than t*a 1.136868 q)0=a-(a-1) / a is not strictly equal to a-1 0b To see how this works, first set the print precision so that all digits of floating-point numbers are displayed. \P 18 The result of the following computation is mathematically 1.0, but the computed value is different because the addend 0.001 cannot be represented exactly as a floating-point number. q)x: 0 / initialize x to 0 q)do[1000;x+:.001] / increment x one thousand times by 0.001 q)x / the resulting x is not quite 1.000 1.0000000000000007 However, the expression x = 1 has the value 1b , and x is said to be tolerantly equal to 1: q)x=1 / does x equal 1? 1b Moreover, two distinct floating-point values x and y for which x = y is 1 are said to be tolerantly equal. No non-zero value is tolerantly equal to 0. Formally, there is a system constant \(E\) called the comparison tolerance such that two non-zero values \(a\) and \(b\) are tolerantly equal if: \(|a-b| \leq E \times \max(|a|,|b|)\) but in practice the implementation is an efficient approximation to this test. Note that according to this inequality, no non-zero value is tolerantly equal to 0. That is, if a=0 is 1 then a must be 0. To see this, substitute 0 for b in the above inequality and it becomes: \(|a| \leq E \times |a|\) which, since \(E\) is less than 1, can hold only if a is 0. Use¶ Besides Equal, comparison tolerance is used in the operators = < <= >= > ~ differ within And prior to V3.0 floor ceiling It is also used by the iterators Converge, Do and While. It is not used by other keywords that have tests for equality: ? distinct except group in inter union xgroup Sort keywords: asc desc iasc idesc rank xasc xdesc Examples¶ q)a:1f q)b:a-10 xexp -13 In the following examples, b is treated as equal to a , i.e.
equal to 1 : q)a=b 1b q)a~b 1b q)a>b 0b q)floor b /before V3.0, returned 1 0 In the following examples, b is treated not equal to a : q)(a,a)?b 2 q)(a,a) except b 1 1f q)distinct a,b 1 0.99999999999989997 q)group a,b 1 | 0 0.99999999999989997| 1 q)iasc a,b 1 0 Dates¶ The datetime type is based on float, and hence uses comparison tolerance, for example: q)a:2000.01.02 + sum 1000#1%86400 / add 1000 seconds to a date q)a 2000.01.02T00:16:40.000 q)b:2000.01.02T00:16:40.000 / enter same datetime q)a=b / values are tolerantly equal 1b q)0=a-b / but not strictly equal 0b Other temporal types, including the new timestamp and timespan types in V2.6, are based on int or long. These do not use comparison tolerance, and are therefore appropriate for database keys.
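For contrast, a brief sketch of the analogous calculation with the integer-based timestamp type, where both the arithmetic and the comparison are exact:
q)a:2000.01.02D00:00:00+sum 1000#1000000000 / add 1000 seconds, one second (in nanoseconds) at a time
q)b:2000.01.02D00:16:40.000000000
q)a=b
1b
q)0=a-b / strictly equal as well
1b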
_ Drop¶ Drop items from a list, entries from a dictionary or columns from a table. x _ y _[x;y] _ (drop) is a multithreaded primitive. Drop leading or trailing items¶ Where x is an int atomy a list or dictionary returns y without the first or last x items. q)5_0 1 2 3 4 5 6 7 8 /drop the first 5 items 5 6 7 8 q)-5_0 1 2 3 4 5 6 7 8 /drop the last 5 items 0 1 2 3 q)1 _ `a`b`c!1 2 3 b| 2 c| 3 Drop from a string¶ q)b:"apple: banana: cherry" q)(b?":") _ b / find the first ":" and remove the prior portion of the sentence ": banana: cherry" Drop selected items¶ Where x is a list or dictionaryy is an index or key ofx returns x without the items or entries at y . q)0 1 2 3 4 5 6 7 8_5 /drop the 5th item 0 1 2 3 4 6 7 8 q)(`a`b`c!1 2 3)_`a /drop the entry for `a b| 2 c| 3 Drop keys from a dictionary¶ Where x is an atom or vector of keys toy y is a dictionary returns y without the entries for x . q)`a _ `a`b`c!1 2 3 b| 2 c| 3 q)`a`b _ `a`b`c!1 2 3 c| 3 q)(`a`b`c!1 2 3) _ `a`b 'type Q for Mortals: §5. Dictionaries Dropping dictionary entries with integer arguments With dictionaries, distinguish the roles of integer arguments to drop. q)d:100 200!\`a\`b q)1 _ d /drop the first entry 200| b q)d _ 1 /drop where key=1 100| a 200| b q)d _ 100 /drop where key=100 200| b q)enlist[1] _ d /drop where key=1 100| a 200| b q)enlist[100] _ d /drop where key=100 200| b q)100 _ d /drop first 100 entries Drop columns from a table¶ Where x is a symbol vector of column namesy is a table returns y without columns x . q)t:([]a:1 2 3;b:4 5 6;c:`d`e`f) q)`a`b _ t c - d e f q)t _ `a`b 'type q)`a _ t 'type q)t _ `a 'type Drop in place Assign through Drop to delete in place. q)show d:`a`b`c`x!(1;2 3;4;5) a| 1 b| 2 3 c| 4 x| 5 q)d _:`x q)d a| 1 b| 2 3 c| 4 dsave ¶ Write global tables to disk as splayed, enumerated, indexed kdb+ tables. x dsave y dsave[x;y] Where x is the save path as a file symbol atom or vectory is one or more table names as a symbol atom or vector save the table/s and returns the list of table names. (Since V3.2 2014.05.07.) The first column of each table saved has the parted attribute applied to it. If the save path is a list, the first item is the HDB root (where the sym file, if any, will be stored), while the remaining items are a path within the HDB (e.g. a partition). Roughly the same functionality as the combination of .Q.en and set or .Q.dpft , but in a simpler form. q)t:flip`sym`price`size!100?'(-10?`3;1.0;10) q)q:flip`sym`bid`ask`bsize`asize!900?'(distinct t`sym;1.0;1.0;10;10) q)meta t c | t f a -----| ----- sym | s price| f size | j q)meta q c | t f a -----| ----- sym | s bid | f ask | f bsize| j asize| j q)type each flip t sym | 11 price| 9 size | 7 q)type each flip q sym | 11 bid | 9 ask | 9 bsize| 7 asize| 7 q)`:/tmp/db1 dsave`sym xasc'`t`q `t`q q)\l /tmp/db1 q)meta t c | t f a -----| ----- sym | s p price| f size | j q)meta q c | t f a -----| ----- sym | s p bid | f ask | f bsize| j asize| j q)type each flip t sym | 20 price| 9 size | 7 q)type each flip q sym | 20 bid | 9 ask | 9 bsize| 7 asize| 7 In the following, the left argument is a list, of which the second item is a partition name. 
q)t:flip`sym`price`size!100?'(-10?`3;1.0;10) q)q:flip`sym`bid`ask`bsize`asize!900?'(distinct t`sym;1.0;1.0;10;10) q)meta t c | t f a -----| ----- sym | s price| f size | j q)meta q c | t f a -----| ----- sym | s bid | f ask | f bsize| j asize| j q)type each flip t sym | 11 price| 9 size | 7 q)type each flip q sym | 11 bid | 9 ask | 9 bsize| 7 asize| 7 q)`:/tmp/db2`2015.01.01 dsave`sym xasc'`t`q `t`q q)\l /tmp/db2 q)meta t c | t f a -----| ----- date | d sym | s p price| f size | j q)meta q c | t f a -----| ----- date | d sym | s p bid | f ask | f bsize| j asize| j 2: Dynamic Load¶ Load C shared objects fs 2: (cfn;rnk) 2:[fs;(cfn;rnk)] Where fs is a file symbolcfn is the name of a C function (symbol)rnk its rank (int) returns a function that calls it. Suppose we have a C function in cpu.so with the prototype K q_read_cycles_of_this_cpu(K x); assign it to read_cycles : read_cycles:`cpu 2:(`q_read_cycles_of_this_cpu;1) If the shared library, as passed, does not exist, kdb+ will try to load it from $QHOME/os , where os is the operating system and architecture acronym, e.g. l64 , w64 , etc. If using a relative path which does not resolve to reside under $QHOME/os , ensure that LD_LIBRARY_PATH contains the required absolute search path for that library. (On Windows, use PATH instead of LD_LIBRARY_PATH .) Since 3.6 2018.08.24 loading shared libraries via 2: resolved to a canonical path prior to load via the OS. This caused issues for libs whose run-time path was relative to a sym-link. From 4.1t 2024.01.11 it resolves to an absolute path only, without resolving sym-links. each , peach ¶ Iterate a unary v1 each x each[v1;x] v1 peach x peach[v1;x] (vv)each x each[vv;x] (vv)peach x peach[vv;x] Where v1 is a unary applicable valuevv is a variadic applicable value applies v1 or vv as a unary to each item of x and returns a result of the same length. That is, the projections each[v1;] , each[vv;] , peach[v1;] , and peach[vv;] are uniform functions. q)count each ("the";"quick";" brown";"fox") 3 5 5 3 q)(+\)peach(2 3 4;(5 6;7 8);9 10 11 12) 2 5 9 (5 6;12 14) 9 19 30 42 each and peach perform the same computation and return the same result. peach will divide the work between available secondary tasks. Changes since 4.1t 2024.01.04 peach workload distribution methodology changed to dynamically redistribute workload and allow nested invocation. The limitations on nesting have been removed, so peach (and multi-threaded primitives) can be used inside peach. To facilitate this, round-robin scheduling has been removed. Even though the initial work is still distributed in the same manner as before for compatibility, the workload is dynamically redistributed if a thread finishes its share before the others. each is a wrapper for the Each iterator. peach is a wrapper for the Each Parallel iterator. It is good q style to use each and peach for unary values. each is redundant with atomic functions. (Common qbie mistake.) Maps for uses of Each with binary and higher-rank values .Q.fc parallel on cut Parallel processing Table counts in a partitioned database Q for Mortals A.68 peach Higher-rank values¶ peach applies only unary values. For a values of rank ≥2, use Apply to project v as a unary value. For example, suppose m is a 4-column matrix and each row has values for the arguments of v4 . Then .[v4;]peach m will apply v4 to each list of arguments. Alternatively, suppose t is a table in which columns b , c , and a are arguments of v3 . Then .[v3;]peach flip t `b`c`a will apply v3 to the arguments in each row of t . 
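For example, a minimal sketch (v3 and t here are illustrative definitions, not library objects):
q)v3:{x+y*z}
q)t:([]a:1 2 3;b:10 20 30;c:100 200 300)
q).[v3;]peach flip t`b`c`a
110 420 930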
Blocked within peach ¶ hopen socket websocket open socket broadcast (25!x) amending global variables load master decryption key (-36!) And any system command which might cause a change of global state. Generally, do not use a socket within peach , unless it is encapsulated via one-shot sync request or HTTP client request (TLS/SSL support added in 4.1t 2023.11.10). Erroneous socket usage is blocked and signals a nosocket error. If you are careful to manage your file handles/file access so that there is no parallel use of the same handle (or file) across threads, then you can open and close files within peach . Streaming execute (-11! ) should also be fine. However updates to global variables are not possible, so use cases might be quite restricted within peach . ej ¶ Equi join ej[c;t1;t2] Where c is a list of column names (or a single column name)t1 andt2 are tables returns t1 and t2 joined on column/s c . The result has one combined record for each row in t2 that matches t1 on columns c . q)t:([]sym:`IBM`FDP`FDP`FDP`IBM`MSFT;price:0.7029677 0.08378167 0.06046216 0.658985 0.2608152 0.5433888) q)s:([]sym:`IBM`MSFT;ex:`N`CME;MC:1000 250) q)t sym price --------------- IBM 0.7029677 FDP 0.08378167 FDP 0.06046216 FDP 0.658985 IBM 0.2608152 MSFT 0.5433888 q)s sym ex MC ------------- IBM N 1000 MSFT CME 250 q)ej[`sym;s;t] sym price ex MC ----------------------- IBM 0.7029677 N 1000 IBM 0.2608152 N 1000 MSFT 0.5433888 CME 250 Duplicate column values are filled from t2 . q)t1:([] k:1 2 3 4; c:10 20 30 40) q)t2:([] k:2 2 3 4 5; c:200 222 300 400 500; v:2.2 22.22 3.3 4.4 5.5) q)ej[`k;t1;t2] k c v ----------- 2 200 2.2 2 222 22.22 3 300 3.3 4 400 4.4 Joins Q for Mortals §9.9.5 Equi Join ema ¶ Exponential moving average x ema y ema[x;y] Where y is a numeric listx is a numeric atom or list of lengthcount y returns the exponentially-weighted moving averages (EWMA, also known as exponential moving average , EMA) of y , with x as the smoothing parameter. ema is a uniform function. Example: An impulse response with decay of ⅓. q)ema[1%3;1,10#0] 1 0.6666667 0.4444444 0.2962963 0.1975309 0.1316872 0.0877915 0.05852766 0.03901844 0.02601229 0.01734153 Example: 10-day EMA on price, as at stockcharts.com. Smoothing parameter for EMA over \(N\) points is defined as \(\frac{2}{1+N}\). q)p:22.27 22.19 22.08 22.17 22.18 22.13 22.23 22.43 22.24 22.29 22.15 22.39 22.38 22.61 23.36 24.05 23.75 23.83 23.95 23.63 23.82 23.87 23.65 23.19 23.1 23.33 22.68 23.1 22.4 22.17 q)(2%1+10)ema p 22.27 22.25545 22.22355 22.21382 22.20767 22.19355 22.20017 22.24196 22.2416 22.2504 22.23215 22.26085 22.28251 22.34206 22.52714 22.80402 22.97602 23.13129 23.28014 23.34375 23.43034 23.51028 23.53568 23.47283 23.40505 23.3914 23.26206 23.23259 23.08121 22.91554 ! Enkey, Unkey¶ Simple to keyed table and vice-versa ! Enkey¶ Make a keyed table from a simple table. i!t ![i;t] Where i is a positive integert is a simple table, or a handle to one returns t with the first i columns as key q)t:([]a:1 2 3;b:10 20 30;c:`x`y`z) q)2!t a b | c ----| - 1 10| x 2 20| y 3 30| z ! Unkey¶ Remove the key/s from a table. 0!t ![0;t] Where t is a keyed table, or a handle to one, returns t as a simple table, with no keys. q)t:([a:1 2 3]b:10 20 30;c:`x`y`z) q)0!t a b c ------ 1 10 x 2 20 y 3 30 z Amending in place¶ For both Enkey and Unkey, if t is a table-name, ! amends the table and returns the name. q)t:([a:1 2 3]b:10 20 30;c:`x`y`z) q)0!`t `t q)t a b c ------ 1 10 x 2 20 y
// ### qclone // NOTE: offloadHttp doesn't work properly until the // hclose fix of 2.8.20120420. .finos.sys.errorTrapAt:@[;;] // Add help. .help.DIR[`qclone]:`$"offload clients/tasks to clone process(es)" .finos.qclone.priv.help: enlist "Support for offloading work to clone processes." // Select which kind of serialization to use. // Compressed only works with clients that speak the kdb+2.6 protocol. .finos.qclone.compressedSerialization:0b // Known event types. .finos.qclone.EVENT_TYPES:`zpo`zph`zpg`spawn // Track what we're activating. // Don't want multiple layers of the same shim. // zpo and zpg handling are mutually exclusive. .finos.qclone.priv.activated:`symbol$() // Called in child when child fork is complete. // Shim to hook in additional actions. .finos.qclone.childZpoHandler:{[]} // Called in child when child will exit. // Shim to hook in additional actions. .finos.qclone.childZpcHandler:{[]} // Called in child right before evaluating expression. // Shim to hook in additional actions. .finos.qclone.childZphHandler:{[]} // Called in child right before evaluating expression. // Shim to hook in additional actions. .finos.qclone.childZpgHandler:{[]} // Called in child right before evaluating lambdaThatReturnsStatusCode. // Shim to hook in additional actions. .finos.qclone.childSpawnHandler:{[]} // Called in child right before evaluating lambdaThatReturnsString. // Shim to hook in additional actions. .finos.qclone.childOffloadHttpHandler:{[]} // Functions in parent for child events. // Useful for maintaining a pool of children in case some vanish. // Called in parent when child process is created. // Shim to hook in additional actions. // @param newChildPid PID of newly-created child. // @param eventType One of `zpo`zph`zpg to indicate event which triggered fork(2). // @return Nothing. .finos.qclone.newChildHandler:{[newChildPid;eventType]} // Called in parent when child process is reaped. // Shim to hook in additional actions. // @param oldChildWaitDict Dictionary like .finos.clib.wait_PROTO with child termination information. // @param eventType One of `zpo`zph`zpg`spawn to indicate event which triggered fork(2). // @return Nothing. .finos.qclone.oldChildHandler:{[oldChildWaitDict;eventType]} // Table for tracking child processes created. .finos.qclone.priv.childProcesses:([pid:`int$()]eventType:`symbol$();startTime:`timestamp$()) // Function to return childProcesses table to reduce likelihood // of accidental mutation. // @return Value of .finos.qclone.priv.childProcesses. .finos.qclone.getChildProcesses:{[] .finos.qclone.priv.childProcesses } // Function which receives the table of possibly-live children. // The wrapper functions on .z.po, .z.pc., .z.ph, .z.pg aren't // going to be removed. So further connections will mess things up. // Only useful for cleanup on process exit. // @param childProcessesTable Last-known state of .finos.qclone.priv.childProcesses . // @return Nothing. .finos.qclone.unloadHandler:{[childProcessesTable]} // Track children created and fire user event handler. // @param newChildPid PID of child process created by fork(2). // @return Nothing. .finos.qclone.priv.newChild:{[newChildPid;eventType] `.finos.qclone.priv.childProcesses upsert (newChildPid;eventType;.z.P); .[.finos.qclone.newChildHandler ;(newChildPid;eventType) ;{[x].finos.log.error".finos.qclone.newChildHandler: ", " newChildPid=",string[newChildPid],", eventType=",string[eventType], ", signaled: ",-3!x} ]; } // Track children reaped and fire user event handler. 
// @param oldChildWaitDict Dictionary like .finos.clib.wait_PROTO with child termination information. // @return Nothing. .finos.qclone.priv.oldChild:{[oldChildWaitDict] oldChildPid:oldChildWaitDict`pid; eventType:.finos.qclone.priv.childProcesses[oldChildPid]`eventType; .finos.log.debug".finos.qclone.priv.oldChild: oldChildPid=",string[oldChildPid],", eventType=",string eventType; delete from`.finos.qclone.priv.childProcesses where pid=oldChildPid; .[.finos.qclone.oldChildHandler ;(oldChildWaitDict;eventType) ;{[oldChildWaitDict;eventType;signal].finos.log.error".finos.qclone.oldChildHandler: ", " oldChildWaitDict=",(-3!oldChildWaitDict),", eventType=",string[eventType], ", signaled: ",-3!signal}[oldChildWaitDict;eventType;] ]; } // Dummy dictionary in case waitpid(...) fails. .finos.qclone.priv.DUMMY_WAIT_NOHANG_DICT:enlist[`pid]!enlist -1 // Take the opportunity to clean up zombie children. // @return Nothing. .finos.qclone.reap:{[] while[((oldChildWaitDict:@[.finos.clib.waitNohang;(::);.finos.qclone.priv.DUMMY_WAIT_NOHANG_DICT])`pid) > 0 ;.finos.qclone.priv.oldChild oldChildWaitDict]; } .finos.qclone.priv.setupChildContextCommon:{ // hclose all non-client handles so we don't // consume anything destined for the parent. .finos.log.debug".finos.qclone.priv.setupChildContext: .z.W=",(-3!.z.W),", .z.w=",(-3!.z.w); .finos.qclone.isClone:1b; fds:except[;0 1 2i]"I"$string key `$":/proc/",string[.z.i],"/fd"; @[hclose;;(::)]each except[;.z.w]fds,key .z.W; // Clear the list of children, since they're my siblings now. delete from `.finos.qclone.priv.childProcesses; }; // Close all file descriptors except the one to the client. // Prevents interference with I/O streams on parent process. // @return Nothing. .finos.qclone.priv.setupChildContext:{[] system"p 0"; // Don't interfere with incoming connections. .finos.qclone.priv.setupChildContextCommon[]; }; // Do some accounting and fire off event handlers. // Close off .z.w to avoid interfering with the child's communication with the client. // @param newChildPid PID of newly-created child. // @param eventType One of `zpo`zpg since this function is shared by both kinds of events. // @return Nothing. .finos.qclone.priv.forkedParent:{[newChildPid;eventType] info:".z.i=",string[.z.i],", .z.w=",string[.z.w], ", newChildPid=",string[newChildPid],", eventType=",string[eventType]; .finos.log.debug".finos.qclone.priv.forkedParent0: ",info; .finos.qclone.priv.newChild[newChildPid;eventType]; // hclose .z.w only for .z.po and .z.pg handling. // Returning anything (even generic null (::)) results in serializable // data which could corrupt the stream between the child and the client. // Closing .z.w on .z.ph confuses the parent for the next HTTP connection. // (Probably a q bug.) // Spawn doesnt make use of .z.w. // offloadHttp doesn't hclose properly until 2.8.20120420. if[eventType in`zpo`zpg`offloadHttp ; @[hclose;.z.w;(::)] ]; .finos.log.debug".finos.qclone.priv.forkedParent1: ",info; } // Handler for .z.pc to make child to exit when client disconnects. // @return Never. .finos.qclone.priv.forkedChildZpc:{[] .finos.log.debug".finos.qclone.priv.forkedChildZpc: .z.i=",string .z.i; @[.finos.qclone.childZpcHandler;(::);{[x].finos.log.error".finos.qclone.childZpcHandler signaled: ",-3!x}]; exit 0; } // After fork, child is handed off to this function to manage // file descriptors, do some accounting, and fire off user handlers. // @returns Nothing. 
.finos.qclone.priv.forkedChildZpo:{[] info:".z.i=",string[.z.i],", .z.w=",string[.z.w]; .finos.log.debug".finos.qclone.priv.forkedChildZpo0: ",info; .finos.qclone.priv.setupChildContext[]; // Install a handler to exit on close. $[-11h=type key`.z.pc // Handler exists? // Shim. Do forkedChildZpc last because it exits. ;.z.pc:{[oldZpc;w]@[oldZpc;w;(::)];.finos.qclone.priv.forkedChildZpc .z.w}[.z.pc;] // Assign. ;.z.pc:.finos.qclone.priv.forkedChildZpc ]; // Call handler after handles are all set up. @[.finos.qclone.childZpoHandler;(::);{[x].finos.log.error".finos.qclone.childZpoHandler signaled: ",-3!x}]; .finos.log.debug".finos.qclone.priv.forkedChildZpo1: ",info; } // Handler to call from .z.po to associate client session with a clone. // @param ignoredW Handler on .z.po receives handle for client. But we don't use it. // @return Nothing. .finos.qclone.priv.forkConnectionZpo:{[ignoredW] rc:.finos.clib.fork[]; $[rc>0 ;.finos.qclone.priv.forkedParent[rc;`zpo] ;.finos.qclone.priv.forkedChildZpo[] ]; // .z.po doesn't return anything. } .finos.qclone.priv.help,:( ".finos.qclone.activateZpo[]"; " Hooks up .z.po handler for clone-per-session capability.") // Hook up .z.po handler for clone-per-session capability. // @return Nothing. .finos.qclone.activateZpo:{[] if[`zpo in .finos.qclone.priv.activated ; : (::) // Already activated. ]; if[`zpg in .finos.qclone.priv.activated ; '"activateZpg already active and mutually exclusive" ]; $[-11h=type key `.z.po // Handler exists? ;.z.po:{[oldZpo;w]@[oldZpo;w;(::)];.finos.qclone.priv.forkConnectionZpo w}[.z.po;] // Assign. ;.z.po:.finos.qclone.priv.forkConnectionZpo ]; .finos.qclone.priv.activated,:`zpo; } // After fork, child is handed off to this function to manage // file descriptors, do some accounting, and fire off user handlers. // @param oldZph Shimmed http renderer. Want to execute in the child. // @param x Whatever the original .z.ph handler rendered into text. // @return Never. .finos.qclone.priv.forkedChildZph:{[oldZph;x] info:".z.i=",string[.z.i],", .z.w=",string[.z.w],", x=",(-3!x); .finos.log.debug".finos.qclone.priv.forkedChildZph0: ",info; .finos.qclone.priv.setupChildContext[]; // Call handler after handles are all set up. @[.finos.qclone.childZphHandler;(::);{[x].finos.log.error".finos.qclone.childZphHandler signaled: ",-3!x}]; // Process the input. r:@[oldZph;x;{[x]$[10h=type x;x;-3!x]}]; // Can't return the string since it makes it more complicated // to figure out when to exit. // Force feed the string down the handle. .finos.qclone.priv.blockingWriteAndClose r; .finos.log.debug".finos.qclone.priv.forkedChildZph1: ",info; exit 0; } // Handler to call from .z.ph to have query processed by a clone. // @param x Whatever the original .z.ph handler rendered into text. // @return Empty string to avoid interfering with the child's communication with the client. .finos.qclone.priv.forkConnectionZph:{[oldZph;x] rc:.finos.clib.fork[]; $[rc>0 ;.finos.qclone.priv.forkedParent[rc;`zph] ;.finos.qclone.priv.forkedChildZph[oldZph;x] // Will exit. ] "" } .finos.qclone.priv.help,:( ".finos.qclone.activateZph[]"; " Hooks up .z.ph handler for clone-per-request capability.") // Hook up .z.ph handler for clone-per-query capability. // @return Nothing. .finos.qclone.activateZph:{[] if[`zph in .finos.qclone.priv.activated ; : (::) // Already activated. ]; .z.ph::.finos.qclone.priv.forkConnectionZph[.z.ph;]; .finos.qclone.priv.activated,:`zph; } // Take q query result and set the message type to // indicate that this is response to a sync request. 
// @param x Value to return to the client. // @return Byte vector with serialized representation. .finos.qclone.priv.serialize:{[x] r:$[.finos.qclone.compressedSerialization;-18;-8]!x; // Poke in the byte that says this is a result message. // http://code.kx.com/wiki/Reference/ipcprotocol#serializing_an_integer_of_value_1 r[1]:0x02; r }
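As a rough illustration of what the byte poke does (uncompressed -8! path; the value serialized here is arbitrary):
q)r:-8!42 / serialize; byte 1 of the IPC header is the message type (0x00 = async)
q)r 1
0x00
q)r[1]:0x02 / mark as a response (reply to a sync request)
q)-9!r / payload still deserializes to the original value
42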
// return the details of the current process getdetails:{(.z.f;.z.h;system"p";@[value;`.proc.procname;`];@[value;`.proc.proctype;`];@[value;(`.proc.getattributes;`);()!()])} / add session behind a handle addhw:{[hpuP;W] // Get the information around a process info:`f`h`port`procname`proctype`attributes!(@[W;({$[`getdetails in key`.servers;.servers.getdetails[];(.z.f;.z.h;system"p";`;`;$[`getattributes in key`.proc;.proc.getattributes[];()!()])]};`);(`;`;0Ni;`;`;()!())]); if[0Ni~info`port;'"remote call failed on handle ",string W]; if[null name:info`procname;name:`$last("/"vs string info`f)except enlist""]; if[0=count name;name:`default]; if[null hpuP;hpuP:.servers.formathp[info`h;info`port;`tcp^.servers.SOCKETTYPE info`proctype;info`proctype;info`procname]]; // If this handle already has an entry, delete the old entry delete from `.servers.SERVERS where w=W; addnthawc[name;info`proctype;hpuP;info`attributes;W;0b]} addw:addhw[`] reset:init:{delete from`.servers.SERVERS} checkw:{{x!@[;"1b";0b]each x}exec w from`.servers.SERVERS where .dotz.liveh w,w in x} / after getting new servers run retry to open connections retry:{retryrows exec i from `.servers.SERVERS where not .dotz.liveh0 w,not proctype=`discovery} retrydiscovery:{ if[count d:exec i from `.servers.SERVERS where proctype=`discovery,not ({any .dotz.liveh0 x};w) fby hpup, i=(first;i) fby hpup; .lg.o[`conn;"attempting to connect to discovery services"]; retryrows d; // register with the newly opened discovery services if[DISCOVERYREGISTER and count h:exec w from .servers.SERVERS[d] where .dotz.liveh w; .lg.o[`conn;"registering with discovery services"]; @[;(`..register;`);()] each neg h]; if[CONNECTIONSFROMDISCOVERY and count h; registerfromdiscovery[$[`discovery in CONNECTIONS;(CONNECTIONS,()) except `discovery;CONNECTIONS];0b]]; ]} // Called by the discovery service when it restarts autodiscovery:{if[DISCOVERYRETRY>0; .servers.retrydiscovery[]]} // Attempt to make a connection for specified row ids retryrows:{[rows] //function a checks if the handle passed is empty and also invokes checknontorqattr function //which checks if .proc.getattributes is defined on the nontorqprocess and executes it //only if it is defined a:{$[not null x;@[x;({.proc.getattributes[]};::);()!()];()!()]}; // opencon, amends global tables, cannot be used inside of a select statement handles:.servers.opencon each exec .servers.getconnectionstring'[proctype;procname;hpup] from .servers.SERVERS where i in rows; update lastp:.proc.cp[],w:handles from`.servers.SERVERS where i in rows; update attributes:a each w,startp:?[null w;0Np;.proc.cp[]] from`.servers.SERVERS where i in rows; if[count connectedrows:select from`.servers.SERVERS where i in rows,.dotz.liveh0 w; connectcustom[connectedrows]]} // get the connection string to connect to a given process // in most cases this is just the hpup, unless we are connecting to a finspace process // in this case we need to generate the connection string using the AWS api getconnectionstring:{[proctype;procname;hpup] if[`finspace~ `tcp ^ .servers.SOCKETTYPE proctype; :.servers.getfinspaceconn[procname]]; :hpup; }; // user definable function to be executed when a service is reconnected. Also performed on first connection of that service. 
// Input is the line(s) from .servers.SERVERS corresponding to the newly (re)connected service connectcustom:@[value;`.servers.connectcustom;{[connectedrows]}] // close handles and remove rows from the table removerows:{[rows] @[hclose;;()] each .servers.SERVERS[rows][`w] except 0 0Ni; @[.z.pc;;()] each .servers.SERVERS[rows][`w] except 0 0Ni; // needed for finspace cleanup delete from `.servers.SERVERS where i in rows} // Create some connections and optionally connect to them register:{[connectiontab;proc;connect] {addnthawc[x`procname;x`proctype;x`hpup;()!();0Ni;0b]}each distinct select from connectiontab where proctype=proc; // automatically connect if[connect; $[`discovery=proc;retrydiscovery[];retry[]]]}; // Query a discovery service, and get the list of available services // Does not attempt to re-open any discovery services querydiscovery:{[procs] if[0=count procs;:()]; .lg.o[`conn;"querying discovery services for processes of types "," " sv string procs,()]; h:exec w from getservers[`proctype;`discovery;()!();0b;0b]; $[0=count h; [.lg.o[`conn;"no discovery services available"];()]; raze @[;(`getservices;procs;SUBSCRIBETODISCOVERY);()] each h]} // register processes from the discovery service // if connect is true, will try to registerfromdiscovery:{[procs;connect] if[`discovery in procs; '"cannot use registerfromdiscovery to locate discovery services"]; .lg.o[`conn;"requesting processes from discovery service"]; res:querydiscovery[procs]; if[0=count res; .lg.o[`conn;"no processes found"]; :()]; // add the processes addprocs[res;procs;connect];} addprocs:{[connectiontab;procs;connect] connectiontab:formatprocs[delete split from update host:hpup^`$last each -1 _' split, port:"I"$last each split from update split:{":" vs string x}each hpup from connectiontab]; // filter out any we already have - same name,type and hpup res:select from connectiontab where not ([]procname;proctype;hpup) in select procname,proctype,hpup from .servers.SERVERS; // we've dropped some items - maybe there are updated attributes if[not count[res]=count connectiontab; if[`attributes in cols connectiontab; .servers.SERVERS:.servers.SERVERS lj 3!select procname,proctype,hpup,attributes from connectiontab where not ([]procname;proctype;hpup) in select procname,proctype,hpup from .servers.SERVERS]] // if we have a match where the hpup is the same, but different name/type, then remove the old details removerows exec i from `.servers.SERVERS where hpup in exec hpup from res; register[res;;connect] each $[procs~`ALL;exec distinct proctype from res;procs,()]; addprocscustom[res;procs]} // addprocscustom is to allow bespoke extensions when adding processes addprocscustom:@[value;`.servers.addprocscustom;{{[connectiontab;procs]}}] // used to handle updates from the discovery service // procupdatecustom is used to extend the functionality - do something when the service has been updated procupdate:{[procs] addprocs[procs;exec distinct proctype from procs;0b];} // refresh the attribute registration with each of the discovery servers // useful for things like HDBs where the attributes may periodically change refreshattributes:{ retrydiscovery[]; (neg exec w from .servers.getservers[`proctype;`discovery;()!();0b;0b])@\:(`..register;`); } // return true if unix domain sockets can be used domainsocketsenabled:{[] // unix domain sockets only works on unix and not windows notwin:not .z.o like "w*"; // v3.4 brought in the first version of unix domain sockets ipc iskdbv:3.4<=.z.K; :notwin and iskdbv; } // format hpup from procs 
table, take into account ipc type // IPCTYPE [-11h] (`tcp;`tcps;`unix); formathp:{[HOST;PORT;IPCTYPE;PROCTYPE;PROCNAME] ipctype:IPCTYPE; isunixsocket:ipctype = `unix; notsamebox:not any HOST in `localhost,.z.h; host:string $[HOST=`localhost;.z.h;HOST]; port:string PORT; /// Determine whether socket connection is valid // revert socket to tcp; if[isunixsocket and notsamebox; .lg.w[`formathp;"Expects to connect via domain sockets, but host is not on the same machine. Reverting IPC mechanism to TCP"]; ipctype:`tcp; ]; if[isunixsocket and not domainsocketsenabled[]; .lg.w[`formathp;"Domain sockets are not enabled for this system. Reverting IPC mechanism from to TCP"]; ipctype:`tcp; ]; /// Format hpup file handle if[ipctype = `tcp; hpup:lower `$":",host,":",port; ]; if[ipctype = `tcps; hpup:lower `$":tcps://",host,":",port; ]; if[ipctype = `unix; hpup:lower `$":unix://",port; ]; if[ipctype = `finspace; // we don't want the discovery generating connection strings as they expire. Each cluster will generate their own // assuming a unique combination of proctype and procname per cluster hpup:`$":"sv string PROCTYPE,PROCNAME; ]; :hpup; } // do full formatting of proc table formatprocs:{[PROCS] procs:update ipctype:`tcp^.servers.SOCKETTYPE[proctype] from PROCS; procs:update hpup:.servers.formathp'[host;port;ipctype;proctype;procname] from procs; :procs; } // given a hpup, return its ipc type (`tcp;`tcps;`unix) getipctype:{[HPUP] tokens:`unix`tcps!(":unix://*";":tcps://*"); :`tcp^first where string[HPUP] like/: tokens; } // called at start up startup:{ // correctly format procs and hpup procstab::procs:formatprocs .proc.readprocs .proc.file; nontorqprocesstab::formatprocs $[count key NONTORQPROCESSFILE;.proc.readprocs NONTORQPROCESSFILE;0#procs]; // If DISCOVERY servers have been explicity defined if[count .servers.DISCOVERY; if[not null first .servers.DISCOVERY; if[count select from procs where hpup in .servers.DISCOVERY; .lg.e[`startup; "host:port in .servers.DISCOVERY list is already present in data read from ",string .proc.file]]; procs,:([]host:`;port:0Ni;proctype:`discovery;procname:`;hpup:.servers.DISCOVERY)]]; // Remove any processes that have an active connection connectedprocs: select procname, proctype, hpup from SERVERS; procs: delete from procs where ([] procname; proctype; hpup) in connectedprocs; nontorqprocs: delete from nontorqprocesstab where ([] procname; proctype; hpup) in connectedprocs; // if there aren't any processes left to connect to, then escape if[not any count each (procs;nontorqprocs); .lg.o[`conn;"No new processes to connect to. Escaping..."];:()]; if[CONNECTIONSFROMDISCOVERY or DISCOVERYREGISTER; register[procs;`discovery;0b]; retrydiscovery[]]; if[not CONNECTIONSFROMDISCOVERY; register[procs;;0b] each $[CONNECTIONS~`ALL;exec distinct proctype from procs;CONNECTIONS]]; if[TRACKNONTORQPROCESS;register[nontorqprocs;;0b] each $[CONNECTIONS~`ALL;exec distinct proctype from nontorqprocs;CONNECTIONS]]; // try and open dead connections retry[]} // Check if required processes all connected reqprocsnotconn:{[requiredprocs;typeorname] // parse of exec typeorname from .servers.SERVERS where .dotz.liveh[w] not all requiredprocs in ?[`.servers.SERVERS;enlist (`.dotz.liveh;`w);();typeorname] }; // Check if required process types all connected reqproctypesnotconn:reqprocsnotconn[;`proctype];
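For example, a quick sketch of how .servers.getipctype classifies connection strings (host names below are hypothetical):
q).servers.getipctype each `:myhost:5010`:tcps://myhost:5011`:unix://5012
`tcp`tcps`unix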
gwConnected:{ -1"GW connected"; .finos.init.provide`gwConnected; }; doProcess:{ -1"Doing something with both tp and gw..."; }; .finos.init.add[`tpConnected`gwConnected;`doProcess;()]; connectToTp:{ //simulate connecting to an external service .finos.timer.addRelativeTimer[{tpConnected[]};00:00:00.1]; }; connectToGw:{ //simulate connecting to an external service .finos.timer.addRelativeTimer[{gwConnected[]};00:00:00.2]; }; main:{ connectToTp[]; connectToGw[]; }; main[]; ================================================================================ FILE: kdb_tests_inithook_inithook2.q SIZE: 918 characters ================================================================================ \l timer/timer.q \l inithook/inithook.q //Synchronous inithook example. //This can be useful for large projects where the initialization code can be split //across multiple files but there is a dependency between each bit. Inithook makes it //easier to split and move this code around different files while also maintaining //the correct execution order. //These 3 setup steps could be in separate files. globalSetup1:{`..a set params[`a]}; .finos.init.add[`params;`globalSetup1;`globalSetup]; globalSetup2:{`..b set params[`b]}; .finos.init.add[`params;`globalSetup2;`globalSetup]; globalSetup3:{`..c set params[`c]}; .finos.init.add[`params;`globalSetup3;`globalSetup]; //This should be in the main file. doProcess:{ -1"Processing... a=",string[a]," b=",string[b]," c=",string[c]; }; .finos.init.add[`globalSetup;`doProcess;()]; main:{ .finos.init.setGlobal[`params;`a`b`c!1 2 3]; }; main[]; ================================================================================ FILE: kdb_tests_psutil_test.q SIZE: 132 characters ================================================================================ .finos.dep.loadScriptIn["finos/kdb";"psutil/psutil.q"] .finos.psutil.memory_full_info .z.i .finos.psutil.memory_fraction[`rss].z.i ================================================================================ FILE: kdb_tests_timer_test.q SIZE: 282 characters ================================================================================ \l timer/timer.q .test.firstRun:1b; f:{ -1"f: ",string .z.P; if[.test.firstRun; .test.firstRun:0b; -1"running something slow..."; system"sleep 5"; ]; }; t:.finos.timer.addPeriodicTimer[{f[]};00:00:02]; .finos.timer.setCatchUpMode[t;`none]; ================================================================================ FILE: kdb_tests_unzip_test.q SIZE: 270 characters ================================================================================ .finos.dep.loadScriptIn["finos/kdb";"unzip/unzip.q"] .finos.log.debug"pid: ",string .z.i `. 
upsert .Q.def[(enlist`src)!enlist`$()].Q.opt .z.x; src:hsym each src r:.finos.util.progress[hcount;{.finos.unzip.unzip x;.finos.util.free[]};src] .finos.util.free[] show r ================================================================================ FILE: ml.q_dbscan_dbscan.q SIZE: 1,444 characters ================================================================================ \l ../util.q / * dbscan clustering - returns cluster indices where -1 = noise * See https://en.wikipedia.org/wiki/DBSCAN * * @param t {table} * @param {int} minpts - a point is a core point if at least minpts number of * points are within distance epsilon * @param {float} epsilon - radius of reachability \ dbscan:{[t;minpts;epsilon] dist:xexp[edm[t];0.5]; dist_filt:epsilon >= dist + 0Wj * ident[count dist]; core:(1 + til count dist) * minpts <= 1 + sum each dist_filt; dist_filt*:(1 + til count dist); / Get reachable core points for each point r:except[;0] each inter[core;] each flip dist_filt; / Add index r:(1 + til count dist),'r; / Build dict with index as key d:(first each r)!((1_) each r); / Run disjoint set algo to combine core points. Improve effeciency by running / dj in the inner function, so changes to the dict structure will be seen on / the subsequent invocation leading to fewer overall calls of dj. d:{[d;x] dj over enlist[d],(x,'d[x])} over enlist[d],key d; / Make root keys point to themselves root_keys:key[d] where 1 < count each d each key d; d:d,root_keys!enlist each root_keys; / Make noise cluster and assign as default noise:where 0 = count each d; d,:noise!count[noise]#-1; / Get cluster assignments d:first each d each (1 + til count t); / Normalize cluster numbers to 0, 1, 2 ... k:distinct d except -1; normd:(enlist[-1]!enlist[-1]),(k!til[count k]); normd each d} ================================================================================ FILE: ml.q_dbscan_test.q SIZE: 246 characters ================================================================================ \l dbscan.q / * Test a known collection of points \ test:{ t:value each flip `x`y!("FF";",") 0: `$"test.csv"; all dbscan[t;3;.5] = 0 1 1 1 2 2 1 2 -1 0 0 2 1 0 2 1 -1 -1 1 0} assert:{[c] $[c;1"Passed\n";1"Failed\n"]}; assert test[]; exit 0; ================================================================================ FILE: ml.q_knn_knn.q SIZE: 3,412 characters ================================================================================ \d .knn / * k nearest neighbors * @param {table} t - input data * @param {dict} p - find neighbors close to this point * @param {int} k - number of neighbors to find * * test: * q)t:(`a`b`c!) each {3?100} each til 1000000 * q)\ts knn[t;`a`b`c!1 1 1;5] * 2155 136389072 \ knn:{[t;p;k] dist:sqrt (+/) each xexp[;2] each (p -) each flip t[key p]; min[(count t;k)] # `dist xasc update dist:dist from t} / * kdtree - create a kdtree for faster knn lookup. * * Given an input table we construct the tree as such, at each level: * 1) Pick a column from the table * 2) Find the median of the values in that column * 3) Partition the table into two sets: * - Where value of column < median * - Where value of column >= median * 4) Repeat from 1) on the partitioned tables until target depth reached * * @param {table} t - input data * @param {list} cols_ - subset of cols from t to partition on * @param {int} depth - depth limit for kdtree * @returns {dict} - The return value is a dictionary with keys `meds`leaves. 
* The `meds value contains a list with medians and columns encountered at * each internal node of the tree. This is a binary heap encoded list, i.e. * the root is at index 1 and its left child is at 2*i and right child at * 2*i + 1. The `leaves value contains a list of tables, which are the * partitioned tables at the lowest depth. The kdtree is a complete binary * tree so the first entry corresponds to the left-most leaf and the last * entry corresponds to the right-most leaf. \ kdtree:{[t;cols_;depth] queue:enlist[t]; / meds is binary heap encoded list, first element is dummy meds:enlist[::]; leaves:(); i:0; while[count queue; i+:1; curdepth:floor 2 xlog i; t2:queue[0]; queue:1_queue; ax:cols_[curdepth mod count cols_]; md:med t2[ax]; meds,:enlist[(ax;md)]; slct:md > t2[ax]; / Create partitions r:(select from t2 where slct;select from t2 where not slct); $[curdepth < depth;queue,:r;leaves,:r]]; `meds`leaves!(meds;leaves)}; / * Recursive helper for kdtree k nearest neighbor * @param {dict} kdt - kdtree * @param {dict} p - target point * @param {int} k * @param {int} i - index of binary heap encoded tree node * @returns {table} \ kdknn_:{[kdt;p;k;i] meds:kdt`meds; leaves:kdt`leaves; / Base case, index pointing to a leaf node if[i >= count meds;:knn[leaves[i-count[meds]];p;k]]; ax:meds[i][0]; md:meds[i][1]; / Set up the first branch to try, based on if current point is < or >= the / median. Also set up the alternate branch in case a closer neighbor exists / on the other side i1:2*i; i2:1+i1; if[p[ax] >= md;i1:i2;i2:-1+i1]; nn:kdknn_[kdt;p;k;i1]; / If not enough candidates found should try alt branch trygt:k > count nn; / If point is closer to splitting plane than any neighbor, try alt branch xspltplane:{[p;ax;md;n] n[`dist] >= abs p[ax] - md }; if[not trygt;trygt:any xspltplane[p;ax;md] each nn]; / Try alt branch if[trygt;nn,:kdknn_[kdt;p;k;i2]]; / Return nearest neighbors min[(count nn;k)] # `dist xasc nn}; / * kdtree k nearest neighbors * @param {dict} kdt - kdtree generated with kdtree function * @param {dict} p - point * @param {int} k * @returns {table} * * test: * q)t:(`a`b`c!) each {3?100} each til 1000000 * q)kdt:kdtree[t;cols t;5] * q)kdknn[kdt;`a`b`c!1 1 1;3] \ kdknn:{[kdt;p;k] kdknn_[kdt;p;k;1]}; ================================================================================ FILE: ml.q_knn_test.q SIZE: 1,373 characters ================================================================================ \l knn.q / * Randomized test case: generate random data points and compare knn lookups * using kdtree and vanilla knn functions. \ test:{ t:(`a`b`c!) each {3?10000.} each til 10000; fn1:.knn.kdknn[.knn.kdtree[t;cols t;5]]; fn2:.knn.knn[t]; points:(`a`b`c!) each {3?100} each til 100; / use a xasc sort since order of points with same dist is not deterministic cmp:{[fn1;fn2;p] (`dist`a`b`c xasc fn1[p;3])~(`dist`a`b`c xasc fn2[p;3])}; all cmp[fn1;fn2] each points}; / * Simple test case: 4 data points in a tree with height 2: * a * / \ * b b * / \ / \ * 0 0 0 1 1 0 1 1 * * At the root we branch depending on the ques.: is the a axis of the target * point < or >= to 1? Likewise at the 2nd level we branch based on the ques.: * is the b axis of the target point < or >= to 1? 
\ test_simple:{ meds:(::;(`a;1);(`b;1);(`b;1)); leaves:(enlist[`a`b!0 0];enlist[`a`b!0 1];enlist[`a`b!1 0];enlist[`a`b!1 1]); kdt:`meds`leaves!(meds;leaves); result:([] a:0 0 1 1;b:0 1 0 1;dist:`float$(0;1;1;sqrt[2])); fn:.knn.kdknn[kdt;`a`b!0 0]; / use a xasc sort since order of points with same dist is not deterministic cmp:{[fn;result;k] (`dist`a`b xasc fn[k])~k#result}; all cmp[fn;result] each 1 3 4}; assert:{[c] $[c;1"Passed\n";1"Failed\n"]}; assert test[]; assert test_simple[]; exit 0; ================================================================================ FILE: ml.q_ml.q SIZE: 6,524 characters ================================================================================ / * Helper function for k means clustering \ hlpr:{[t;k;means] f:{[t;x] sqrt (+/) each xexp[;2] each (x -) each (value each t)}; r:f[t;] each means; zipped:{[k;x] (til k) ,' x}[k;] each flip r; cluster:first flip ({$[last x < last y;x;y]} over) each zipped; / 1st column keeps count m2::(k;(1+count t[0]))#0; {m2[first x]+:1,1 _ x} each cluster ,' value each t; m2::flip m2; (flip 1 _ m2) % first m2} / * k means clustering * * iris test: * q)iris:flip `sl`sw`pl`pw`class!("FFFFS";",") 0: `:iris.csv * q)\ts kmeans[delete class from iris;3] * 25 77184 \ kmeans:{[t;k] means:t[k?count t]; diff:(k;count t[0])#1; while[any any diff; omeans:means; means:hlpr[t;k;means]; diff:0.01<abs omeans-means]; flip (cols t)!flip means} / * Entropy \ entropy:{ cnt:sum x; p:x%cnt; -1*sum p*xlog[2;p]}
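A couple of quick checks of the entropy helper on class-count vectors (results are in bits):
q)entropy 50 50 / two equally likely classes
1f
q)entropy 25 25 25 25 / four equally likely classes
2f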
// @private
// @kind function
// @category savingUtility
// @fileoverview Save data as a splayed table
// @param config {dict} Any configuration information about the dataset being
//   saved
// @param data {tab} Data which is to be saved
// @return {null} Data is saved as a splayed table
i.saveFunc.splay:{[config;data]
  dataName:first` vs filePath:i.saveFileName config;
  filePath:` sv filePath,`;
  filePath set .Q.en[dataName]data;
  }

// @private
// @kind function
// @category savingUtility
// @fileoverview Save data in a defined format
// @param config {dict} Any configuration information about the dataset being
//   saved
// @param data {tab} Data which is to be saved
// @return {null} Data is saved in the defined format
i.saveDataset:{[config;data]
  if[null func:i.saveFunc config`typ;'"dataset type not supported"];
  func[config;data]
  }

// Saving functionality

// @kind function
// @category saving
// @fileoverview Node to save data from a defined source
// @return {dict} Node in graph to be used for saving data
saveDataset:`function`inputs`outputs!(i.saveDataset;`cfg`dset!"!+";" ")

================================================================================
FILE: ml_ml_graph_pipeline.q SIZE: 2,034 characters
================================================================================

// graph/pipeline.q - Build and execute a pipeline
// Copyright (c) 2021 Kx Systems Inc
//
// Contains createPipeline and execPipeline for
// the creation and execution of pipelines.

\d .ml

// By default, pipeline execution does not enter q debug mode; this behaviour
// can be toggled with updDebug
graphDebug:0b

// @kind function
// @category pipeline
// @desc Update debugging mode
// @return {::} Debugging is updated
updDebug:{[]
  graphDebug::not graphDebug
  }

// @kind function
// @category pipeline
// @desc Generate an execution pipeline based on a valid graph
// @param graph {dictionary} Graph originally generated by .ml.createGraph,
//   which has all relevant input edges connected validly
// @return {dictionary} An optimal execution pipeline populated with all
//   information required to allow its successful execution
createPipeline:{[graph]
  if[not all exec 1_valid from graph`edges;'"disconnected edges"];
  outputs:ungroup select sourceNode:nodeId,sourceName:key each outputs from 1_graph`nodes;
  srcInfo:select sourceNode,sourceName from graph`edges;
  endPoints:exec distinct sourceNode from outputs except srcInfo;
  paths:i.getOptimalPath[graph]each endPoints;
  optimalPath:distinct raze paths idesc count each paths;
  pipeline:([]nodeId:optimalPath)#graph`nodes;
  nodeInputs:key each exec inputs from pipeline;
  pipeline:update inputs:count[i]#enlist(1#`)!1#(::),outputTypes:outputs,
    inputOrder:nodeInputs from pipeline;
  pipeline:select nodeId,complete:0b,error:`,function,inputs,outputs:inputs,
    outputTypes,inputOrder from pipeline;
  pipeline:pipeline lj select outputMap:([]sourceName;destNode;destName)by nodeId:sourceNode from graph`edges;
  1!pipeline}

// @kind function
// @category pipeline
// @desc Execute a generated pipeline
// @param pipeline {dictionary} Pipeline created by .ml.createPipeline
// @return {dictionary} The pipeline with each node executed and appropriate
//   outputs populated.
execPipeline:{[pipeline] i.execCheck i.execNext/pipeline } ================================================================================ FILE: ml_ml_graph_utils.q SIZE: 5,328 characters ================================================================================ // graph/utils.q - Utility functions for graphs // Copyright (c) 2021 Kx Systems Inc // // Utility functions for implementation of graph library \d .ml // Graphing creation utilities // @private // @kind function // @category pipelineUtility // @desc Connect the output of one node to the input to another // @param graph {dictionary} Graph originally generated by .ml.createGraph, // which has all relevant input edges connected validly // @param edge {dictionary} Contains information about the edge node // @return {dictionary} The graph with the relevant connection made between the // inputs and outputs of two nodes. i.connectGraph:{[graph;edge] edgeKeys:`sourceNode`sourceName`destNode`destName; connectEdge[graph]. edge edgeKeys } // Pipeline creation utilities // @private // @kind function // @category pipelineUtility // @desc Extract the source of a specific node // @param graph {dictionary} Graph originally generated by .ml.createGraph, // which has all relevant input edges connected validly // @param node {symbol} Name associated with the functional node // @return {symbol} Source of the given node i.getDeps:{[graph;node] exec distinct sourceNode from graph[`edges]where destNode=node } // @private // @kind function // @category pipelineUtility // @desc Extract all dependent source nodes needed to run the node // @param graph {dictionary} Graph originally generated by .ml.createGraph, // which has all relevant input edges connected validly // @param node {symbol} Denoting the name to be associated with the functional // node // @return {symbol[]} All sources required for the given node i.getAllDeps:{[graph;node] depNodes:i.getDeps[graph]node; $[count depNodes; distinct node,raze .z.s[graph]each depNodes; node ] } // @private // @kind function // @category pipelineUtility // @desc Extract all the paths needed to run the node // @param graph {dictionary} Graph originally generated by .ml.createGraph, // which has all relevant input edges connected validly // @param node {symbol} Denoting the name to be associated with the functional // node // @return {symbol} All paths required for the given node i.getAllPaths:{[graph;node] depNodes:i.getDeps[graph]node; $[count depNodes; node,/:raze .z.s[graph]each depNodes; raze node ] } // @private // @kind function // @category pipelineUtility // @desc Get the longest path // @param graph {dictionary} Graph originally generated by .ml.createGraph, // which has all relevant input edges connected validly // @param node {symbol} Denoting the name to be associated with the functional // node // @return {symbol} The longest path available i.getLongestPath:{[graph;node] paths:reverse each i.getAllPaths[graph;node]; paths first idesc count each paths } // @private // @kind function // @category pipelineUtility // @desc Extract the optimal path to run the node // @param graph {dictionary} Graph originally generated by .ml.createGraph, // which has all relevant input edges connected validly // @param node {symbol} Denoting the name to be associated with the functional // node // @return {symbol} The optimal path to run the node i.getOptimalPath:{[graph;node] longestPath:i.getLongestPath[graph;node]; distinct raze reverse each i.getAllDeps[graph]each longestPath } // @private // @kind function // 
@category pipelineUtility // @desc Update input data information within the pipeline // @param pipeline {dictionary} Pipeline created by .ml.createPipeline // @param map {dictionary} Contains information needed to run the node // @return {dictionary} Pipeline updated with input information i.updateInputData:{[pipeline;map] pipeline[map`destNode;`inputs;map`destName]:map`data; pipeline } // @private // @kind function // @category pipelineUtility // @desc Execute the first non completed node in the pipeline // @param pipeline {dictionary} Pipeline created by .ml.createPipeline // @return {dictionary} Pipeline with executed node marked as complete i.execNext:{[pipeline] node:first 0!select from pipeline where not complete; -1"Executing node: ",string node`nodeId; inputs:node[`inputs]node`inputOrder; if[not count inputs;inputs:1#(::)]; resKeys:`complete`error`outputs; resVals:$[graphDebug; .[(1b;`;)node[`function]::;inputs]; .[(1b;`;)node[`function]::;inputs;{[err](0b;`$err;::)}] ]; res:resKeys!resVals; if[not null res`error;-2"Error: ",string res`error]; if[res`complete; res[`inputs]:(1#`)!1#(::); outputMap:update data:res[`outputs]sourceName from node`outputMap; uniqueSource:(exec distinct sourceName from outputMap)_ res`outputs; res[`outputs]:((1#`)!1#(::)),uniqueSource; pipeline:i.updateInputData/[pipeline;outputMap]; ]; pipeline,:update nodeId:node`nodeId from res; pipeline } // @private // @kind function // @category pipelineUtility // @desc Check if any nodes are left to be executed or if any // errors have occured // @param pipeline {dictionary} Pipeline created by .ml.createPipeline // @return {dictionary} Return 0b if all nodes have been completed or if any // errors have occured. Otherwise return 1b i.execCheck:{[pipeline] if[any not null exec error from pipeline;:0b]; if[all exec complete from pipeline;:0b]; 1b } ================================================================================ FILE: ml_ml_init.q SIZE: 418 characters ================================================================================ // init.q - Load ml libraries // Copyright (c) 2021 Kx Systems Inc \d .ml path:{string`ml^`$@[{"/"sv -1_"/"vs ssr[;"\\";"/"](-3#get .z.s)0};`;""]}` system"l ",path,"/","ml.q" loadfile`:util/init.q loadfile`:stats/init.q loadfile`:fresh/init.q loadfile`:clust/init.q loadfile`:xval/init.q loadfile`:graph/init.q loadfile`:optimize/init.q loadfile`:timeseries/init.q loadfile`:mlops/init.q loadfile`:registry/init.q ================================================================================ FILE: ml_ml_ml.q SIZE: 1,711 characters ================================================================================ // ml.q - Setup for ml namespace // Copyright (c) 2021 Kx Systems Inc // // Define version, path, and loadfile \d .ml if[not `e in key `.p; @[{system"l ",x;.pykx.loaded:1b};"pykx.q"; {@[{system"l ",x;.pykx.loaded:0b};"p.q"; {'"Failed to load PyKX or embedPy with error: ",x}]}]]; if[not `loaded in key `.pykx;.pykx.loaded:`import in key `.pykx]; if[.pykx.loaded;.p,:.pykx]; // Coerse to string/sym coerse:{$[11 10h[x]~t:type y;y;not[x]&-11h~t;y;0h~t;.z.s[x] each y;99h~t;.z.s[x] each y;t in -10 -11 10 11h;$[x;string;`$]y;y]} cstring:coerse 1b; csym:coerse 0b; // Ensure plain python string (avoid b' & numpy arrays) pydstr:$[.pykx.loaded;{.pykx.eval["lambda x:x.decode()"].pykx.topy x};::] // Return python library version pygetver:$[.pykx.loaded;{string .pykx.eval["lambda x:str(x)";<].p.import[`$x][`:__version__]};{.p.import[`$x][`:__version__]`}] 
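// Example (a sketch of the coercion helpers above; not part of the library):
// q)cstring `abc`de          / ("abc";"de")
// q)csym ("abc";"de")        / `abc`de
// q)cstring `a`b!(`x;"y")    / `a`b!("x";"y") - only the values are coerced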
version:@[{TOOLKITVERSION};`;`development] path:{string`ml^`$@[{"/"sv -1_"/"vs ssr[;"\\";"/"](-3#get .z.s)0};`;""]}` loadfile:{$[.z.q;;-1]"Loading ",x:_[":"=x 0]x:$[10=type x;;string]x;system"l ",path,"/",x;} // The following functionality should be available for all initialized sections of the library // @private // @kind function // @category utility // @fileoverview If set to `1b` deprecation warnings are ignored i.ignoreWarning:0b // @private // @kind function // @category utilities // @fileoverview Change ignoreWarnings updateIgnoreWarning:{[]i.ignoreWarning::not i.ignoreWarning} @[value;".log.initns[]";{::}] logging.info :{@[{log.info x};x;{[x;y] -1 x;}[x]]} logging.warn :{@[{log.warn x};x;{[x;y] -1 x;}[x]]} logging.error:{@[{log.error x};x;{::}];'x} logging.fatal:{@[{log.fatal x};x;{[x;y] -2 x;}[x]];exit 2} ================================================================================ FILE: ml_ml_mlops_init.q SIZE: 766 characters ================================================================================ \d .ml @[system"l ",;"p.q";{::}] // @desc Retrieve initial command line configuration mlops.init:.Q.opt .z.x // @desc Define root path from which scripts are to be loaded mlops.path:{ module:`$"mlops-tools"; string module^`$@[{"/"sv -1_"/"vs ssr[;"\\";"/"](-3#get .z.s)0};`;""] }` // @kind function // @desc Load an individual file // @param x {symbol} '.q/.p/.k' file which is to be loaded into the current // process. Failure to load the file at location 'path,x' or 'x' will // result in an error message // @return {null} mlops.loadfile:{ filePath:_[":"=x 0]x:$[10=type x;;string]x; @[system"l ",; mlops.path,"/",filePath; {@[system"l ",;y;{'"Library load failed with error :",x}]}[;filePath] ]; } mlops.loadfile`:src/q/init.q ================================================================================ FILE: ml_ml_mlops_src_q_check.q SIZE: 7,019 characters ================================================================================ \d .ml
insert ¶ Insert or append records to a table x insert y insert[x;y] Where x is a symbol atom naming a non-splayed tabley is one or more records that match the columns ofx ; or ifx is undefined, a table inserts y into the table named by x and returns the new row indexes. The left argument is the name of a table as a symbol atom. q)show x:([a:`x`y];b:10 20) a| b -| -- x| 10 y| 20 q)`x insert (`z;30) ,2 q)x a| b -| -- x| 10 y| 20 z| 30 q)tnew 'tnew [0] tnew ^ q)`tnew insert ([c1:`a`b];c2:10 20) 0 1 q)tnew c1| c2 --| -- a | 10 b | 20 If the table is keyed, the new records must not match existing keys. q)`x insert (`z;30) 'insert Several records may be appended at once: q)`x insert (`s`t;40 50) 3 4 q)x a| b -| -- x| 10 y| 20 z| 30 s| 40 t| 50 insert can insert to global variables only. If you need to insert to function-local tables, use x,:y or Update instead. Type¶ Values in y must match the type of corresponding columns in x ; otherwise, q signals a type error. Empty columns in x with general type assume types from the first record inserted. q)meta u:([] name:(); age:()) c | t f a ----| ----- name| age | q)`u insert (`tom`dick;30 40) 0 1 q)meta u c | t f a ----| ----- name| s age | j Foreign keys¶ If x has foreign key/s the corresponding values of y are checked to ensure they appear in the primary key column/s pointed to by the foreign key/s. A cast error is signalled if they do not. Errors¶ cast y value not in foreign key insert y key value defined in x type y value wrong type With keyed tables, consider upsert as an alternative. inter ¶ Intersection of two lists or dictionaries x inter y inter[x;y] Where x and y are lists or dictionaries, uses the result of x in y to return items or entries from x . q)1 3 4 2 inter 2 3 5 7 11 3 2 Returns common values from dictionaries. q)show x:(`a`b)!(1 2 3;`x`y`z) a| 1 2 3 b| x y z q)show y:(`a`b`c)!(1 2 3;2 3 5;`x`y`z) a| 1 2 3 b| 2 3 5 c| x y z q) q)x inter y 1 2 3 x y z q) Returns common rows from simple tables. q)show x:([]a:`x`y`z`t;b:10 20 30 40) a b ---- x 10 y 20 z 30 t 40 q)show y:([]a:`y`t`x;b:50 40 10) a b ---- y 50 t 40 x 10 q)x inter y a b ---- x 10 t 40 inv ¶ Matrix inverse inv x inv[x] Returns the inverse of non-singular float matrix x . q)a:3 3#2 4 8 3 5 6 0 7 1f q)inv a -0.4512195 0.6341463 -0.195122 -0.03658537 0.02439024 0.1463415 0.2560976 -0.1707317 -0.02439024 q)a mmu inv a 1 -2.664535e-15 5.828671e-16 -2.664535e-15 1 -1.19349e-15 3.885781e-16 -4.163336e-16 1 q)1=a mmu inv a 100b 010b 001b lsq solves a normal equations matrix via Cholesky decomposition – solving systems is more robust than matrix inversion and multiplication. Since V3.6 2017.09.26 inv uses LU decomposition. Previously it used Cholesky decomposition as well. Iterators¶ --------- maps -------- --------- accumulators ---------- ' Each each / Over over Converge, Do, While ': Each Parallel peach \ Scan scan Converge, Do, While ': Each Prior prior \: Each Left /: Each Right ' Case The iterators (once known as adverbs) are native higher-order operators: they take applicable values as arguments and return derived functions. They are the primary means of iterating in q. Applicable value An applicable value is a q object that can be indexed or applied to arguments: a function (operator, keyword, lambda, or derived function), a list (vector, mixed list, matrix, or table), a file- or process handle, or a dictionary. For example, the iterator Over (written / ) uses a value to reduce a list or dictionary. 
q)+/[2 3 4] /reduce 2 3 4 with + 9 q)*/[2 3 4] /reduce 2 3 4 with * 24 Over is applied here postfix, with + as its argument. The derived function +/ returns the sum of a list; */ returns its product. (Compare map-reduce in some other languages.) Variadic syntax¶ Each Prior, Over, and Scan applied to binary values derive functions with both unary and binary forms. q)+/[2 3 4] / unary 9 q)+/[1000000;2 3 4] / binary 1000009 Postfix application¶ Like all functions, the iterators can be applied with Apply or with bracket notation. But unlike any other functions, they can also be applied postfix. They almost always are. q)'[count][("The";"quick";"brown";"fox")] / ' applied with brackets 3 5 5 3 q)count'[("The";"quick";"brown";"fox")] / ' applied postfix 3 5 5 3 Only iterators can be applied postfix. Regardless of its rank, a function derived by postfix application is always an infix. To apply an infix derived function in any way besides infix, you can use bracket notation, as you can with any function. q)1000000+/2 3 4 / variadic function applied infix 1000009 q)+/[100000;2 3 4] / variadic function applied binary with brackets 1000009 q)+/[2 3 4] / variadic function applied unary with brackets 9 q)txt:("the";"quick";"brown";"fox") q)count'[txt] / unary function applied with brackets 3 5 5 4 If the derived function is unary or variadic, you can also parenthesize it and apply it prefix. q)(count')txt / unary function applied prefix 3 5 5 4 q)(+/)2 3 4 / variadic function applied prefix 9 Glyphs¶ Six glyphs are used to denote iterators. Some are overloaded. Iterators - in bold type derive uniform functions; - in italic type, variadic functions. Subscripts indicate the rank of the value; superscripts, the rank of the derived function. (Ranks 4-8 follow the same rule as rank 3.) | glyph | iterator/s | |---|---| ' | ₁ Case; Each | \: | ₂ Each Left ² | /: | ₂ Each Right ² | ': | ₁ Each Parallel ¹ ; ₂ Each Prior ¹ ² | / | ₁ Converge ¹ ; ₁ Do ² ; ₁ While ² ; ₂ Reduce ¹ ² ; ₃ Reduce ³ | \ | ₁ Converge ¹ ; ₁ Do ² ; ₁ While ² ; ₂ Accumulate ¹ ² ; ₃ Accumulate ³ | Over and Scan, with values of rank >2, derive functions of the same rank as the value. The overloads are resolved according to the following table of syntactic forms. Two groups of iterators¶ There are two kinds of iterators: maps and accumulators. - Maps - distribute the application of their values across the items of a list or dictionary. They are implicitly parallel. - Accumulators - apply their values successively: first to the entire (left) argument, then to the result of that evaluation, and so on. With values of rank ≥2 they correspond to forms of map reduce and fold in other languages. Application¶ A derived function, like any function, can be applied by bracket notation. Binary derived functions can also be applied infix. Unary derived functions can also be applied prefix. Some derived functions are variadic and can be applied as either unary or binary functions. This gives rise to multiple equivalent forms, tabulated here. Any function can be applied with bracket notation or with Apply. So to simplify, such forms are omitted here in favour of prefix or infix application. For example, u'[x] and @[u';x] are valid, but only (u')x is shown here. (Iterators are applied here postfix only.) The mnemonic keywords each , over , peach , prior and scan are also shown. 
| value rank | syntax | name | semantics |
|---|---|---|---|
| 1 | (u')x , u each x | Each | apply u to each item of x |
| 2 | x b'y | Each | apply b to corresponding items of x and y |
| 3+ | v'[x;y;z;…] | Each | apply v to corresponding items of x , y , z … |
| 2 | x b\:d | Each Left | apply b to d and items of x |
| 2 | d b/:y | Each Right | apply b to d and items of y |
| 1 | (u':)x , u peach x | Each Parallel | apply u to items of x in parallel tasks |
| 2 | (b':)y , b prior y , d b':y | Each Prior | apply b to (d and) successive pairs of items of y |
| 1 | int'[x;y;…] | Case | select from [x;y;…] |
| 1 | (u/)d , (u\)d | Converge | apply u to d until result converges |
| 1 | n u/d , n u\d | Do | apply u to d , n times |
| 1 | t u/d , t u\d | While | apply u to d until t of result is 0 |
| 1 | (b/)y , b over y | Over | reduce a list or lists |
| 2 | d b/y | Over | reduce a list or lists |
| 3+ | vv/[d;y;z;…] | Over | reduce a list or lists |
| 1 | (g\)y , g scan y | Scan | scan a list or lists |
| 2 | d g\y | Scan | scan a list or lists |
| 3+ | vv\[d;y;z;…] | Scan | scan a list or lists |

Key:
  b: binary value
  d: data
  g: binary value
  int: int vector
  n: int atom ≥0
  t: test value
  u: unary value
  v: value
  x: list
  y: list

, Join¶

Join atoms, lists, dictionaries or tables

x,y
,[x;y]

Where x and y are atoms, lists, dictionaries or tables returns x joined to y .

q)1 2 3,4
1 2 3 4
q)1 2,3 4
1 2 3 4
q)(0;1 2.5;01b),(`a;"abc")
(0;1.00 2.50;01b;`a;"abc")

The result is a vector if both arguments are vectors or atoms of the same type; otherwise a mixed list.

q)1 2.4 5,-7.9 10 /float vectors
1.00 2.40 5.00 -7.90 10.00
q)1 2.4 5,-7.9 /float vector and atom
1.00 2.40 5.00 -7.90
q)1 2.4 5, -7.9 10e /float and real vectors
(1.00;2.40;5.00;-7.90e;10.00e)

Cast arguments to ensure vector results.

q)v:1 2.34 -567.1 20e
q)v,(type v)$789 / cast an int to a real
1.00 2.34 -567.1 20.00 789e
q)v,(type v)$1b / cast a boolean to a real
1.00 2.34 -567.1 20 1e
q)v,(type v)$0xab
1.00 2.34 -567.1 20.00 171e

, (join) is a multithreaded primitive.

Dictionaries¶

When both arguments are dictionaries, Join has upsert semantics.

q)(`a`b`c!1 2 3),`c`d!4 5
a| 1
b| 2
c| 4
d| 5

Tables¶

Tables can be joined row-wise.

q)t:([]a:1 2 3;b:`a`b`c)
q)s:([]a:10 11;b:`d`e)
q)show t,s
a  b
----
1  a
2  b
3  c
10 d
11 e

uj union join SQL UNION ALL

Tables of the same count can be joined column-wise with ,' (Join Each).

q)r:([]c:10 20 30;d:1.2 3.4 5.6)
q)show t,'r
a b c  d
----------
1 a 10 1.2
2 b 20 3.4
3 c 30 5.6

Join for keyed tables is strict; both the key and data columns must match in names and datatypes.
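When the schemas do match, Join on keyed tables has the same upsert semantics as for dictionaries: rows whose keys collide are replaced by those of the right argument. A minimal example:

q)([k:`a`b]v:1 2),([k:`b`c]v:20 30)
k| v
-| --
a| 1
b| 20
c| 30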
Unit tests¶ The goal of unit tests is to check that the individual parts of a program are correct. Unit tests provide a written contract that a piece of code must satisfy. Unit testing Q supports unit testing with the script k4unit.q , which loads test descriptions from CSV files, runs the tests, and writes results to a table. Starting¶ To use k4unit, you first have to load the script file q k4unit.q -p 5001 Test descriptions¶ Writing descriptions¶ Tests descriptions are written in a CSV file, with the following format: action, ms, lang, code The meaning of each column is as follows. | column | description | notes | |---|---|---| | code | code to be executed | if your code contains commas enclose it all in quotes. | | lang | k or q | if empty, default is q | | ms | max milliseconds it should take to run | if 0, ignore | | action | beforeany , beforeeach , before , run , true , fail , after , aftereach , afterall | See below for description | Where the actions have the following meaning: | action | description | |---|---| | beforeany | one time, run code before any tests | | beforeeach | run code before tests in every file | | before | run code before tests in this file | | run | run code, check execution time against ms | | true | run code, check if returns true(1b ) | | fail | run code, it should fail (e.g. 2+two ) | | after | run code after tests in this file | | aftereach | run code after tests in each file | | afterall | one time, run code after all tests | Fail is not false `fail is not "false" , nor the opposite of `true . If your code returns 0b when correct, use not to make it true. Example descriptions¶ comment,0,,this will be ignored before,,k,aa::22 before,0,k,aa::22 before,0,q,aa::22 before,0,q,aa::22 true,0,k,2=+/1 1 true,0,q,2=sum 1 1 true,0,k,2=sum 1 1 true,0,k,(*/2 2)~+/2 2 true,0,k,(*/2 2)~+/2 3 run,10,k,do[100;+/1.1+!10000] fail,0,q,2=`aa after,0,k,bb::33 before,0,k,aa::22 before,0,k,aa::22 Loading descriptions¶ When the script k4unit.q is loaded, it creates the table KUT (KUnit Tests). It’s empty initially: q)KUT action ms lang code file ------------------------ and will contain test descriptions after tests are loaded with KUltf . Invoke the function KUltf (load test file) with a file name as its argument. q)KUltf `:sample.csv 15 q)KUT action ms bytes lang code repeat minver file comment ------------------------------------------------------------------------------------------ comment 0 0 q 1 0 :sample.csv “this will just be ignored” before 0 0 k aa::22 1 0 :sample.csv “” before 0 0 k aa::22 1 0 :sample.csv “” before 0 0 q aa::22 1 0 :sample.csv “” before 0 0 q aa::22 1 0 :sample.csv “comment ” true 0 0 k 2=+/1 1 1 0 :sample.csv “” true 0 0 q 2=sum 1 1 1 0 :sample.csv “” true 0 0 k 2=sum 1 1 1 0 :sample.csv “” true 0 0 k (*/2 2)~+/2 2 1 0 :sample.csv “” true 0 0 k (*/2 2)~+/2 3 1 0 :sample.csv “” run 75 492264 k +/1.1+!10000 1000 0 :sample.csv “a few times” fail 0 0 q 2=`aa 1 0 :sample.csv “” after 0 0 k bb::33 1 0 :sample.csv “” before 0 0 k aa::22 1 0 :sample.csv “” before 0 0 k aa::22 1 0 :sample.csv “” It is possible to load multiple description files in the same directory with KUltd (load test dir). This KUltd `:dirname loads all CSVs in that directory into table KUT . Running unit tests¶ Invoke KUrt (run tests) with an empty argument list q)KUrt[] 2018.11.06T15:31:06.981 start 2018.11.06T15:31:06.981 :sample.csv 7 test(s) 2018.11.06T15:31:07.006 end 7 Test results¶ Inspecting results¶ When k4unit is loaded, it creates the table KUTR (KUnit Test Results). 
It's empty initially q)KUTR action ms lang code file msx ok okms valid timestamp ---------------------------------------------------- and will contain results of unit tests after KUrt[] is invoked. Results can be inspected by showing the whole table q)KUT q)KUTR action ms bytes lang code repeat file msx bytesx ok okms okbytes valid timestamp --------------------------------------------------------------------------------------------------------------- true 0 0 k 2=+/1 1 1 :sample.csv 0 0 1 1 1 1 2018.11.06T15:31:06.982 true 0 0 q 2=sum 1 1 1 :sample.csv 0 0 1 1 1 1 2018.11.06T15:31:06.982 true 0 0 k 2=sum 1 1 1 :sample.csv 0 0 1 1 1 1 2018.11.06T15:31:06.982 true 0 0 k (*/2 2)~+/2 2 1 :sample.csv 0 0 1 1 1 1 2018.11.06T15:31:06.982 true 0 0 k (*/2 2)~+/2 3 1 :sample.csv 0 0 0 1 1 1 2018.11.06T15:31:06.982 run 75 492264 k +/1.1+!10000 1000 :sample.csv 24 393936 1 1 1 1 2018.11.06T15:31:07.006 fail 0 0 q 2=`aa 1 :sample.csv 0 0 1 1 1 1 2018.11.06T15:31:07.006 or by using q queries. For instance: q)show select from KUTR where not ok q)show select from KUTR where not okms q)show select count i by ok,okms,action from KUTR q)show select count i by ok,okms,action,file from KUTR The fields action , ms , lang , and code are as described above. The rest are as follows: | column | description | notes | |---|---|---| | file | name of test descriptions file | | | msx | milliseconds taken to eXecute code | | | ok | true if the test completes correctly | it is correct for a fail task to fail | | okms | true if msx is not greater than ms, ie if performance is ok | | | valid | true if the code is valid (ie doesn't crash) | fail code is valid if it fails | | timestamp | when test was run | Saving results to disk¶ Invoking the function KUstr[] saves the table KUtr to a file KUTR.csv. Restarting¶ The functions KUit and KUitr initialize the tables KUT and KUTR to empty. Configuration parameters¶ When the script k4unit.q is loaded, two configuration variables are defined in namespace .KU . q).KU | :: VERBOSE| 1 DEBUG | 0 The values allowed for VERBOSE are 0 - no logging to console 1 - log filenames >1 - log tests The values allowed for DEBUG are 0 - trap errors, press on regardless 1 - suspend if errors (except if action=`fail)
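Putting the pieces together, a minimal session might look like the following. (The file name tests.csv is illustrative; note the use of not to turn a check that returns 0b when correct into a true test, and 2+two as a deliberately failing expression.)

$ cat tests.csv
true,0,q,2=sum 1 1
true,0,q,not 3=sum 1 1
fail,0,q,2+two
run,10,q,sum til 100000

q)\l k4unit.q
q)KUltf `:tests.csv
4
q)KUrt[]
q)select from KUTR where not ok / any failing tests?
q)KUstr[] / save KUTR to KUTR.csv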
// @kind function // @category main // @subcategory update // // @overview // Update the psi details of a saved model // // @param folderPath {dict|string|null} Registry location, can be: // 1. A dictionary containing the vendor and location as a string, e.g. // ```enlist[`local]!enlist"myReg"``` or // ```enlist[`aws]!enlist"s3://ml-reg-test"``` etc; // 2. A string indicating the local path; // 3. A generic null to use the current .ml.registry.location pulled from CLI/JSON. // @param experimentName {string|null} The name of an experiment from which // to retrieve a model, if no modelName is provided the newest model // within this experiment will be used. If neither modelName or // experimentName are defined the newest model within the // "unnamedExperiments" section is chosen // @param modelName {string|null} The name of the model to be retrieved // in the case this is null, the newest model associated with the // experiment is retrieved // @param version {long[]|null} The specific version of a named model to retrieve // in the case that this is null the newest model is retrieved (major;minor) // @param model {fn} The model serving the predictions // @param data {table} Data on which to determine historical distribution of the // predictions // // @return {null} registry.update.psi:{[folderPath;experimentName;modelName;version;model;data] config:registry.util.update.checkPrep[folderPath;experimentName;modelName;version;()!()]; fpath:hsym `$config[`versionPath],"/config/modelInfo.json"; mlops.update.psi[fpath;model;data]; if[`local<>config`storage;registry.cloud.update.publish config]; } // @kind function // @category main // @subcategory update // // @overview // Update the type details of a saved model // // @param folderPath {dict|string|null} Registry location, can be: // 1. A dictionary containing the vendor and location as a string, e.g. // ```enlist[`local]!enlist"myReg"``` or // ```enlist[`aws]!enlist"s3://ml-reg-test"``` etc; // 2. A string indicating the local path; // 3. A generic null to use the current .ml.registry.location pulled from CLI/JSON. // @param experimentName {string|null} The name of an experiment from which // to retrieve a model, if no modelName is provided the newest model // within this experiment will be used. If neither modelName or // experimentName are defined the newest model within the // "unnamedExperiments" section is chosen // @param modelName {string|null} The name of the model to be retrieved // in the case this is null, the newest model associated with the // experiment is retrieved // @param version {long[]|null} The specific version of a named model to retrieve // in the case that this is null the newest model is retrieved (major;minor) // @param format {string} Type of the given model // // @return {null} registry.update.type:{[folderPath;experimentName;modelName;version;format] config:registry.util.update.checkPrep [folderPath;experimentName;modelName;version;()!()]; fpath:hsym `$config[`versionPath],"/config/modelInfo.json"; mlops.update.type[fpath;format]; if[`local<>config`storage;registry.cloud.update.publish config]; } // @kind function // @category main // @subcategory update // // @overview // Update the supervised metrics of a saved model // // @param folderPath {dict|string|null} Registry location, can be: // 1. A dictionary containing the vendor and location as a string, e.g. // ```enlist[`local]!enlist"myReg"``` or // ```enlist[`aws]!enlist"s3://ml-reg-test"``` etc; // 2. A string indicating the local path; // 3. 
A generic null to use the current .ml.registry.location pulled from CLI/JSON. // @param experimentName {string|null} The name of an experiment from which // to retrieve a model, if no modelName is provided the newest model // within this experiment will be used. If neither modelName or // experimentName are defined the newest model within the // "unnamedExperiments" section is chosen // @param modelName {string|null} The name of the model to be retrieved // in the case this is null, the newest model associated with the // experiment is retrieved // @param version {long[]|null} The specific version of a named model to retrieve // in the case that this is null the newest model is retrieved (major;minor) // @param metrics {string[]} Supervised metrics to monitor // // @return {null} registry.update.supervise:{[folderPath;experimentName;modelName;version;metrics] config:registry.util.update.checkPrep[folderPath;experimentName;modelName;version;()!()]; fpath:hsym `$config[`versionPath],"/config/modelInfo.json"; .ml.mlops.update.supervise[fpath;metrics]; if[`local<>config`storage;registry.cloud.update.publish config]; } // @kind function // @category main // @subcategory update // // @overview // Update the schema details of a saved model // // @param folderPath {dict|string|null} Registry location, can be: // 1. A dictionary containing the vendor and location as a string, e.g. // ```enlist[`local]!enlist"myReg"``` or // ```enlist[`aws]!enlist"s3://ml-reg-test"``` etc; // 2. A string indicating the local path; // 3. A generic null to use the current .ml.registry.location pulled from CLI/JSON. // @param experimentName {string|null} The name of an experiment from which // to retrieve a model, if no modelName is provided the newest model // within this experiment will be used. If neither modelName or // experimentName are defined the newest model within the // "unnamedExperiments" section is chosen // @param modelName {string|null} The name of the model to be retrieved // in the case this is null, the newest model associated with the // experiment is retrieved // @param version {long[]|null} The specific version of a named model to retrieve // in the case that this is null the newest model is retrieved (major;minor) // @param data {table} The data which provides the new schema // // @return {null} registry.update.schema:{[folderPath;experimentName;modelName;version;data] config:registry.util.update.checkPrep[folderPath;experimentName;modelName;version;()!()]; fpath:hsym `$config[`versionPath],"/config/modelInfo.json"; mlops.update.schema[fpath;data]; if[`local<>config`storage;registry.cloud.update.publish config]; } ================================================================================ FILE: ml_ml_registry_q_main_utils_check.q SIZE: 5,866 characters ================================================================================ // check.q - Utilities relating to checking of suitability of registry items // Copyright (c) 2021 Kx Systems Inc // // @overview // Check that the information provided for adding items to the registry is // suitable, this includes but is not limited to checking if the model name // provided already exists, that the configuration is appropriately typed etc. 
// // @category Model-Registry // @subcategory Utilities // // @end \d .ml // @private // // @overview // Correct syntax for path dependent on OS // // @param path {string} A path name // // @return {string} Path suitable for OS registry.util.check.osPath:{[path] $[.z.o like"w*";{@[x;where"/"=x;:;"\\"]};]path }; // @private // // @overview // Check to ensure that the folder path for the registry is appropriately // typed // // @param folderPath {string|null} A folder path indicating the location the // registry is to be located or generic null to place in the current // directory // // @return {string} type checked folderPath registry.util.check.folderPath:{[folderPath] if[not((::)~folderPath)|10h=type folderPath; logging.error"Folder path must be a string or ::" ]; $[(::)~folderPath;enlist".";folderPath] } // @private // // @overview // Check to ensure that the experiment name provided is suitable and return // an appropriate surrogate in the case the model name is undefined // // @param experimentName {string} Name of the experiment to be saved // // @return {string} The name of the experiment registry.util.check.experiment:{[experimentName] $[""~experimentName; "undefined"; $[10h<>type experimentName; logging.error"'experimentName' must be a string"; experimentName ] ] } // @private // // @overview // Check that the model type that the user is providing to save the model // against is within the list of approved types // // @param config {dict} Configuration provided by the user to // customize the experiment // // @return {null} registry.util.check.modelType:{[config] modelType:config`modelType; approvedTypes:("sklearn";"xgboost";"q";"keras";"python";"torch";"pyspark"); if[10h<>abs type[modelType]; logging.error"'modelType' must be a string" ]; if[not any modelType~/:approvedTypes; logging.error"'",modelType,"' not in approved types for KX model registry" ]; } // @private // // @overview // Check if the registry which is being manipulated exists // // @param config {dict|null} Any additional configuration needed for // initialising the registry // // @return {dict} Updated config dictionary containing registry path registry.util.check.registry:{[config] folderPath:config`folderPath; registryPath:folderPath,"/KX_ML_REGISTRY"; config:$[()~key hsym`$registryPath; [logging.info"Registry does not exist at: '",registryPath, "'. Creating registry in that location."; registry.new.registry[folderPath;config] ]; [modelStorePath:hsym`$registryPath,"/modelStore"; paths:`registryPath`modelStorePath!(registryPath;modelStorePath); config,paths ] ]; config } // @private // // @overview // Check that a list of files that are attempting to be added to the // registry exist and that they are either '*.q', '*.p' and '*.py' files // // @param files {symbol|symbol[]} The absolute/relative path to a file or // list of files that are to be added to the registry associated with a // model. These must be '*.p', '*.q' or '*.py' // // @return {symbol|symbol[]} All files which could be added to the registry registry.util.check.code:{[files] fileExists:{x where {x~key x}each x}$[-11h=type files;enlist;]hsym files; // TO-DO // - Add print to indicate what files couldnt be added fileType:fileExists where any fileExists like/:("*.q";"*.p";"*.py"); // TO-DO // - Add print to indicate what files didn't conform to supported types fileType } // @private // // @overview // Check user provided config has correct format // // @param folderPath {dict|string|null} Registry location, can be: // 1. 
A dictionary containing the vendor and location as a string, e.g. // ```enlist[`local]!enlist"myReg"``` or // ```enlist[`aws]!enlist"s3://ml-reg-test"``` etc; // 2. A string indicating the local path; // 3. A generic null to use the current .ml.registry.location pulled from CLI/JSON. // @param config {dict} Configuration provided by the user to // customize the pipeline // // @returns {dict} Returns config in correct format registry.util.check.config:{[folderPath;config] config:$[any[config~/:(();()!())]|101h=type config; ()!(); type[config]~99h; config; logging.error"config should be null or prepopulated dictionary" ]; loc:$[10h=abs type folderPath; $[like[(),folderPath;"s3://*"]; enlist[`aws]!; like[(),folderPath;"ms://*"]; enlist[`azure]!; like[(),folderPath;"gs://*"]; enlist[`gcp]!; enlist[`local]! ]enlist folderPath; 99h=type folderPath; folderPath; any folderPath~/:((::);()); registry.location; logging.error"Unsupported folderPath provided" ]; locInfo:`storage`folderPath!first@'(key;value)@\:loc; config,locInfo } // @private // // @overview // Define which form of storage is to be used by the interface // // @param cli {dict} Command line arguments as passed to the system on // initialisation, this defines how the fundamental interactions of // the interface are expected to operate. // // @returns {symbol} The form of storage to which all functions are expected // to interact registry.util.check.storage:{[cli] vendorList:`gcp`aws`azure; vendors:vendorList in key cli; if[not any vendors;:`local]; if[1<sum vendors; logging.error"Users can only specify one of `gcp`aws`azure via command line" ]; first vendorList where vendors } ================================================================================ FILE: ml_ml_registry_q_main_utils_copy.q SIZE: 2,002 characters ================================================================================ // copy.q - Functionality for copying items from one location to another // Copyright (c) 2021 Kx Systems Inc // // @overview // Utilities for copying items // // @category Model-Registry // @subcategory Utilities // // @end \d .ml
Syntax¶ It is a privilege to learn a language, a journey into the immediate – Marilyn Hacker, “Learning Distances” The q-SQL query templates select , exec , update , and delete have their own syntax. Elements¶ The elements of q are - functions: operators, keywords, lambdas, and extensions - data structures: atoms, lists, dictionaries, tables, expression lists, and parse trees - attributes of data structures - control words - scripts - environment variables Applicable values Lists, dictionaries, file and process handles, and functions of all kinds are all applicable values. An applicable value is a mapping. A function maps its domains to its range. A list maps its indexes to its items. A dictionary maps its keys to its values. Tokens¶ All the ASCII symbols have syntactic significance in q. Some denote functions, that is, actions to be taken; some denote nouns, which are acted on by functions; some denote iterators, which modify nouns and functions to produce new functions; some are grouped to form names and constants; and others are punctuation that bound and separate expressions and expression groups. The term token is used to mean one or more characters that form a syntactic unit. For instance, the tokens in the expression 10.86 +/ LIST are the constant 10.86 , the name LIST , and the symbols + and / . The only tokens that can have more than one character are constants and names and the following. <= / less-than-or-equal >= / greater-than-or-equal <> / not-equal :: / null, view, set global /: / each-right \: / each-left ': / each-prior, each-parallel When it is necessary to refer to the token to the left or right of another unit, terms like “immediately to the left” and “followed immediately by” mean that there are no spaces between the two tokens. Nouns¶ All data are syntactically nouns. Data include - atomic values - collections of atomic values in lists - lists of lists, and so on - Atomic values - include character, integer, floating-point, and temporal values, as well as symbols, functions, dictionaries, and a special atom :: , called null. All functions are atomic data. - List constants - include several forms for the empty list denoting the empty integer list, empty symbol list, and so on. (One-item lists are displayed using the comma to distinguish them from atoms, as in ,2 the one-item list consisting of the single integer item 2.) - Numerical constants - (integer and floating-point) are denoted in the usual ways, with both decimal and exponential notation for floating-point numbers. A negative numerical constant is denoted by a minus sign immediately to the left of a positive numerical constant. Special atoms for numerical and temporal datatypes (e.g. 0W and0N ) refer to infinities and “not-a-number” (or “null” in database parlance) concepts. - Temporal constants - include timestamps, months, dates, datetimes, timespans, minutes, and seconds. 2017.01 / month 2017.01.18 / date 00:00:00.000000000 / timespan 00:00 / minute 00:00:00 / second 00:00:00.000 / time - Character constants - An atomic character constant is denoted by a single character between double quote marks, as in "a" ; more than one such character, or none, between double quotes denotes a list of characters. - Symbol constants - A symbol constant is denoted by a back-quote to the left of a string of characters that form a valid name, as in `a.b_2 . - Dictionaries - are created from lists of a special form. - Tables - A table is a list of dictionaries, all of which have the same keys. 
These keys comprise the names of the table columns. - Functions - can be denoted in several ways; see below. Any notation for a function without its arguments denotes a constant function atom, such as + for the Add operator.

List notation¶

A sequence of expressions separated by semicolons and surrounded by left and right parentheses denotes a noun called a list. The expression for the list is called a list expression, and this manner of denoting a list is called list notation. For example: (3 + 4; a _ b; -20.45) denotes a list. The empty list is denoted by () , but otherwise at least one semicolon is required. When parentheses enclose only one expression they have the common mathematical meaning of bounding a sub-expression within another expression. For example, in (a * b) + c the product a * b is formed first and its result is added to c ; the expression (a * b) is not list notation.

An atom is not a one-item list. One-item lists are formed with the enlist function, as in enlist"a" and enlist 3.1416 .

q)3 /atom
3
q)enlist 3 / 1-item list
,3

Vector notation¶

Lists in which all the items have the same datatype play an important role in kdb+. Q gives vector constants a special notation, which varies by datatype.

01110001b / boolean
"abcdefg" / character
`ibm`aapl`msft / symbol

Numeric and temporal vectors separate items with spaces and if necessary declare their type with a suffixed lower-case character.

2018.05 2018.07 2019.01m / month
2 3 4 5 6h / short integer (2 bytes)
2 3 4 5 6i / integer (4 bytes)
2 3 4 5 6 / long integer (8 bytes)
2 3 4 5 6j / long integer (8 bytes)
2 3 4 5.6 / float (8 bytes)
2 3 4 5 6f / float (8 bytes)

| type | example |
|---|---|
| numeric | 42 43 44 |
| date | 2012.09.15 2012.07.05 |
| char | "abc" |
| boolean | 0101b |
| symbol | `ibm`att`ora |

Strings¶

Char vectors are also known as strings. When \ is used inside character or string displays, it serves as an escape character.

| \" | double quote |
| \NNN | character with octal value NNN (3 digits) |
| \\ | backslash |
| \n | new line |
| \r | carriage return |
| \t | horizontal tab |

Table notation¶

A table can be written as a list: an expression list followed by one or more expressions. An empty expression list indicates a simple table.

q)([]sym:`aapl`msft`goog;price:100 200 300)
sym  price
----------
aapl 100
msft 200
goog 300

The names assigned become the column names. The values assigned must conform: be lists of the same count, or atoms. The empty brackets indicate that the table is simple: it has no key.

If you specify the column values as variables without specifying column names, the names of the variables will be used.

q)sym:`aapl`msft`goog
q)price:100 200 300
q)([] sym; price)
sym  price
----------
aapl 100
msft 200
goog 300

Some columns can be specified as atoms.

q)([] sym:`aapl`msft`goog; price: 300)
sym  price
----------
aapl 300
msft 300
goog 300

But not all. To define a 1-row table, enlist at least one of the column values.

q)([] sym:enlist`aapl; price:100)
sym  price
----------
aapl 100

The initial expression list can declare one or more columns as a key. The values of the key column/s of a table should be unique.

q)([names:`bob`carol`bob`alice;city:`NYC`CHI`SFO`SFO]; ages:42 39 51 44)
names city| ages
----------| ----
bob   NYC | 42
carol CHI | 39
bob   SFO | 51
alice SFO | 44

! Key
Dictionaries and tables
Q for Mortals §8. Tables

Attributes¶

Attributes are metadata that apply to lists of special form.
They are often used on a dictionary domain or a table column to reduce storage requirements or to speed retrieval. Set Attribute, Step dictionaries Bracket notation¶ A sequence of expressions separated by semicolons and surrounded by left and right brackets ([ and ] ) denotes either the indexes of a list or the arguments of a function. The expression for the set of indexes or arguments is called an index expression or argument expression, and this manner of denoting a set of indexes or arguments is called bracket notation. For example, m[0;0] selects the element in the upper left corner of a matrix m , and f[a;b;c] evaluates the function f with the three arguments a , b , and c . Unlike list notation, bracket notation does not require at least one semicolon; one expression between brackets – or none – will do. Operators can also be evaluated with bracket notation. For example, +[a;b] means the same as a + b . All operators can be used infix. Bracket pairs with nothing between them also have meaning; m[] selects all items of a list m and f[] evaluates the no-argument function f . The similarity of index and argument notation is not accidental. Indexing tables¶ Tables are indexed first by row; second by column. q)t:([]name:`Tom`Dick`Harry;age:34 42 17) q)t[1;`age] 42 Eliding an index gets all its values. q)t[;`age] 34 42 17 q)t[1;] name| `Dick age | 42 You can elide trailing indexes. (As in projecting a function.) q)t[1] name| `Dick age | 42 Table columns are always indexed as symbols; rows as integers. This permits a shorthand: q)t[`age] / shorthand for t[;`age] 34 42 17 q)t`age 34 42 17 Conditional evaluation and control statements¶ A sequence of expressions separated by semicolons and surrounded by left and right brackets ([ and ] ), where the left bracket is preceded immediately by a $ , denotes conditional evaluation. If the word do , if , or while appears instead of the $ then that word together with the sequence of expressions denotes a control statement. The first line below shows conditional evaluation; the next three show control statements: $[a;b;c] do[a;b;c] if[a;b;c] while[a;b;c] Control words are not functions and do not return results. Function notation¶ A sequence of expressions separated by semicolons and surrounded by left and right braces ({ and } ) denotes a function. The expression for the function definition is called a function expression or lambda, and this manner of defining a function is called function or lambda notation. The first expression in a function expression can be a signature: an argument expression of the form [name1;name2;…;nameN] naming the arguments of the function. Like bracket notation, function notation does not require at least one semicolon; one expression (or none) between braces will do. Within a script, a function may be defined across multiple lines. Prefix, infix, postfix¶ There are various ways to apply a function to its argument/s. f[x] / bracket notation f x / prefix x + y / infix f\ / postfix In the last example above, the iterator \ is applied postfix to the function f , which appears immediately to the left of the iterator. Iterators are the only functions that can be applied postfix. Bracket and prefix notation are also used to apply a list to its indexes. q)"abcdef" 1 0 3 "bad" Infix and prefix notation have long right scope¶ The right argument of a unary function, or a binary function applied infix, is the result of evaluating (subject to parentheses) everything to its right. 
The left argument of a binary function applied infix is (subject to parentheses) the value immediately to its left. q)count first (2 3 4;5 6) 3 Above, the argument of count is first (2 3 4;5 6) ; that is, 2 3 4 . q)2 3 * 4 5 - 6 7 -4 -6 Above, the left argument of Multiply is 2 3 and its right argument is 4 5-6 7 ; that is, -2 -2 . Postfix yields infix¶ An iterator applied to an applicable value derives a function. For example, Scan applied to Add derives the function Add Scan: +\ . If the iterator is applied postfix, as it almost always is, the derived function has infix syntax. This rule holds regardless of the rank of the derived function For example, counterintuitively, count' is unary but has infix syntax. A common consequence is that many derived functions must be parenthesized to be applied postfix. (See below.) Prefix and vector notation¶ Index and argument notation (i.e. bracket notation) are similar. Prefix expressions evaluate unary functions as in til 3 . This form of evaluation is permitted for any unary. q){x - 2} 5 3 3 1 This form can also be used for item selection. q)(1; "a"; 3.5; `xyz) 2 3.5 Juxtaposition is also used in vector notation. 3.4 57 1.2e20 The items in vector notation bind more tightly than the tokens in function call and item selection. For example, {x - 2} 5 6 is the function {x - 2} applied to the vector 5 6 , not the function {x - 2} applied to 5, followed by 6. Parentheses around a function with infix syntax¶ Parentheses around a function with infix syntax capture it as a value and prevent it being parsed as an infix. Add Scan +\ is variadic and has infix syntax. q)+\[1 2 3 4 5] / unary 1 3 6 10 15 q)+\[1000;1 2 3 4 5] / unary 1001 1003 1006 1010 1015 q)1000+\1 2 3 4 5 / binary, applied infix 1001 1003 1006 1010 1015 Captured as a value by parentheses, it remains variadic, but can be applied postfix as a unary. q)(+\)[1000;1 2 3 4 5] / binary 1001 1003 1006 1010 1015 q)(+\)1 2 3 4 5 / unary, applied postfix 1 3 6 10 15 Captured as a value, a function with infix syntax can be passed as an argument to another function. q)(*) scan 1 2 3 4 5 / * is binary and infix 1 2 6 24 120 q)n:("the ";("quick ";"brown ";("fox ";"jumps ";"over ");"the ");("lazy ";"dog.")) q)(,/) over n / ,/ is variadic and infix "the quick brown fox jumps over the lazy dog." For functions without infix syntax, parentheses are unnecessary. q)raze over n "the quick brown fox jumps over the lazy dog." q){,/[x]}over n "the quick brown fox jumps over the lazy dog." Compound expressions¶ Function expressions, index expressions, argument expressions and list expressions are collectively referred to as compound expressions. Empty expressions¶ An empty expression occurs in a compound expression wherever the place of an individual expression is either empty or all blanks. For example, the second and fourth expressions in the list expression (a+b;;c-d;) are empty expressions. Empty expressions in both list expressions and function expressions actually represent a special atomic value called null. Colon¶ Assign¶ The most common use of colon is to name values. Explicit return¶ Within a lambda (function definition) a colon followed by a value terminates evaluation of the function, and the value is returned as its result. The explicit return is a common form when detecting edge cases, e.g. ... if[type[x]<0; :x]; / if atom, return it ... Colons in names¶ The functions associated with I/O and interprocess communication are denoted by a colon following a digit, as in 0: and 1: . 
The q operators are all binary functions. They inherit unary forms from k, denoted by a colon suffix, e.g. (#: ). Use of these forms in q programs is deprecated. Colon colon¶ A pair of colons with a name to its left and an expression on the right - within a function expression, denotes global assignment, that is, assignment to a global name ( {… ; x::3 ; …} ) - outside a function expression, defines a view Iterators¶ Iterators are higher-order operators. Their arguments are applicable values (functions, process handles, lists, and dictionaries) and their results are derived functions that iterate the application of the value. Three symbols, and three symbol pairs, denote iterators: | token | semantics | |---|---| ' | Case and Each | ': | Each Prior, Each Parallel | /: and \: | Each Right and Each Left | / and \ | Converge, Do, While, Reduce | Any of these in combination with the value immediately to its left, derives a new function. The derived function is a variant of the value modified by the iterator. For example, + is Add and +/ is sum. q)(+/)1 2 3 4 / sum the list 1 2 3 4 10 q)16 +/ 1 2 3 4 / sum the list with starting value 16 26 Any notation for a derived function without its arguments (e.g. +/ ) denotes a constant function atom. Application for how to apply iterators Names and namespaces¶ Names consist of the upper- and lower-case alphabetic characters, the numeric characters, dot (. ) and underscore (_ ). The first character in a name cannot be numeric or the underscore. Underscores in names While q permits the use of underscores in names, this usage is strongly deprecated because it is easily confused with Drop. q)foo_bar:42 q)foo:3 q)bar:til 6 Is foo_bar now 42 or 3 4 5 ? A name is unique in its namespace. A kdb+ session has a default namespace, and child namespaces, nested arbitrarily deep. This hierarchy is known as the K-tree. Namespaces are identified by a leading dot in their names. kdb+ includes namespaces .h , .j , .q , .Q , and .z . (All namespaces with one-character names are reserved for use by KX.) Names with dots are compound names, and the segments between dots are simple names. All simple names in a compound name have meaning relative to the K-tree, and the dots denote the K-tree relationships among them. Two dots cannot occur together in a name. Compound names beginning with a dot are called absolute names, and all others are relative names. Iterator composition¶ A derived function is composed by any string of iterators with an applicable value to the left and no spaces between any of the iterator glyphs or between the value and the leftmost iterator glyph. For example, +\/:\: composes a well-formed function. The meaning of such a sequence of symbols is understood from left to right. The leftmost iterator (\ ) modifies the operator (+ ) to create a new function. The next iterator to the right of that one (/: ) modifies the new function to create another new function, and so on, all the way to the iterator at the right end. Projecting the left argument of an operator¶ If the left argument of an operator is present but the right argument is not, the argument and operator symbol together denote a projection. For example, 3 + denotes the unary function “3 plus”, which in the expression (3 +) 4 is applied to 4 to give 7. Precedence and order of evaluation¶ All functions in expressions have the same precedence, and with the exception of certain compound expressions the order of evaluation is strictly right to left. a * b +c is a*(b+c) , not (a*b)+c . 
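A quick check at the console confirms the grouping:

q)2*3+4 / evaluated as 2*(3+4)
14
q)(2*3)+4
10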
This rule applies to each expression within a compound expression and, other than the exceptions noted below, to the set of expressions as well. That is, the rightmost expression is evaluated first, then the one to its left, and so on to the leftmost one. For example, in the following pair of expressions, the first one assigns the value 10 to x . In the second one, the rightmost expression uses the value of x assigned above; the center expression assigns the value 20 to x , and that value is used in the leftmost expression: q)x: 10 q)(x + 5; x: 20; x - 5) 25 20 5 The sets of expressions in index expressions and argument expressions are also evaluated from right to left. However, in function expressions, conditional evaluations, and control statements the sets of expressions are evaluated left to right. q)f:{a : 10; : x + a; a : 20} q)f[5] 15 The reason for this order of evaluation is that the function f written on one line above is identical to: f:{ a : 10; :x+ a; a : 20 } It would be neither intuitive nor suitable behavior to have functions executed from the bottom up. (Note that in the context of function expressions, unary colon is Return.) Multiline expressions¶ Individual expressions can occupy more than one line in a script. Expressions can be broken after the semicolons that separate the individual expressions within compound expressions; it is necessary only to indent the continuation with one or more spaces. For example: (a + b; ; c - d) is the 3-item list (a+b;;c-d) . Note that whenever a set of expressions is evaluated left to right, such as those in a function expression, if those expressions occupy more than one line then the lines are evaluated from top to bottom. Spaces¶ Any number of spaces are usually permitted between tokens in expressions, and usually the spaces are not required. The exceptions are: - No spaces are permitted between: the symbols ' and : when denoting the iterator ': ; \ and : when denoting the iterator \: ; / and : when denoting the iterator /: ; a digit and : when denoting a function such as 0: ; or : and : for assignments of the form name :: value . - No spaces are permitted between an iterator glyph and the value or iterator symbol to its left. - No spaces are permitted between an operator glyph and a colon to its right whose purpose is to denote assignment. - If a / is meant to denote the left end of a comment then it must be preceded by a blank (or newline); otherwise it will be taken to be part of an iterator. - Both the underscore character ( _ ) and dot character (. ) denote operators and can also be part of a name. The default choice is part of a name. A space is therefore required between an underscore or dot and a name to its left or right when denoting a function. - At least one space is required between neighboring numeric constants in vector notation. - A minus sign ( - ) denotes both an operator and part of the format of negative constants. A minus sign is part of a negative constant if it is next to a positive constant and there are no spaces between, except that a minus sign is always considered to be the function if the token to the left is a name, a constant, a right parenthesis or a right bracket, and there is no space between that token and the minus sign. The following examples illustrate the various cases: x-1 / x minus 1 x -1 / x applied to -1 3.5-1 / 3.5 minus 1 3.5 -1 / numeric list with two elements x[1]-1 / x[1] minus 1 (a+b)- 1 / (a+b) minus 1 Comments¶ Line, trailing, and multiline comments are ignored by the interpreter.
/ will comment out the rest of the line. q)/Oh what a lovely day q)2+2 /I know this one 4 unless embedded within a string or preceded by a system command. q)count"2/3" 3 q)\l /data/files Sections of script can be commented out with matching singleton / and \ . / Oh what a beautiful morning Oh what a wonderful day \ When not terminating a multi-line comment, a singleton \ will exit the script. a:42 \ ignore this and what follows the restroom at the end of the universe Special constructs¶ Slash, back-slash, colon and single-quote (/ \ : ' ) all have special meanings outside ordinary expressions, denoting comments, system commands and debugging controls.
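Returning to the Spaces rules above, a small illustrative sketch of why the space around _ matters when names are involved; the names foo and bar are hypothetical.
q)foo:3
q)bar:til 6
q)foo _ bar     / with spaces, _ is the Drop operator
3 4 5
q)3_bar         / adjoining a numeric constant, _ is still Drop
3 4 5
/ foo_bar (no spaces) would be read as a single name, not as foo Drop bar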
Q by examples¶ Simple arithmetic¶ q)2+2 / comment is ' /': left of /: whitespace or nothing 4 q)2-3 / negative numbers -1 q)2*3+4 / no precedence, right to left 14 q)(2*3)+4 / parentheses change order 10 q)3%4 / division 0.75 q){x*x}4 / square 16 q)sqrt 4 / square root 2.0 q)reciprocal 4 / 1/x 0.25 Operations using lists¶ q)2*1 2 3 / numeric list with space separators 2 4 6 q)1 2 3%2 4 6 / list-to-list operations, same size 0.5 0.5 0.5 q)count 1 2 3 / items in a list 3 q)3#1 / generate sequence of same numbers 1 1 1 q)5#1 2 / or from a list of given items 1 2 1 2 1 List items¶ q)first 1 2 3 / first item 1 q)last 1 2 3 / last item 3 q)1_1 2 3 / rest without first item 2 3 q)-1_1 2 3 / rest without last item 1 2 q)reverse 1 2 3 / reverse 3 2 1 Indexing and sorting¶ q)1 2 3@1 / indexing is zero-based 2 q)1 2 3@1 0 / index can be vector too 2 1 q)til 3 / generate zero-based sequence 0 1 2 q)2 4 6?4 / index of given item/s 1 q)iasc 2 1 6 / indexes of sorted order 1 0 2 q)asc 2 1 6 / sort vector `s#1 2 6 List aggregation¶ q)1 2 3,10 20 / join lists 1 2 3 10 20 q)1+2+3 / sum of items 6 q)sum 1 2 3 / insert '+' between items 6 q)sums 1 2 3 / running sum of items 1 3 6 q)1,(1+2),(1+2+3) / same as this 1 3 6 q){1_x+prev x}til 5 / sum running pairs 1 3 5 7 q)sum each{(2*til ceiling .5*count x)_x}1 2 3 4 5 / non-intersecting pairs 3 7 5 q)(1 2;3 4 6;7 6) / list (1 2;3 4 6;7 6) q)first(3 4 6;7 6) / first item in the list 3 4 6 Function combinations¶ q){x+x*x}4 / a + a^2 20 q)(sqrt;{x*x})@\:4 / [sqrt(a), a^2] (2f;16) q){x*x}sum 2 3 / (a +b)^2 25 q)sum{x*x}2 3 / a^2 + b^2 13 q){sum(x*x),2*/x}2 3 / (a + b)^2 = a^2 + b^2 + 2ab 25 q)sqrt sum{x*x}3 4 / sqrt(a^2 + b^2) 5f User-defined functions and arguments¶ q)d1:- / binary projection q)d2:{x-y} / explicit binary q)m1:neg / unary projection q)m2:0- / unary projection q)m3:{neg x} / explicit unary q)(m1;m2;m3)@\:4 / unary functions -4 -4 -4 q)(d1;d2).\:3 4 / binary functions -1 -1 Exponent and logarithm¶ q)(e;2*e;e*e:exp 1) / e, 2e, e squared 2.718282 5.436564 7.389056 q)exp 2 / exponent, e^2 7.389056 q)2 xexp 16 / exponent base 2, 2^16 65536.0 q)log exp 2 / logarithm, ln e^2 2.0 q)2 xlog 65536 / logarithm base 2, log2 65536 16.0 Trigonometry¶ q)a:(pi;2*pi;pi*pi:acos -1) / pi, 2 pi, pi squared 3.141593 6.283185 9.869604 q)cos pi / cosine of pi -1.0 q)(t:sum{x*x}@(cos;sin)@\:)pi / theorem of trigonometry 1.0 q)t a / test theorem at angles 1 1 1.0 Matrixes¶ q)1 2 3*/:1 2 3 / outer product: multiplication table (1 2 3;2 4 6;3 6 9) q){x=/:x}@til 3 / identity matrix (100b;010b;001b) q)2 3#til 6 / generate matrix (0 1 2;3 4 5) q)2 2#0 1 1 1 / reshape vector to matrix (0 1;1 1) Structural transforms¶ q)show N:0 3_/:2 6#til 12 / list of atoms 0 1 2 3 4 5 6 7 8 9 10 11 q)raze/[N] / ravel 0 1 2 3 4 5 6 7 8 9 10 11 q)raze each N / ravel each sub-matrix (0 1 2 3 4 5;6 7 8 9 10 11) q)show M:3 3#"ABC123!@#" / character matrix "ABC" "123" "!@#" q)(::;flip;reverse;reverse each;1 rotate)@\:M "ABC" "123" "!@#" "A1!" "B2@" "C3#" "!@#" "123" "ABC" "CBA" "321" "#@!" "123" "!@#" "ABC" q)M ./:/:f value group sum each f:n cross n:til 3 / secondary diagonals ,"A" "B1" "C2!" "3@" ,"#" q)M ./:a,'a:til count M / main diagonal "A2#" Selection¶ q)N:((0 1 2;3 4 5);(6 7 8;9 10 11)) q)((N 1) 1) 1 / repetitive selection of items From list 10 q)3@[;1]/N / apply select 3 times 10 q)N[1;1;1] / cross sectional select 10 q)N . 
1 1 1 / cross sectional select too 10 Factorial and binomial¶ q)each[f:{$[x<0;0;prd 1.+til x]}]1+til 5 / factorial 1 2 6 24 120.0 q)prds 1+til 5 / running product 1 2 6 24 120 q)(b:{til[x]{$[x<y;0;floor f[x]%f[y]*f x-y]}\:/:til x})5 / binomial coeff. (1 1 1 1 1;0 1 2 3 4;0 0 1 3 6;0 0 0 1 4;0 0 0 0 1) q)/ fibonacci: sum of second diagonal of binomial matrix q)1_{sum b[x]./:flip(til x;reverse til x)}each til 16 1 1 2 3 5 8 13 21 34 55 89 144 233 377 610 Dot product¶ q)1 2 3 wsum 1 2 3 / dot product wsum=+/* (optimized) 14f q)1 2 3.$1 2 3. / also 14f q)M:(0 1.;1 1.) / assignment q)M$M / matrix squared (optimized) (1 1.;1 2.) q)15$[M]/M / matrix to the power of 15, also fibonacci (610 987.;987 1597.) q)(14$[M]\M)[;0;1] 1 1 2 3 5 8 13 21 34 55 89 144 233 377 610f Randomness and probability¶ q)A:5?1.;A / 5 random floats from 0..1 0.03505812 0.7834427 0.7999031 0.9046515 0.2232866 q)B:10?2;B / coin toss 1 1 1 0 1 0 1 1 0 0 q)B1:10?0b;B1 / with booleans 11110010101b q)C:-3?3;C / deal 3 unique cards out of 3 1 0 2 q)(min;max)@\:A / min and max over the list 0.03505812 0.9046515 q)B?0 / first zero 3 q)avg C~/:1_10000{-3?3}\() / method monte carlo 0.1643836 q)reciprocal f 3 / exact probability of 3 cards in given order 0.1666667 Unique elements¶ q)D:distinct S:"mississippi" / distinct items "misp" q)K:D?S;K / find (?) indexes 0 1 2 2 1 2 2 1 3 3 1 q)S value group K / group by key (enlist"m";"iiii";"ssss";"pp") q)count each group S / frequencies "misp"!1 4 4 2 q)I:(til count S)in first each group S;I / sieve of nub where D is in S 11100000100b q)S where I / filter by sieve to get D "misp" q)sum D=/:S / where items of D are in S 1 4 4 2 Source¶ Source code kxcontrib/avrabecz/qybeg.q Based on J by Example 06/11/2005 © Oleg Kobchenko An introduction to kdb+¶ Q for All is a two-hour introduction to kdb+ and q by Jeffry Borror, author of Q for Mortals. - Introduction - Q console, types and lists - Q operators and operator precedence - Booleans and temporal data types - Casting and date operators - Operations on lists - Defining functions - Functional examples: Newton-Raphson and Fibonacci sequence - Functions example: variables - Tables - qSQL - Complex queries - Interprocess communication - Callbacks - I/O Reading room¶ It is a privilege to learn a language, A journey into the immediate — Marilyn Hacker Reading is an important part of acquiring a language. Here are example programs to read. Many of them are for games and puzzles, because they provide small, well-understood problem domains. If you are familiar with Python, you might also find Examples from Python illuminating. Shifts & scans¶ Shifts¶ Shifts are powerful expressions for finding where the items of a list change in specific ways. Boolean shifts are commonly used to find where in a list the result of a test expression on its elements has changed. q)show x:1h$20?2 01110101001011010000b True following true¶ Syntax: (&)prior x q)(x;(&)prior x) 01110101001011010000b 00110000000001000000b True following false¶ Syntax: (>)prior x q)(x;(>)prior x) 01110101001011010000b 01000101001010010000b False following true¶ Syntax: (<)prior x q)(x;(<)prior x) 01110101001011010000b 00001010100100101000b False following false¶ Syntax: not (|)prior x q)(x;not (|)prior x) 01110101001011010000b 10000000010000000111b Changed or unchanged¶ Syntax: differ x q)(x;differ x) 01110101001011010000b 11001111101110111000b q)(x;not differ x) 01110101001011010000b 00110000010001000111b More than one The above shifts also work on temporal and numeric values. 
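A practical extension of the differ shift above, offered as a hedged sketch rather than part of the original article: cutting a list into runs of equal items.
q)y:1 1 2 2 2 3 1 1
q)differ y
10100110b
q)(where differ y) cut y    / runs of equal items
1 1
2 2 2
,3
1 1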
Scans¶ Shifts compare each list item to its neighbor, but scan results relate to the entire list – and can terminate quickly. All false from first false¶ Syntax: mins x q)"?"<>t:"http://example.com?foo=bar" 11111111111111111101111111b q)mins "?"<>t 11111111111111111100000000b q)t where mins "?"<>t "http://example.com" All true from first true¶ Syntax: maxs x q)maxs t="?" 00000000000000000011111111b q)t where maxs t="?" "?foo=bar" Right to left¶ Scans traverse lists from left to right. Use reverse to traverse from right to left. q)f:"path/to/a/file.txt" q)f="." 000000000000001000b q)"."=reverse f 000100000000000000b q)mins"."<>reverse f 111000000000000000b q)f where reverse mins "."<>reverse f "txt" Boolean finite-state machine (toggle state on false)¶ Syntax: (=)scan x q)(x;(=)scan x) 01110101001011010000b 00001100100111001010b Delimiters and what they embrace¶ Syntax: x or(<>)scan x q)t:"gimme {liberty} or gimme {death}" q){x or (<>)scan x} t in "{}" 00000011111111100000000001111111b q)t where {x or (<>)scan x} t in "{}" "{liberty}{death}" What delimiters embrace¶ Syntax: (not x)and(<>)scan x q)t where {(not x) and (<>)scan x} t in "{}" "libertydeath" Beyond strings The scan examples here use strings, but can readily be adapted to tests on numeric or temporal values. Starting kdb+¶ This is a quick-start guide to kdb+, aimed primarily at those learning independently. It covers system installation, the kdb+ environment, IPC, tables and typical databases, and where to find more material. After completing this you should be able to follow the Borror textbook Q for Mortals, and the Reference. kdb+¶ The kdb+ system is both a database and a programming language. - kdb+: the database (k database plus) - q: a general-purpose programming language integrated with kdb+ Resources¶ code.kx.com¶ The best resource for learning q. It includes: - Jeff Borror’s textbook Q for Mortals - a Reference for the built-in functions - interfaces with other languages and processes GitHub¶ - the KxSystems repositories - user-contributed repositories Discussion groups¶ - The main discussion forum is the k4 Topicbox. This is available only to licensed customers – please use a work email address to apply for access. - KX Community discussion forum is used to find answers, ask questions, and connect with our KX Community. Install free system¶ If you do not already have access to a licensed copy, go to Get started to download and install q. Graphical user interface¶ When q is run, it displays a console where you can enter commands and see the results. This is all you need to follow the tutorial, and if you just want to learn a little about q, then it is easiest to work in the console. As you become more familiar with q, you may prefer to work in the interactive development environment KX Developer. (Kx Analyst is the enterprise version of Developer.)
// @kind function // @category node // @desc Save all metadata information needed to predict on new data // @param params {dictionary} All data generated during the preprocessing and // prediction stages // @return {dictionary} All metadata information needed to generate predict // function saveMeta.node.function:{[params] saveOpt:params[`config]`saveOption; if[0~saveOpt;:(::)]; modelMeta:saveMeta.extractModelMeta params; saveMeta.saveMeta[modelMeta;params]; initConfig:params`config; runOutput:k!params k:`sigFeats`symEncode`bestModel`modelName; initConfig,runOutput,modelMeta } // Input information saveMeta.node.inputs:"!" // Output information saveMeta.node.outputs:"!" ================================================================================ FILE: ml_automl_code_nodes_saveModels_funcs.q SIZE: 1,563 characters ================================================================================ // code/nodes/saveModels/funcs.q - Functions called in saveModels node // Copyright (c) 2021 Kx Systems Inc // // Definitions of the main callable functions used in the application // of .automl.saveModels \d .automl // @kind function // @category saveGraph // @desc Save best Model // @param params {dictionary} All data generated during the process // @param savePath {string} Path where images are to be saved // return {::} Save best model to appropriate location saveModels.saveModel:{[params;savePath] modelLib :params[`modelMetaData]`modelLib; bestModel:params`bestModel; modelName:string params`modelName; filePath:savePath,"/",modelName; joblib:.p.import`joblib; $[modelLib in`sklearn`theano; joblib[`:dump][bestModel;pydstr filePath]; `keras~modelLib; bestModel[`:save][pydstr filePath,".h5"]; `torch~modelLib; torch[`:save][bestModel;pydstr filePath,".pt"]; -1"\nSaving of non keras/sklearn/torch models types is not currently ", "supported\n" ]; printPath:utils.printDict[`model],savePath; params[`config;`logFunc]printPath; } // @kind function // @category saveGraph // @desc Save NLP w2v model // @param params {dictionary} All data generated during the process // @param savePath {string} Path where images are to be saved // return {::} Save NLP w2v to appropriate location saveModels.saveW2V:{[params;savePath] extractType:params[`config]`featureExtractionType; if[not extractType~`nlp;:(::)]; w2vModel:params`featModel; w2vModel[`:save][pydstr savePath,"w2v.model"]; } ================================================================================ FILE: ml_automl_code_nodes_saveModels_init.q SIZE: 227 characters ================================================================================ // code/nodes/saveModels/init.q - Load saveModels node // Copyright (c) 2021 Kx Systems Inc // // Load code for saveModels node \d .automl loadfile`:code/nodes/saveModels/funcs.q loadfile`:code/nodes/saveModels/saveModels.q ================================================================================ FILE: ml_automl_code_nodes_saveModels_saveModels.q SIZE: 758 characters ================================================================================ // code/nodes/saveModels/saveModels.q - Save model node // Copyright (c) 2021 Kx Systems Inc // // Save encoded representation of best model retrieved during run of AutoML \d .automl // @kind function // @category node // @desc Save all models needed to predict on new data // @param params {dictionary} All data generated during the preprocessing and // prediction stages // @return {::} All models saved to appropriate location saveModels.node.function:{[params] 
saveOpt:params[`config]`saveOption; if[0~saveOpt;:(::)]; savePath:params[`config;`modelsSavePath]; saveModels.saveModel[params;savePath]; saveModels.saveW2V[params;savePath]; } // Input information saveModels.node.inputs:"!" // Output information saveModels.node.outputs:"!" ================================================================================ FILE: ml_automl_code_nodes_saveReport_funcs.q SIZE: 1,617 characters ================================================================================ // code/nodes/saveReport/funcs.q - Functions called in saveReport node // Copyright (c) 2021 Kx Systems Inc // // Definitions of the main callable functions used in the application of // .automl.saveReport \d .automl // @kind function // @category saveReport // @desc Create a dictionary with image filenames for report generation // @param params {dictionary} All data generated during the process // @return {dictionary} Image filenames for report generation saveReport.reportDict:{[params] config:params`config; saveImage:config`imagesSavePath; savedPlots:saveImage,/:string key hsym`$saveImage; plotNames:$[`class~config`problemType; `conf`data`impact; `data`impact`reg ],`target; savedPlots:enlist[`savedPlots]!enlist plotNames!savedPlots; params,savedPlots } // @kind function // @category saveReport // @desc Generate and save down procedure report // @param params {dictionary} All data generated during the process // @return {::} Report saved to appropriate location saveReport.saveReport:{[params] savePath:params[`config;`reportSavePath]; modelName:params`modelName; logFunc:params[`config;`logFunc]; filePath:savePath,"Report_",string modelName; savePrint:utils.printDict[`report],savePath; logFunc savePrint; $[0~checkimport 2; @[{saveReport.latexGenerate . x}; (params;filePath); {[params;logFunc;err] errorMessage:utils.printDict[`latexError],err,"\n"; logFunc errorMessage; saveReport.reportlabGenerate . 
params; }[(params;filePath);logFunc] ]; saveReport.reportlabGenerate[params;filePath] ] } ================================================================================ FILE: ml_automl_code_nodes_saveReport_init.q SIZE: 502 characters ================================================================================ // code/nodes/saveReport/init.q - Load saveReport node // Copyright (c) 2021 Kx Systems Inc // // Load code for saveReport node \d .automl loadfile`:code/nodes/saveReport/saveReport.q loadfile`:code/nodes/saveReport/funcs.q loadfile`:code/nodes/saveReport/reportlab/utils.q loadfile`:code/nodes/saveReport/reportlab/reportlab.q if[0~checkimport[2]; loadfile`:code/nodes/saveReport/latex/latex.p; loadfile`:code/nodes/saveReport/latex/utils.q; loadfile`:code/nodes/saveReport/latex/latex.q ] ================================================================================ FILE: ml_automl_code_nodes_saveReport_latex_latex.q SIZE: 1,172 characters ================================================================================ // code/nodes/saveReport/latex/latex.q - Save latex report // Copyright (c) 2021 Kx Systems Inc // // Save report summarizing automl pipeline results \d .automl // For simplicity of implementation this code is written largely in python // this is necessary as a result of the excessive use of structures such as // with clauses which are more difficult to handle via embedPy // @kind function // @category saveReport // @desc Generate automl report in latex report generation if available // @params {dictionary} All data generated during the process // @filePath {string} Location to save the report // @return {::} Latex report is saved down locally saveReport.latexGenerate:{[params;filePath] dataDescribe:params`dataDescription; hyperParams:params`hyperParams; scoreDict:params[`modelMetaData]`modelScores; describeTab:saveReport.i.descriptionTab dataDescribe; scoreTab:saveReport.i.scoringTab scoreDict; gridTab:saveReport.i.gridSearch hyperParams; pathDict:params[`savedPlots],`fpath`path!(filePath;.automl.path); params:string each params; saveReport.i.latexReportGen[params;pathDict;describeTab;scoreTab;gridTab; utils.excludeList]; } ================================================================================ FILE: ml_automl_code_nodes_saveReport_latex_utils.q SIZE: 1,716 characters ================================================================================ // code/nodes/saveReport/latex/utils.q - Utilities to save latex report // Copyright (c) 2021 Kx Systems Inc // // Utilities used for the generation of a Latex PDF \d .automl // @kind function // @category saveReportUtility // @desc Load in python latex function // @return {<} Python latex function saveReport.i.latexReportGen:.p.get`python_latex // @kind function // @category saveReportUtility // @desc Convert table to a pandas dataframe // @param tab {table} To be converted to a pandas dataframe // @return {<} Pandas dataframe object saveReport.i.tab2dfFunc:{[tab] .ml.tab2df[tab][`:round][3] } // @kind function // @category saveReportUtility // @desc Convert table to a pandas dataframe // @param describe {dictionary} Description of input data // @return {<} Pandas dataframe object saveReport.i.descriptionTab:{[describe] describeDict:enlist[`column]!enlist key describe; describeTab:flip[describeDict],'value describe; saveReport.i.tab2dfFunc describeTab } // @kind function // @category saveReportUtility // @desc Convert table to a pandas dataframe // @param scoreDict {dictionary} Scores of each model // @return 
{<} Pandas dataframe object saveReport.i.scoringTab:{[scoreDict] scoreTab:flip `model`score!(key scoreDict;value scoreDict); saveReport.i.tab2dfFunc scoreTab } // @kind function // @category saveReportUtility // @desc Convert table to a pandas dataframe // @param hyperParam {dictionary} Hyperparameters used on the best model // @return {<} Pandas dataframe object saveReport.i.gridSearch:{[hyperParams] if[99h=type hyperParams; grid:flip`param`val!(key hyperParams;value hyperParams); hyperParams:saveReport.i.tab2dfFunc grid ]; hyperParams } ================================================================================ FILE: ml_automl_code_nodes_saveReport_reportlab_reportlab.q SIZE: 5,952 characters ================================================================================ // code/nodes/saveReport/reportlab/reportlab.q - Report generation // Copyright (c) 2021 Kx Systems Inc // // Python report generation using reportlab \d .automl
. @ Amend, Amend At¶ Modify one or more items in a list, dictionary or datafile. Amend Amend At values (d . i) or (d @ i) .[d; i; u] @[d; i; u] u[d . i] u'[d @ i] .[d; i; v; vy] @[d; i; v; vy] v[d . i;vy] v'[d @ i;vy] Where d is an atom, list, or a dictionary (value); or a handle to a list, dictionary or datafilei indexes whered is to be amended:- it must be a list for . - if empty (for . ) or the general null:: (for@ ), or ifd is a non-handle atom, the selection \(S\) isd (Amend Entire) - otherwise \(S\) is .[d;i] or@[d;i] - it must be a list for u is a unaryv is a binary, andvy is- in the right domain of v - unless \(S\) is d , conformable to \(S\) and of the same type - in the right domain of the items in d of the selection \(S\) are replaced - in the ternary, by u[ \(S\)] for. and byu'[ \(S\)] for@ - in the quaternary, by v[ \(S\);vy] for. and byv'[ \(S\);vy] for@ and if d is a - value, returns a copy of it with the item/s at i modified - handle, modifies the item/s of its reference at i , and returns the handle If v is Assign (: ) each item in the selection is replaced by the corresponding item in vy . u and v can be replaced with values of higher rank using projection or by enlisting their arguments and using Apply. See also binary and ternary forms of . and @ Apply, Apply At, Index, Index At Examples¶ Amend Entire¶ If i is - the empty list (for . ) - the general null (for @ ) the selection is the entire value in d . .[d;();u] <=> u[d] @[d;::;u] <=> u'[d] .[d;();v;y] <=> v[d;y] @[d;::;v;y] <=> v'[d;y] q).[1 2; (); 3 4 5] 4 5 q).[1 2; (); :; 3 4 5] 3 4 5 q).[1 2; (); ,; 3 4 5] 1 2 3 4 5 q)@[1 2; ::; *; 3 4] 3 8 q)@[(1 2;4 5); ::; ,; 3 6] 1 2 3 4 5 6 q)@[1 2; ::; 3 4*] 'type [0] @[1 2; ::; 3 4*] ^ Single path¶ If i is a non-negative integer vector then the selection is a single item at depth count i in d . q)(5 2.14; "abc") . 1 2 / index at depth 2 "c" q).[(5 2.14; "abc"); 1 2; :; "x"] / replace at depth 2 5 2.14 "abx" Amend At¶ Indices results are accumulated when repeated: q)@[(0 1 2;1 2 3 4;7 8 9) ;1 1; 2*] 0 1 2 4 8 12 16 / equates to 2*2*1 2 3 4 7 8 9 q)@[(0 1 2;1 2 3 4;7 8 9) ;0 1 2 1; 100*] 0 100 200 / equates to 100*0 1 2 10000 20000 30000 40000 / equates to 100*100*1 2 3 4 700 800 900 / equates to 100*7 8 9 q)@[(0 1 2;1 2 3 4;7 8 9) ;0 1 2 1; {x*y};100] 0 100 200 / equates to {x*100}0 1 2 10000 20000 30000 40000 / equates to {x*100}{x*100}1 2 3 4 700 800 900 / equates to {x*100}7 8 9 Cross sections¶ Where the items of i are non-negative integer vectors, they define a cross section. The result can be understood as a series of single-path amends. q)d (1 2 3;4 5 6 7) (8 9;10;11 12) (13 14;15 16 17 18;19 20) q)i:(2 0; 0 1 0) q)y:(100 200 300; 400 500 600) q)r:.[d; i; ,; y] Compare d and r : q)d q)r (1 2 3;4 5 6 7) (1 2 3 400 600;4 5 6 7 500) (8 9;10;11 12) (8 9;10;11 12) (13 14;15 16 17 18;19 20) (13 14 100 300;15 16 17 18 200;19 20) The shape of y is 2 3 , the same shape as the cross-section selected by d . i . The (j;k) th item of y corresponds to the path (i[0;j];i[1;k]) . The first single-path Amend is equivalent to: d: .[d; (i . 0 0; i . 1 0); ,; y . 0 0] (since the amends are being done individually, and the assignment serves to capture the individual results as we go), or: d: .[d; 2 0; ,; 100] and item d . 2 0 becomes 13 14,100 , or 13 14 100 . The next single-path Amend is: d: .[d; (i . 0 0; i . 1 1); ,; y . 0 1] or d: .[d; 2 1; ,; 200] and item d . 2 1 becomes 15 16 17 18 200 . Continuing in this manner: - item d . 
2 0 becomes13 14 100 300 , modifying the previously modified value13 14 100 - item d . 0 0 becomes1 2 3 400 - item d . 0 1 becomes4 5 6 7 500 - item d . 0 0 becomes1 2 3 400 600 , modifying the previously modified value1 2 3 400 Replacement¶ d:((1 2 3; 4 5 6 7) (8 9; 10; 11 12) (13 14; 15 16 17 18; 19 20)) i:(2 0; 0 1 0) y:(100 200 300; 400 500 600) r:.[d; i; :; y] Compare d and r : q)d q)r (1 2 3;4 5 6 7) 600 500 / replaced twice; once (8 9;10;11 12) (8 9;10;11 12) (13 14;15 16 17 18;19 20) (300;200;19 20) / replaced twice; once; not Note multiple replacements of some items-at-depth in d , corresponding to the multiple updates in the earlier example. Unary value¶ The ternary replaces the selection with the results of applying u to them. q)d (1 2 3;4 5 6 7) (8 9;10;11 12) (13 14;15 16 17 18;19 20) q)i 2 0 0 1 0 q)y 100 200 300 400 500 600 q)r:.[d; i; neg] Compare d and r : q)d q)r (1 2 3;4 5 6 7) (1 2 3;-4 -5 -6 -7) (8 9;10;11 12) (8 9;10;11 12) (13 14;15 16 17 18;19 20) (13 14;-15 -16 -17 -18;19 20) Note multiple applications of neg to some items-at-depth in d , corresponding to the multiple updates in the first example. On disk¶ Certain vectors (types 1-19) can be updated directly on disk without the need to fully rewrite the file. (Since V3.4) Such vectors must - have no attribute - be of a mappable type - not be nested, enumerated, or compressed q)`:data set til 20 q)@[`:data;3 6 8;:;100 200 300] q)get `:data 0 1 2 100 4 5 200 7 300 9 10 11 12 13 14 15 16 17 18 19 q)`:test set `:sym?9?`1 `:test q)type get `:test 20h q)@[`:test;0 1;:;`sym?`a`b] 'type/attr error amending file test [0] @[`:test;0 1;:;`sym?`a`b] ^ On-disk amend to apply p or g attributes now avoids in-memory copying since 4.1t 2023.01.20. q)`:tab/ set ([]where 10000#100); q)@[`:tab/;`x;`p#] Errors¶ domain d is a symbol atom but not a handle index a path in i is not a valid path of d length i and y are not conformable type an atom of i is not an integer, symbol or nil type replacement items of different type than selection type/attr error amending file test Apply, Apply At, Index, Index At Q for Mortals §6.8.3 General Form of Function Application
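A brief supplementary sketch (not from the page above) of Amend At on a dictionary and on a vector, using the quaternary form and Assign.
q)e:`a`b`c!1 2 3
q)@[e;`a`c;+;10]            / add 10 at keys a and c
a| 11
b| 2
c| 13
q)@[til 5;1 3;:;100 300]    / Assign: replace items 1 and 3
0 100 2 300 4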
writedownadvanced:{ if[0=count .dqe.tosavedown`.dqe.advancedres;:()]; dbprocs:exec distinct procname from raze .servers.getservers[`proctype;;()!();0b;1b]each .dqe.hdbtypes,`dqedb`dqcdb; // Get a list of all databases. advtemp1:select from .dqe.advancedres where procs in dbprocs; advtemp2:select from .dqe.advancedres where not procs in dbprocs; advtemp3:.dqe.advancedres; .dqe.advancedres::advtemp1; .dqe.savedata[.dqe.dqedbdir;.dqe.getpartition[]-1;.dqe.tosavedown[`.dqe.advancedres];`.dqe;`advancedres]; .dqe.advancedres::advtemp2: .dqe.savedata[.dqe.dqedbdir;.dqe.getpartition[];.dqe.tosavedown[`.dqe.advancedres];`.dqe;`advancedres]; .dqe.advancedres::advtemp3; /- get handles for DBs that need to reload hdbs:distinct raze exec w from .servers.SERVERS where proctype=`dqedb; /- send message for DBs to reload .dqe.notifyhdb[.os.pth .dqe.dqedbdir]'[hdbs]; } \d . .dqe.currentpartition:.dqe.getpartition[]; /- initialize current partition .servers.CONNECTIONS:distinct .servers.CONNECTIONS,`tickerplant`rdb`hdb`dqedb`dqcdb /- open connections to required procs, need dqedb as some checks rely on info from both dqe and dqedb /- setting up .u.end for dqe .u.end:{[pt] .lg.o[`dqe;".u.end initiated"]; dbprocs:exec distinct procname from raze .servers.getservers[`proctype;;()!();0b;1b]each .dqe.hdbtypes,`dqedb`dqcdb; // Get a list of all databases. restemp1:select from .dqe.resultstab where procs in dbprocs; restemp2:select from .dqe.resultstab where not procs in dbprocs; advtemp1:select from .dqe.advancedres where procs in dbprocs; advtemp2:select from .dqe.advancedres where not procs in dbprocs; .dqe.resultstab::restemp1; .dqe.advancedres::advtemp1; {.dqe.endofday[.dqe.dqedbdir;.dqe.getpartition[]-1;x;`.dqe;.dqe.tosavedown[` sv(`.dqe;x)]]}each`resultstab`advancedres; .dqe.resultstab::restemp2; .dqe.advancedres::advtemp2; {.dqe.endofday[.dqe.dqedbdir;.dqe.getpartition[];x;`.dqe;.dqe.tosavedown[` sv(`.dqe;x)]]}each`resultstab`advancedres; /- get handles for DBs that need to reload hdbs:distinct raze exec w from .servers.SERVERS where proctype=`dqedb; /- check list of handles to DQEDBs is non-empty, we need at least one to /- notify DQEDB to reload if[0=count hdbs;.lg.e[`.u.end; "No handles open to the DQEDB, cannot notify DQEDB to reload."]]; /- send message for DBs to reload .dqe.notifyhdb[.os.pth .dqe.dqedbdir]'[hdbs]; /- clear check function timers .timer.removefunc'[exec funcparam from .timer.timer where `.dqe.runquery in' funcparam]; /- clear writedown timer on resultstab .timer.removefunc'[exec funcparam from .timer.timer where `.dqe.writedownengine in' funcparam]; /- clear writedown timer on advancedres .timer.removefunc'[exec funcparam from .timer.timer where `.dqe.writedownadvanced in' funcparam]; /- clear EOD timer .timer.removefunc'[exec funcparam from .timer.timer where `.u.end in' funcparam]; .lg.o[`dqe;"removed functions from .timer.timer, .u.end continues"]; .dqe.currentpartition:pt+1; /- Checking whether .eodtime.nextroll is correct as it affects periodic writedown if[(`timestamp$.dqe.currentpartition)>=.eodtime.nextroll; .eodtime.nextroll:.eodtime.getroll[`timestamp$.dqe.currentpartition]; .lg.o[`dqe;"Moving .eodtime.nextroll to match current partition"] ]; if[.dqe.utctime=1b;.eodtime.nextroll:.eodtime.getroll[`timestamp$.dqe.currentpartition]+(.z.T-.z.t)]; .lg.o[`dqe;".eodtime.nextroll set to ",string .eodtime.nextroll]; .dqe.init[]; .lg.o[`dqe;".u.end finished"]; }; .dqe.init[] ================================================================================ FILE: 
TorQ_code_processes_filealerter.q SIZE: 9,409 characters ================================================================================ //File-alerter inputcsv:@[value;`.fa.inputcsv;.proc.getconfigfile["filealerter.csv"]] // The name of the input csv to drive what gets done polltime:@[value;`.fa.polltime;0D00:00:10] // The period to poll the file system alreadyprocessed:@[value;`.fa.alreadyprocessed;.proc.getconfigfile["filealerterprocessed"]] // The location of the table on disk to store the information about files which have already been processed skipallonstart:@[value;`.fa.skipallonstart;0b] // Whether to skip all actions when the file alerter process starts up (so only "new" files after the processes starts will be processed) moveonfail:@[value;`.fa.moveonfail;0b] // If the processing of a file fails (by any action) then whether to move it or not regardless tickerplanttype:@[value;`.fa.tickerplanttype;`segmentedtickerplant] // Type of tickerplant to connect to os:$[like[string .z.o;"w*"];`win;`lin] usemd5:@[value; `.fa.usemd5; 1b] // Protected evaluation, returns value of usemd5 (from .fa namespace) or on fail, returns 1b inputcsv:string inputcsv alreadyprocessed:string alreadyprocessed //-function to load the config csv file csvloader:{[CSV] fullcsv:@[{.lg.o[`alerter;"opening ",x];("**SB*"; enlist ",") 0: hsym `$x};CSV;{.lg.e[`alerter;"failed to open",x," : ", y];'y}[CSV]]; check:all `path`match`function`newonly`movetodirectory in cols fullcsv; $[check=0b; [.lg.e[`alerter;"the file ",CSV," has incorrect layout"]]; .lg.o[`alerter;"successfully loaded ",CSV]]; /-Removing any null rows from the table nullrows:select from fullcsv where (0=count each path)|(0=count each match)|(null function); if[0<count nullrows; .lg.o[`alerter;"null rows were found in the csv file: they will be ignored"]]; filealertercsv::fullcsv except nullrows; if[os=`win; .lg.o[`alerter;"modifying file-paths to a Windows-friendly format"]; update path:ssr'[path;"/";"\\"],movetodirectory:ssr'[movetodirectory;"/";"\\"] from `filealertercsv]; } //-function to load the alreadyprocessed table or initialise a processed table if skipallonstart is enabled loadprocessed:{[BIN] .lg.o[`alerter;"loading alreadyprocessed table from ",alreadyprocessed]; splaytables[BIN]; if[skipallonstart;.lg.o[`alerter;"variable skipallonstart set to true"];skipall[]]} //-searches for files on a given path matching the search string find:{[path;match] findstring:$[os=`lin;"/usr/bin/find ", path," -maxdepth 1 -type f -name \"",match,"\"";"dir ",path,"\\",match, " /B 2>nul"]; .lg.o[`alerter;"searching for ",path,"/",match]; files:@[system;findstring;()]; if[os=`win;files:,/:[path,"\\"; files]]; files}; //-decodes pcap file and sends to tickerplant processpcaps:{[path;file;pcaptab] .lg.o[`alerter;"processing pcap file"]; .lg.o[`alerter;"decoding pcap file"]; table: .pcap.buildtable[hsym `$(path,"/",file)]; .lg.o[`alerter;"checking connection to tickerplant"]; sendtotickerplant[tickerplanttype;pcaptab;table[cols table]] } sendtotickerplant:{[tptype;t;x] if[not count .servers.gethandlebytype[tptype;`any]; .lg.e[`alerter;"no connection to tickerplant, exiting sendtotickerplant"]; :()]; .lg.o[`alerter;"connection found, sending ",string[t]," to tickerplant"]; .servers.gethandlebytype[tptype;`any](`.u.upd;t;x) } //-finds all matches to files in the csv and adds them to the already processed table skipall:{matches:raze find'[filealertercsv.path;filealertercsv.match]; .lg.o[`alerter;"found ",(string count matches)," files, but ignoring them"]; complete 
removeprocessed[matches]} //-runs the function on the file action:{[function;file] $[`nothere~@[value;function;`nothere]; {.lg.e[`alerter;"function ", (string x)," has not been defined"]}'[function]; .[{.lg.o[`alerter;"running function ",(string x)," on ",y];((value x)[getpath[y];getfile[y]]);:1b}; (function;file); {.lg.e[`alerter;"failed to execute ", (string x)," on ",y,": ", z]; ();:0b}[function;file]]]} //-adds the processed file, along with md5 hash and file size to the already processed table and saves it to disk complete:{[TAB] TAB:select filename, md5hash, filesize from TAB; if[count TAB; .lg.o[`alerter;"adding ",(" " sv TAB`filename)," to alreadyprocessed table"]; // write it to disk .lg.o[`alerter;"saving alreadyprocessed table to disk"]; .[insert;(hsym`$alreadyprocessed;TAB);{.lg.e[`alerter;"failed to save alreadyprocessed table to disk: ",x]}]]; } //-check files against alreadyprocessed, remove those which have been processed (called in getunprocessed) removeprocessed:{[files] x:chktable[files]; y:select from (get hsym`$alreadyprocessed) where filesize in (exec filesize from x); $[usemd5;x except y;x where not (select filename,filesize from x) in select filename,filesize from y]} //-discard processed files: if newonly is False match only on filename getunprocessed:{[matches;newonly] $[newonly;chktable[matches except exec filename from get hsym`$alreadyprocessed];removeprocessed[matches]]} //-Some utility functions getsize:{hcount hsym `$x} gethash:{[file] $[os=`lin; md5hash:@[{first " " vs raze system "md5sum ",x," 2>/dev/null"};file;{.lg.e[`alerter;"could not compute md5 on ",x,": ",y];""}[file]]; ""]} getfile:{[filestring] $[os=`lin;last "/" vs filestring;last "\\" vs filestring]} getpath:{[filestring] (neg count getfile[filestring]) _filestring} //-Create table of filename,md5hash,filesize (only compute md5hash if usemd5 is True) chktable:{[files] table:([]filename:files;md5hash:$[usemd5;gethash'[files];(count files)#enlist ""];filesize:getsize'[files])} //-The main function that brings everything together processfiles:{[DICT] /-find all matches to the file search string matches:find[.rmvr.removeenvvar[DICT[`path]];DICT[`match]]; toprocess:getunprocessed[matches;DICT[`newonly]]; files:exec filename from toprocess; /-If there are files to process $[0<count files; [{.lg.o[`alerter;"found file ", x]}'[files]; /-perform the function on the file pf:action/:[DICT[`function];files];]; .lg.o[`alerter;"no new files found"]]; t:update function:(count toprocess)#DICT[`function],funcpassed:pf,moveto:(count toprocess)#enlist .rmvr.removeenvvar[DICT[`movetodirectory]] from toprocess; t}
// set up the usage information .proc.extrausage:"Log Replay:\n This process is used to replay tickerplant log files. There are multiple options which can be set either in the config files or via the standard command line switches e.g. -.replay.firstmessage 20 \n It can be used to replay full files and partial files, either in chunks or all at once. Specific tables can be selected. It can either overwrite existing tables, or append to them. It can create empty tables to start with. Different tables can be sorted and started differently. Tables can be manipulated when saved. A postreplay hook allows extra actions to be taken once the tables are saved down. \n [-.replay.schemafile x]\t\t\tThe schema file to load. Must not be null [-.replay.hdbdir x]\t\t\t\tThe hdb directory to write data to. Must not be null [-.replay.tplogfile x]\t\t\t\tThe tickerplant log file to replay. Either this or tplogdir must be set [-.replay.tplogdir x]\t\t\t\tA directory containing tickerplant log files to replay. All the files in the directory will be replayed. [-.replay.tablelist x]\t\t\t\tThe list of tables to replay. `all for all tables [-.replay.firstmessage n]\t\t\tThe first message number to replay. Default is 0 [-.replay.lastmessage n]\t\t\tThe last message number to replay. Default is 0W [-.replay.messagechunks n]\t\t\tThe size of message chunks to replay. If set to a negative number, the replay progress will be tracked but tables will not be saved until the end. Default is 0W [-.replay.partitiontype [date|month|year]] \tMethod used to partition the database - can be date, month or year. Default is date [-.replay.sortafterreplay [0|1]]\t\tSort the data and apply attributes on disk after the replay. Default is 1 [-.replay.emptytables [0|1]]\t\t\tCreate empty versions of the tables in the partitions when the replay starts. This will effectively delete any data which is already there. Default is 1 [-.replay.basicmode [0|1]]\t\t\tDo a basic replay, which reads the table into memory then saves down with .Q.hdpf. Is probably faster for basic replays (in-memory sort rather than on-disk). Default is 0 [-.replay.exitwhencomplete [0|1]]\t\tProcess exits when complete. Default is 1 [-.replay.checklogfiles [0|1]\t\t\tCheck log files for corruption, if corrupt then write a good log and replay this. Default is 0 [-.replay.partandmerge [0|1]\t\t\tDo a replay where the data is partitioned to a specified temp directory and then merged on disk. Default is 0 [-.replay.compression x]\t\t\tSet the compression settings for .z.zd. Default is empty list (no compression) [-.replay.tempdir x]\t\t\tThe directory to save data to before moving it to the hdb. Default is the same as the hdb [-.replay.autoreplay [0|1]\t\tStarts replay of logs at end of script or defers start of log replay. Helpful if loading via a wrapper [-.replay.clean [0|1]\t\t Defines if the replay should zap any existing folder at the start of replay \n There are some other functions/variables which can be modified to change the behaviour of the replay, but shouldn't be done from the config file Instead, load the script in a wrapper script which sets up the definition \n savedownmanipulation\t\ta dictionary of tablename!function which can be used to manipulate a table before it is saved. Default is empty upd[tablename;data]\t\tthe function used to replay data into the tables. Default is insert postreplay[d;p]\t\t\tFunction invoked when each logfile is completely replayed. 
Default is set to nothing \n The behaviour upon encountering errors can be modified using the standard flags. With no flags set, the process will exit when it hits an error. To trap an error and carry on, use the -trap flag To stop at error and not exit, use the -stop flag " // check for a usage flag if[`.replay.usage in key .proc.params; -1 .proc.getusage[]; exit 0]; // Check if some variables are null // some must be set .err.exitifnull each `.replay.schemafile`.replay.hdbdir, $[all null (tplogdir;tplogfile); `.replay.tplogfile; ()]; if[basicmode and (messagechunks within (0;-1 + 0W)); .err.ex[`replayinit; "if using basic mode, messagechunks must not be used (it should be set to 0W). basicmode will use .Q.hdpf to overwrite tables at the end of the replay";1]]; if[not partitiontype in `date`month`year; .err.ex[`replayinit;"partitiontype must be one of `date`month`year";1]]; if[messagechunks=0;.err.ex[`replayinit;"messagechunks value cannot be 0";2]]; if[segmentedmode and ((0<>firstmessage) or 0W<>lastmessage);.err.ex[`replayinit;"firstmessage must be 0 and lastmessage must be 0W while in segmented mode"];1] trackonly:messagechunks < 0 if[trackonly;.lg.o[`replayinit;"messagechunks value is negative - log replay progress will be tracked"]]; messagechunks:abs messagechunks; if[partandmerge and hdbdir = tempdir;.err.ex[`replayinit;"if using partandmerge replay, tempdir must be set to a different directory than the hdb";1]]; if[partandmerge and sortafterreplay;(sortafterreplay:0b; .lg.o[`replayinit;"Setting sortafterreplay to 0b"])]; // load the schema \d . .lg.o[`replayinit;"loading schema file ",string .replay.schemafile] @[system;"l ",string .replay.schemafile;{.err.ex[`replayinit;"failed to load replay file ",(string x)," - ",y;2]}[.replay.schemafile]] \d .replay .lg.o[`replayinit;"hdb directory is set to ",string hdbdir:hsym hdbdir]; .lg.o[`replayinit;"tempdir directory is set to ",string tempdir:hsym tempdir]; // the path to the table to save pathtotable:{[h;p;t] `$(string .Q.par[h;partitiontype$p;t]),"/"} // create empty tables - we need to make sure we only create them once emptytabs:`symbol$() createemptytable:{[h;p;t;td] $[partandmerge;dest:td;dest:h]; if[(not (path:pathtotable[dest;p;t]) in .replay.emptytabs) and .replay.emptytables; .lg.o[`replay;"creating empty table ",(string t)," at ",string path]; .replay.emptytabs,:path; savetabdatatrapped[h;p;t;0#value t;0b;td]]} savetabdata:{[h;p;t;data;UPSERT;td] $[partandmerge;path:pathtotable[td;p;t];path:pathtotable[h;p;t]]; if[not partandmerge;.lg.o[`replay;"saving table ",(string t)," to ",string path]]; .replay.pathlist[t],:path; $[partandmerge;savetablesbypart[td;p;t;h];$[UPSERT;upsert;set] . 
(path;.Q.en[h;0!.save.manipulate[t;data]])] } savetabdatatrapped:{[h;p;t;data;UPSERT;td] .[savetabdata;(h;p;t;data;UPSERT;td);{.lg.e[`replay;"failed to save table : ",x]}]} // this function should be invoked for saving tables savetab:{[td;h;p;t] if[not partandmerge;createemptytable[h;p;t;td]]; if[count value t; .lg.o[`replay;"saving ",(string t)," which has row count ",string count value t]; savetabdatatrapped[h;p;t;value t;1b;td]; delete from t; if[gc;.gc.run[]]]} // function to apply the sorting and attributes at the end of the replay // input is a dictionary of tablename!(list of paths) // should be the same as .replay.pathlist applysortandattr:{[pathlist] // convert pathlist dictionary into a keys and values then transpose before passing into .sort.sorttab .sort.sorttab each flip (key;value) @\: distinct each pathlist }; // Given a list of table names, return the list in order according to the table counts // this is used at save down time as it should minimise memory usage to save the smaller tables first, and then garbage collect tabsincountorder:{x iasc count each value each x} // check if the count has been exceeded, and save down if it has currentcount:0 totalcount:0 checkcount:{[h;p;counter;td] currentcount+::counter; if[.replay.currentcount >= .replay.messagechunks; $[.replay.trackonly; [.replay.totalcount +: .replay.currentcount; .lg.o[`replay;"replayed a chunk of ",(string .replay.messagechunks)," messages. Total message count so far is ",string .replay.totalcount]]; [.lg.o[`replay;"number of messages to replay at once (",(string .replay.messagechunks),") has been exceeded. Saving down"]; savetab[td;h;p] each tabsincountorder[.replay.tablestoreplay]; .lg.o[`replay;"save complete- replaying next chunk of data"]]]; .replay.currentcount:0]} // function used to finish off the replay finishreplay:{[h;p;td] // save down any tables which haven't been saved savetab[td;h;p] each tabsincountorder[.replay.tablestoreplay]; // invoke any user defined post replay function .save.postreplay[h;p]; } // takes in log file directories made with segmented tickerplant expandstplogs:{[logdirectories] // always a list {` sv'raze x,/:'key each x}$[`~tplogdir;{enlist first x};]hsym logdirectories }; replaylog:{[logfile] // set the upd function to be the initialupd function .replay.msgcount:.replay.currentcount:.replay.totalcount:0; // check if logfile is corrupt if[checklogfiles; logfile: .tplog.check[logfile;lastmessage]]; $[firstmessage>0; [.lg.o[`replay;"skipping first ",(string firstmessage)," messages"]; @[`.;`upd;:;.replay.initialupd]]; @[`.;`upd;:;.replay.realupd]]; .replay.tablecounts:.replay.errorcounts:()!(); // If not running in segmented mode, reset replay date and clean HDB directory on each loop .replay.zipped:$[logfile like "*.gz";1b;0b]; if[not .replay.segmentedmode; // Pull out date from TP log file name - *YYYY.MM.DD (+ .gz if zipped) .replay.replaydate:"D"$$[.replay.zipped;-3_-13#;-10#] string logfile; if[.replay.clean;.replay.cleanhdb .replay.replaydate] ]; if[lastmessage<firstmessage; .lg.o[`replay;"lastmessage (",(string lastmessage),") is less than firstmessage (",(string firstmessage),"). Not replaying log file"]; :()]; .lg.o[`replay;"replaying data from logfile ",(string logfile)," from message ",(string firstmessage)," to ",(string lastmessage),". 
Message indices are from 0 and inclusive - so both the first and last message will be replayed"]; // when we do the replay, need to move the indexing, otherwise we won't replay the last message correctly .replay.replayinner[lastmessage+lastmessage<0W;logfile]; .lg.o[`replay;"replayed data into tables with the following counts: ","; " sv {" = " sv string x}@'flip(key .replay.tablecounts;value .replay.tablecounts)]; if[count .replay.errorcounts; .lg.e[`replay;"errors were hit when replaying the following tables: ","; " sv {" = " sv string x}@'flip(key .replay.errorcounts;value .replay.errorcounts)]]; // set compression level if[3=count compression; .lg.o[`compression;"setting compression level to (",(";" sv string compression),")"]; .dotz.set[`.z.zd;compression]; .lg.o[`compression;".z.zd has been set to (",(";" sv string .z.zd),")"]]; $[basicmode; [.lg.o[`replay;"basicmode set to true, saving down tables with .Q.hdpf"]; .Q.hdpf[`::;hdbdir;partitiontype$.replay.replaydate;`sym]]; // if not in basic mode, then we need to finish off the replay finishreplay[hdbdir;.replay.replaydate;tempdir]]; if[gc;.gc.run[]]; } // If replay date in HDB, delete tables/partition from the HDB so no data is duplicated cleanhdb:{[dt] if[not (`$sd:string dt) in key .replay.hdbdir;.lg.o[`cleanhdb;"Date ",sd," not in HDB."];:()]; delpaths:.os.pth each .Q.par[.replay.hdbdir;dt;] each $[`all~first .replay.tablelist;enlist `;.replay.tablestoreplay]; {.lg.o[`cleanhdb;"Deleting ",x," from HDB."];.os.deldir x} each delpaths; }; // Replay log file, if file is zipped and the kdb+ version is at least 4.0 then replay through named pipe replayinner:{[msgnum;logfile] if[not .replay.zipped;-11!(msgnum;logfile);:()]; if[not .z.o like "l*";.lg.e[`replaylog;m:"Zipped log files can only be directly replayed on Linux systems"];'m]; if[.z.K<4.0;.lg.e[`replaylog;m:"Zipped log files can only be directly replayed on kdb+ 4.0 or higher"];'m]; .lg.o[`replay;"Replaying logfile ",(f:1_string logfile)," over named pipe"]; -11!(msgnum;hsym `$fifo:.replay.readintofifo f); system "rm -f ",fifo; .replay.zipped:0b; };
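The usage notes above mention customizing the replay from a wrapper script. The following is a heavily hedged sketch only: the table name trade, the sort columns, and the assumption that the savedownmanipulation dictionary lives in the .replay namespace are illustrative, not confirmed TorQ specifics.
/ in a wrapper script, after the replay code has been loaded:
/ manipulate the trade table before it is saved (hypothetical table and columns)
.replay.savedownmanipulation[`trade]:{`sym`time xasc x}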
or¶ Greater of two values, logical OR. or is a multithreaded primitive. Greater over, scan¶ The keywords over and scan are covers for the accumulating iterators, Over and Scan. It is good style to use over and scan with unary and binary values. Just as with Over and Scan, over and scan share the same syntax and perform the same computation; but while scan returns the result of each evaluation, over returns only the last. See the Accumulators for a more detailed discussion.
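Before the detailed forms below, a small supplementary sketch of that contrast with a user-defined binary (not from the original page).
q){x,y} over 1 2 3    / only the final accumulation
1 2 3
q){x,y} scan 1 2 3    / every intermediate accumulation
1
1 2
1 2 3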
Converge¶

v1 over x    over[v1;x]    v1 scan x    scan[v1;x]
(vv)over x   over[vv;x]    (vv)scan x   scan[vv;x]

Where

- v1 is a unary applicable value
- vv is a variadic applicable value

applies the value progressively to x, then to v1[x] (or vv[x]), and so on, until the result matches (within comparison tolerance) either

- the previous result; or
- x.

q)n:("the ";("quick ";"brown ";("fox ";"jumps ";"over ");"the ");("lazy ";"dog."))
q)raze over n
"the quick brown fox jumps over the lazy dog."
q)(,/)over n
"the quick brown fox jumps over the lazy dog."
q){x*x} scan .01
0.01 0.0001 1e-08 1e-16 1e-32 1e-64 1e-128 1e-256 0

See the Accumulators for more detail, and for the related forms Do and While.

MapReduce, Fold¶

v2 over x    over[v2;x]
v2 scan x    scan[v2;x]

Where v2 is a binary applicable value, applies v2 progressively between successive items. scan[v2;] is a uniform function and over[v2;] is an aggregate function.

q)(+) scan 1 2 3 4 5
1 3 6 10 15
q)(*) over 1 2 3 4 5
120

See the Accumulators for a more detailed discussion.

Keywords¶

Q has keywords for common projections of scan and over. For example, sums is scan[+;] and prd is over[*;]. Efficiency and good q style prefers these keywords; i.e. prd rather than over[*;] or */.

keyword  equivalents
---------------------------------------
all      over[and;]  &/  Lesser Over
any      over[or;]   |/  Greater Over
max      over[|;]    |/  Greater Over
maxs     scan[|;]    |\  Greater Scan
min      over[&;]    &/  Lesser Over
mins     scan[&;]    &\  Lesser Scan
prd      over[*;]    */  Multiply Over
prds     scan[*;]    *\  Multiply Scan
raze     over[,;]    ,/  Join Over
sum      over[+;]    +/  Add Over
sums     scan[+;]    +\  Add Scan

Overloaded glyphs¶

Many non-alphabetic keyboard characters are overloaded. Operator overloads are resolved by rank, and sometimes by the type of argument/s.

@ at

\ backslash

d: data   n: non-negative integer atom   u: unary value   t: test value
v: value rank>1   x: atom or vector   y, z…: conformable atoms or lists

! bang

a: select specifications   b: group-by specifications   c: where-specifications
h: handle to a splayed or partitioned table   i: integer >0
noasv: symbol atom, the name of a symbol vector   sv: symbol vector
t: table   tk: keyed table   ts: simple table   x,y: same-length lists

: colon

:: colon colon

- dash

Syntax: immediately left of a number, indicates its negative.

q)neg[3]~-3
1b

Otherwise, Subtract.

. dot

In the Debugger, push the stack.

$ dollar

rank  example                              semantics
-----------------------------------------------------------------------
3     $[x>10;y;z]                          Cond: conditional evaluation
2     "h"$y, `short$y, 11h$y               Cast: cast datatype
2     "H"$y, -11h$y                        Tok: interpret string as data
2     x$y                                  Enumerate: enumerate y from x
2     10$"abc"                             Pad: pad string
2     (1 2 3f;4 5 6f)$(7 8f;9 10f;11 12f)  dot product, matrix multiply, mmu

# hash

? query

' quote

rank  syntax                          semantics
-------------------------------------------------------------------
1     (u')x, u'[x], x b'y, v'[x;y;…]  Each: iterate u, b or v itemwise
1     'msg                            Signal an error
1     int'[x;y;…]                     Case: successive items from lists
2     '[u;v]                          Compose u with v

u: unary value   int: int vector   b: binary value
msg: symbol or string   v: value of rank ≥1   x, y: data

': quote-colon

/ slash

rank  syntax           semantics
---------------------------------------------------------
n/a   /a comment       comment: ignore rest of line
1     (u/)y, u/[y]     Converge
1     n u/ y, u/[n;y]  Do
1     t u/ y, u/[t;y]  While
1     (v/)y, v/[y]     map-reduce: reduce a list or lists

u: unary value   t: test value   v: value rank ≥1
y: list   n: non-negative int atom

Syntax: a space followed by / begins a trailing comment. Everything to the right of / is ignored.

q)2+2 / we know this one
4

A / at the beginning of a line marks a comment line. The entire line is ignored.
q)/nothing in this line is evaluated

In a script, a line with a solitary / marks the beginning of a multiline comment. A multiline comment is terminated by a \ or the end of the script.

/
A script to add two numbers.
Version 2018.1.19
\
2+2
/ That's all folks.

_ underscore

rank  example    semantics
---------------------------
2     3_ til 10  Cut, Drop

Names can contain underscores. Best practice is to use a space to separate names and the Cut and Drop operators.

Many of the operators tabulated above have unary forms in k.

Exposed infrastructure

$ Pad¶

x$y    $[x;y]

Where

- x is a long
- y is a string

returns y padded to length x.

q)9$"foo"
"foo      "
q)-9$"foo"
"      foo"

Implicit iteration¶

Pad is string-atomic and applies to dictionaries and tables.

q)9$("The";("fox";("jumps";"over"));("the";"dog")) / string-atomic
"The      "
("fox      ";("jumps    ";"over     "))
("the      ";"dog      ")
q)-9$`a`b`c!("quick";"brown";"fox") / dictionary
a| "    quick"
b| "    brown"
c| "      fox"
q)-9$string([]a:`quick`brown`fox;b:`jumps`over`the) / table
a           b
-----------------------
"    quick" "    jumps"
"    brown" "     over"
"      fox" "      the"

With a short left argument $ is Cast.

q)9$("quick";"brown";"fox")
"quick    "
"brown    "
"fox      "
q)9h$("quick";"brown";"fox")
113 117 105 99 107f
98 114 111 119 110f
102 111 120f
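A further small Pad example: because padding is to a fixed length, it is convenient for building fixed-width text output, and a string longer than the target length is cut to that length.

q)10$string `til`enlist`raze   / left-aligned names, width 10
"til       "
"enlist    "
"raze      "
q)2$"quick"                    / longer strings are truncated
"qu"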
Socket sharding with kdb+ and Linux¶ When creating a new TCP socket on a Linux server several options can be set which affect the behavior of the newly-created socket. One of these options, SO_REUSEPORT , allows multiple sockets to bind to the same local IP address and port. Incoming connection requests to this port number are allocated by the Linux kernel to a listening process, allowing user queries to be spread across multiple processes while providing a single connection port to users and without requiring users to go through a load balancer. This feature is available as ‘socket sharding’ in the 3.5 release of kdb+. Socket sharding has several real-world applications including simple load balancers that can be adjusted dynamically to handle increased demand and rolling releases of updated processes in kdb+ systems. In this paper, we will investigate a number of scenarios where kdb+ processes are running with this option enabled. On Linux systems, the SO_REUSEPORT option was introduced in kernel version 3.9, so the commands in the examples below will fail if run on an older version of Linux. Enabling the SO_REUSEPORT socket option in kdb+¶ To enable socket sharding in kdb+, we use the reuse port parameter ‘rp’ along with the port number. This can be done in either of the following ways: Via the command line when starting a kdb+ process: $ q -p rp,5000 Or from within the q session: q)\p rp,5000 This will enable the SO_REUSEPORT option on port 5000 and we can define this on multiple q processes. Attempting to assign the same port number that is already in use without enabling the SO_REUSEPORT socket option would result in the following error: q)\p 5000 '5000: Address already in use In addition to this, the first process to open the port must use the rp option to allow future processes to also use this port. Releases: Changes in 3.5 Basic sharding implementation in kdb+¶ This example demonstrates how incoming connections are distributed when socket sharding is enabled. First, we started four processes listening on the same port number by enabling the SO_REUSEPORT socket option. q)//Set port number to 5000 q)\p rp,5000 A fifth q process, acting as a client, was started and multiple connections were opened to port 5000. q)h:() q) q)//Open 1000 connections to port 5000 q){h,:hopen `::5000}each til 1000 To inspect the assignment ordering of the connections, the difference between process IDs was checked across each handle. q)//Compare pid of consecutive connections q)differ{x".z.i"}each h 10111010110110100100... The presence of 0b s in the output above indicates that two connections were opened to the same server consecutively. However, the connections were distributed evenly across the processes running with socket sharding enabled: q)//Number of connections opened to each server q)count each group {x ".z.i "}each h 32219| 250 32217| 257 32213| 245 32215| 248 The connection distribution is roughly even between processes with splits of 25%, 25.7%, 24.5% and 24.8%. Adding more listeners on the fly¶ In this example, the client process opened a connection to a server process listening on a port with socket sharding enabled and made an asynchronous request for data. The asynchronous response from the server then closed the connection. It performed this operation on a timer set to run once per second. 
The client (connecting) process:

//Table to store a number of time values for analysis
messages:([]sendTime:();receiveTime:();timeTaken:())
//Timer function to record initial time, open handle to
//gateway process, send an asynchronous message to the
//gateway, then send an asynchronous callback back to the client
.z.ts:{
  st:.z.T;
  h:hopen 5000;
  (neg h)({(neg .z.w)({(x;.z.T;.z.T-x;hclose .z.w)}; x)}; st)
  }
//Set asynchronous message handler to update our table
//with time recordings
.z.ps:{0N!list:3#value x;`messages insert list}
//Start timer function to run every 1 second
\t 1000

On the listener process, the asynchronous message handler .z.ps was defined to sleep the process for 2 seconds before executing the request. This was done to ensure a minimum processing time of 2 seconds for each incoming request.

//Set port as 5000, with SO_REUSEPORT enabled
\p rp,5000
//Counter variable to see how many messages each listener received
cnt:0
//Sleep for 2 seconds and increment counter by 1
.z.ps:{system "sleep 2"; value x; cnt+:1;}

To begin, we started with one listener process receiving connections for a period of one minute. After one minute a second listener was started and, after a further minute, a third listener was brought up, with all three listening on the same port.

q)1#messages
sendTime     receiveTime  timeTaken
--------------------------------------
19:34:10.514 19:34:12.517 00:00:02.003

q)//Normalize data into 3 distinct 1-minute buckets by
q)//subtracting 10.514 from sendTime column
q)select `time$avg timeTaken by sendTime.minute from update sendTime-00:00:10.514 from messages
second  | timeTaken
--------| ------------
19:34:00| 00:00:03.909
19:35:00| 00:00:03.494
19:36:00| 00:00:02.628

| minute | number of listeners | average number of requests processed over 3 tests | weighted average response time over 3 tests |
|---|---|---|---|
| 1 | 1 | 31 | 3.909 |
| 2 | 2 | 42.33 | 3.356 |
| 3 | 3 | 55 | 2.607 |

Table 1: Requests processed and response time when listeners are added on the fly, 1 minute apart

When all incoming connections were sent to just one listener process, the response time was consistent because the connecting process must wait until the server has completed processing the previous request. As more listener processes were added, the response times were reduced and the number of requests processed increased. This was due to the Linux kernel assigning incoming connections to processes that were not blocked by the 2-second sleep at the time of the connection request.

Sending a large volume of requests¶

The previous example focused on sending queries that take a long time to process. In this example, we simulated sending a large volume of requests to one, two and three listeners, where each request is completed relatively quickly. The client in this example is the same client with one difference – the timer function runs every 100 ms instead of every second.

//Start timer function to run every 100 milliseconds
\t 100

The listener code was also altered, to sleep for 0.2 seconds instead of 2 seconds.

//Set port as 5000, with SO_REUSEPORT enabled
\p rp,5000
//Message handler to send process to sleep for 200
//milliseconds
.z.ps:{system "sleep 0.2"; value x}

The results here also closely followed those in the previous section, with a significant improvement seen when sending queries to multiple listeners.
Figure 1: Average response time in one-second windows when routing to one, two and three listeners

Due to a kdb+ process being single-threaded, messages sent to a single listener process have a consistent average response time. The high variance in response time for 2 and 3 listeners (Figure 1) is due to the fact that if multiple connections are opened to the same listener consecutively, the processing time will be longer than if successive connections are allocated to different listeners.

Figure 2: Number of requests processed when routing to one, two and three listeners

Despite the high variance in Figure 1 for two and three listeners, we see the overall number of requests processed is higher, compared to one listener (Figure 2). The overall response time was also much improved, as can be seen in Figure 3 below.

Figure 3: Response time when routing to one, two and three listeners

Routing connections to multiple listeners when one is busy¶

In this example, we started two listener processes on the same port number with the rp socket option. The time at which a listener receives a message and the message ID were stored in a table for analysis.

//Set port number to 5000
\p rp,5000
//Create table to store time and message counter
messages:([]time:();counter:())
//Define synchronous message handler to store time and
//message count in messages table
.z.pg:{`messages insert (.z.T;x)}

A third q process, acting as a client, made connection requests to the listening processes, issued a synchronous query and then closed the connection. Outgoing messages were assigned an ID based on a counter variable.

cnt:0
.z.ts:{cnt+:1; h:hopen 5000; h(cnt); hclose h}
\t 1000

During this simulation, we blocked one of the processes listening on port 5000 for 10 seconds.

q)//Block the process by sending it to sleep for 10 seconds
q)0N!.z.T; system "sleep 10"
12:15:40.723

Figure 4: Graphical representation of the timeline of requests from the client to server processes

We can see that after blocking listener 1, the next two connections were allocated to listener 2. As listener 2’s main thread was not blocked, these messages were processed immediately. The following connection attempt was routed to listener 1, which was blocked. Because that process was busy, the client hung until the connection was established. When the listener was no longer blocked, the connection was established and messages continued to be processed as normal.

Minimizing downtime with rolling updates¶

This example simulates a situation where, in order to upgrade a kdb+ process’ source code, the process must be restarted. This script will run a simulated HDB process that takes 30 seconds to load, running with socket sharding enabled.

\p rp,5000
stdout:{0N!(string .z.T)," : ",x}
.z.po:{stdout "Connection established from handle ",string x;}
.z.pc:{stdout "Connect lost to handle ",string x;}
stdout "Sleeping for 30 seconds to mimic HDB load"
system "sleep 30"
stdout "HDB load complete"

This script runs a process which connects to the above HDB.

stdout:{0N!(string .z.T)," : ",x}
.util.connect:{
  tms:2 xexp til 4;
  while[(not h:@[hopen;5000;0]) and count tms;
    stdout "Connection failed, waiting ", (string first tms), " seconds before retrying...";
    system "sleep ",string first tms;
    tms:1_tms;
  ];
  $[0=h;
    stdout "Connection failed after 4 attempts, exiting.";
    stdout "Connection established"];
  h
  }
.z.pc:{
  stdout "Connection lost to HDB, attempting to reconnect...";
  .util.connect[]
  }
h:.util.connect[]

One HDB process and one client were started.
The HDB process prints the following:

"04:04:16.628 : Sleeping for 30 seconds to mimic HDB load"
"04:04:46.632 : HDB load complete"
"04:04:46.632 : Connection established from handle 4"

The connecting process outputs the following:

"04:04:18.329 : Attempting to open connection"
"04:04:46.632 : Connection established"

As we can see, the connecting process had to wait 28 seconds to connect successfully, as it cannot establish a connection until the HDB load has completed. As a result of this, if the HDB requires a restart there will be a period of time where the service is unavailable. However, since the HDB script enables the rp socket option, we were able to start a second HDB on the same port while the first was running:

"04:05:05.975 : Sleeping for 30 seconds to mimic HDB load"
"04:05:35.978 : HDB load complete"
"04:05:56.521 : Connection established from handle 4"

Once both HDBs were running, the first was stopped. This disconnected the client, causing an attempt to reconnect to the same port.

"04:05:56.520 : Connection lost to HDB, attempting to reconnect..."
"04:05:56.520 : Attempting to open connection"
"04:05:56.521 : Connection established"

The reconnect attempt succeeded immediately, resulting in minimal downtime.

Conclusion¶

It is evident that there are both pros and cons to using socket sharding in kdb+. If the processes listening on the sharded socket are all running with the same performance, socket sharding can reduce the response time for requests to clients and the processing load on a listener process.

As the processes are all listening on the same port, clients do not need configuration changes in order for connection requests to be assigned to new servers. This could be beneficial for gateway requests. If the response time slows (due to performance-intensive queries), another gateway process can be started up. This would result in a portion of the connection requests being routed to the new gateway. However, the requests will not be assigned in a round-robin manner, so busy servers will affect some client connections.

Finally, we explored easy rolling upgrades of existing processes while keeping downtime to a minimum. New versions of a system’s kdb+ processes can be started while the current processes are still online. Once initialization is complete and the new processes have opened their ports, the processes from the older version are shut down; the disconnected clients’ reconnect logic then automatically re-establishes a connection to a new process.

Author¶

Marcus Clarke is a kdb+ consultant for KX and has worked at a number of leading financial institutions in both the UK and Asia. Currently based in New York, he is designing, developing and maintaining a kdb+ system for multiple asset classes at a top-tier investment bank.
// @kind function
// @category nlp
// @desc Parse URLs into dictionaries containing the
// constituent components
// @param url {string} The URL to decompose into its components
// @returns {dictionary} Contains information about the scheme, domain name
// and other URL information
parseURLs:{[url] urlKeys:`scheme`domainName`path`parameters`query`fragment; urlVals:parser.i.parseURLs url; urlKeys!urlVals }

// @kind function
// @category nlp
// @desc Create a new parser
// @param spacyModel {symbol} The spaCy model/language to use.
// This must already be installed.
// @param fieldNames {symbol[]} The fields the parser should return
// @returns {fn} A function to parse text
newParser:{[spacyModel;fieldNames] options:{distinct x,raze parser.i.depOpts x}/[fieldNames]; disabled:`ner`tagger`parser except options; model:parser.i.newSubParser[spacyModel;options;disabled]; tokenAttrs:parser.i.q2spacy key[parser.i.q2spacy]inter options; pyParser:parser.i.parseText[model;tokenAttrs;options;]; listfn:$[.pykx.loaded;.pykx.eval["lambda x:list(x)";<];{`$.p.list[x]`}];
  //! KXI-49361 is `$"-PRON-" still valid post en->en_core_web_sm update?
  stopWords:(listfn model`:Defaults.stop_words),`$"-PRON-"; parser.i.runParser[pyParser;fieldNames;options;stopWords] }

// Sentiment

// @kind function
// @category nlp
// @desc Calculate the sentiment of a sentence or short message,
// such as a tweet
// @param text {string} The text to score
// @returns {dictionary} The score split up into compound, positive, negative
// and neutral components
sentiment:{[text] valences:sent.i.lexicon tokens:lower rawTokens:sent.i.tokenize text; isUpperCase:(rawTokens=upper rawTokens)& rawTokens<>tokens; upperIndices:where isUpperCase & not all isUpperCase; valences[upperIndices]+:sent.i.ALLCAPS_INCR*signum valences upperIndices; valences:sent.i.applyBoosters[tokens;isUpperCase;valences]; valences:sent.i.negationCheck[tokens;valences]; valences:sent.i.butCheck[tokens;valences]; sent.i.scoreValence[0f^valences;text] }

// Comparing docs/terms

// @kind function
// @category nlp
// @desc Calculates the affinity between terms in two corpora using
// an algorithm from Rayson, Paul and Roger Garside.
// "Comparing corpora using frequency profiling."
// Proceedings of the workshop on Comparing Corpora.
Association for // Computational Linguistics, 2000 // @param parsedTab1 {table} A parsed document containing keywords and their // associated significance scores // @param parsedTab2 {table} A parsed document containing keywords and their // associated significance scores // @returns {dictionary[]} A dictionary of terms and their affinities for // parsedTab2 over parsedTab1 compareCorpora:{[parsedTab1;parsedTab2] if[not min count each (parsedTab1;parsedTab2);:((`$())!();(`$())!())]; termCountA:i.getTermCount parsedTab1; termCountB:i.getTermCount parsedTab2; totalWordCountA:sum termCountA; totalWordCountB:sum termCountB; // The expected termCount of each term in each corpus coef:(termCountA+termCountB)%(totalWordCountA+totalWordCountB); expectedA:totalWordCountA*coef; expectedB:totalWordCountB*coef; // Return the differences between the corpora dict1:desc termCountA*log termCountA%expectedA; dict2:desc termCountB*log termCountB%expectedB; (dict1;dict2) } // @kind function // @category nlp // @desc Calculates the cosine similarity of two documents // @param keywords1 {dictionary} Keywords and their significance scores // @param keywords2 {dictionary} Keywords and their significance scores // @returns {float} The cosine similarity of two documents compareDocs:{[keyword1;keyword2] keywords:distinct raze key each(keyword1;keyword2); cosineSimilarity .(keyword1;keyword2)@\:keywords } // @kind function // @category nlp // @desc A function for comparing the similarity of two vectors // @param keywords1 {dictionary} Keywords and their significance scores // @param keywords2 {dictionary} Keywords and their significance scores // @returns {float} Similarity score between -1f and 1f inclusive, 1 being // perfectly similar, -1 being perfectly dissimilar cosineSimilarity:{[keywords1;keywords2] sqrtSum1:sqrt sum keywords1*keywords1; sqrtSum2:sqrt sum keywords2*keywords2; sum[keywords1*keywords2]%(sqrtSum1)*sqrtSum2 } // @kind function // @category nlp // @desc Calculate how much each term contributes to the // cosine similarity // @param keywords1 {dictionary} Keywords and their significance scores // @param keywords2 {dictionary} Keywords and their significance scores // @returns {dictionary} A dictionary of how much of the similarity score each // token is responsible for explainSimilarity:{[keywords1;keywords2] alignedKeys:inter[key keywords1;key keywords2]; keywords1@:alignedKeys; keywords2@:alignedKeys; product:(keywords2%i.magnitude keywords1)*(keywords2%i.magnitude keywords2); desc alignedKeys!product%sum product } // @kind function // @category nlp // @desc Calculates the cosine similarity of a document and a centroid, // subtracting the document from the centroid. // This does the subtraction after aligning the keys so that terms not in // the centroid don't get subtracted. 
// This assumes that the centroid is the sum, not the avg, of the documents
// in the cluster
// @param centroid {dictionary} The sum of all the keywords significance scores
// @param keywords {dictionary} Keywords and their significance scores
// @returns {float} The cosine similarity of a document and centroid
compareDocToCentroid:{[centroid;keywords] keywords@:alignedKeys:distinct key[centroid],key keywords; vec:centroid[alignedKeys]-keywords; cosineSimilarity[keywords;vec] }

// @kind function
// @category nlp
// @desc Find the cosine similarity between one document and all the
// other documents of the corpus
// @param keywords {dictionary} Keywords and their significance scores
// @param idx {number} The index of the feature vector to compare to the rest
// of the corpus
// @returns {float[]} The document's significance to the rest of the corpus
compareDocToCorpus:{[keywords;idx] compareDocs[keywords idx]each(idx+1)_ keywords }

// @kind function
// @category nlp
// @desc Calculate the Jaro-Winkler distance of two strings,
// scored between 0 and 1
// @param str1 {string|string[]} A string of text
// @param str2 {string|string[]} A string of text
// @returns {float} The Jaro-Winkler distance of two strings, between 0 and 1
jaroWinkler:{[str1;str2] str1:lower str1; str2:lower str2; jaroScore:i.jaro[str1;str2]; jaroScore+$[0.7<jaroScore; (sum mins(4#str1)~'4#str2)*.1*1-jaroScore; 0 ] }

// Feature Vectors

// @kind function
// @category nlp
// @desc Find related terms and their significance to a word
// @param parsedTab {table} A parsed document containing keywords and their
// associated significance scores
// @param term {symbol} The tokens to find related terms for
// @returns {dictionary} The related tokens and their relevances
findRelatedTerms:{[parsedTab;term] term:lower term; stopWords:where each parsedTab`isStop; sent:raze parsedTab[`sentIndices]cut'@'[parsedTab[`tokens];stopWords;:;`]; sent@:asc distinct raze 0|-1 0 1+\:where term in/:sent;
  // The number of sentences the term co-occurs in
  coOccur:` _ count each group raze distinct each sent; idx:where each parsedTab[`tokens]in\:key coOccur;
  // Find how many sentences each word occurs in
  totOccur:idx@'group each parsedTab[`tokens]@'idx; sentInd:parsedTab[`sentIndices]bin'totOccur; totOccur:i.fastSum((count distinct@)each)each sentInd; coOccur%:totOccur term; totOccur%:sum count each parsedTab`sentIndices; results:(coOccur-totOccur)%sqrt totOccur*1-totOccur; desc except[where results>0;term]#results }

// @kind function
// @category nlp
// @desc Find tokens that contain the term where each consecutive word
// has an above-average co-occurrence with the term
// @param parsedTab {table} A parsed document containing keywords and their
// associated significance scores
// @param term {symbol} The term to extract phrases around
// @returns {dictionary} Phrases as the keys, and their relevance as the values
extractPhrases:{[parsedTab;term] term:lower term; tokens:parsedTab`tokens; related:findRelatedTerms[parsedTab]term;
  // This gets the top words that have an above average relevance to the
  // query term
  relevant:term,sublist[150]where 0<related;
  // Find all of the term's indices in the corpus
  runs:(i.findRuns where@)each tokens in\:relevant; tokenRuns:raze tokens@'runs; phrases:count each group tokenRuns where term in/:tokenRuns; desc(where phrases>1)#phrases }

// @kind function
// @category nlp
// @desc Given an input which is conceptually a single document,
// such as a book, this will give better results than TF-IDF.
// This algorithm is explained in the paper Carpena, P., et al.
// "Level statistics of words: Finding keywords in literary texts
// and symbolic sequences."
// Physical Review E 79.3 (2009): 035102.
// @param parsedTab {table} A parsed document containing keywords and their
// associated significance scores
// @returns {dictionary} Where the keys are keywords as symbols, and the values
// are their significance, as floats, with higher values being more
// significant
keywordsContinuous:{[parsedTab] text:raze parsedTab[`tokens]@'where each not parsedTab`isStop; groupTxt:group text; n:count each groupTxt;
  // Find the distinct words, ignoring stop words and those with 3 or fewer
  // occurrences, or that make up less than .002% of the corpus
  words:where n>=4|.00002*count text;
  // Find the distances between occurrences of the same word
  // and use this to generate a 'sigma value' for each word
  dist:deltas each words#groupTxt; n:words#n; sigma:(dev each dist)%(avg each dist)*sqrt 1-n%count text; stdSigma:1%sqrt[n]*1+2.8*n xexp -0.865; chevSigma:((2*n)-1)%2*n+1; desc(sigma-chevSigma)%stdSigma }

// @kind function
// @category nlp
// @desc Find the TF-IDF scores for all terms in all documents
// @param parsedTab {table} A parsed document containing keywords and their
// associated significance scores
// @returns {dictionary[]} For each document, a dictionary with the tokens as
// keys, and relevance as values
TFIDF:{[parsedTab] nums:parsedTab[`tokens]like\:"[0-9]*"; tokens:parsedTab[`tokens]@'where each not parsedTab[`isStop]|nums; words:distinct each tokens;
  // The term frequency of each token within the document
  TF:{x!{sum[x in y]%count x}[y]each x}'[words;tokens];
  // Calculate the inverse document frequency
  IDF:1+log count[tokens]%{sum{x in y}[y]each x}[tokens]each words; TF*IDF }

// Exploratory Analysis

// @kind function
// @category nlp
// @desc Find runs of tokens whose POS tags are in the set passed in
// @param tagType {symbol} `uniPOS or `pennPOS (Universal or Penn
// Part-of-Speech)
// @param tags {symbol|symbol[]} One or more POS tags
// @param parsedTab {table} A parsed document containing keywords and their
// associated significance scores
// @returns {list} Two item list containing
// 1. The text of the run as a symbol vector
// 2. The index associated with the first token
findPOSRuns:{[tagType;tags;parsedTab] matchingTag:parsedTab[tagType]in tags; start:where 1=deltas matchingTag; lengths:sum each start cut matchingTag; idx:start+til each lengths; runs:`$" "sv/:string each parsedTab[`tokens]start+til each lengths; flip(runs;idx) }

// @kind function
// @category nlp
// @desc Determine the probability of one word following another
// in a sequence of words
// @param parsedTab {table} A parsed document containing keywords and their
// associated significance scores
// @returns {dictionary} The probability that the secondary word in the
// sequence follows the primary word.
biGram:{[parsedTab] nums:parsedTab[`tokens]like\:"[0-9]*"; tokens:raze parsedTab[`tokens]@'where each not parsedTab[`isStop]|nums; occurance:(distinct tokens)!{count where y=x}[tokens]each distinct tokens; raze i.biGram[tokens;occurance]''[tokens;next tokens] }
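A small usage sketch of the comparison functions above (assuming these definitions are loaded under the .nlp namespace, as in the KX nlp library): cosineSimilarity should return 1f, up to floating-point rounding, for keyword dictionaries that are scalar multiples of each other.

q)k1:`quick`brown`fox!0.5 0.2 0.1
q)k2:2*k1                      / same direction, double the magnitude
q).nlp.cosineSimilarity[k1;k2] / expect 1f
1f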
C API Reference¶

Overview¶

K object¶

The C API provides access to the fundamental data structure of kdb+, the K object, and methods of manipulating it. The K object is a pointer to a k0 struct, a tagged union, and most of the API manipulates this pointer. It is defined as

typedef struct k0 *K;

More detailed information can be found in the C API header file k.h.

The C API defines some types to improve uniformity of the API.

type  definition
-----------------
G     unsigned char
H     16-bit int
I     32-bit int
J     64-bit int
E     32-bit float
F     64-bit double
C     char
S     char*
V     void
U     16-byte array

Accessing members of the K object¶

K object properties for object x¶

The members which are common to all variant types are t, u, and r. The field n is common to all variant types which have a length. These may be dereferenced as usual in the C language:

x->t   type of K object. (signed char)
x->u   attribute. 0 means no attributes. (C)
x->r   reference count. Modify only via r1(x), r0(x). (I)
x->n   number of elements in a list. (J)

Atom accessors for object x¶

The fields of the variant types which represent an atom (sometimes called a scalar) are:

kdb+ type   accessor    derived types
---------------------------------------------------------
byte        x->g (G)    boolean, char
short       x->h (H)
int         x->i (I)    month, date, minute, second, time
long        x->j (J)    timestamp, timespan
real        x->e (E)
float       x->f (F)    datetime
symbol      x->s (S)    error
table       x->k (K)

List accessors¶

To simplify accessing the members of the list variants, the following helper macros are provided, to be used as, for example, kG(x).

q type name   interface list   accessor function
----------------------------------------------------------
mixed list    K*               kK(x)
boolean       G*               kG(x)
guid          U*               kU(x)
byte          G*               kG(x)
short         H*               kH(x)
int           I*               kI(x)
long          J*               kJ(x)
real          E*               kE(x)
float         F*               kF(x)
char          C*               kC(x)
symbol        S*               kS(x)
timestamp     J*               kJ(x)
month         I*               kI(x)
date          I*               kI(x)   days from 2000.01.01
datetime      F*               kF(x)   days from 2000.01.01
timespan      J*               kJ(x)   nanoseconds
minute        I*               kI(x)
second        I*               kI(x)
time          I*               kI(x)   milliseconds
dictionary                     kK(x)[0] for keys and kK(x)[1] for values

Reference counting¶

Q uses reference counting to manage object lifetimes. You are said to own a reference if you have created it with r1 or received it with an object returned to a call by a C API function. You are responsible for destroying the reference you own with r0 when you are finished with the object.

Ownership of a reference to an object passed as parameter can be taken from you by some of the C API functions. Other C API functions, and all functions from dynamically-linked modules, do not take ownership of references to their parameters; they have to create a new reference to any object they wish to retain or return. If ownership of a reference has been taken from you, you are no longer responsible for it and should not destroy it. To retain an owned reference to an object, create a new reference to it prior to the call.

Error handling¶

C API functions marked as requiring ee return 0 on error. You should either propagate the error further by returning 0 or handle it by calling ee and handling the resulting object. Note that you can only return an error object at a top level from a C function called from q.

Constants¶

Q has a rich type system. The type is indicated by a type number and many of these numbers have a constant defined around 0. Positive numbers are used for types which have a length, and the negative of these represent the scalar type. For example, KB is the type for a vector of booleans, and the negative, -KB is for an atom of type boolean.
Some types do not have a constant. For example, mixed list has type 0, and error has type -128.

constant  associated type  value
--------------------------------
KB        boolean           1
UU        guid              2
KG        byte              4
KH        short             5
KI        int               6
KJ        long              7
KE        real              8
KF        float             9
KC        char             10
KS        symbol           11
KP        timestamp        12
KM        month            13
KD        date             14
KZ        datetime         15
KN        timespan         16
KU        minute           17
KV        second           18
KT        time             19
XT        table            98
XD        dictionary       99

Some numeric constants are defined and have special meaning – indicating null or positive infinity for that type.

constant  value                                      description
-----------------------------------------------------------------
nh        0xFFFF8000                                 short null
wh        0x7FFF                                     short infinity
ni        0x80000000                                 int null
wi        0x7FFFFFFF                                 int infinity
nj        0x8000000000000000                         long null
wj        0x7FFFFFFFFFFFFFFF                         long infinity
nf        log(-1.0) on Windows or (0/0.0) on Linux   float null
wf        -log(0.0) on Windows or (1/0.0) on Linux   float infinity

Functions by category¶

Constructors¶

ka     atom
kb     boolean
kc     char
kd     date
ke     real
kf     float
kg     byte
kh     short
ki     int
kj     long
knk    list
knt    keyed table
kp     char array
kpn    char array
ks     symbol
kt     time
ktd    simple table
ktj    timestamp
ktj    timespan
ktn    vector
ku     guid
kz     datetime
vaknk  va_list version of knk
xD     dictionary
xT     table

Joins¶

ja    raw value to list
jk    K object to list
js    interned string to symbol vector
jv    K list to first of same type

When appending to a list, if the capacity of the list is insufficient to accommodate the new data, the list is reallocated with the contents of x updated. The new data is always appended, unless the reallocation causes an out-of-memory condition which is then fatal; these functions never return NULL. The reallocation of the list will cause the initial list’s reference count to be decremented. The target list passed to join functions should not have an attribute, and the caller should consider that modifications to that target object will be visible to all references to that object unless a reallocation occurred.

Other functions¶

b9     serialize
d9     deserialize
dj     date to integer
dl     dynamic link
dot    apply
ee     capture error
k      evaluate
krr    signal C error
m9     release memory
okx    verify IPC message
orr    signal system error
r0     decrement ref count
r1     increment ref count
sd0    remove callback
sd0x   remove callback
sd1    function on event loop
setm   toggle symbol lock
sn     intern chars from string
ss     intern null-terminated string
vak    va_list version of k
ymd    encode q date

Standalone applications¶

kclose  disconnect from host
khp     connect to host without credentials
khpu    connect to host without timeout
khpun   connect to host
khpunc  connect to host with capability

Unless otherwise specified, no function accepting K objects should be passed NULL.

Functions by name¶

In the following descriptions, functions are tagged as follows.
c.o is also available in c.o own takes ownership of a reference ee requires ee for error handling b9 (serialize)¶ K b9(I mode, K x) Tags: c.o ee Uses q IPC and mode capabilities level, where mode is: | value | effect | |---|---| | -1 | valid for V3.0+ for serializing/deserializing within the same process | | 0 | unenumerate, block serialization of timespan and timestamp (for working with versions prior to V2.6) | | 1 | retain enumerations, allow serialization of timespan and timestamp: Useful for passing data between threads | | 2 | unenumerate, allow serialization of timespan and timestamp | | 3 | unenumerate, compress, allow serialization of timespan and timestamp | | 4 | (reserved) | | 5 | allow 1TB msgs, but no single vector may exceed 2 billion items | | 6 | allow 1TB msgs, and individual vectors may exceed 2 billion items | On success, returns a byte-array K object with serialized representation. On error, NULL is returned; use ee to retrieve error string. d9 (deserialize)¶ K d9(K x) Tags: c.o ee The byte array x is not modified. On success, returns deserialized K object. On error, NULL is returned; use ee to retrieve the error string. dj (date to number)¶ I dj(I date) Tags: c.o Converts a q date to a yyyymmdd integer. dl (dynamic link)¶ K dl(V* f, J n) Function takes a C function that would take n K objects as arguments and returns a K object. Shared library only. Returns a q function. dot (apply)¶ K dot(K x, K y) Tags: ee The same as the q function Apply, i.e. .[x;y] . Shared library only. On success, returns a K object with the result of the . application. On error, NULL is returned. See ee for result-handling example. ee (error string)¶ K ee(K) Tags: c.o Capture (and reset) error string into usual error object, e.g. K x=ee(dot(a,b));if(xt==-128)printf("error %s\n", x->s); Since V3.5 2017.02.16, V3.4 2017.03.13 Handling errors If a function returns type K and has the option to return NULL, the user should wrap the call with ee , and check for the error result, also considering that the error string pointer (x->s ) may also be NULL. e.g. K x=ee(dot(a,b));if(xt==-128)printf("error %s\n", x->s?x->s:""); Otherwise the error status within the interpreter may still be set, resulting in the error being signalled incorrectly elsewhere in kdb+. Calling ee(…) has the side effect of clearing the interpreter’s error status for the NULL result path. ja (join value)¶ K ja(K* x, V*) Tags: c.o Appends a raw value to a list. x points to a K object, which may be reallocated during the function. The contents of x , i.e. *x , will be updated in case of reallocation. Returns a pointer to the (potentially reallocated) K object. jk (join K object)¶ K jk(K* x, K y) Tags: c.o own Appends another K object to a mixed list. Takes ownership of a reference to its argument y . Returns a pointer to the (potentially reallocated) K object. js (join string)¶ K js(K* x, S s) Tags: c.o Appends an interned string s to a symbol list. Returns a pointer to the (potentially reallocated) K object. jv (join K lists)¶ K jv(K* x, K y) Tags: c.o Append a K list y to K list x . Both lists must be of the same type. Returns a pointer to the (potentially reallocated) K object. k (evaluate)¶ K k(I handle, const S s, …) Tags: own Evaluates s . Optional parameters are either local (shared library only) or remote. The last argument must be NULL . Takes ownership of references to its arguments. Behavior depends on the value of handle . 
- handle>0, sends sync message to handle, to evaluate a string or function with parameters, and then blocks until a message of any type is received on handle. It can return NULL (indicating a network error) or a pointer to a K object. k(handle,(S)NULL) does not send a message, and blocks until a message of any type is received on handle. The handle should have been previously returned from this API's family of connect functions, e.g. khp. If that object has type -128, it indicates an error, accessible as a null-terminated string in r->s. When you have finished using this object, it should be freed by calling r0(r).
- handle<0, this is for async messaging, and the return value can be either 0 (network error) or non-zero (success). This result should not be passed to r0. The handle should have been previously returned from this API's family of connect functions, e.g. khp.
- handle==0 is valid only for a plugin, and executes against the kdb+ process in which it is loaded.

See more on message types. Note that a k() call will block until a message is completely sent/received (handle!=0) or evaluated (handle=0). This is true for both sync and async message types, although only the former will wait on a response from the peer socket. One should not confuse the qIPC async message type with async I/O.

Blocking sockets: As the C API does not perform any buffering, it does not support sending or reception of partial messages. Hence qIPC sockets must remain in blocking mode regardless of the message type used.

ka (create atom)¶ K ka(I t) Tags: c.o Creates an atom of type t. kb (create boolean)¶ K kb(I) Tags: c.o kc (create char)¶ K kc(I) Tags: c.o Null: kc(" ") kclose (disconnect)¶ V kclose(I) With the release of c.o with V2.6, c.o now tracks the connection type (pre V2.6, or V2.6+). Hence, to close the connection, you must call kclose (instead of close or closeSocket): this will clean up the connection tracking and close the socket. Standalone apps only. Available only from the c/e libs and not as a shared library loaded into kdb+. kd (create date)¶ K kd(I) Tags: c.o Null: kd(ni) ke (create real)¶ K ke(F) Tags: c.o Null: ke(nf) kf (create float)¶ K kf(F) Tags: c.o Null: kf(nf) kg (create byte)¶ K kg(I) Tags: c.o kh (create short)¶ K kh(I) Tags: c.o Null: kh(nh) khp (connect anonymously)¶ I khp(const S hostname, I port) Standalone apps only. Available only from the c/e libs and not as a shared library loaded into kdb+. khpu(hostname, port, "") khpu (connect, no timeout)¶ I khpu(const S hostname, I port, const S credentials) Standalone apps only. Available only from the c/e libs and not as a shared library loaded into kdb+. khpun(hostname, port, credentials, 0) khpun (connect)¶ I khpun(const S hostname, I port, const S credentials, I timeout) Establish a connection to hostname on port providing credentials (username:password format) with timeout. On success, returns positive file descriptor for established connection. On error, 0 or a negative value is returned.

0    Authentication error
-1   Connection error
-2   Timeout error

Standalone apps only. Available only from the c/e libs and not as a shared library loaded into kdb+. khpunc (connect with capability)¶ I khpunc(S hostname, I port, S credentials, I timeout, I capability) Standalone apps only. Available only from the c/e libs and not as a shared library loaded into kdb+.
capability is a bit field: 1 1 TB limit 2 use TLS Messages larger than 2GB During the initial handshake of a connection, each side’s capability is exchanged, and the common maximum is chosen for the connection. By setting the capability parameter for khpunc , the default message-size limit for this connection can be raised from 2GB to 1TB. e.g. int handle=khpunc("hostname",5000,"user:password",timeout,1); A TLS-enabled connection supporting upto 1TB messages can be achieved via bit-or of the TLS and 1TB bits, e.g. int handle=khpunc("hostname",5000,"user:password",timeout,1|2); A return value of -3 indicates the OpenSSL initialization failed. 0 Authentication error -1 Connection error -2 Timeout error -3 OpenSSL initialization failed Unix domain socket For khp , khpu , khpun , and khpunc a Unix domain socket may be requested via the IP address 0.0.0.0 , e.g. int handle=khpu("0.0.0.0",5000,"user:password"); ki (create int)¶ K ki(I) Tags: c.o Null: ki(ni) kj (create long)¶ K kj(J) Tags: c.o Null: kj(nj) knk (create list)¶ K knk(I n, …) Tags: c.o Create a mixed list. Takes ownership of references to arguments. knt (create keyed table)¶ K knt(J n, K x) Tags: c.o ee Create a table keyed by n first columns if number of columns exceeds n . Returns null if the argument x is not a table. kp (create string)¶ K kp(S x) Tags: c.o Create a char array from a string. kpn (create fixed-length string)¶ K kpn(S x, J n) Tags: c.o Create a char array from a string of length n . krr (signal C error)¶ K krr(const S) Tags: c.o kdb+ recognizes an error returned from a C function via the function’s return value being 0, combined with the value of a global error indicator that can be set by calling krr with a null-terminated string. As krr records only the passed pointer, you should ensure that the string remains valid after the return from your code into kdb+ – typically you should use static storage for the string. (Thread-local if you expect to amend the error string from multiple threads.) The strings "stop" , "abort" and "stack" are reserved values and krr must not be called with those. Do not call krr() and then return a valid pointer! For convenience, krr returns 0, so it can be used directly as K f(K x){ K r=someFn(); ... if(some error) return krr("an error message"); // preferred style ... return r; } or a style more prone to mismatch, decoupled as K f(K x){ I f=0; K r=someFn(); ... if(some error){ krr("an error message"); // set the message string f=1; } ... if(f) return 0; // combined with string set via krr(), this return value of 0 indicates an error else return r; } ks (create symbol)¶ K ks(S x) Tags: c.o Null: ks("") kt (create time)¶ K kt(I x) Tags: c.o Create a time from a number of milliseconds since midnight. Null: ki(ni) ktd (create simple table)¶ K ktd(K x) Tags: c.o ee own Create a simple table from a keyed table. Takes ownership of a reference to its argument x . ktj (create timestamp)¶ K ktj(-KP, x) Tags: c.o Create a timestamp from a number of nanos since 2000.01.01. Null: ktj(-KP, nj) ktj (create timespan)¶ K ktj(-KN, x) Tags: c.o Create a timespan from a number of nanos since the beginning of the interval: midnight in the case of .z.n . Null: ktj(-KN, nj) ktn (create vector)¶ K ktn(I type, J length) Tags: c.o ku (create guid)¶ K ku(U) Tags: c.o Null: U g={0};ku(g) kz (create datetime)¶ K kz(F) Tags: c.o Create a datetime from the number of days since 2000.01.01. The fractional part is the time. Null: kz(nf) m4 (stats)¶ K m4(I) Provides memory statistics. Standalone apps only. 
With parameter value 0, returns current memory usage for the current thread, as a list of 3 long integers:

0   number of bytes allocated
1   bytes available in heap
2   maximum heap size so far

With parameter value 1, returns symbol stats as a pair of longs:

0   number of internalized symbols (or null value if not main thread)
1   corresponding memory usage (or null value if not main thread)

m9 (release memory)¶ V m9(V) Release the memory allocated for the thread’s pool. Call m9() when the thread is about to complete, releasing the memory allocated for that thread’s pool. okx (verify IPC message)¶ I okx(K x) Tags: c.o Verify that the byte vector x is a valid IPC message. Decompressed data only. x is not modified. Returns 0 if not valid. orr (signal system error)¶ K orr(const S) Tags: c.o Similar to krr, this appends a system-error message to string S before passing it to krr. The system error message looks at errno/GetLastError and, if set, will format using strerror/FormatMessage. The user error string is copied to a static, thread-local buffer and, as such, is valid until the next call to orr from that thread. However, the total message size (including both user and system error) is limited to 255 characters and is truncated if it exceeds this limit. r0 (decrement refcount)¶ V r0(K) Tags: c.o Decrement an object’s reference count. If x->r is 0, x is unusable after the r0(x) call, and the memory pointed to by it may have been freed. Reference counting starts and ends with 0, not 1. r1 (increment refcount)¶ K r1(K) Tags: c.o Increment an object’s reference count. sd0 (remove callback)¶ V sd0(I d) Remove the callback on d and call kclose. Should only be called from main thread. Shared library only. sd0x (remove callback conditional)¶ V sd0x(I d, I f) Remove the callback on d and call kclose on d if f is 1. Should only be called from main thread. Shared library only. Since V3.0 2013.04.04. sd1 (set function on loop)¶ K sd1(I d, f) Put the function K f(I d){…} on the q main event loop given a socket d. Should only be called from main thread. If d is negative, the socket is switched to non-blocking. The function f should return NULL or a pointer to a K object. If the return value of f is a pointer to a K object, its reference count is decremented i.e. passed to r0. On success, sd1 returns a K object of type integer, containing d. On error, NULL is returned and d is closed. Since 4.1t 2023.09.15, sd1 no longer imposes a limit of 1023 on the value of the descriptor submitted. Shared library only. setm (toggle symbol lock)¶ I setm(I m) Set whether interning symbols uses a lock: m is either 0 or 1. Returns the previously set value. sn (intern chars)¶ S sn(S, J n) Tags: c.o Intern n chars from a string. Returns an interned string and should be used to store the string in a symbol vector. ss (intern string)¶ S ss(S) Tags: c.o Intern a null-terminated string. Returns an interned string and should be used to store the string in a symbol vector. sslInfo (SSL info)¶ K sslInfo(K x) A dictionary of settings similar to -26!x, or an error if SSL initialization failed.

extern I khpunc(S hostname,I port,S usernamepassword,I timeout,I capability);
int handle=khpunc("remote host",5000,"user:password",timeout,2);
extern K sslInfo(K x);
if(handle==-3){
  K x=ee(sslInfo((K)0));
  printf("Init error %s\n",xt==-128?x->s:"unknown");
  r0(x);
}

Returns null if there was an error initializing the OpenSSL lib.
vak , vaknk (va_list versions of k, knk)¶ K vak(I,const S,va_list) K vaknk(I,va_list) where va_list is as defined in stdarg.h , included by k.h These are va_list versions of the K k(I,const S,…) and K knk(I,…) functions, useful for writing variadic utility functions that can forward the K objects. ver (release date)¶ I ver() Returns an int as yyyymmdd . vi (vector at index)¶ K vi(K x,UJ j) Access elements of the types 77..97 inclusive (anymap and nested homogeneous vectors), akin to the macro usage kK(x)[j] for x of type 0. Increments the reference count of the object at x[j], and hence the result should be freed via r0(result) when it is no longer needed. If j is out of bounds, i.e. j>=xn, a null object for the first element's type is returned. Available within shared library only. vk (collapse homogeneous list)¶ K vk(K) Tags: own Tries to collapse a general list of homogeneous elements into a simple list, or conforming dictionaries into a table. Takes ownership of its argument. K f(){J i;K x=ktn(0,0);for(i=0;i<10;i++)jk(&x,g(i));return vk(x);} // g(i) could return different types Shared library only. xD (create dictionary)¶ K xD(K x, K y) Tags: c.o own Create a dictionary from two K objects. Takes ownership of references to the arguments. If y is null, will r0(x) and return null. xT (table from dictionary)¶ K xT(K x) Tags: c.o ee own Create a table from a dictionary object. Will r0(x) and return null if it is unable to form a valid table from x . Takes ownership of a reference to its argument x . ymd (numbers to date)¶ I ymd(year, month, day) Tags: c.o Encode a year/month/day as a q date, e.g. 0==ymd(2000, 1, 1)
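To connect this reference back to q: a function exported from a shared object built against this API is typically loaded into a q session with 2: (Dynamic Load). The library name mylib and function q_add below are hypothetical; q_add is assumed to be a rank-2 C function that returns the sum of its arguments.

q)add:`mylib 2:(`q_add;2)   / load 2-argument function q_add from mylib (hypothetical)
q)add[2;3]
5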
// @kind function // @category metric // @desc True/false positives and true/false negatives // @param pred {int[]|boolean[]|string[]} A vector of predicted labels // @param true {int[]|boolean[]|string[]} A vector of true labels // @param posClass {number|boolean} The positive class // @returns {dictionary} The count of true positives (tp), true negatives (tn), // false positives (fp) and false negatives (fn) confDict:{[pred;true;posClass] confKeys:`tn`fp`fn`tp; confVals:raze value confMatrix .(pred;true)=posClass; confKeys!confVals } // @kind function // @category metric // @desc Statistical information about classification result // @param pred {int[]|boolean[]|string[]} A vector of predicted labels // @param true {int[]|boolean[]|string[]} A vector of true labels // @returns {table} The accuracy, precision, f1 scores and the support // (number of occurrences) of each class. classReport:{[pred;true] trueClass:asc distinct true; dictCols:`precision`recall`f1_score`support; funcs:(precision;sensitivity;f1Score;{sum y=z}); dictVals:(funcs .\:(pred;true))@/:\:trueClass; dict:dictCols!dictVals; classTab:([]class:`$string[trueClass],enlist"avg/total"); classTab!flip[dict],(avg;avg;avg;sum)@'dict } // @kind function // @category metric // @desc Logarithmic loss // @param class {boolean[]} Class labels // @param prob {float[]} Representing the probability of belonging to // each class // @returns {float} Total logarithmic loss crossEntropy:logLoss:{[class;prob] // Formerly EPS:1e-15, new value from print(np.finfo(prob.dtype).eps) // Updated post scikit learn 1.5.1 EPS:2.220446049250313e-16; neg avg log EPS|prob@'class } // @kind function // @category metric // @desc Mean square error // @param pred {float[]} A vector of predicted labels // @param true {float[]} A vector of true labels // @returns {float} The mean squared error between predicted values and // the true values mse:{[pred;true] avg diff*diff:pred-true } // @kind function // @category metric // @desc Sum squared error // @param pred {float[]} A vector of predicted labels // @param true {float[]} A vector of true labels // @returns {float} The sum squared error between predicted values and // the true values sse:{[pred;true] sum diff*diff:pred-true } // @kind function // @category metric // @desc Root mean squared error // @param pred {float[]} A vector of predicted labels // @param true {float[]} A vector of true labels // @returns {float} The root mean squared error between predicted values // and the true values rmse:{[pred;true] sqrt mse[pred;true] } // @kind function // @category metric // @desc Root mean squared log error // @param pred {float[]} A vector of predicted labels // @param true {float[]} A vector of true labels // @returns {float} The root mean squared log error between predicted values // and the true values rmsle:{[pred;true] rmse . 
log(pred;true)+1 } // @kind function // @category metric // @desc Residual squared error // @param pred {float[]} A vector of predicted labels // @param true {float[]} A vector of true labels // @param n {long} The degrees of freedom of the residual // @returns {float} The residual squared error between predicted values // and the true values rse:{[pred;true;n] sqrt sse[pred;true]%n } // @kind function // @category metric // @desc Mean absolute error // @param pred {float[]} A vector of predicted labels // @param true {float[]} A vector of true labels // @returns {float} The mean absolute error between predicted values // and the true values mae:{[pred;true] avg abs pred-true } // @kind function // @category metric // @desc Mean absolute percentage error // @param pred {float[]} A vector of predicted labels // @param true {float[]} A vector of true labels // @returns {float} The mean absolute percentage error between predicted values // and the true values mape:{[pred;true] 100*avg abs 1-pred%true } // @kind function // @category metric // @desc Symmetric mean absolute percentage error // @param pred {float[]} A vector of predicted labels // @param true {float[]} A vector of true labels // @returns {float} The symmetric-mean absolute percentage between predicted // and true values smape:{[pred;true] sumAbsVals:abs[pred]+abs true; 100*avg abs[true-pred]%sumAbsVals } // @kind function // @category metric // @desc R2-score for regression model validation // @param pred {float[]} A vector of predicted labels // @param true {float[]} A vector of true labels // @returns {float} The R2-score between the true and predicted values. // Values close to 1 indicate good prediction, while negative values // indicate poor predictors of the system behavior r2Score:{[pred;true] 1-sse[true;pred]%sse[true]avg true } // @kind function // @category metric // @desc R2 adjusted score for regression model validation // @param pred {float[]} A vector of predicted labels // @param true {float[]} A vector of true labels // @param p {long} Number of independent regressors, i.e. the number of // variables in your model, excluding the constant // @returns {float} The R2 adjusted score between the true and predicted // values. Values close to 1 indicate good prediction, while negative values // indicate poor predictors of the system behavior r2AdjScore:{[pred;true;p] n:count pred; r2:r2Score[pred;true]; 1-(1-r2)*(n-1)%(n-p)-1 } // @kind function // @category metric // @desc One-sample t-test score // @param sample {number[]} A set of samples from a distribution // @param mu {float} The population mean // @returns {float} The one sample t-score for a distribution with less than // 30 samples. 
tScore:{[sample;mu] (avg[sample]-mu)%sdev[sample]%sqrt count sample }

// @kind function
// @category metric
// @desc T-test for independent samples with equal variances
// and equal sample size
// @param sample1 {number[]} A sample from a distribution
// @param sample2 {number[]} A sample from a distribution
// sample1&2 are independent with equal variance and sample size
// @returns {float} Their t-test score
tScoreEqual:{[sample1;sample2] count1:count sample1; count2:count sample2; absAvg:abs avg[sample1]-avg sample2; absAvg%sqrt(svar[sample1]%count1)+svar[sample2]%count2 }

// @kind function
// @category metric
// @desc Calculate the covariance of a matrix
// @param matrix {number[]} A sample from a distribution
// @returns {number[]} The covariance matrix
covMatrix:{[matrix] matrix:"f"$matrix; n:til count matrix; avgMat:avg each matrix; upperTri:matrix$/:'n _\:matrix; diag:not n=\:n; matrix:(n#'0.0),'upperTri%count first matrix; multiplyMat:matrix+flip diag*matrix; multiplyMat-avgMat*\:avgMat }

// @kind function
// @category metric
// @desc Calculate the correlation of a matrix or table
// @param data {table|number[]} A sample from a distribution
// @returns {dictionary|number[]} The correlation of the data
corrMatrix:{[data] dataTab:98=type data; matrix:$[dataTab;value flip@;]data; corrMat:i.corrMatrix matrix; $[dataTab;{x!x!/:y}cols data;]corrMat }

// @kind function
// @category metric
// @desc X- and Y-axis values for an ROC curve
// @param label {number[]|boolean[]} Label associated with a prediction
// @param prob {float[]} Probability that each prediction belongs to
// the positive class
// @returns {number[]} The coordinates of the true-positive and false-positive
// values associated with the ROC curve
roc:{[label;prob] if[not 1h=type label;label:label=max label]; tab:(update sums label from`prob xdesc([]label;prob)); probDict:exec 1+i-label,label from tab where prob<>next prob; 0^{0.,x%last x}each value probDict }

// @kind function
// @category metric
// @desc Area under an ROC curve
// @param label {number[]|boolean[]} Label associated with a prediction
// @param prob {float[]} Probability that each prediction belongs to
// the positive class
// @returns {float} The area under the ROC curve
roc[label;prob] } // @kind function // @category metric // @desc Sharpe ratio annualized based on daily predictions // @param pred {int[]|boolean[]|string[]} A vector/matrix of predicted labels // @param true {int[]|boolean[]|string[]} A vector/matrix of true labels // @returns {float} The Sharpe ratio of predictions made sharpe:{[pred;true] sqrt[252]*avg[pred*true]%dev pred*true } ================================================================================ FILE: ml_ml_util_mproc.q SIZE: 1,512 characters ================================================================================ // util/mproc.q - Utilities for multiprocessing // Copyright (c) 2021 Kx Systems Inc // // Distributes functions to worker processes \d .ml // @kind function // @category multiProcess // @desc If the multiProc key is not already loaded, set `.z.pd` and // N to 0 // @return {::} `.z.pd` and N are set to 0 if[not`multiProc in key `.ml;.z.pd:`u#0#0i;multiProc.N:0] // @kind function // @category multiProcess // @desc Define what happens when the connection is closed // @param func {fn} Value of `.z.pc` function // @param proc {int} Handle to the worker process // @return {::} Appropriate handles are closed .z.pc:{[func;proc] .z.pd:`u#.z.pd except proc; func proc }@[value;`.z.pc;{{}}] // @kind function // @category multiProcess // @desc Register the handle and pass any functions required to the // worker processes // @return {::} The handle is registered and function is passed to process multiProc.reg:{ .z.pd,:.z.w; neg[.z.w]@/:multiProc.cmds } // @kind function // @category multiProcess // @desc Distributes functions to worker processes // @param n {int} Number of processes open // @param func {string} Function to be passed to the process // @return {::} Each of the `n` worker processes evaluates `func` multiProc.init:{[n;func] if[not p:system"p";'"set port to multiprocess"]; neg[.z.pd]@\:/:func; multiProc.cmds,:func; do[0|n-multiProc.N;$[.pykx.loaded;.pykx.safeReimport;{x`}] {[x;y] system"q ",path,"/util/mprocw.q -pp ",string x} p]; multiProc.N|:n; } ================================================================================ FILE: ml_ml_util_mprocw.q SIZE: 513 characters ================================================================================ // util/mprocw.q - Multiprocessing // Copyright (c) 2021 Kx Systems Inc // // Multiprocessing based on command line input // Exit if `pp isn't passed as a command parameter if[not`pp in key .Q.opt .z.x;exit 1]; // Exit if no values were passed with pp if[not count .Q.opt[.z.x]`pp;exit 2]; // Exit if cannot open port if[not h:@[hopen;"J"$first .Q.opt[.z.x]`pp;0];exit 3]; // Exit if cannot load ml.q @[system;"l ml/ml.q";{exit 4}] // Register the handle and run appropriate functions neg[h]`.ml.multiProc.reg` ================================================================================ FILE: ml_ml_util_pickle.q SIZE: 848 characters ================================================================================ // util/pickle.q - Pickle file utilities // Copyright (c) 2021 Kx Systems Inc // // Save and load python objects to and from pickle files \d .ml // @kind function // @category pickle // @desc Generate python pickle dump module to save a python object pickleDump:.p.import[`pickle;`:dumps;<] // @kind function // @category pickle // @desc Generate python pickle loads module to load a python object pickleLoad:.p.import[`pickle;`:loads] // @kind function // @category pickle // @desc A wrapper function to load and save python // objects using pickle // 
@param module {boolean} Whether the pickle load module (1b) or // dump module (0b) is to be invoked // @param obj {<} Python object to be saved/loaded // @return {::;<} Object is saved/loaded pickleWrap:{[module;obj] $[module;{.ml.pickleLoad y}[;pickleDump obj];{y}[;obj]] } ================================================================================ FILE: ml_ml_util_preproc.q SIZE: 13,684 characters ================================================================================ // util/preproc.q - Preprocessing functions // Copyright (c) 2021 Kx Systems Inc // // Preprocessing of data prior to training \d .ml
Intel Optane Persistent Memory and kdb+¶ Intel® Optane™ persistent memory, herein called Intel Optane PMem, is a new hardware technology from Intel. Intel Optane PMem is based on a new silicon technology, 3D XPoint, which has low-latency (memory-like) attributes and is more durable than traditional NAND Flash. Intel Optane technology was first unveiled in 2017, in the form of Intel Optane SSD. By packaging 3D XPoint in a Solid State Drive (SSD), Intel created a product with speeds faster than the other SSD devices (largely based on NAND Flash) that preceded it. However, the scale of performance improvement of 3D XPoint brought another target into Intel's sights – main memory. The technology that dominates main memory, DRAM, is orders of magnitude faster to access, but smaller in capacity and more costly per byte than NAND Flash. Storage (whether SSD or spinning disk) is large and cheap, but orders-of-magnitude slower to access. This has led to a significant gap in the memory-storage hierarchy:
SRAM  CPU cache L1, L2, L3
DRAM  main memory
SSD   storage
HDD   archival
Intel Optane PMem introduces a new category that sits between memory and storage. In newly designed system boards, capable of supporting the Intel Cascade Lake CPU chipset or later, the memory sits in the same DDR4 DIMM slots (and memory bus) as DRAM. Persistent memory sits close to the CPU, and allows applications to directly address it as memory.
SRAM  CPU cache L1, L2, L3
DRAM  main memory
>> Optane Memory
>> Optane SSD
SSD   storage
HDD   archival
What are the advantages?¶ By combining storage and memory, Intel Optane PMem is at once high-performance, high-capacity, and cost-efficient. High-performance¶ Intel Optane technology is faster than existing storage media, as shown by Intel Optane SSDs. Intel Optane PMem offers further advantages, due to: - Direct CPU access to individual bytes, rather than blocks, at a time. - Minimal latency and maximal throughput via the memory bus, versus PCIe connections for SSDs. High-capacity¶ While DRAM currently caps at 256 GiB per module, the current generation of Intel Optane PMem (aka Apache Pass) is available in capacities of 128 GiB, 256 GiB, and 512 GiB. On Cascade Lake designs, six Intel Optane PMem modules can be used per socket, so users can address 10+ TB of Optane memory space on a single 4-socket system. Cost-efficient¶ The retail prices of Intel Optane PMem are intended to sit between the price per GiB for DRAM and NVMe Intel Optane storage. This can be one consideration for a kdb+ solution, especially if it uses a lot of active memory for streaming or real-time analytics, or if it needs extremely fast access to hot data in an HDB. This may make such a solution more affordable than just using DRAM. The increased memory size also provides an opportunity to consolidate workloads onto fewer nodes, leading to an even lower TCO through reduced hardware, software, datacenter and operations costs. How can kdb+ users benefit?¶ Some advantages that Intel Optane PMem provides to databases are: - On-disk databases will run faster using expanded Intel Optane PMem as storage, because some or all of the data does not need fetching from disk - In-memory databases will scale using Intel Optane PMem as a larger memory space A typical kdb+ application uses a combination of memory and storage to gather, persist and analyze enormous datasets. kdb+'s structured use of on-disk data allows efficient access to databases up to petabyte scale. 
The size of in-memory datasets, however, is primarily restricted by the size of the accessible memory space. Once datasets grow beyond the available memory capacity, users have three main options: - read/write data from storage - scale horizontally - scale vertically Read/write data from storage¶ kdb+ on-disk databases are partitioned, most commonly by date, with individual columns stored as binary objects within a file system. The result is a self-describing database on disk, mapped into memory by a kdb+ process and presented to users as if it resides in memory. The limiting factor with most queries to on-disk data is the latency and bandwidth penalty paid to jump from storage to DRAM-based memory. Scale horizontally¶ Adding more machines into the mix allows users to add more memory by scaling out. Processes across a cluster communicate via IPC and work on calculations as a single logical unit. The success of this approach depends largely on the inherent parallelization of the task at hand, which must be balanced against the increased complexity and costs of hardware. Scale vertically¶ Vertical scaling is the preferred method of scaling for most kdb+ applications, as users aim to keep as much hot data as possible close to the CPU. If everything fitted in memory, and we could afford it, we'd probably put it there. However, traditional memory (DRAM) is expensive and, even if funds were unlimited, is limited in capacity on a per-socket basis. Intel Optane PMem presents opportunities to address these issues, through a faster form of block storage or through significantly scaled-up memory capacity. How can kdb+ users deploy Intel Optane PMem?¶ Intel Optane PMem can be deployed in a number of ways, depending on the design of users' existing applications. There are three modes by which Intel Optane PMem can be used by kdb+: - Memory mode - App Direct Mode - Storage over App Direct Memory mode¶ In Memory mode, the DRAM acts as a cache for frequently-accessed data, while the Intel Optane PMem provides large memory capacity. When configured for Memory mode, the applications and operating system perceive a pool of volatile memory, no differently than on DRAM-only systems. In this mode, no specific persistent-memory programming is required in the applications. This dramatically increases the amount of memory seen by the kernel and hence available to kdb+. DRAM mixes its memory address space with Optane. For larger datasets, this increased memory space avoids the costs and complexity of horizontal scaling. Vertical-vs-horizontal scaling A common solution for overly-large in-memory datasets is to split the data across multiple machines. Data is usually split based on some inherent partition of the data (e.g. ticker symbol, sensor ID, region), to allow parallelization of calculations. Horizontal scaling allows users to add memory, but comes at a cost. Average performance (versus a single machine) is reduced due to the cost of IPC to move data between processes. There is also an increase in complexity as well as hardware, datacenter and operations costs. Intel Optane PMem, in Memory mode, creates a new opportunity to scale vertically. A significantly extended memory space enables calculations on a single machine, rather than a cluster. This removes or reduces the complexities and performance cost of IPC, allowing users to run simpler, more efficient analytics. 
App Direct Mode¶ kdb+ 4.0 contains support for App Direct Mode, in which the applications and operating system are explicitly aware there are two types of direct load/store memory in the platform, and can direct which type of data read or write is suitable for DRAM or Intel® Optane™ persistent memory. kdb+ sees Intel Optane PMem and DRAM as two separate pools, and gives users control over which entities reside in each. As a result, users can optimize their applications and schemas, keeping hot data in fast DRAM while still taking full advantage of the expanded memory capacity. For example: - Horizontal partitioning: keep 'recent' historical data in Intel Optane PMem, allowing multi-day queries in memory - Vertical partitioning: different tables/columns residing in DRAM/Intel Optane PMem Storage over App Direct¶ Storage over App Direct Mode is a specialized application of App Direct Mode, in which Intel Optane PMem behaves like a storage device accessible via a filesystem. As the filesystem is explicitly optimized for the underlying technology, it offers lower operational latencies. With extremely low read/write latencies, data is passed quickly between storage and memory, enabling faster queries. Intel Optane PMem is particularly fast at small, random reads, which makes it especially effective at speeding up kdb+ historical queries. Storage over App Direct Mode was recently benchmarked publicly, using the STAC-M3 industry-standard benchmarks. Tests ran on Lenovo ThinkSystem servers with Intel Optane PMem, 2nd Generation Intel® Xeon® processors, and kdb+ 3.6. Using a 2-socket server: - Intel Optane PMem was faster in 16 of 17 STAC-M3 Antuco benchmarks, relative to 3D NAND SSD - In 11 of the benchmarks, Intel Optane PMem was faster by more than 2× Using a 4-socket server: - Intel Optane PMem was faster in 8 of 9 STAC-M3 Kanaga benchmarks, relative to 3D NAND SSD - In 6 of the benchmarks, Intel Optane PMem was faster by more than 2× Compared to all publicly disclosed STAC-M3 Antuco results: - For 2-socket systems running kdb+, this solution set new records in 11 of 17 mean response-time benchmarks. - For 4-socket systems running kdb+, this solution set new records in 9 of 17 mean response-time benchmarks. Write speeds are also improved using Intel Optane PMem, allowing higher throughput when logging and writing database partitions. Summary¶ Intel Optane persistent memory is a game-changing technology from Intel, which allows kdb+ users to increase the performance and capacity of their applications. Through reduced memory costs and infrastructure consolidation, Intel Optane PMem should also reduce TCO. Earlier versions of kdb+ are already compatible with Intel Optane PMem through Memory mode (BIOS settings required) and Storage over App Direct Mode, providing improvements for both in-memory and on-disk datasets. From version 4.0 onwards, App Direct Mode gives users control over Intel Optane PMem, taking optimal advantage of the technology to suit their applications. KX has created a new technology and marketing partnership with Intel around Intel Optane PMem. By working closely with Intel's engineers, we ensure kdb+ takes full advantage of the features of Intel Optane PMem. We also have a team of engineers ready to help customers evaluate Intel Optane PMem. Through a POC, we can determine the optimal way to deploy the new technology to new and existing use cases. Please contact [email protected] to coordinate any such POC, or for any technical questions. 
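To make the App Direct workflow concrete, here is a minimal q sketch (an illustration only, not a recommended deployment; the mount point and table below are hypothetical, and kdb+ 4.0 or later is assumed):
/ start kdb+ with a DAX-enabled filesystem path as memory domain 1 (path is hypothetical)
/ q -m /mnt/pmem
q)quotes:([]sym:`a`b;price:1.1 2.2)   / hot data is allocated in DRAM (domain 0) as usual
q).m.quotes2019:quotes                / deep-copies the table into filesystem-backed memory (domain 1)
q)-120!.m.quotes2019                  / -120! reports the memory domain of an object
1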
Partitioning tables across directories¶ A partitioned table is a splayed table that is further decomposed by grouping records having common values along a column of special type. The allowable special column types have underlying integer values: date, month, year and long.
db
├── 2020.10.04
│   ├── quotes
│   │   ├── .d
│   │   ├── price
│   │   ├── sym
│   │   └── time
│   └── trades
│       ├── .d
│       ├── price
│       ├── sym
│       ├── time
│       └── vol
├── 2020.10.06
│   ├── quotes
..
└── sym
Partition data correctly: data for a particular date must reside in the partition for that date. Table counts¶ For partitioned databases, q caches the count for a table, and this count cannot be updated from within a reval expression or from a secondary thread. To avoid noupdate errors on queries on partitioned tables, put count table in your startup script. Use case¶ Partition a table if any of the following apply: - it has over 100 million records - it has a column that cannot fit in memory - it grows - many queries can be limited to a range of values of one column
Related: count, maps, peach, reval, select · Errors, Parallel execution · Q for Mortals §14.3 Partitioned Tables · Segmented databases
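As an illustration of the layout above (a sketch, not from the original page; the table contents and paths are hypothetical), a date partition can be written with .Q.dpft and the table count cached at startup:
/ write one date partition of a hypothetical trades table
trades:([]sym:`ibm`msft`ibm;time:3#.z.p;price:100.1 200.2 100.25;vol:100 200 300)
.Q.dpft[`:db;2020.10.04;`sym;`trades]   / creates db/2020.10.04/trades and enumerates symbols against db/sym
/ in the startup script of the process that serves the database
system"l db"                            / map the partitioned database
count trades                            / cache the table count, avoiding 'noupdate errors in later queries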
/ @returns (FolderPath) The current working directory using the OS specific command / @throws OsNotSupportedForCwdException If the operating system is not supported .require.i.getCwd:{ os:first string .z.o; if["w"~os; :hsym first `$trim system "echo %cd%"; ]; if[os in "lms"; :hsym first `$trim system "pwd"; ]; '"OsNotSupportedForCwdException (",string[.z.o],")"; }; .require.i.tree:{[root] rc:` sv/:root,/:key root; rc:rc where not any rc like\:/:.require.location.ignore; folders:.require.i.isFolder each rc; :raze (rc where not folders),.z.s each rc where folders; }; .require.i.isFolder:{[folder] :(not ()~fc) & not folder~fc:key folder; }; / Set the default interface implementations before the Interface library (if) is available / @see .require.interfaces .require.i.setDefaultInterfaces:{ interfaces:0!delete from .require.interfaces where null[lib] | null ifFunc; (set)./: flip interfaces`ifFunc`implFunc; }; / Initialise and defer interface management to the Interface library (if) / @see .require.interfaces / @see .if.setImplementationsFor .require.i.initInterfaceLibrary:{ .require.libNoInit`if; requiredIfs:0!`lib xgroup .require.interfaces; { .if.setImplementationsFor[x`lib; flip `lib _ x] } each requiredIfs; .require.lib`if; }; / Protected execution wrapper for 'require'. It will run unprotected if '-e 1' / '-e 2' is specified. Otherwise it returns the same / format as '.ns.protectedExecute', with backtrace provided if running kdb 3.5 or later .require.i.protectedExecute:{[func; args; errSym] $[`boolean$system"e"; :func args; 3.5 <= .z.K; :.Q.trp[func; args; {[errSym; errMsg; bt] `isError`backtrace`errorMsg!(errSym; .Q.sbt bt; errMsg) }[errSym;;]]; / else :@[func; args; {[errSym; errMsg] (errSym; errMsg) }[errSym;]] ]; }; / Supports slf4j-style parameterised logging for improved logging performance even without a logging library / @param (String|List) If a generic list is provided, assume parameterised and replace "{}" in the message (first element) / @returns (String) The message with "{}" replaced with the values supplied after the message .require.i.parameterisedLog:{[message] if[0h = type message; message:"" sv ("{}" vs first message),'(.Q.s1 each 1_ message),enlist ""; ]; :message; }; / Standard out logger .require.i.log: ('[-1; .require.i.parameterisedLog]); / Standard error logger .require.i.logE:('[-2; .require.i.parameterisedLog]); ================================================================================ FILE: kdb-common_src_slack.q SIZE: 1,842 characters ================================================================================ // Slack Notification Integration via WebHook // Copyright (c) 2019 Sport Trades Ltd // Documentation: https://github.com/BuaBook/kdb-common/wiki/slack.q .require.lib each `type`ns; // NOTE: Depending on your OS, you might need to update the location of the SSL certificates on your machine. // For example, on Bash on Windows, you'll need to run: export SSL_CA_CERT_PATH=/etc/ssl/certs // Or you can simply disable server verification with: export SSL_VERIFY_SERVER=NO / Sends a message to Slack / @param username (String) The username to show the message as coming from (this does not have to be a real Slack user). If none is specified, it will default to user@host / @param slackHookUrl (String) The Slack hook URL to use to send the message to / @param messageBody (String) The body of the message to send to Slack / @returns (Boolean) True if the message was sent successfully, false otherwise. 
All exceptions from the underlying system command are suppressed / @see .Q.hp .slack.notify:{[username; slackHookUrl; messageBody] username:.type.ensureString username; if[.util.isEmpty slackHookUrl; '"IllegalArgumentException"; ]; if[.util.isEmpty username; username:"@" sv string each (.z.u;.z.h); ]; slackWebhookDict:`text`username!(messageBody; username); .log.if.info "Sending Slack notification [ Username: ",username," ] [ Message: ",.Q.s1[slackWebhookDict]," ]"; .log.if.debug " [ Slack Hook URL: ",slackHookUrl," ]"; slackPostResult:.ns.protectedExecute[`.Q.hp; (slackHookUrl; "application/json"; .j.j slackWebhookDict)]; if[.ns.const.pExecFailure ~ first slackPostResult; .log.if.warn "Failed to send Slack notification [ Username: ",username," ] [ Slack Hook URL: ",slackHookUrl," ]. Error - ",last slackPostResult; :0b; ]; :1b; }; ================================================================================ FILE: kdb-common_src_so.q SIZE: 3,744 characters ================================================================================ // Shared Object Function Manager // Copyright (c) 2020 - 2021 Jaskirat Rajasansir // Documentation: https://github.com/BuaBook/kdb-common/wiki/so.q .require.lib each `type`file`time`ns`os`env; / The target namespace that all shared library functions will be loaded into .so.cfg.targetNs:`.so.libs; / Store of all loaded shared library functions .so.loaded:`kdbRef xkey flip `kdbRef`soName`soPath`soFunctionName`loadTime!"SSSSP"$\:(); .so.init:{ .log.if.info "Shared object namespace [ Namespace: ",string[.so.cfg.targetNs]," ]"; set[.so.cfg.targetNs; 1#.q]; }; / Attempts to find a shared object with the specified name within the paths specified by the source environment variable / NOTE: If multiple files match the specified name, only the first will be returned / @param soName (String|Symbol) The shared object name to search for, including suffix / @returns (FilePath) Path to the matching shared object file, or empty symbol if no matching file found / @see .os.sharedObjectEnvVar / @see .env.get .so.findSharedObject:{[soName] soPaths:raze .file.findFilePaths["*",.type.ensureString soName;] each .env.get .os.sharedObjectEnvVar; if[0 = count soPaths; .log.if.warn "No matching paths found for shared object [ Shared Object: ",.type.ensureString[soName]," ] [ Source Env Var: ",string[.os.sharedObjectEnvVar]," ]"; :`; ]; if[1 < count soPaths; .log.if.warn "Multiple matching files for shared objects. 
Returning first [ Shared Object: ",.type.ensureString[soName]," ] [ Matching: ",string[count soPaths]," ]"; ]; :first soPaths; }; / Loads the specified function from the specified shared object into the process / NOTE: If the function is already loaded into the process, it will not be reloaded / @param soName (Symbol|FilePath) The name of the shared object to find, or the specific shared object to load the function from / @param soFunctionName (Symbol) The function to reference in the shared object / @param soFunctionArgs (Long) The number of arguments the function in the shared object requires to execute / @returns (Symbol) Namespace reference to the shared object code loaded into the current process / @throws SharedObjectNotFoundException If the specified shared object is a name and a matching file could not be found / @see .so.cfg.targetNs / @see .so.findSharedObject .so.loadFunction:{[soName; soFunctionName; soFunctionArgs] if[not all .type.isSymbol each (soName; soFunctionName); '"InvalidArgumentException"; ]; if[not .type.isLong soFunctionArgs; '"InvalidArgumentException"; ]; soPath:soName; if[not .type.isFilePath soPath; soPath:.so.findSharedObject soName; ]; if[null soPath; '"SharedObjectNotFoundException"; ]; / Remove file suffix from shared object path for 2: soLoadPath:` sv @[` vs soPath; 1; first ` vs]; kdbFunctionName:` sv .so.cfg.targetNs,last[` vs soLoadPath],soFunctionName; if[.ns.isSet kdbFunctionName; .log.if.info "Shared object function already loaded [ Shared Object: ",string[soName]," ] [ Function: ",string[soFunctionName]," ]"; :kdbFunctionName; ]; .log.if.info "Loading function from shared object [ Shared Object: ",string[soName]," (",string[soPath],") ] [ Function: ",string[soFunctionName]," -> ",string[kdbFunctionName]," ] [ Args: ",string[soFunctionArgs]," ]"; set[kdbFunctionName;] soLoadPath 2: (soFunctionName; soFunctionArgs); .so.loaded[kdbFunctionName]:(soName; soPath; soFunctionName; .time.now[]); .log.if.info "Shared object function loaded OK [ Shared Object: ",string[soName]," ] [ Function: ",string[kdbFunctionName]," ]"; :kdbFunctionName; }; ================================================================================ FILE: kdb-common_src_terminal.q SIZE: 3,477 characters ================================================================================ // Terminal (Console) Management // Copyright (c) 2020 Sport Trades Ltd, 2020 - 2022 Jaskirat Rajasansir // Documentation: https://github.com/BuaBook/kdb-common/wiki/terminal.q .require.lib each `os`ns; / If enabled, on library initialisation '.z.pi' will be set to check the current terminal size against system "c" and / adjust it if they are not in sync .terminal.cfg.trackSizeChange:1b; / If enabled, and '.terminal.cfg.trackSizeChange' is enabled, any console size specified via '-c' on the command line will / be used as the smallest the terminal window will be resized to .terminal.cfg.setMinWithCommandLineArg:1b; / The default '.z.pi' handler to parse standard input. This seems to give an equivalent of the default handler when / '.z.pi' is not set. 
.terminal.cfg.defaultZPi:{ 1 .Q.s value x; }; .terminal.cmdLineConsoleSize:0N 0Ni; .terminal.init:{ if[.terminal.cfg.setMinWithCommandLineArg; .require.lib`cargs; args:.cargs.getWithInternal[]; if[`c in key args; .terminal.cmdLineConsoleSize:"I"$" " vs args`c; .log.info ("Minimum console size specified via '-c' command line argument [ Minimum Size: {} ]"; .terminal.cmdLineConsoleSize); ]; ]; if[.terminal.cfg.trackSizeChange & .terminal.isInteractive[]; .log.if.info "Enabling terminal size change tracking on interactive terminal"; .terminal.i.enableSizeTracking[]; ]; }; / Gets the current terminal size and changes the kdb console size if it has changed / @see .terminal.cmdLineConsoleSize / @see .os.getTerminalSize / @see system "c" .terminal.setToCurrentSize:{ termSize:.os.getTerminalSize[]; termSizeInt:"I"$" " vs termSize; oldTermSize:system "c"; / If either the columns or lines is smaller than the console size specified via command line, don't change if[any termSizeInt < .terminal.cmdLineConsoleSize; termSize:" " sv string .terminal.cmdLineConsoleSize; termSizeInt:.terminal.cmdLineConsoleSize; ]; / If the console size is the same, just return if[oldTermSize ~ termSizeInt; :(::); ]; .log.if.trace "Console size change [ Old: ",.Q.s1[oldTermSize]," ] [ New: ",termSize," ]"; system "c ",termSize; }; / @returns (Boolean) True if the current OS is supported and the current session is interactive, false otherwise / @see .os.isInteractiveSession .terminal.isInteractive:{ if[not `isInteractive in .os.availableCommands[]; :0b; ]; interactive:.os.isInteractiveSession[]; .log.if.info "Current kdb process terminal state [ Interactive: ",string[`no`yes interactive]," ]"; :interactive; }; / Sets or overrides the standard input event handler (.z.pi) to allow terminal size tracking / @see .terminal.cfg.defaultZPi / @see .terminal.i.trackHandler .terminal.i.enableSizeTracking:{ dotZdotPi:.terminal.cfg.defaultZPi; if[.ns.isSet `.z.pi; .log.if.debug "Overloading existing .z.pi handler set for terminal size tracking"; dotZdotPi:.z.pi; ]; set[`.z.pi;] .terminal.i.trackHandler[dotZdotPi;]; }; / The '.z.pi' event handler when terminal size tracking is enabled / @param zPiHandler (Function) The function to process the specified input / @param input (String) The standard input typed on the command line / @see .terminal.setToCurrentSize .terminal.i.trackHandler:{[zPiHandler; input] .terminal.setToCurrentSize[]; zPiHandler input; }; ================================================================================ FILE: kdb-common_src_time.q SIZE: 921 characters ================================================================================ // Time Accessor Functions // Copyright (c) 2017 Sport Trades Ltd // Documentation: https://github.com/BuaBook/kdb-common/wiki/time.q
Views¶ A view is a calculation that is re-evaluated only if the values of the underlying dependencies have changed since its last evaluation. Why use a view?¶ Views can help avoid expensive calculations by delaying propagation of change until a result is demanded. How is a view defined?¶ Views and their dependencies can be defined only in the default namespace. The syntax for the definition is q)viewname::[expression;expression;…]expression Terminating semicolon The result returned by a view is the result of the last expression in the list, just as in a lambda. q)a: til 5 q)uu:: a q)uu 0 1 2 3 4 q)vv:: a; q)vv q)vv ~ (::) 1b q)vv ~ {[];}[] 1b The following defines a view called myview which depends on vars a and b . q)myview::a+b q)a:1 q)b:2 q)myview 3 Defining a view does not trigger its evaluation. A view should not have side-effects, i.e. should not update global variables. Although ; is permitted at the end of the definition, it would mean the view returns (::) . As a view should have no side-effects, returning (::) would make the purpose of the view redundant. A view definition can be spread over multiple lines in a script as long as it is indented accordingly (exactly like a function definition), e.g. $ cat v.q t:([]a:til 10) myview::select from t where a<5 / note this line is indented by one space $ q v.q KDB+ 3.2 2014.08.26 Copyright (C) 1993-2014 Kx Systems m64/... q)myview a - 0 1 2 3 4 Within a lambda, :: amends a global variable. It does not define a view. q)x:2 q)y:3 q)v::x+y /view q)v 5 q)x:10000 q)v /depends on x 10003 q){v::x+y}[10;20] /v now a global variable q)v 30 q)x:-1000000 q)v /global variable, no longer depends on x 30 How to list views¶ Invoke the views function (or \b ) to get a list of the defined views. q)a::b+c q)d::b+a q)views` `s#`a`d How to list invalidated views¶ Invalidated (pending) views are awaiting recalculation. Invoking \B will return a list of pending views. q)a::b+c q)\B ,`a q)b:c:1 q)\B ,`a q)a 2 q)\B `symbol$() Splayed tables To use views with splayed tables, make sure you invalidate the data when it changes; this can be done for example by reloading the table. How to see the definition of a view¶ The text definition of a view can be seen with view `viewname . q)a::b+c q)view`a "b+c" The following view command has the form `. `viewname . Note the space between `. and `viewname . q)d::b+a q)`. `d b+a Applying value to that reveals the underlying representation: - (last result|::) - parse-tree - dependencies - text q)value`. `d :: (+;`b;`a) `b`a "b+a" If previously evaluated, the last result can be seen here as the first element. q)b:1;a:2 q)d 3 q)value`. `d 3 (+;`b;`a) `b`a "b+a" A view which uses select/exec/update/delete is worth mentioning as it may not be immediately obvious what dependencies are present. e.g. in the following example, t is the only dependency, as a and b may be columns in t , or globals – this is not known until the select is evaluated, and hence they cannot be inferred as dependencies. q)v::select from t where a in b q)value`. `v :: (?;`t;,,(in;`a;`b);0b;()) ,`t "select from t where a in b" If a or b are globals intended to be dependencies, a workaround is for these to be mentioned at the beginning of the definition, e.g. q)v::a;b;select from t where a in b q)value`. `v :: (";";`a;`b;(?;`t;,,(in;`a;`b);0b;())) `a`b`t "a;b;select from t where a in b" If a function is used within a view, that does not become a dependency. The following view would not be invalidated unless f were redefined. q)v::f[]+1 q)f:{42} q)v 43 q)value`. 
`v 43 (+;(`f;::);1) ,`f "f[]+1" Self-referencing views¶ Self-referencing views are allowed since V3.2. A self-referencing view is a view that includes itself as part of the calculation. In such a case, the view uses its previous value as part of the evaluation if it exists, otherwise it signals 'loop . e.g. q)v::$[b;1;v+1] q)b:1;0N!v;b:0;v q)v::$[b;1;v+1] q)v 'loop From V3.2 view-loop detection is no longer performed during view creation; it is checked during the view recalc. Dot notation¶ Views do not support dot notation. q)t:.z.p q)t1::t q)t.date 2014.09.03 q)t1.date 'nyi Multithreading¶ Views must be evaluated on the main thread, otherwise the calculation will signal 'threadview . E.g. with q using two secondary threads $ q -s 2 KDB+ 3.2 2014.08.26 Copyright (C) 1993-2014 Kx Systems m64/... q)a::b+c q)b:c:1 q){a}peach 0 1 k){x':y} 'threadview @ {a}': 0 1 q.q))\ q)a 2 q){a}peach 0 1 2 2 Parse¶ Views are not parsable: e.g. eval parse "a::b+c" does not define a view. Installing kdb+¶ You can run kdb+ on Linux, macOS, or Windows. Step 1: Download¶ The 64-bit kdb+ Personal Edition interpreter is licensed for non-commercial use. It is not licensed for use on cloud servers. The provided license-key file (kc.lic ) requires an always-on Internet connection. Commercial versions of kdb+ are available to customers from downloads.kx.com. Credentials are available from the customer's Designated Contacts. Requires a 64-bit interpreter and a k4.lic or kc.lic license-key file OR a 32-bit interpreter. 32-bit applications will not run in macOS 10.15+ (Catalina and later). Internal distribution at customer sites Most customers download the latest release of kdb+ (along with the accompanying README.txt , the detailed change list) and make a limited number of approved kdb+ versions available from a central file server. Designated Contacts should encourage developers to keep production systems up to date with these versions of kdb+. This can greatly simplify development, deployment and debugging. Platforms and versions The names of the ZIPs denote the platform: l64.zip – 64-bit Linux; w32.zip – 32-bit Windows, etc. m64 contains a universal binary suitable for both Intel and Apple Silicon Macs. l64 contains the Linux x86 build, with l64arm containing the Linux build suitable for ARM processors. Numerical release versions of the form 3.5 or 4.0 are production code. Versions of kdb+ with a trailing t in the name such as 3.7t are test versions and are neither intended nor supported for production use. Step 2: Unzip your download¶ Here we assume you install kdb+ in your HOME directory on Linux or macOS, or in C:\ on Windows, and set the environment variable QHOME accordingly.
os      QHOME
---------------
Linux   ~/q
macOS   ~/q
Windows c:\q
You can install kdb+ anywhere as long as you set the path in QHOME . Open a command shell and cd to your downloads directory. Unzip the downloaded ZIP to produce a folder q in your install location.
Linux:   unzip l64.zip -d $HOME/q
macOS:   unzip m64.zip -d $HOME/q
Windows: Expand-Archive w64.zip -DestinationPath C:\q
How to run 32-bit kdb+ on 64-bit Linux Use the uname -m command to determine whether your machine is using the 32-bit or 64-bit Linux distribution. If the result is: - i686 or i386 or similar, you are running a 32-bit Linux distribution - x86_64, you are running a 64-bit Linux distribution To install 32-bit kdb+ on a 64-bit Linux distribution, you need a 32-bit library. Use your usual package manager to install i686 or i386: for example, sudo apt-get install libc6-i386 . 
Step 3: Install the license file¶ If you have a license file, k4.lic or kc.lic , put it in the QHOME directory. Your QHOME directory will then contain:
Linux:
├── kc.lic
├── l64/
│   └── q
└── q.k
macOS:
├── kc.lic
├── m64/
│   └── q
└── q.k
Windows:
├── kc.lic
├── w64/
│   └── q
└── q.k
(32-bit versions have 32 in the folder name instead of 64 .) kdb+ looks for a license file in QHOME . To keep your license file elsewhere, set its path in environment variable QLIC . Step 4: Confirm success¶ Confirm kdb+ is working: launch your first q session.
Linux:
cd
q/l64/q
macOS:
cd
spctl --add q/m64/q
xattr -d com.apple.quarantine q/m64/q
q/m64/q
Authorizing macOS to run kdb+ MacOS Catalina (10.15) introduced tighter security. It may display a warning that it does not recognize the software. If the spctl and xattr commands above have not authorized the OS to run q, open System Preferences > Security & Privacy. You should see a notification that q has been blocked – and a button to override the block.
Windows:
c:\q\w64\q
The q session opens with a banner like this. KDB+ 4.0 2020.06.01 Copyright (C) 1993-2020 Kx Systems m64/ 12()core 65536MB sjt mackenzie.local 127.0.0.1 EXPIRE… q) License files and 32-bit kdb+ 32-bit kdb+ does not require a license file to run, but if it finds one at launch it will signal a license error if the license is not valid. Try your first expression. q)til 6 0 1 2 3 4 5 End the q session and return to the command shell. q)\\ $ Step 5: Edit your profile¶ Defining q as a command allows you to invoke kdb+ without specifying the path to it. The q interpreter refers to environment variable QHOME for the location of certain files. Without this variable, it will guess based on the path to the interpreter. Better to set the variable explicitly. The QLIC environment variable tells kdb+ where to find a license key file. Absent the variable, QHOME is used.
On Linux: - Open ~/.bash_profile in a text editor, append the following lines, and save the file. (Edit ~/.bashrc to define a q command for non-console processes.) export QHOME=~/q export PATH=~/q/l64/:$PATH - In the command shell, use the revised profile: source .bash_profile
On macOS: - Open ~/.zshrc in a text editor, append the following lines, and save the file. export QHOME=~/q export PATH=~/q/m64/:$PATH - In the command shell, use the revised profile: source ~/.zshrc
On Windows: In the command shell issue the following commands: setx QHOME "C:\q" setx PATH "%PATH%;C:\q\w64"
(In the above, substitute 32 for 64 if you are installing 32-bit kdb+.) Test the new command. Open a new command shell and type q . Last login: Sat Jun 20 12:42:49 on ttys004 ❯ q KDB+ 4.0 2020.06.01 Copyright (C) 1993-2020 Kx Systems m64/ 12()core 65536MB sjt mackenzie.local 127.0.0.1 EXPIRE… q) Further customization¶ rlwrap for Linux and macOS¶ On Linux and macOS, the rlwrap command allows the Up arrow to retrieve earlier expressions in the q session. This can be very useful and it is recommended you install it. Run rlwrap -v to check if it's currently installed. If not, install rlwrap using your package manager. Common package managers are: apt , dnf and yum for Linux, and Homebrew and MacPorts for macOS. After installation, the q command can be changed to always run with rlwrap : alias q="rlwrap -r q" This can be added to the end of the user's profile to take effect on every session. Interactive development environments¶ If you are a solo student, we recommend learning q by running it from a command shell, as a REPL, writing scripts in a text editor. 
The examples on this site are produced that way; visual fidelity should help you as you learn. Jupyter notebooks are an interactive publishing format. We are producing lessons in this form and the library is growing. The JupyterQ interface lets you run q code in notebooks. Notebooks are not, however, an IDE, and are unsuitable for studying features such as event handlers. For more advanced study, use either the bare q REPL, or download and install our interactive development environment, KX Developer. Multiple versions¶ Multiple versions of kdb+ can be installed on a system by following this guide. What’s next?¶ Learn the q programming language, look through the reference card, or see in the Database what you can do with kdb+.
NASA Frontier Development Lab Space Weather Challenge¶ The NASA Frontier Development Lab (FDL) is an applied artificial intelligence (AI) research accelerator, hosted by the SETI Institute in partnership with NASA Ames Research Centre. The programme brings commercial and private partners together with researchers to solve challenges in the space science community using new AI technologies. NASA FDL 2018 focused on four areas of research – Space Resources, Exoplanets, Space Weather and Astrobiology – each with their own separate challenges. This paper will focus on the first of the Space Weather challenges, which aimed to forecast Global Navigation Satellite System (GNSS) disruptions. The Space Weather Challenge¶ A GNSS is a network of satellites providing geospatial positioning with global coverage. The most famous example is the United States’ Global Positioning System (GPS). Such a network relies upon radio communications between satellites and ground-based receivers, which can be subject to interruptions in the presence of extreme space weather events. Space weather refers to changes in radiation emitted by the Sun, leading to fluctuations in the Earth’s ionosphere. Changes to the electron density in the ionosphere cause fluctuations in the amplitude and phase of radio signals, referred to as phase scintillation. Radio signals propagating between GNSS satellites and ground-based receivers are affected by these scintillation events and can become inaccurate or even lost. In a society that has become dependent on GNSS services for navigation in everyday life, it is important to know when signal disruptions might occur. Given that space weather events occurring between the Sun and the Earth have a non-linear relationship, physical models have struggled to predict scintillation events. One solution to making more accurate predictions, is to use machine-learning (ML) techniques. In this paper, we examine the use of ML models to predict scintillation events, using historical GNSS data. Initially, a Support Vector Machine (SVM) was used to recreate the baseline model outlined in McGranaghan et al., 2018. We then implemented a neural network model in an attempt to improve upon the baseline results and accurately predict events as far as 24 hours ahead. Both methods used the strength of kdb+/q to deal with time-series data and embedPy to import the necessary python ML libraries. The technical dependencies required for the below work are as follows: - embedPy - TensorFlow 16.04 - NumPy 1.14.0 - pandas 0.20.3 - Matplotlib 2.1.1 - Keras 2.0.9 - scikit_learn 0.19.1 Data¶ Publicly available data was used to develop the ML models discussed below. Different datasets describe the state of the Sun, the ionosphere and the magnetic field of the Earth. Combining these datasets created an overall picture of atmospheric conditions at each timestep, including when scintillation events occurred. The first dataset was collected by the Canadian High Arctic Ionospheric Network (CHAIN) [1] from high-latitude GNSS receivers located throughout the Canadian Arctic. Data from multiple satellites was recorded by ground-based ionospheric scintillation and total electron count (TEC) monitors. For the purpose of this research, receivers from the Septentrio PolarRxS branch of the CHAIN network were used, taking the 14 CHAIN receiver stations [2] with the most continuous data. 
Recorded features for each receiver include; TEC, differential TEC (current TEC minus TEC recorded 15 seconds previously), the scintillation index, the phase and amplitude scintillation indices and the phase spectral slope. Solar and geomagnetic features can be found in the second dataset, which is available on the NASA OMNI database [3]. Features in the data include solar wind properties (velocity, power, and the Newell and Borovsky constants), magnetic properties (magnetic field strength, IMF and clock angle), and geomagnetic indices (AE and SymH), along with proton fluxes and indices Kp and F10.7. Additional solar X-ray measurements were included in the solar dataset. Such measurements are available from the NOAA Geostationary Satellite Server [4]. The third dataset was collected by the Canadian Array for Real-time Investigations of Magnetic Activity Network (CARISMA) [5]. CARISMA data was recorded by magnetometers at high latitudes, and could therefore be co-located with CHAIN data. Pre-processing¶ During the initial stages of pre-processing, the following steps were taken: CHAIN¶ - Only data with a lock-time of greater than 200 seconds was included to account for ‘loss of lock’ events, where receivers stop receiving satellite signals due to significant signal irregularities. [6] - Satellites traveling at low elevations experience ‘multi-path’ irregularities where signals have to travel longer distances through the ionosphere and are therefore reflected and follow multiple paths before reaching receivers. [7] To differentiate between multi-path and scintillation irregularities, data with an elevation of greater than 30 degrees was selected and the phase and amplitude scintillation indices (σφ and S4 respectively) were projected to the vertical. - Latitude and longitude for each station were also added to the data [2]. - The median value was calculated for each feature at each timestep. Solar¶ - Newell and Borovsky constants were added to the dataset. - Following the method used in McGranaghan et al. (2018), historical values recorded 15 and 30 minutes previously were included for each input parameter from the OMNI dataset. - All solar features were recorded at 5 minute intervals, except for the Kp and F10.7 indices which were recorded every hour. Data was back filled to match the minute granularity of the chain dataset. - Additional solar X-ray feature GOESx was recorded at random intervals, every few seconds. Average measurements were selected at minute intervals. Magnetometer¶ - Raw data was recorded at minute intervals. - A chain station column was added to the final table so that data could be joined with CHAIN data at a later stage. Following pre-processing, the data was persisted as a date-partitioned kdb+ database. Scripts were written to create configuration tables, specifying the features and scaling required for each model. The configuration tables had the below form. table colname feature scaler ------------------------------ chain dt 0 :: chain doy 0 :: chain cs 0 :: chain tec 1 :: chain dtec 1 :: chain SI 1 :: chain specSlope 1 :: .. SVM model¶ A SVM was used to recreate the baseline model outlined in McGranaghan et al., 2018. The baseline model uses the first two datasets to predict scintillation events an hour ahead. For this method, a total of 40,000 random data points were selected from 2015, taking data from each CHAIN receiver. 
We aim to improve upon this model by: - Considering data on a receiver-by-receiver basis, while adding localized features, to account for the geospatial element in the data. - Performing feature selection to reduce the dimensionality of the input features. - Adding an exponential weighting to input features to give the most recent data the highest importance and account for the temporal element in the data. CHAIN and solar data¶ Along with the partitioned database, scripts were loaded containing utility functions, graphing methods, and the required configuration table. q)\l /SpaceWeather/kxdb q)\l ../utils/utils.q q)\l ../utils/graphics.q q)\l ../config/configSVM.q For the SVM method, CHAIN and solar datasets were used, with measurements recorded at 1-minute intervals. CHAIN data was recorded for each of the 14 receiver stations. All data from 2015 was loaded, with the solar table joined to the corresponding rows in the CHAIN table. q)sdateSVM:2015.01.01 q)edateSVM:2015.12.31 q)getTabDate:{[dt;t]?[t;enlist(=;`date;dt);0b;{x!x}exec colname from configSVM where table=t]} q)getAllDate:{[dt] r:tabs!getTabDate[dt]each tabs:`chain`solar`goes; t:select from(r[`chain]lj`dt xkey update match:1b from r`solar)where match; select from(t lj`dt xkey update match:1b from r`goes)where match} q)show completeSVM:raze getAllDate peach sdateSVM+til 1+edateSVM-sdateSVM dt doy cs tec dtec SI specSlope s4 sigPhiVer Bz .. -------------------------------------------------------------------------------------------------.. 2015.01.01D00:00:00.000000000 1 arv 16.31073 0.285 0.014 1.77 0.04130524 0.03474961 1.05 .. 2015.01.01D00:00:00.000000000 1 chu 20.58558 0.003 0.009 1.89 0.03389442 0.03238033 1.05 .. 2015.01.01D00:00:00.000000000 1 cor 17.63518 0.072 0.013 2.06 0.04001991 0.0569824 1.05 .. 2015.01.01D00:00:00.000000000 1 edm 26.65708 -0.046 0.01 1.86 0.0443945 0.03070174 1.05 .. 2015.01.01D00:00:00.000000000 1 fsi 27.10333 -0.011 0.008 1.77 0.02914058 0.02512171 1.05 .. 2015.01.01D00:00:00.000000000 1 fsm 21.78102 -0.033 0.009 1.83 0.02766845 0.02570405 1.05 .. 2015.01.01D00:00:00.000000000 1 gil 24.6702 -0.009 0.012 2.06 0.03305384 0.07465466 1.05 .. .. Target data¶ The occurrences of scintillation events are shown by sudden irregularities in a number of features, specifically the phase scintillation index, which was projected to the vertical throughout this work (sigPhiVer ). As the baseline looks at predicting scintillation 1 hour ahead, the value of σφ 1 hour ahead of the current timestep, sigPhiVer1hr , was used as target data for the models. A phase scintillation event is said to be occurring when sigPhiVer has a value of greater than 0.1 radians. The target data is therefore assigned a value of 1 (positive class) if it has a value greater than 0.1 radians and 0 (negative class) if it has a value less than 0.1 radians. The percentage of scintillation events present in the SVM data are shown below. q)dist:update pcnt:round[;.01]100*num%sum num from select num:count i by scintillation from([]scintillation:.1<completeSVM`sigPhiVer1hr); scintillation| num pcnt -------------| ------------- 0 | 4304589 96.84 1 | 140503 3.16 Ideally, data would have been recorded for each CHAIN station, at every minute throughout 2015. However, GNSS receivers are often prone to hardware failures, which lead to gaps in the data. Figure 1: Values for the phase scintillation index projected to the vertical, recorded by each CHAIN receiver throughout 2015. 
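To make the target construction explicit, the short sketch below derives a binary label; the 60-row forward shift per station is only an assumption about how sigPhiVer1hr could be built from 1-minute data (the paper states only that the value of σφ one hour ahead was used):
/ assumption: one row per minute per station, so a 60-row forward shift gives the value 1 hour ahead
completeSVM:update sigPhiVer1hr:xprev[-60;sigPhiVer] by cs from completeSVM
/ binary target: scintillation when the projected phase scintillation index exceeds 0.1 radians
target:.1<completeSVM`sigPhiVer1hr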
Metrics¶ As only 3% of the data represented scintillation occurring, it would have been easy to create a model which produced high accuracy. A naïve model which predicted that scintillation never occurred would still have been correct 97% of the time. Additional metrics were therefore needed to determine how well the models performed. In addition to accuracy, the True Skill Statistic (TSS) has been used throughout this paper to evaluate model performance. The TSS calculates the difference between recall and the false positive rate and produces values ranging from -1 to 1, with 1 being the perfect score. [8] \[ TSS = \frac{TP}{TP + FN} - \frac{FP}{FP + TN} \] where TP, TN, FP and FN are true positives, true negatives, false positives and false negatives respectively. Additional features¶ Scintillation events are subject to diurnal and seasonal variations, caused by the inclination of the Earth in relation to the Sun. When either hemisphere of the Earth is tilted towards the Sun, increased solar radiation causes greater ionization in the upper atmosphere. This leads to higher scintillation indices and thus more scintillation events.[9] To account for such variations, the sine and cosine local time of day and day of year were added to the dataset. For the baseline, only the cosine day of year was added. \[ \cos_{doy} = \cos\left(\frac{2\pi\,doy}{D_{tot}}\right), \quad \sin_{doy} = \sin\left(\frac{2\pi\,doy}{D_{tot}}\right), \quad \cos_{time} = \cos\left(\frac{2\pi\,dt}{T_{tot}}\right), \quad \sin_{time} = \sin\left(\frac{2\pi\,dt}{T_{tot}}\right) \] where \(doy\) is the day of year, \(D_{tot}\) is the number of days in the year (365 for the SVM model, 365.25 for the neural network model), \(dt\) is the time in minutes and \(T_{tot}\) is the total number of minutes in a day. q)completeSVM:update cosdoy:cos 2*pi*doy%365 from completeSVM Feature engineering¶ To account for gaps in both feature and target data, rows containing nulls were dropped. As ML models are sensitive to inputs with large ranges, some features in the input data were log(1+x) scaled (as defined in the SVM configuration table). q)completeSVM@:where not any flip null completeSVM q)completeSVM:flip(exec first scaler by colname from configSVM)@'flip completeSVM Standard scaling was then used to remove the mean and scale each feature to unit variance. Meanwhile, target data was left unscaled and assigned a binary value, using the 0.1 radians threshold mentioned above. For the baseline model, a total of 40,000 random data points were selected and split into training (80%) and testing (20%) sets. Initially, two sets of shuffled indices were produced, covering the full set of indices in the data. q)splitIdx:{[x;y]k:neg[n]?n:count y;p:floor x*n;(p _ k;p#k)} q)splitIdx[.2;y] 134574 189809 424470 960362 629691 516399 721898 1091736 101292 .. 492448 121854 186677 1144240 176314 261502 853557 580623 494990 .. These indices were then used to split X and Y data into respective train-test sets. q)count each`xtrn`ytrn`xtst`ytst!raze(xdata;ydata)@\:/:splitIdx[.8 .2; xdata] xtrn| 915793 ytrn| 915793 xtst| 228949 ytst| 228949 Model¶ The Python libraries and functions required to run the SVM model were imported using embedPy. q)array: .p.import[`numpy]`:array q)svc: .p.import[`sklearn.svm]`:SVC To give the positive class a higher importance, a ratio of 1:50 was assigned to the class-weight parameter in the SVM classification model. At this stage, X and Y training sets were passed to the model. Once trained, the SVM was used to make binary predictions for Y, given the X testing data. Predicted values and Y test values were then used to create a confusion matrix. A function was created so that the model could be run using different subsets of the data. 
q)trainPredSVM:{[stn;col] sample:t neg[c]?c:count t:svmData stn; xdata:flip stdscaler each flip(exec colname from configSVM where feature)#sample; ydata:.1<sample col; r:`xtrn`ytrn`xtst`ytst!raze(xdata;ydata)@\:/:splitIdx[.2;ydata]; model:svc[`kernel pykw`rbf;`C pykw .1;`gamma pykw .01;`class_weight pykw enlist[1]!enlist 50;`probability pykw 1b]; if[(::)~ .[model[`:fit];(array[value flip r`xtrn]`:T;r`ytrn);{[e] -2"Error: ",e;}];:()]; pred:model[`:predict][array[value flip r`xtst]`:T]`; CM:cfm[r`ytst;pred]; (`model`cs`pred!(`SVM;stn;col)),metrics CM } Additionally, data was split into different tables for each receiver station: q)stn:distinct completeSVM`cs q)svmData:(`ALL,stn)!enlist[-40000?completeSVM],{select from completeSVM where cs=x}each stn All stations (baseline model)¶ To achieve the baseline result, the function was passed combined data from all 14 stations. This model correctly identified 222 scintillation events, shown in the confusion matrix below. q)cfm[r`ytst;pred] 0| 5621 45 1| 2112 222 These values were used to produce performance metrics. cs accuracy errorRate precision recall specificity TSS ------------------------------------------------------------- ALL 73.04 26.96 9.512 83.15 72.69 0.5583 Individual stations¶ As a comparison, data was split into respective tables for each receiver station. These were used to individually train and test the model. cs accuracy errorRate precision recall specificity TSS -------------------------------------------------------------- arv 81.46 18.54 7.331 73.08 81.63 0.5471 chu 84.5 15.5 6.789 83.18 84.52 0.677 cor 72.69 27.31 6.664 84.62 72.41 0.5703 edm 97.65 2.35 21.28 94.34 97.67 0.9201 fsi 86.55 13.45 10.03 81.82 86.64 0.6845 fsm 87.84 12.16 7.949 69.83 88.1 0.5793 gil 20.91 79.09 13.65 99.7 9.618 0.09319 gjo 71.64 28.36 11.56 89.57 70.88 0.6045 mcm 86.54 13.46 13.89 93.99 86.36 0.8035 rab 84.78 15.22 8.855 82.86 84.81 0.6767 ran 76.74 23.26 7.285 77.72 76.71 0.5443 rep 39.62 60.38 7.927 97.65 36.37 0.3402 arc 79.97 20.03 9.76 89.06 79.75 0.6881 gri 99.04 0.9625 9.756 72.73 99.07 0.718 Fort McMurray station¶ Both previous models used data from the same station/s to train and make predictions on the SVM. An additional method was to train the SVM using one station and make predictions using data from the remaining stations. In the below case, 32,000 random data points were selected as training data from the Fort McMurray (mcm ) table and 8,000 points were chosen from each of the remaining tables to test the model. The SVM was then run as before. cs accuracy errorRate precision recall specificity TSS --------------------------------------------------------------- arc 84.46 15.54 2.256 13.51 86.14 -0.003445 arv 83.95 16.05 6.031 59.85 84.35 0.442 chu 83.58 16.43 6.759 75 83.71 0.5871 cor 82.15 17.85 7.456 51.69 82.96 0.3465 edm 87.36 12.64 4.442 100 87.29 0.8729 fsi 83.75 16.25 6.74 77.31 83.85 0.6116 fsm 84.65 15.35 7.291 82.61 84.68 0.6729 gil 80.76 19.24 25.87 36.27 86.53 0.228 gjo 81.77 18.22 10.76 43.55 83.52 0.2707 gri 83.16 16.84 0.2976 36.36 83.23 0.1959 mcm 86.54 13.46 13.89 93.99 86.36 0.8035 rab 84.4 15.6 7.241 75.4 84.54 0.5994 ran 82.62 17.38 8.432 61.42 83.16 0.4458 rep 80.61 19.39 12.53 41.67 82.9 0.2457 SVM results¶ Plotting the TSS for all three methods allowed the performance of each model to be compared. Figure 2: True Skill Statistic results produced by the Support Vector Machine models (individual models – left, Fort McMurray model – right). The combined and Fort McMurray models are plotted in black. 
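To show how the reported baseline metrics follow from the confusion matrix shown earlier (a worked sketch; the mapping of its entries to TP/TN/FP/FN is inferred from the reported figures):
/ from the matrix above: 5621 true negatives, 45 false negatives, 2112 false positives, 222 true positives
tn:5621; fn:45; fp:2112; tp:222
accuracy:100*(tp+tn)%tp+tn+fp+fn   / 73.04
precision:100*tp%tp+fp             / 9.512
recall:100*tp%tp+fn                / 83.15
specificity:100*tn%tn+fp           / 72.69
TSS:(tp%tp+fn)-fp%fp+tn            / 0.5583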
The SVM baseline model, which combined data from all the receiver stations, gave an accuracy of 73.04% and a TSS of 0.56 (precision = 9.51%, recall = 83.15%). The top plot shows how the performance of the model varied depending on which station was used to train and test the SVM. TSS results produced by the Fort McMurray model are shown in the right plot, with results varying even more drastically from station to station. From these results we can infer that scintillation events must be localized and will therefore depend on the location of each individual receiver. In order to train a model with higher accuracy and TSS, data must either be separated on a station-by-station basis, or additional spatial parameters must be introduced to account for the geospatial elements in the data. Until this point, the SVM trained and tested on combined/individual data had been used to make predictions 1 hour ahead. However, it was possible to predict at any chosen prediction time. In this paper, we look at predictions for 0.5, 1, 3, 6, 9, 12 and 24 hours ahead. Using the same model as before, random samples of 40,000 timesteps from 2015 were selected for each station to train and test the SVM at each prediction time. Figure 3: True Skill Statistic results for the Support Vector Machine model, trained on combined and individual data at multiple prediction times. As expected, the model tends to perform better at 0.5 and 1 hour prediction times, with results getting worse as the prediction time increases. Accuracy follows the same trend, decreasing as prediction time increases. The Fort McMurray station gives the highest TSS throughout, allowing the model to predict 24 hours ahead with a TSS score of 0.61 and an accuracy of 73%. Figure 4: Accuracy results for the Support Vector Machine model, trained on combined and individual data at multiple prediction times. Feature selection¶ Before moving from an SVM to a neural network, dimensionality reduction was performed on the dataset. Dimensionality reduction is the process of reducing the number of features in a dataset, while preserving as much information as possible. Finding a lower-dimensional representation of the dataset can improve both the efficiency and accuracy when the data is fed to a ML model. A number of scikit-learn libraries were imported (using embedPy) for feature selection. Each determined the importance of features using a different method. - PCA - Feature importance was carried out by calculating the variance of each component. Plotting the variance allowed the minimum number of features to be chosen, which reduced dimensionality while keeping data loss to a minimum. It was found that the first 15 components held the most variance in the data. PCA was carried out again, using only the first 15 components in order to maximize the variance. - Fisher ranking - The Fisher Score is a feature-selection algorithm used to determine the most important features in a dataset. It assigns a score to each component, determining the number of features that should remain in the dataset based on their score. This was done using the python scikit-learn SelectKBest library and the f_classif function, which determine the F-score for each component. - Extra tree classification - This method works by selecting features at random from a decision tree, in order to increase accuracy and control over-fitting within a model. 
They work similarly to random forests, but with a different data split, where for an extra trees classification, feature splitting is chosen at random, compared to the random-forest method of choosing the best split among a subset of features. Important features among the dataset were scored based on the results of the decision-tree outcomes. - Logistic regression - In ML, logistic regression is commonly used for binary classifications. Logistic regression uses probabilities to calculate the relationship between the desired predicted outcome and the input features from the dataset. These probabilities can then be used to rank the importance of features in the dataset. This method returned a list of selected features, as opposed to a list of features in descending order of importance. - Decision tree classifier - Given features as inputs, a decision tree is used to represent the different possible outcomes in a classification problem. The model uses branch nodes to describe observations about the inputs and leaves to represent the classification result. Feature importance was calculated by assessing the outcome achieved when each feature was used as input. - Random forests - A random forest is a supervised ML algorithm, which builds a model based on multiple random decision trees. It combines results to increase the accuracy of the model. The split at each node is decided based on the best split in a subset of features. Using this algorithm, feature importance can be extracted for each component. This is done by looking at the results of each node of the decision tree and assessing whether the features associated with each node increase the accuracy of the overall decision tree. Following this method, a score was assigned to each feature. Each of the above methods produced a list of important features. Combining these and selecting the components which had been selected multiple times produced the final feature list; tec , dtec , s4 , SI , specSlope , sigPhiVer , AE , AE_15 , AE_30 , newell_30 , P , V , proton60 , f107 , kp and GOESx . These were used in the neural-network model, including time-lagged columns for each. Neural-network model¶ Pre-processing¶ To improve performance metrics, data from 2015-2017 was used to train a neural-network model. Going forward, only components selected in the feature selection process were used (found in the neural-network configuration table). Data was loaded using the same method as above. As results showed that scintillation events are specific to the location of each station, localized features were added to the dataset. These included the magnetometer dataset, sindoy, sintime, cosdoy and costime. As previously stated, 365.25 is used for Dtot in this model to account for the extra day present in a leap year. Figure 5: The variation in the phase scintillation index (sigPhiVer), the differential Total Electron Content (dtec) and the X component of the Earth’s magnetic field during a scintillation event, where sigPhiVer is greater than 0.1 radians. When a scintillation event occurs, geomagnetic features such as x , y , z and dtec will fluctuate drastically. It was therefore useful to give more importance to these features by adding columns which contained their absolute values with the mean removed. q)newval:{abs(x-avg x)} q)completeNN:update newx:newval x,newy:newval y,newz:newval z,newdtec:newval dtec from completeNN Feature engineering¶ For this model, X data was exponentially weighted to give the most recent data the highest importance. 
q)xdata:flip(reverse ema[.1]reverse@)each flip xdata

Target data was log scaled to prevent negative predictions for σφ, which is always positive. A train-test split of 80%/20% was again used. To overcome the small fraction of data representing scintillation events, oversampling was used on the training set. A random sample taken from the positive class was re-added to the training dataset, giving a final training set with 50% positive samples.

q)r:`xtrn`ytrn`xtst`ytst!raze(xdata;ydata)@\:/:splitIdx[.2;ydata]
q)positiveIdx:where yscint:.1<exp r`ytrn
q)pos:`x`y!{x[y]}[;positiveIdx]each(r`xtrn;r`ytrn)
q)sampleIdx:(nadd:(-) . sum each yscint=/:(0 1))?count pos`x
q)sample:`x`y!{x[y]}[;sampleIdx]each(pos`x;pos`y)
q)oversampled:`x`y!(r`xtrn;r`ytrn),'(sample`x;sample`y)

Scintillation events before oversampling:

ybinary| num     pcnt
-------| -------------
0      | 4325747 88.44
1      | 565260  11.56

Scintillation events after oversampling:

ybinary| num     pcnt
-------| ------------
0      | 4325747 50
1      | 4325747 50

Model¶

To create a neural-network model, embedPy was used to import the necessary ML libraries.

q)sequential: .p.import[`keras.models]`:Sequential
q)dense: .p.import[`keras.layers]`:Dense
q)normalization: .p.import[`keras.layers]`:BatchNormalization
q)pylist: .p.import[`builtins]`:list

The model had 1 input layer, 4 hidden layers and 1 output layer. A normal distribution was used as the initializer for the kernel to set the weights in each layer of the model. The input and hidden layers have output widths of 256 nodes, along with an Exponential Linear Unit (ELU) activation function, which gave the best model performance. ELU was chosen because it converged the loss function towards zero better than other activation functions, such as the Rectified Linear Unit (ReLU). The output layer had 1 node and a linear activation function, allowing a single value for σφ to be returned for each timestep.

q)model:sequential[];
q)model[`:add]dense[256;`input_dim pykw 37;`kernel_initializer pykw`normal;`activation pykw`elu];
q)model[`:add]normalization[];
q)model[`:add]dense[256;`activation pykw`elu;`kernel_initializer pykw`normal];
q)model[`:add]normalization[];
q)model[`:add]dense[256;`activation pykw`elu;`kernel_initializer pykw`normal];
q)model[`:add]normalization[];
q)model[`:add]dense[256;`activation pykw`elu;`kernel_initializer pykw`normal];
q)model[`:add]normalization[];
q)model[`:add]dense[256;`activation pykw`elu;`kernel_initializer pykw`normal];
q)model[`:add]normalization[];
q)model[`:add]dense[1;`activation pykw`linear];
q)model[`:compile][`loss pykw`mean_squared_error;`optimizer pykw`adam;`metrics pykw pylist `mse`mae];

At this stage, the model was trained for 50 epochs, using batch sizes of 512 each time. The model performed validation using 20% of the training data.

q)resNN:model[`:fit][array[value flip oversampled`x]`:T;oversampled`y;`batch_size pykw 512;`verbose pykw 0;`epochs pykw 50;`validation_split pykw .2]

Once trained, the model was used to make predictions 1 hour ahead.

q)predNN:raze(model[`:predict]array[value flip r`xtst]`:T)``

Outputs were assigned binary values, using the 0.1 radians threshold, and then compared to the y test values selected previously.

ypred| 0.03101 0.02524 0.02811 0.7762 0.02906 0.02075 0.02273 0.02361 0.06351 ..
ybin | 0       0       0       1      0       0       0       0       0       ..

Neural network results¶

The model was trained and tested using combined data from the Fort Churchill (chu), Fort McMurray (mcm) and Fort Simpson (fsi) stations, each with co-located magnetometer data.
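For reference, the binary assignment described above can be expressed in a couple of lines. This is a minimal sketch only, assuming predNN holds the raw network outputs on the same log scale as the targets (variable names are illustrative):

ypred:exp predNN        / predicted phase scintillation index, undoing the log scaling
ybin:0.1<ypred          / 1b flags a predicted scintillation event
ytrue:0.1<exp r`ytst    / true labels from the held-out targets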
The true and false positives and negatives predicted by the model were represented in a confusion matrix.

Figure 6: Confusion matrix produced by the neural network model at 1 hour prediction time. Scintillation represents the positive class.

The model correctly identified 134,586 scintillation events at 1 hour prediction time. From these results, performance metrics were calculated to allow comparison between the SVM and neural network.

model cs  accuracy errorRate precision recall specificity TSS
------------------------------------------------------------------
SVM   ALL 73.04    26.96     9.512     83.15  72.69       0.5583
NN    ALL 98.41    1.591     91.49     95.09  98.84       0.9393

By introducing geospatial features and exponentially weighting input data to account for the spatial and temporal elements in the data, we managed to increase accuracy in the combined model by over 25% and TSS by 0.38. Precision and recall have also increased to greater than 90%, a large improvement on the baseline model.

As before, the model was also run separately for each station.

model cs  accuracy errorRate precision recall specificity TSS
---------------------------------------------------------------
SVM   chu 84.5     15.5      6.789     83.18  84.52       0.677
NN    chu 98.63    1.375     48.26     78.49  98.89       0.7739
SVM   fsi 86.55    13.45     10.03     81.82  86.64       0.6845
NN    fsi 98.89    1.105     51.58     71.17  99.22       0.7039
SVM   mcm 86.54    13.46     13.89     93.99  86.36       0.8035
NN    mcm 99.02    0.9849    65.32     78.01  99.35       0.7736

Accuracy for all three models has increased to over 98%, while TSS has increased to values above 0.7. This is an impressive result given that 3 years of data was used to train/test each of the neural networks, compared to the 40,000 data points from 2015 used in the SVM model.

Another means of determining how well the models performed was to plot true and predicted values together. In the plot below, the first 300 values of the phase scintillation index at 1-hour prediction time have been plotted for the combined model. This plot shows how well predicted values compare with the test set.

Figure 7: True (blue) and predicted values (orange) for the phase scintillation index at 1 hour prediction time (sigPhiVer1hr) produced by the neural network model using the combined dataset.

The performance of the model is also apparent in the Receiver Operating Characteristic (ROC) curve plot, which compares the True Positive Rate (Sensitivity) and False Positive Rate (1-Specificity). This produces an area under the curve of 0.9972.

Figure 8: Receiver Operating Characteristic curve for 1-hour prediction values produced in the neural network model using combined data.

Similarly to the SVM model, the neural network method was used to predict at a range of prediction times, from 0.5-24 hours ahead.

Figure 9: True Skill Statistic results for the neural network model, predicting 30 minutes – 24 hours ahead for the combined (ALL), Fort Churchill (chu), Fort Simpson (fsi) and Fort McMurray (mcm) models.

Unlike the SVM model, predictions made using the neural network model produced high values for TSS regardless of prediction time, with all values sitting above 0.67. The combined model produced the highest TSS throughout, with a value of 0.94. For a prediction time of 24 hours, TSS results have increased by an average of 0.39 in comparison to the SVM model, with all values now sitting above 0.73. This is impressive compared to the baseline model, where results became less reliable as the prediction time increased.
Figure 10: Accuracy results for the neural network model, predicting 30 minutes – 24 hours ahead for the combined (ALL), Fort Churchill (chu), Fort Simpson (fsi) and Fort McMurray (mcm) models.

The accuracy results for each of the neural-network models were also improved. Each model produced an accuracy of greater than 98% regardless of the prediction time. This is a vast improvement on the baseline model.

As future work, it could be beneficial to try to find the maximum prediction time for each dataset using the neural-network model. It would also be interesting to see how results compared if null features for stations with sparse data were interpolated, as opposed to dropped. This would allow the neural network to be run for more stations.

Conclusions¶

In a society that is increasingly dependent on GNSS technology, it is important to be able to predict signal disruptions accurately. Previous models did not produce reliable results, as they struggled to account for the non-linear nature of Sun-Earth interactions. This paper discussed how to harness the power of kdb+ and embedPy to train machine-learning models and predict scintillation events.

Data was pre-processed and scaled using kdb+. The support vector machine baseline model was then built, trained and tested using embedPy. Results produced by this model were improved upon by separating data on a receiver-by-receiver basis, showing that scintillation events are localized and, therefore, dependent on the location of each CHAIN receiver station.

Feature selection allowed the dimensionality of the dataset to be reduced before adding spatial features, which accounted for the geospatial element of the Sun-Earth interactions. Additionally, adding an exponentially moving window to the input data helped to account for the temporal element in the data. Oversampling was also used in the training set to make it easier to train models to predict when scintillation events were occurring.

The neural network method vastly improved results compared to the baseline model. Using this model allowed data from 2015-2017 to be used, compared to the baseline model which used 40,000 data points from 2015. The combined dataset for 1-hour prediction produced an increase in accuracy and True Skill Statistic of over 25% and 0.38 respectively. Predicting at 0.5-24 hour prediction times for the combined dataset, along with the Fort Churchill, Simpson and McMurray stations, also improved on the baseline results. Predictions produced high values for TSS regardless of prediction time, with all values sitting above 0.67. The combined model produced the highest TSS results with a value of 0.94 throughout. For 24-hour prediction, both accuracy and TSS results increased by an average of 27% and 0.39 respectively.

We were therefore able to create a machine-learning model which reliably predicts phase scintillation events as far as 24 hours ahead.

Author¶

Deanna Morgan joined First Derivatives in June 2018 as a Data Scientist in the Capital Markets Training Program.

References and data sources¶

[1] Canadian High Arctic Ionospheric Network: Data Download, accessed 30 October 2018.
[2] Canadian High Arctic Ionospheric Network: Stations, accessed 30 October 2018.
[3] OMNI Web: ftp://spdf.gsfc.nasa.gov/pub/data/omni/, accessed 30 October 2018.
[4] National Oceanic and Atmospheric Administration: GOES SEM Data Files, accessed 30 Jan 2024.
[5] CARISMA: University of Alberta, accessed 30 October 2018.
[6] Kintner, P., Ledvina, B. and de Paula, E.
“GPS and ionospheric scintillations”, Space Weather, 5(9), 2007. [7] Tinin, M. and I.Knizhin, S. “Eliminating the Effects of Multipath Signal Propagation in a Smoothly Inhomogeneous Medium”, Radiophysics and Quantum Electronics, 56(7), pp.413-421, 2013. [8] Bobra, M. and Couvidat, S. Solar Flare Prediction Using SDO/HMI Vector Magnetic Field Data with a Machine-Learning Algorithm, 2015. [9] Jin, Y., Miloch, W. J., Moen, J. I. and Clausen, L. B. “Solar cycle and seasonal variations of the GPS phase scintillation at high latitudes”, Journal of Space Weather and Space Climate, 8, p.A48., 2018. Code¶ The code presented in this paper is available on GitHub at kxcontrib/space-weather. Acknowledgements¶ I gratefully acknowledge the Space Weather 1 team at FDL – Danny Kumar, Karthik Venkataramani, Kibrom Ebuy Abraha and Laura Hayes – for their contributions and support.
Server calling client¶ This demonstrates how to simulate a C client handling a get call from a kdb+ server. The Java interface allows you to emulate a kdb+ server. The C interface does not provide the ability to respond to a sync call from the server, however, async responses (message type 0) can be sent using k(-c,...) . A get call may be desirable when client functions need to be called by the server – as though the client were an extension. This q code shows how a listening kdb+ server can call a kdb+ client (with handle h ) using async messaging only: q)f:{neg[h]({neg[.z.w]value x};x);h[]} q)f"1+1" 2 Generally, async set messages to the client are preferable because the server has many clients and does not want to be blocked by a slow response from any one client. One application of simulated get from the server is where an extension might have been the solution to a problem, but an out-of-process solution was preferred because: - only 32-bit or 64-bit libraries were available - unreliable external code may stomp on kdb+ - licensing issues - system calls in external code conflict with kdb+ Example¶ This example shows a kdb+ server operating with a single client. The client defines a range of C functions, which are registered with the kdb+ instance and can then be susequently called remotely using the q language. Code¶ sc.q¶ Script for kdb+ server instance GET:{(neg h)x;x:h[];x[1]} S:string fs:{{eval parse s,":{GET[(`",(s:S x[0]y),";",(S y),";",(";"sv S x[1;y]#"xyz"),")]}"}[x]each til count x} .z.po:{h::x;fs GET`} sc.c¶ C code for building client application //sc.c server calls client with simulated GET. // linux build instructions gcc sc.c -o sc -DKXVER=3 -lrt -pthread l64/c.o // macOS build instructions gcc sc.c -o sc -DKXVER=3 -pthread m64/c.o #include<stdio.h> #include<stdlib.h> #include<string.h> #include<sys/select.h> #include"k.h" #define A(x) if(!(x))printf("A(%s)@%d\n",#x,__LINE__),exit(0); //assert - simplistic error handling static K home(K x){ char* s; printf("%s function called\n",__FUNCTION__); s=getenv("HOME"); x=ktn(KC,strlen(s)); DO(xn,xC[i]=s[i]) return x; } static K palindrome(K x){ char c,*d; printf("%s function called\n",__FUNCTION__); A(xt==KC); K k=ktn(KC,xn*2); DO(xn,kC(k)[i]=xC[i]); DO(xn,kC(k)[xn+i]=xC[xn-1-i]); return k; } //exported functions and their arity static K(*f[])()={home,palindrome,0}; static char* n[]={"home","palindrome",0}; static long long a[]={1,1}; static K d(K x){ K k=ktn(KS,0),v=ktn(KJ,0); long long i=0; while(f[i]) js(&k,ss(n[i])),ja(&v,a+i),i++; return knk(2,k,v); } //remote sends atom or (`palindrome;0;x) or (`home;1;) static K call(K x){ P(0>xt,d(0)); A(xt==0); A(xn>1); A(xx->t==-KS); return f[xy->j](xK[2]); } static I sel(int c,double t){ A(2<c); int r; fd_set f,*p=&f; FD_ZERO(p); FD_SET(c,p); long s=t,v[]={s,1e6*(t-s)}; A(-1<(r=select(c+1,p,(V*)0,(V*)0,(V*)v))); P(r&&FD_ISSET(c,&f),c) return 0; } static K sr(int c){ int t; K x; A(x=k(c,(S)0)); return k(-c,"",call(x),(K)0); } //async from q int main(int n,char**v){ int c=khp("",5001); while(1) if(c==sel(c,1e-2)) A(sr(c)); } Running Example¶ Run kdb+, listening on port 5001 using the sc.q script: q sc.q -p 5001 on another terminal run sc to connect to the kdb+ instance. In q, .z.po is called when sc connects. .z.po then saves the socket h and calls GET` to find the list of functions the client provides. fs is called to eval a new function definition for home and palindrome . 
Once sc connects, you can view the registered functions in q. Here is what sc.q defined when it received the list of functions from the client:

q)home
{GET[(`home;0;x)]}
q)palindrome
{GET[(`palindrome;1;x)]}

You can then call these functions in q and see that the client C program sc executes its C functions and returns a result to kdb+:

q)home[]
"/home/jack"
q)palindrome home[]
"/home/jackkcaj/emoh/"

Other uses¶

Consider a C client that is nothing but a TUI. It exposes ncurses functionality for a kdb+ listener. For fun, Conway’s Game of Life will play out on the client application – all drawn by a q program.

Basics: Interprocess communication
FDL Europe: Analyzing social media data for disaster management¶ Frontier Development Lab (FDL) Europe is an applied artificial-intelligence (AI) research accelerator, in partnership with the European Space Agency (ESA) and Oxford University and leaders in commercial AI. The overall goal of the program is to solve challenges in the space-science sector using AI techniques and cutting-edge technologies. FDL Europe 2019 focused on three main areas of research – Atmospheric Phenomena and Climate Variability, Disaster Prevention Progress and Response, and Ground Station Pass Optimization for Constellations. This paper will focus on the second of these challenges and, more specifically, the response aspect of flood management. Project overview¶ Annually, flooding events worldwide affect on the order of 80 million people, both in the developed and developing world. Such events create huge social and logistical problems for first responders and interested parties, including both governmental and non-governmental organizations. There are limitations within these groups, associated with the ability to reliably contact affected individuals and maintain up-to-date information on the extent of flood waters. These issues in particular, pose challenges to effective resourcing and efficient response. The primary goal of the European research team focusing on disaster management, was to investigate the use of AI to improve the capabilities of organizations to respond to flooding using orbital imagery and social media data. The central problem tackled by the team, was the development of deep-learning algorithms to map flood extent for deployment on a CubeSat satellite. This project used a VPU microprocessor chip in the hope that a neural-network architecture could be embedded on the chip, thus allowing for on-the-edge mapping of floods on cheap satellite systems. The cost of such satellites is on the order of 100 times cheaper than a typical imaging satellite, thus allowing a larger number to be deployed for tailored purposes such as flood mapping. Given the use of extremely specialized hardware for this task, a complementary project was designed to leverage kdb+ and the machine-learning and interface libraries. In this paper, we will examine the use of deep-learning methods to classify tweets relating to natural disasters and, more specifically, flooding events. The goal is to allow concerned parties to filter tweets and thus contact individuals based on their needs. This project was seen as complementary to the CubeSat project for a number of reasons. Firstly, tweets sourced from the Twitter API often contain GPS information thus providing locations for the CubeSats to focus the production flood maps. Secondly, the flood maps provided can give first responders and NGOs information about which affected areas to avoid during a flood event. This work was completed across two distinct sections: - The training of a binary classifier to discern relevant vs irrelevant tweets and following this a multi-class model in an attempt to label the tweets according to sub-classes including but not limited to: - affected individuals - infrastructural damage - Creation of a multi-class model on a kdb+ tickerplant architecture to produce a framework for the live classification and querying of tweets. All development was done with the following software versions. | software | version | |---|---| | kdb+ | 3.6 | | Python | 3.7.0 | Python modules used and associated versions are as follows. 
| library | version | |---|---| | beautifulsoup4 | 4.5.3 | | keras | 2.0.9 | | numpy | 1.16.0 | | pickle | 4.0 | | spacy | 2.0.18 | | wordcloud | 1.5.0 | In addition to this, a number of kdb+ libraries and interfaces were used. | library/interface | Release | |---|---| | embedPy | 1.3.2 | | JupyterQ | 1.1.7 | | ML-Toolkit | 0.3.2 | | NLP | 0.1 | Data¶ The data used for this work was sourced from the Crisis NLP datasets. This datasource contains human-annotated tweets collected from Twitter and relating directly to a wide variety of crises. These crises range from earthquakes and virus outbreaks, to typhoons and war events. The data of interest within this use case, is that relating to floods. Flood data from the following events were chosen. - 2012 Philippines - 2013 Alberta, Canada - 2013 Colorado, USA - 2013 Queensland, Australia - 2014 India - 2014 Pakistan These events were chosen both due to the availability of the datasets themselves, and the geographical and socio-economic variability in those affected. In total, the dataset contains approximately 8,000 tweets. The data comes from two distinct macro datasets, which contain both the tweet text and classifications of the tweets. Following preprocessing to standardize the classes across the datasets, the following are the sub-classes used within the multi-class section of this project. - Affected individual - Sympathy and prayers - Infrastructure or utilities - Caution and advice - Other useful information - Donations and volunteering - Useless information Modelling issues in social media data¶ Dealing with social-media data and in particular Twitter data, poses a number of problems for producing reliable machine-learning models. - The first of these issues is the character limit of tweets. While this has been increased over the years to 280 characters, the median tweet length is 33 characters. This creates the potential for a tweet to add ‘noise’ due to the lack of a clear discernible signal, thus making it difficult to derive meaning from the tweet. - The ambiguity of language also poses an issue. The same phrase in different contexts can have wildly different meanings. For example, if an individual were to tweet “I just got free ice-cream and now looking forward to the theater later. How much better could my day get?” vs someone tweeting "It's been raining all day and I missed my bus. How much better could my day get?", clearly the first use of better is positive while the second is sarcastic. In each case, information about the correct interpretation is contained within the first sentence. - Colloquialisms and the names of locations can also pose an issue. One of the most important target categories used in this work is infrastructure and utilities. This target has a strong association with place names. For example, “Terrible to see the damage on the Hoover due with the flooding in Colorado”. For anyone aware of the Hoover Dam in Colorado, it is clear that there is likely infrastructural damage to the dam. However, a computer is likely to miss this without context. These are just a small number of potential issues which can arise when dealing with social media data but can be rectified in the following manner. - Dealing with noise is handled in the preprocessing step through the removal of emojis, email links etc. The decisions made here can improve the ability to classify the data through standardizing the text, but can also remove important information if taken too far. 
- Both the 2nd and 3rd issues are mitigated through the use of models or techniques with an understanding of the ordering of language. For example, in the Hoover Dam example, knowing that the words damage and terrible preceded the word Hoover may indicate that there has been some infrastructural damage. The use of a model to solve this issue is presented within this paper.

Pre-processing¶

To highlight the need to preprocess the data used in this paper, the following are examples of some tweets seen within the corpus.

Tweet contains user handle with leading at symbol and numeric values:

rescueph @cesc_1213: please help us seek rescue for our friend! :(Jala Vigilia, 09329166833

Tweet contains hashtags, times and numeric values:

river could now reach 15-metre flood peak after midnight (it's 11:05pm up here). bundaberg still the big #dangerzone#flooding

Tweet contains URL and emojis:

Colorado flooding could help keep tourists away http://t.co/vqwifb51hk Denver 🤔

For this work, the following steps were taken to standardize the data being presented to the model.

- Remove all capitalization by lowering each tweet.
- Remove full stops, commas and other common single characters.
- Replace hashtags with a space, allowing individual words to be taken out of the tweet hashtags.
- Remove emojis from the tweets.
- Remove the rt tag.
- Remove the at symbol indicating a user name.

The code to achieve this in its entirety is wrapped in several helper functions within code/fdl_disasters.q and is executed as follows within the notebook provided.

rmv_list   :("http*";"rt";"*,";"*&*";"*[0-9]*")
rmv_single :rmv_master[;",.:?!/@'";""]
rmv_hashtag:rmv_master[;"#";""]

data_m[`tweet_text]:data_b[`tweet_text]:
  (rmv_ascii rmv_custom[;rmv_list] rmv_hashtag rmv_single@) each data_m`tweet_text

Taking these changes into account, the tweets above are transformed into the following:

"rescueph cesc please help us seek rescue for our friend jala vigilia"
"river could now reach metre flood peak after midnight its up here bundaberg still the big dangerzone flooding"
"colorado flooding could help keep tourists away denver"

Data exploration¶

When producing a machine-learning model, it is important to understand the content of the data being used. Doing so provides us with the ability to choose and tune an appropriate model to apply. This is heavily influenced by an understanding of how the target data is distributed and what information is contained in the data itself.

Data distribution¶

Firstly I looked at the distributions of the targets in the binary example:

distrib_b:desc count each group data_b`target
plt[`:bar][til count distrib_b;value distrib_b;`color pykw `b];
plt[`:title][`$"Distribution of target categories within binary-class dataset"];
plt[`:xlabel][`Category];
plt[`:xticks][til count distrib_b;key distrib_b;`rotation pykw `45];
plt[`:ylabel][`$"#Tweets"];
plt[`:show][];

We can see from this that the dataset contains significantly more of the affected_individuals class. Given the dataset being used, this is unsurprising, as every effort was made by its collators to keep the data as relevant as possible. Looking at the multi-class example, we can see how the dataset as a whole breaks down into categories. The code to achieve this is similar to that above and thus not displayed again.

As with the binary case, there are a number of classes that are more prominent within the data, such as affected individuals and donations/volunteering. Some classes are seen less often, such as sympathy and prayers.
As such it may be expected that the models produced will be more likely to correctly classify tweets surrounding donations than those relating to prayers. Word cloud¶ Similar to the data-distribution case, it is possible to gain some insights into the content of the dataset by looking at commonly occurring words within the classes. This was achieved here through the use of the wordcloud library in Python. The code to achieve this was wrapped in the function wordcloud , which functionally is as follows args:`background_color`collocations`min_font_size`max_font_size vals:(`white;0b;10;90) wordcloudfn:{ cloud:z[`:generate]raze(?[x;enlist(=;`target;enlist y);();`tweet_text]),'" "; plt[`:figure][`figsize pykw (10;20)]; plt[`:title]["keywords regarding ", string y]; plt[`:imshow][cloud;`interpolation pykw `bilinear]; plt[`:axis]["off"]; plt[`:show][];}[;;wcloud[pykwargs args!vals]] Execution of this code is completed as follows. q)wordcloudfn[data_m;`affected_individuals] This produces the following output. In the above example, surrounding the affected individuals class, it is clear that tweets in this category contain some distinguishing characteristics. For example, words such as death, killed, missing and rescue all are associated with people who have had their lives disrupted by flooding. Meanwhile words contained in the sympathy and prayers class, use language strongly relating to religion as seen below. This indicates that while words such as flood and kashmir are prominent in tweets associated with each class, there are words that seem to be indicative of the base class of the tweets themselves. Sentiment analysis¶ The final step in the data-exploration phase was to look at the positive and negative sentiment of tweets within the corpus. This was achieved using functionality within the NLP library released by KX. The code for it is as follows. q)sentiment:.nlp.sentiment each data_m`tweet_text q)// Positive tweets q)3?100#data_m[`tweet_text] idesc sentiment`compound "request to all Twitter friends pray and help the flood victims of pak.. "joydas please use kashmir flood hashtag only if u need help or offeri.. "south ab flood relief fund supports local charities that help those i.. q)// Negative tweets q)3?100#data_m[`tweet_text] iasc sentiment`compound "news update floods kill in eastern india new delhi - flooding in east.. "in qlds criminal code stealing by looting subs carries a max penalty .. "at least dead in colo flooding severe flooding in jamestown in colora.. This allows us to gain insights into the state of mind of individuals who are tweeting and an understanding of some of the characteristics that may be associated with individual classes. For example, the positive tweets above both offer the sympathy and donations, whereas the negative tweets talk about the death of individuals and criminal activity. This could have a bearing on how tweets are classified, based on the absence or presence of specific words or phrases. Model¶ The model that was applied to both the binary- and multi-classification problems, is a Long Short-Term Memory (LSTM) model. This type of deep-learning architecture is a form of recurrent neural network (RNN). Its use stems from the need to gain an understanding of the ordering of words within the tweets, in order for context to be derived. To gain this understanding, the model uses a structure known as a memory cell to regulate weights/gradients within the system. 
Commonly RNNs suffer issues with exploding or vanishing gradients during back propagation but these are mitigated through the memory structure of the model. The following is a pictorial representation of an LSTM cell, with the purpose of each gate outlined. | gate | function | |---|---| | input | Controls how new information flows into the cell | | forget | Controls how long a value from the input gate stays in the cell (memory) | | output | Controls how the cell value is used to compute the activation of an LSTM unit | Model structure¶ Producing an LSTM model was done in embedPy using Keras. The following is the model used for the multi-class use case in this paper, // Define python functionality to produce the model kl:{.p.import[`keras.layers]x} seq :.p.import[`keras.models]`:Sequential dense :kl`:Dense embed :kl`:Embedding lstm :kl`:LSTM spdrop1:kl`:SpatialDropout1D dropout:kl`:Dropout // Create the model to be fit mdl_m:seq[] mdl_m[`:add][embed[2000;100;`input_length pykw (.ml.shape X)1]] mdl_m[`:add]spdrop1[0.1] mdl_m[`:add]lstm[100;pykwargs `dropout`recurrent_dropout!(0.1;0.1)] mdl_m[`:add]dense[7;`activation pykw `sigmoid] mdl_m[`:compile][pykwargs `loss`optimizer`metrics! (`categorical_crossentropy;`adam;enlist `accuracy)] print mdl_m[`:summary][] The summary of this model is as follows: A few points of note on this model: - A number of forms of dropout were used to prevent model overfitting. - The dense layer contained seven nodes, one associated with each of the output classes in the multi-class example. - The number of LSTM units chosen was 100, these are 100 individual layers with independent weights. - The loss function used is categorical cross-entropy, this accounts for the target being categorical and non-binary. Model data preparation¶ Prior to fitting this model, a number of steps were taken to manipulate the data, such that it could be ‘understood’ by the LSTM and scored correctly. Due to how computers handle information, data cannot be passed to the model as strings or symbols. Instead, it must be encoded numerically. This can be achieved through a number of methods, including tokenization and one-hot encoding, both of which were used here. Tokenization in Natural Language Processing is the splitting of data into distinct pieces known as tokens. These tokens provide natural points of distinction between words within the corpus and thus allow the text to be converted into numerical sequences. This conversion was completed as follows using Keras text processing tools on the tweets: // Python text processing utilities token:.p.import[`keras.preprocessing.text]`:Tokenizer pad :.p.import[`keras.preprocessing.sequence]`:pad_sequences // Set the maximum number of important words in the dataset max_nb_words:2000 // Set the maximum allowable length of a tweet (in words) max_seq_len :50 // Convert the data to a numpy array tweet_vals :npa data_b`tweet_text // Set up and fit the tokenizer to create // the numerical sequence of important words tokenizer:token[`num_words pykw max_nb_words;`lower pykw 1b] tokenizer[`:fit_on_texts]tweet_vals; // Convert the individual tweets into numerical sequences X:tokenizer[`:texts_to_sequences]tweet_vals Finally, once the data has been converted into numerical sequences, it is ‘padded’ such that the input length of each of the tweets is the same. This ensures that the neural network is passed consistent lengths of data. 
Padding refers to the addition of leading zeros to the numeric representation of the tweets, such that each is a list of 50 integers. The display of the tweets below is truncated to ensure the final non-zero values can be seen. It is representative of a subset of the data that was used in this paper. q)X:pad[X;`maxlen pykw max_seq_len]` // display the integer representation of the tweets q)5#(30_)each X 0 0 0 0 0 0 0 0 0 0 0 0 0 732 12 1 .. 0 0 0 0 0 649 2 90 158 520 308 252 1 57 501 1357 .. 0 0 0 0 0 0 0 0 0 0 0 158 380 12 50 201 .. 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 732 .. 0 0 0 0 216 6 233 63 3 116 1 141 99 195 68 138 .. As mentioned above, one-hot encoding can also be used to create a mapping between text and numbers. As the target categories themselves are symbols, these must also be encoded. This was done using a utility function contained within the machine-learning toolkit. q)show y:data_m`target `sympathy_prayers`other_useful_info`other_useful_info`other_useful_in.. q)5#Y_m:flip value ohe_m:.ml.i.onehot1 data_m`target 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 Model fitting¶ Once the categorical and textual data had been converted into a numerical representation, it was split into a training and testing set. This is done in order to maintain separation of the data, such that results could be judged fairly. This was completed as follows: // train-test split binary data tts_b:.ml.traintestsplit[X;Y_b;0.1] xtrn_b:tts_b`xtrain;ytrn_b:tts_b`ytrain xtst_b:tts_b`xtest;ytst_b:tts_b`ytest // train-test split multi-class data tts_m:.ml.traintestsplit[X;Y_m;0.1] xtrn_m:tts_m`xtrain;ytrn_m:tts_m`ytrain xtst_m:tts_m`xtest;ytst_m:tts_m`ytest With the data split, both the binary and multi-class models can be fit such that new tweets can be classified and the results scored. // Fit binary model on transformed binary datasets mdl_b[`:fit][npa xtrn_b;npa ytrn_b;`epochs pykw epochs;`verbose pykw 0] // Fit multi-class model on transformed multi-class data mdl_m[`:fit][npa xtrn_m;npa ytrn_m;`epochs pykw epochs;`verbose pykw 0] Results¶ Once the models had been fit on the training set the results could be scored on the held out test set, the scoring was done in a number of parts: - Percentage of correct predictions vs misses per class. - Confusion matrix for predicted vs actual class. - Classification report outlining precision and recall and f1-score for each class. 
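Central to each of these steps is mapping the network’s per-class scores back onto class symbols. The following is a minimal, illustrative sketch of that mapping only, assuming the fitted mdl_m and the one-hot dictionary ohe_m defined earlier, and that key ohe_m follows the column order of the targets (it does, since Y_m was built as flip value ohe_m):

probs:mdl_m[`:predict][npa xtst_m]`                  / per-class scores for each test tweet
predClass:key[ohe_m]{first where x=max x}each probs  / highest-scoring class per row
trueClass:key[ohe_m]{first where x=max x}each ytst_m / decode the one-hot test targets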
This functionality is wrapped in a function class_scoring in the code/fdl_disasters.q script // Binary classification prediction and scoring q)class_scoring[xtst_b;ytst_b;mdl_b;ohe_b] The following is the integer mapping between class integer representation and real class value: affected_individuals| 0 not_applicable | 1 Actual Class vs prediction Class Prediction Hit -------------------- 0 0 1 0 0 1 1 1 1 0 0 1 0 0 1 Displaying percentage of Correct prediction vs misses per class: Class| Hit Miss -----| -------------------- 0 | 0.9550225 0.04497751 1 | 0.6111111 0.3888889 TOTAL| 0.9070968 0.09290323 Displaying predicted vs actual class assignment matrix: Class| Pred_0 Pred_1 -----| ------------- 0 | 637 30 1 | 42 66 Classification report showing precision, recall and f1-score for each class: class | precision recall f1_score support --------------------| ------------------------------------- affected_individuals| 0.9381443 0.9550225 0.9465082 667 not_applicable | 0.6875 0.6111111 0.6470588 108 avg/total | 0.8128222 0.7830668 0.7967835 775 In the case of the binary classifier, accuracies in the region of 91% show that the model was capable of discerning between relevant and irrelevant tweets. More informative, however, is the recall on the affected individuals class, which was 95%. As such, we are only missing 5% of the total true positives of affected individuals. In this case, recall is the most important characteristic for model performance. // Multi-class prediction and scoring q)class_scoring[xtst_m;ytst_m;mdl_m;ohe_m] The following is the integer mapping between class integer representation and real class value: affected_individuals | 0 caution_advice | 1 donations_volunteering | 2 infrastructure_utilities| 3 not_applicable | 4 other_useful_info | 5 sympathy_prayers | 6 Actual Class vs prediction Class Prediction Hit -------------------- 0 0 1 2 2 1 2 0 0 4 4 1 5 3 0 Displaying percentage of Correct prediction vs misses per class: Class| Hit Miss -----| ------------------- 0 | 0.8831776 0.1168224 1 | 0.5625 0.4375 2 | 0.7424242 0.2575758 3 | 0.4756098 0.5243902 4 | 0.7068966 0.2931034 5 | 0.5906433 0.4093567 6 | 0.6666667 0.3333333 TOTAL| 0.6967742 0.3032258 Displaying predicted vs actual class assignment matrix: Class| Pred_0 Pred_1 Pred_2 Pred_3 Pred_4 Pred_5 Pred_6 -----| ------------------------------------------------ 0 | 189 3 6 3 1 6 6 1 | 3 36 4 4 0 15 2 2 | 10 3 98 1 4 12 4 3 | 7 5 7 39 1 22 1 4 | 1 2 0 0 41 10 4 5 | 20 16 14 11 3 101 6 6 | 4 2 4 0 1 7 36 Classification report showing precision, recall and f1-score for each class: class | precision recall f1_score support ------------------------| ------------------------------------- affected_individuals | 0.8076923 0.8831776 0.84375 214 caution_advice | 0.5373134 0.5625 0.5496183 64 donations_volunteering | 0.7368421 0.7424242 0.7396226 132 infrastructure_utilities| 0.6724138 0.4756098 0.5571429 82 not_applicable | 0.8039216 0.7068966 0.7522936 58 other_useful_info | 0.583815 0.5906433 0.5872093 171 sympathy_prayers | 0.6101695 0.6666667 0.6371681 54 avg/total | 0.6788811 0.6611312 0.6666864 775 The multi-class example also appears to be working well, with overall accuracy of ~70%. In the most important category (affected individuals), recall was ~88%. The most common misclassification was the classification of ‘infrastructure/utilities’ damage as ‘other useful information’, which is a reasonable miscategorization as outlined below in Conclusions. 
Live system¶

As outlined in the Results section above, given the scores produced for the categorization of multi-class tweets, the production of a model was broadly successful. The Conclusions section below outlines the limiting factors that affect the ability to produce a better model. However, the results are sufficient to move on to producing a framework which could be used for the live classification of tweets.

The first step was the saving of the tokenizer and model, which were to be applied to the data as it is fed through the system. This can be seen within the notebook in the following commands:

// python script which uses pickle to save tokenizer
\l ../code/token_save.p
sv_tok:.p.get[`save_token]
sv_tok[tokenizer];

// save model as a h5 file
mdl_m[`:save]["../live/multiclass_mdl.h5"]

Given the limited availability of data, data from the notebook was used to produce the ‘live’ system. The outline for this system is based heavily on the ‘vanilla’ kdb+ tickerplant architecture.

The first step to run the system is to initialize the tickerplant. The script sets the port to 5140 itself; any other port assignment would be overwritten.

$ q tick.q sym ./log/
For the purposes of this example -p must be set to 5140, setting port accordingly
q)

Once the tickerplant is listening for messages from the feed handler, we can start to look at how this feed was produced. The code sections of note within this are the following.

// Open a connection to the tickerplant
h:neg hopen`:localhost:5140

// Classes
c:`affected_individuals`caution_advice`donations_volunteering`sympathy_prayers
c,:`other_useful_info`infrastructure_utilities`useless_info

// Create a dictionary showing rolling number of tweets per class
processed_data:c!count[c]#0

// Function to update the appropriate tables on the tickerplant
// update the number of values classified in each class
upd_vals:{(h(".u.upd";x;y);processed_data[x]+:1)}

// time-sensitive data feed
.z.ts:{[c]
  if[(0=n mod 50)and n>1;
    -1"\nThe following are the number of tweets in each class for ",string[n]," processed tweets";
    show processed_data];
  clean_tweet:(rmv_ascii rmv_custom[;rmv_list] rmv_emoji rmv_hashtag rmv_single@) tweets[n];
  X:pad[tokenizer[`:texts_to_sequences]enlist clean_tweet;`maxlen pykw 50];
  pred:key[ohe]raze{where x=max x}(svd_mdl[`:predict][X]`)0;
  pkg:(.z.N;pred[0];clean_tweet);
  upd_vals[;pkg] {$[x in c; x; last c]}first pred;
  n+:1;
  }[c]

Looking closely at the feed function above, it is clear that this is generally following the data pipeline used within the notebook.

- Tweets are purged of non-ASCII characters, emojis, special characters and hashtags.
- The tweets are tokenized and padded to an appropriate length.
- The trained model is used to predict the class of the tweet.

The divergence comes once tweets have been classified. At this point, the table appropriate for the class is updated using the upd_vals function. The classification time, the class label and the cleaned tweet are inserted into the appropriate tables.

The feed is started, at which point the required libraries are loaded into the feed process.

$q feed.q
Loading utils.q
Loading regex.q
Loading sent.q
Loading parser.q
Loading time.q
...
// set system to publish a message every 100ms
q)\t 100

At this point, an RDB can be set up to allow a user to query the tables associated with each class. For the sake of simplicity, the RDB in this example is subscribed to all the tables. However, this could be modified based on use case.

$q tick/r.q -p 5011
q)caution_advice
time                 sym            tweet                                    ..
-----------------------------------------------------------------------------.. 0D13:19:27.402944000 caution_advice "abcnews follow our live blog for the lat.. 0D13:19:28.898058000 caution_advice "davidcurnowabc toowoomba not spared wind.. 0D13:19:31.498798000 caution_advice "acpmh check out beyondblue looking after.. 0D13:19:33.797604000 caution_advice "ancalerts pagasa advisory red warning fo.. 0D13:19:34.798857000 caution_advice "flood warning for the dawson and fitzroy.. q)donations_volunteering time sym tweet .. -----------------------------------------------------------------------------.. 0D13:19:27.300326000 donations_volunteering "rancyamor annecurtissmith please.. 0D13:19:27.601642000 donations_volunteering "arvindkejriwal all aap mlas to d.. 0D13:19:28.198921000 donations_volunteering "truevirathindu manmohan singh so.. 0D13:19:29.001481000 donations_volunteering "bpincott collecting donations in.. 0D13:19:30.297868000 donations_volunteering "vailresorts vail resorts gives p.. Conclusions¶ In conclusion, it is clear from the results above that the use of an LSTM architecture to create a classifier for tweet content was broadly successful. A number of limiting factors hamper the ability to create a better model with the data available. These are as follows: - The dataset used was limited in size with only 7,800 classified tweets readily available. Given the ‘noisy’ nature of tweets this creates difficulties around producing a reliable model. A larger corpus would likely have produced a better representation of the language used in flooding scenarios and thus allow a better model to be produced. - The human-annotated data can be unreliable. While the data was collected and tagged by CrisisNLP, given the similarity of some of the classes, it may be the case that mistakes being made by the model are accurate representation of the true class. This is certainly true in the case of the data from India and Pakistan, where a reference for the quality of the classifications is provided in the raw dataset. - Decisions regarding information to remove from the dataset can have an impact. The inclusion of hashtags or the removal of user handles or rt tags, can impact the model’s ability to derive context from the tweets. For example, a search of this parameter space showed that the removal of user names had a negative effect. This is likely a result of tweets from news organizations, which are prevalent and are more likely to relate to a small number of classes. For example, infrastructure/utilities and caution/advice. The production of a framework to ‘live’ score data was also outlined. As mentioned when discussing the limits in model performance, there are also a number of limiting factors with this live system. The processing and classification time for an individual tweet limits the throughput of the system to approximately 40 messages per second. In order to scale this system to a larger dataset with higher throughput requirements, a more complex infrastructure or simplified machine learning pipeline would be required. However, this system shows the potential for the use of kdb+ in the sphere of machine learning when applied to natural language processing tasks. Author¶ Conor McCarthy joined First Derivatives in March 2018 as a Data Scientist in the Capital Markets Training Program and currently works as a machine learning engineer and interfaces architect in London. Code¶ The code presented in this paper is available on GitHub. 
Acknowledgements¶ I gratefully acknowledge the help of all those at FDL Europe for their support and guidance in this project and my colleagues on the KX Machine Learning team for their help vetting technical aspects of this paper.
Datatypes¶

Basic datatypes

n   c name      sz literal            null inf  SQL
----------------------------------------------------------
0   * list
1   b boolean   1  0b
2   g guid      16                    0Ng
4   x byte      1  0x00
5   h short     2  0h                 0Nh  0Wh  smallint
6   i int       4  0i                 0Ni  0Wi  int
7   j long      8  0j                 0Nj  0Wj  bigint
                   0                  0N   0W
8   e real      4  0e                 0Ne  0We  real
9   f float     8  0.0                0n   0w   float
                   0f                 0Nf
10  c char      1  " "                " "
11  s symbol       `                  `         varchar
12  p timestamp 8  dateDtimespan      0Np  0Wp
13  m month     4  2000.01m           0Nm  0Wm
14  d date      4  2000.01.01         0Nd  0Wd  date
15  z datetime  8  dateTtime          0Nz  0wz  timestamp
16  n timespan  8  00:00:00.000000000 0Nn  0Wn
17  u minute    4  00:00              0Nu  0Wu
18  v second    4  00:00:00           0Nv  0Wv
19  t time      4  00:00:00.000       0Nt  0Wt  time

Columns:

n    short int returned by type and used for Cast, e.g. 9h$3
c    character used lower-case for Cast and upper-case for Tok and Load CSV
sz   size in bytes
inf  infinity (no math on temporal types); 0Wh is 32767h

Other datatypes

20-76  enums
77     anymap                                   104  projection
78-96  77+t – mapped list of lists of type t    105  composition
97     nested sym enum                          106  f'
98     table                                    107  f/
99     dictionary                               108  f\
100    lambda                                   109  f':
101    unary primitive                          110  f/:
102    operator                                 111  f\:
103    iterator                                 112  dynamic load

Above, f is an applicable value. Nested types are 77+t (e.g. 78 is boolean, 96 is time).

The type is a short int:

- zero for a general list
- negative for atoms of basic datatypes
- positive for everything else

Cast, Tok, type, key, .Q.ty (type), Temporal data, Timezones

Basic types¶

The default type for an integer is long (7h or "j"). Before V3.0 it was int (6h or "i").

Strings¶

There is no string datatype. On this site, string is a synonym for character vector (type 10h). In q, the nearest equivalent to an atomic string is the symbol.

Strings can include multibyte characters, which each occupy the respective number of bytes. For example, assuming that the input encoding is UTF-8:

q){(x;count x)}"Zürich"
"Z\303\274rich"
7
q){(x;count x)}"日本"
"\346\227\245\346\234\254"
6

Other encodings may give different results.

q)\chcp
"Active code page: 850"
q)"Zürich"
"Z\201rich"
q)\chcp 1250
"Active code page: 1250"
q)"Zürich"
"Z\374rich"

Temporal¶

The valid date range is 0001.01.01 to 9999.12.31. (Since V3.6 2017.10.23.) The datetime datatype (15) is deprecated in favour of the timestamp datatype (12).

q)"D"$"3001.01.01"
3001.01.01

Internally, dates, times and timestamps are represented by integers:

q)show noon:`minutes`seconds`nanoseconds!(12:00;12:00:00;12:00:00.000000000)
minutes    | 12:00
seconds    | 12:00:00
nanoseconds| 0D12:00:00.000000000
q)"j"$noon
minutes    | 720
seconds    | 43200
nanoseconds| 43200000000000

Date calculations assume the proleptic Gregorian calendar.

Casting to timestamp from date or datetime outside of the timestamp-supported year range results in ±0Wp. Out-of-range dates and datetimes display as 0000.00.00 and 0000.00.00T00:00:00.000.

q)`timestamp$1666.09.02
-0Wp
q)0001.01.01-1
0000.00.00
q)"z"$0001.01.01-1
0000.00.00T00:00:00.000

Valid ranges can be seen by incrementing or decrementing the infinities.

q)-0W 0Wp+1 -1   / limit of timestamp type
1707.09.22D00:12:43.145224194 2292.04.10D23:47:16.854775806
q)0p+ -0W 0Wp+1 -1   / timespan offset of those from 0p
-106751D23:47:16.854775806 106751D23:47:16.854775806
q)-0W 0Wn+1 -1   / coincide with the min/max for timespan

Symbols¶

A back tick ` followed by a series of characters represents a symbol, which is not the same as a string.

q)`symbol ~ "symbol"
0b

A back tick without characters after it represents the empty symbol: ` .
Cast string to symbol The empty symbol can be used with Cast to cast a string into a symbol, creating symbols whose names could not otherwise be written, such as symbols containing spaces. `$x is shorthand for "S"$x . q)s:`hello world 'world q)s:`$"hello world" q)s `hello world Q for Mortals: §2.4 Basic Data Types – Atoms Filepaths¶ Filepaths are a special form of symbol. q)count read0 `:path/to/myfile.txt / count lines in myfile.txt Infinities¶ Note that arithmetic for integer infinities (0Wh ,0Wi ,0Wj ) is undefined, and does not retain the concept when cast. q)0Wi+5 2147483652 q)0Wi+5i -2147483644i q)`float$0Wj 9.223372e+18 q)`float$0Wi 2.147484e+09 Arithmetic for float infinities (0we ,0w ) behaves as expected. q)0we + 5 0we q)0w + 5 0w To infinity and beyond Floating-point arithmetic follows IEEE754. Integer arithmetic does no checks for infinities, just treats them as a signed integer. q)vs[0b]@/:0N!0W+til 3 0W 0N -0W 0111111111111111111111111111111111111111111111111111111111111111b 1000000000000000000000000000000000000000000000000000000000000000b 1000000000000000000000000000000000000000000000000000000000000001b but it does check for nulls. q)10+0W+til 3 -9223372036854775799 0N -9223372036854775797 This can be abused to push infinities on nulls which then become sticky and can be filtered out altogether, e.g. q)1+-1+-1+1+ -0W 0N 0W 1 2 3 0N 0N 0N 1 2 3 There is no display for short infinity. q)0Wh 32767h q)-0Wh -32767h Integer promotion is documented for Add. Integer infinities - do not promote, other than the signed bit; there is no special treatment over any other int value - map to int_min+1 and int_max, with 0N as int_min; so there is no number smaller than0N Best practice is to view infinities as placeholders only, and not perform arithmetic on them. Guid¶ The guid type (since V3.0) is a 16-byte type, and can be used for storing arbitrary 16-byte values, typically transaction IDs. Generation Use Deal to generate a guid (global unique: uses .z.a .z.i .z.p ). q)-2?0Ng 337714f8-3d76-f283-cdc1-33ca89be59e9 0a369037-75d3-b24d-6721-5a1d44d4bed5 If necessary, manipulate the bytes to make the uuid a Version-4 'standard' uuid. Guids can also be created from strings or byte vectors, using sv or "G"$ , e.g. q)0x0 sv 16?0xff 8c680a01-5a49-5aab-5a65-d4bfddb6a661 q)"G"$"8c680a01-5a49-5aab-5a65-d4bfddb6a661" 8c680a01-5a49-5aab-5a65-d4bfddb6a661 0Ng is null guid. q)0Ng 00000000-0000-0000-0000-000000000000 q)null 0Ng 1b There is no literal entry for a guid, it has no conversions, and the only scalar primitives are = , < and > (similar to sym). In general, since V3.0, there should be no need for char vectors for IDs. IDs should be int, sym or guid. Guids are faster (much faster for = ) than the 16-byte char vecs and take 2.5 times less storage (16 per instead of 40 per). Other types¶ Enumerated types¶ Enumerated types are numbered from 20h up to 76h . For example, in a new session with no enumerations defined: q)type `sym$10?sym:`AAPL`AIG`GOOG`IBM 20h q)type `city$10?city:`london`paris`rome 20h (Since V3.0, type 20h is reserved for `xxx$ where xxx is the name of a variable.) Enumerate, Enumeration, Enum Extend Enumerations Nested types¶ These types are used for mapped lists of lists of the same type. The numbering is 77 + primitive type (e.g. 77 is anymap, 78 is boolean, 96 is time and 97 is `sym$ enumeration.) q)`:t1.dat set 2 3#til 6 `:t1.dat q)a:get `:t1.dat q)type a /integer nested type 83h q)a 0 1 2 3 4 5 Dictionary and table¶ Dictionary is 99h and table is 98h . 
q)type d:`a`b`c!(1 2;3 5;7 11) / dict 99h q)type flip d / table 98h Functions, iterators, derived functions¶ Functions, lambdas, operators, iterators, projections, compositions and derived functions have types in the range [100–112]. q)type each({x+y};neg;-;\;+[;1];<>;,';+/;+\;prev;+/:;+\:;`f 2:`f,1) 100 101 102 103 104 105 106 107 108 109 110 111 112h
Cost/risk analysis¶ To determine how much cost savings our cluster of RDBs can make we will deploy the stack and simulate a day in the market. t3 instances were used here for simplicity. Their small sizes meant the clusters could scale in and out to demonstrate cost savings without using a huge amount of data. In reality they can incur a significant amount of excess costs due to their burstable performance. For production systems fixed cost instances like r5 , m5 , and i3 should really be used. Initial simulation¶ First we just want to see the cluster in action so we can see how it behaves. To do this we will run the cluster with t3a.micro instances. In the Auto Scaling the RDB section above, data is distributed evenly throughout the day. This will not be the case in most of our systems as data volumes will be highly concentrated between market open and close. To simulate this as closely as possible we will generate data following the distribution below. Figure 2.1: Simulation data volume distribution In this simulation we will aim to send in 6GB of data of mock trade and quote data. The peak data volume will be almost 1GB of data per hour (15% of the daily data) . The t3a.micro instances only have 1GB of RAM so we should see the cluster scaling out quite quickly while markets are open. The behavior of the cluster was monitored using Cloudwatch metrics. Each RDB server published the results of the Linux free command. First we will take a look at the total capacity of the cluster throughout the day. Figure 2.2: Total memory capacity of the t3a.micro cluster – Cloudwatch Metrics As expected we can see the number of servers stayed at one until the market opened. The RDBs then started to receive data and the cluster scaled up to eight instances. At end-of-day the data was flushed from memory and all but the live server was terminated. So the capacity was reduced back to 1GB and the cycle continued the day after. Plotting the memory usage of each server we see that the rates at which they rose were higher in the middle of the day when the data volumes were highest. Figure 2.3: Memory usage of each of the t3a.micro servers – Cloudwatch Metrics Focusing on just two of the servers we can see the relationship between the live server and the one it eventually launches. Figure 2.4: Scaling thresholds of t3a.micro servers – Cloudwatch Metrics At 60% memory usage the live server increased the ASG’s DesiredCapacity and launched the new server. We can see the new server then waited for about twenty minutes until the live RDB reached the roll threshold of 80%. The live server then unsubscribed from the tickerplant and the next server took over. Cost factors¶ Now that we can see the cluster working as expected we can take a look at its cost-efficiency. More specifically, how much of the computing resources we provisioned did we actually use. To do that we can take a look at the capacity of the cluster versus its memory usage. Figure 2.5: T3a.micro cluster’s total memory capacity vs total memory usage – Cloudwatch Metrics We can see from the graph above that the cluster’s capacity follows the demand line quite closely. As we pay per GB of RAM we use, the capacity line can be taken as the cost of the cluster. The gap between it and the usage line is where the cluster can make savings. Our first option is to reduce the size of each step up in capacity by reducing the size of our cluster’s servers. 
To bring the step itself closer to the demand line we need to either scale the server as late as possible or have each RDB hold more data. To summarize, there are three factors we can adjust in our cluster. - The server size - The scale threshold - The roll threshold Risk analysis¶ Care will be needed when adjusting these factors for cost-efficiency as each one will increase the risk of failure. First and foremost a roll threshold should be chosen so that the chance of losing an RDB to a 'wsfull error is minimized. The main risk associated with scaling comes from not being able to scale out fast enough. This will occur if the lead time for an RDB server is greater than the time it takes for the live server to roll after it has told the ASG to scale out. Figure 2.6: T3a.micro server waiting to become the live subscriber – Cloudwatch Metric Taking a closer look at Figure 2.4 we can see the t3a.micro took around one minute to initialize. It then waited another 22 minutes for the live server to climb to its roll threshold of 80% and took its place. So for this simulation the cluster had a 22-minute cushion. With a one-minute lead time, the data volumes would have to increase to 22 times that of the mock feed before the cluster started to fall behind. We could reduce this time by narrowing the gap between scaling and rolling, but it may not be worth it. Falling behind the tickerplant will mean recovering data from its log. This issue will be a compounding one as each subsequent server that comes up will be farther and farther behind the tickerplant. More and more data will need to be recovered, and live data will be delayed. One of the mantras of Auto Scaling is to stop guessing demand. By keeping a cushion for the RDBs in the tickerplant’s queue we will likely not have to worry about large spikes in demand affecting our system. Further simulations will be run to determine whether cost savings associated with adjusting these factors are worth the risk. Server size comparison¶ To determine the impact of using smaller instances four clusters were launched each with a different instance type. The instances used had capacities of 2, 4, 8, and 16GB. Figure 3.1: T3a instance types used for cost efficiency comparison As in the first simulation the data volumes were distributed in order to simulate a day in the market. However, in this simulation we aimed to send in around 16GB of data to match the total capacity of one t3a.xlarge (the largest instance type of the clusters). .sub.i was published from each of the live RDBs allowing us to plot the upd message throughput. Figure 3.2: T3a cluster’s upd throughput – Cloudwatch Metrics Since there was no great difference between the clusters, the assumption could be made that the amount of data in each cluster at any given time throughout the day was equivalent. So any further comparisons between the four clusters would be valid. Next the total capacity of each cluster was plotted. Figure 3.3: T3a clusters' total memory capacity Strangely the capacity of the t3a.small cluster (the smallest instance) rose above the capacity of the larger ones. Intuitively they should scale together but the smaller steps of the t3a.small cluster should still have kept it below the others. When the memory usage of each server was plotted we saw that the smaller instances once again rose above the larger ones. 
Figure 3.4: T3a clusters' memory usage – Cloudwatch Metrics This comes down to the latent memory of each server: when an empty RDB process is running, its memory usage is approximately 150MB. (base) [ec2-user@ip-10-0-1-212 ~]$ free total used free shared buff/cache available Mem: 2002032 150784 1369484 476 481764 1694748 Swap: 0 0 0 So for every instance that we add to the cluster, the overall memory usage will increase by 150MB. This extra 150MB will be negligible when the data volumes are scaled up, as much larger servers will be used. The effect is less prominent in the 4, 8, and 16GB servers, so going forward we will use them to compare costs. Figure 3.5: Larger t3a Clusters' Memory Usage - Cloudwatch Metrics The three clusters here behave as expected. The smallest cluster’s capacity stays far closer to the demand line, although it does move towards the larger ones as more instances are added. This is the worst-case scenario for the t3a.xlarge cluster, as its 16GB capacity means it has to scale up to safely meet the demand of the simulation’s data, but the second server stays mostly empty until end-of-day. The cluster will still have major savings over a t3a.2xlarge with 32GB. The cost of running each cluster was calculated; the results are shown below. We can see that the two smaller instances have significant savings compared to the larger ones: roughly 50% when compared to running a t3a.2xlarge, whereas the clusters with larger instances saw just 35 and 38%. | instance | capacity (GB) | total cost ($) | cost saving (%) | |---|---|---|---| | t3a.small | 2 | 3.7895 | 48 | | t3a.medium | 4 | 3.5413 | 51 | | t3a.large | 8 | 4.4493 | 38 | | t3a.xlarge | 16 | 4.7175 | 35 | | t3a.2xlarge | 32 | 7.2192 | 0 | Figure 3.6: T3a clusters' cost savings If data volumes are scaled up the savings could become even greater as the ratio of server size to total daily data volume becomes greater. However, it is worth noting that the larger servers did have more capacity when the data volumes stopped, so the differences may also be slightly exaggerated. Taking a look at Figure 3.5 we can intuitively split the day into three stages. - End of Day to Market Open - Market Open to Market Close - Market Close to End of Day Savings in the first stage will only be achieved by reducing the instance size. In the second stage savings look to be less significant, but could be achieved by both reducing server size and reducing the time the servers spend in the queue. From market-close to end-of-day the clusters have scaled out fully. In this stage cost-efficiency will be determined by how much data is in the final server. If it is only holding a small amount of data when the market closes, there will be idle capacity in the cluster until end-of-day occurs. This will be rather random and depend mainly on how much data is generated by the market, although having smaller servers will reduce the maximum amount of capacity that could be left unused. The worst-case scenario in this stage is that the amount of data held by the last live server falls in the range between the scale and roll thresholds. This will mean an entire RDB server will be sitting idle until end-of-day. To reduce the likelihood of this occurring it may be worth increasing the scale threshold and risking falling behind the tickerplant in the case of high data volumes. Threshold window comparison¶ To test the effects of the scale threshold on cost another stack was launched (also with four RDB clusters).
In this stack all four clusters used t3a.medium EC2 instances (4GB) and a roll threshold of 85% was set. Data was generated in the same fashion as the previous simulation. The scale thresholds were set to 20, 40, 60, and 80% and the memory capacity was plotted as in Figure 3.4. Figure 4.1: T3a.medium clusters' memory capacity vs memory usage – Cloudwatch Metrics As expected the clusters with the lower scale thresholds scale out farther away from the demand line. Their new servers will then have a longer wait time in the tickerplant queue. This will reduce the risks associated with the second stage but also increase its costs. This difference can be seen more clearly if only the 20 and 80% clusters are plotted. Figure 4.2: T3a.medium 20 and 80% clusters' memory capacity vs memory usage – Cloudwatch Metrics Most importantly we can see that in the third stage the clusters with lower thresholds started an extra server. So a whole instance was left idle in those clusters from market-close to end-of-day. The costs associated with each cluster were calculated below. | instance | threshold | capacity (GB) | total cost ($) | cost saving (%) | |---|---|---|---|---| | t3a.medium | 80% | 4 | 3.14 | 43 | | t3a.medium | 60% | 4 | 3.19 | 44 | | t3a.medium | 40% | 4 | 3.56 | 49 | | t3a.medium | 20% | 4 | 3.61 | 50 | | t3a.2xlarge | n/a | 32 | 7.21 | 0 | The 20 and 40% clusters and the 60 and 80% clusters started the same amount of servers as each other throughout the day. So we can compare their costs to analyze cost-efficiencies in the second stage (market-open to close). With differences of under 1% compared to the t3.2xlarge the cost savings we can make from this stage are not that significant. Comparing the difference between the two pairs we can see that costs jump from 44 to 49%. Therefore the final stage where there is an extra server sitting idle until end-of-day has a much larger impact. Even though raising the scale threshold has a significant impact when no extra server is added at market-close, choosing whether to do so will still be dependant on the needs of each system. A 5% decrease in costs may not be worth the risk of falling behind the tickerplant. Taking it further¶ Turning the cluster off¶ The saving estimates in the previous sections could be taken a step further by adding scheduled scaling. When the RDBs are not in use we could scale the cluster down to zero, effectively turning off the RDB. Weekends are a prime example of when this could be useful, but it could also be extended to the period between end-of-day and market open. If data only starts coming into the RDB at around 07:00 when markets open there is no point having a server up. So we could schedule the ASG to turn down to zero instances at end-of-day. We then have a few options for scaling back out, each have some pros and cons. | option | remarks | |---|---| | Schedule the ASG to scale out at 05:30 before the market opens | Data will not be available until then if it starts to come in before. | Monitor the tickerplant for the first upd message and scale out when it is received | Data will not be available until the RDB comes up and recovers from the tickerplant log. Will not be much data to recover. | | Scale out when the first query is run | Useful because data is not needed until it is queried. RDBs may come up before there is any data. A large amount of data may need to be recovered if queries start to come in later in the day. 
| Intraday write-down¶ The least complex way to run this solution would be in tandem with a write-down database (WDB) process. The RDBs will then not have to save down to disk at end-of-day so scaling in will be quicker. The complexity will also be reduced. If the RDBs were to write down at end-of-day a separate process would be needed to coordinate the writes of each one and sort and part the data. As the cluster will most likely be deployed alongside a WDB process an intraday write-down solution could also be incorporated. If we were to write to the HDB every hour, the RDBs could then flush their data from memory allowing the cluster to scale in each time. Options for how to set up an intraday write-down solution have been discussed in a whitepaper by Colm McCarthy. Querying distributed RDBs¶ As discussed, building a gateway to query the RDBs is beyond the scope of this paper. When a gateway process is set up, distributed RDBs could offer some advantages over a regular RDB: - RDBs can be filtered out by the gateway pre-query based on which data sets they are holding. - Each RDB will be holding a fraction of the day’s data, decreasing query memory and duration. - Queries across multiple RDBs can be done in parallel. Conclusion¶ This article has presented a solution for a scalable real-time database cluster. The simulations carried out showed savings of up to 50% could be made. These savings, along with the increased availability of the cluster, could make holding a whole day’s data in memory more feasible for our kdb+ databases. If not, the cluster can be used alongside an intraday write-down process. If an intraday write is incorporated in a system it is usually one that needs to keep memory usage low. The scalability of the cluster can guard against large spikes in intraday data volumes crippling the system. Used in this way very small instances could be used to reduce costs. The .u.asg functionality in the tickerplant also gives the opportunity to run multiple clusters at different levels of risk. Highly important data can be placed in a cluster with a low scale threshold or larger instance size. If certain data sources do not need to be available with a low latency clusters with smaller instances and higher scale thresholds can be used to reduce costs. Author¶ Jack Stapleton is a kdb+ consultant for KX who has worked for some the world’s largest financial institutions. Based in Dublin, Jack is currently working on the design, development, and maintenance of a range of kdb+ solutions in the cloud for a leading financial institution. kxcontrib/cloud-autoscaling companion scripts
Regular expressions¶ Keywords like , ss , and ssr interpret their second arguments as a limited form of Regular Expression (regex). In a q regex pattern certain characters have special meaning: ? wildcard: matches any character * matches any sequence of characters [] embraces a list of alternatives, any of which matches Wildcard¶ A ? in the pattern matches any character. q)("brown";"drown";"frown";"grown") like "?rown" 1111b q)"the brown duck drowned" ss "?rown" 4 15 List of alternatives¶ A list of alternatives is embraced by square brackets and consists of: [^] + [char|range] where char is a character atomrange has the form0-9 ,a-z , orA-Z Beginning the list with a caret makes the list match any characters except those listed. q)"brown" like "[bf]rown" 1b q)"brown" like "[^cf]rown" 1b q)"br^wn" like "br[&^]wn" 1b The list can include ranges of the form 0-9 , a-z , and A-Z . q)"brAwn" like "br[A-Z]wn" 1b q)"br0wn" like "br[0-3]wn" 1b q)"br0wn" like "br[3-6]wn" 0b q)"br0wn" like "br[^3-6]wn" 1b Within a list of alternatives ? and * are not wildcards. q)"brown" like "br?*wn" 1b q)"brown" like "br[?*]wn" 0b Matching special characters¶ Special characters can be matched by bracketing them as lists of alternatives. q)"br*wn" like "br[*]wn" 1b q)"br?wn" like "br[?]wn" 1b q)"br]wn" like "[bf]r[]]wn" 1b q)a:("roam";"rome") q)a like "r?me" 01b q)a like "ro*" 11b q)a like "ro[ab]?" 10b q)a like "ro[^ab]?" 01b q)"a[c" like "a[[]c" 1b q)(`$("ab*c";"abcc"))like"ab[*]c" 10b q)(`$("ab?c";"abcc"))like"ab[?]c" 10b q)(`$("ab^c";"abcc"))like"ab[*^]c" 10b Empty strings¶ Empty strings are everywhere. They cannot be matched by ss or ssr . q)"A grown man in a gown" ss "rown" ,3 q)"A grown man in a gown" ss "own" 4 18 q)"A grown man in a gown" ss "n" 6 10 13 20 q)"A grown man in a gown" ss "" 'length [0] "A grown man in a gown" ss "" ^ Arbitrary sequence¶ There are limits to matching patterns containing * A * in a pattern matches a sequence of any length, including an empty string. q)"brown" like "br*wn" 1b q)"broom of your own" like "br*wn" 1b q)"brwn" like "br*wn" 1b ss , ssr ¶ With patterns containing * , keywords ss and ssr signal a length error. q)s:"Now is the time for all good men to come to the aid of the party." q)s ss "t?e" 7 44 55 q)s ss "t*e" 'length [0] s ss "t*e" ^ like ¶ Some patterns with * are too difficult to match. They produce a nyi error. q)s like "*the*" 1b q)s like "*the*the*" 'nyi [0] s like "*the*the*" ^ q)s like "*the*the" 'nyi [0] s like "*the*the" ^ Worked example¶ The left argument in the following example is a list of telephone book entries: q)tb "Smith John 101 N Broadway Elmsville 123-4567" "Smyth Barbara 27 Maple Ave Elmstwn 321-7654" "Smythe Ken 321-a Maple Avenue Elmstown 123-9999" "Smothers 11 Jordan Road Oakwood 123-2357" "Smith-Hawkins K Maple St Elmwood 321-832e" q)tb like "Smith*" 10001b q)tb like "Sm?th*" 11111b q)tb like "Sm[iy]th*" 11101b We can try finding everyone with the telephone exchange code 321 as follows: q)tb like "*321-*" 01101b Unfortunately, this pattern also picks up the item for Ken Smythe, who has "321-" as part of his address. Since the exchange code is part of a telephone number the "-" must be followed by a digit, which can be expressed by the pattern *321-[0123456789]* . There is a shorthand for long sequences of alternatives, which in this case is *321-[0-9]* . q)tb like "*321-[0-9]*" 01001b Other sequences for which this shorthand works are sequences of alphabetic characters (in alphabetic order). The pattern in the last example isn’t foolproof. 
We would also have picked up Ken Smythe’s item if his street number had been 321-1a instead of 321-a. Since the telephone number comes at the end of the text, we could repeat the above alternative four times and leave out the final "*" , indicating that there are four digits are at the end of each item. q)tb like "*321-[0-9][0-9][0-9][0-9]" 01000b Unfortunately this pattern misses the last item, which has an error in the last position of the telephone number. However, in this case the simpler pattern *321-???? will work. It is generally best to not over-specify the pattern constraint. q)tb like "*321-????" 01001b The reserved character ^ selects characters that are not among the specified alternatives. For example, there are errors in some items where the last position in the telephone number is not a digit. We can locate all those errors as follows. q)tb like "*[^0-9]" 00001b Regex libraries¶ For something more flexible, it is possible to use regex libs such as google/re2. The code below was compiled to use re2 with V3.1. The k.h file can be downloaded from This can be compiled for 64-bit Linux: g++ -m64 -fPIC -O2 re2.cc -o re2.so -I . re2/obj/libre2.a -DKXVER=3 -shared -static and the resulting re2.so copied into the $QHOME/l64 subdirectory. It can then be loaded and called in q: q)f:`re2 2:(`FullMatch;2) / bind FullMatch to f q)f["hello world";"hello ..rld"] #include <re2/re2.h> #include <re2/filtered_re2.h> #include <stdlib.h> //malloc #include <stdio.h> #include"k.h" using namespace re2; extern "C" { Z S makeErrStr(S s1,S s2){Z __thread char b[256];snprintf(b,256,"%s - %s",s1,s2);R b;} Z __inline S c2s(S s,J n){S r=(S)malloc(n+1);R r?memcpy(r,s,n),r[n]=0,r:(S)krr((S)"wsfull (re2)");} K FullMatch(K x,K y){ S s,sy;K r; P(x->t&&x->t!=KC&&x->t!=KS&&x->t!=-KS||y->t!=KC,krr((S)"type")) U(sy=c2s((S)kC(y),y->n)) RE2 pattern(sy,RE2::Quiet); free(sy); P(!pattern.ok(),krr(makeErrStr((S)"bad regex",(S)pattern.error().c_str()))) if(!x->t||x->t==KS){ J i=0; K r=ktn(KB,x->n); for(;i<x->n;i++){ K z=0; P(!x->t&&(z=kK(x)[i])->t!=KC,(r0(r),krr((S)"type"))) s=z?c2s((S)kC(z),z->n):kS(x)[i];P(!s,(r0(r),(K)0)) kG(r)[i]=RE2::FullMatch(s,pattern); if(z)free(s); } R r; } s=x->t==-KS?x->s:c2s((S)kC(x),x->n); r=kb(RE2::FullMatch(s,pattern)); if(s!=x->s)free(s); R r; } } Regex in q¶ Itis also possible to create a regex matcher in q, using a state machine, e.g. / want to match "x*fz*0*0" q)m:({0};{2*x="x"};{2+x="f"};{2+/1 2*x="fz"};{4+x="0"};{5+x="0"};{7-x="0"};{7-x="0"}) q)f:{6=1 m/x} q)f"xyzfz000" 1b However, this does not return until all input chars have been processed, even if a match can be eliminated on the first char. This could be accommodated here: q)f:{6~last{$[count x 1;((m x 0)[first x 1];1 _ x 1);(0;first x)]}/[{0<x 0};(1;x)]}
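As a quick check of the early-terminating version (the outputs shown are what we would expect, assuming the definitions of m and f above):
q)f"xyzfz000"
1b
q)f"hello"    / first char is not "x", so the scan stops after one step
0b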
Machine Learning in kdb+: k-Nearest Neighbor classification and pattern recognition with q¶ Amongst the numerous algorithms used in machine learning, k-Nearest Neighbors (k-NN) is often used in pattern recognition due to its easy implementation and non-parametric nature. A k-NN classifier aims to predict the class of an observation based on the prevailing class among its k-nearest neighbors; “nearest” is determined by a distance metric between class attributes (or features), and k is the number of nearest neighbors to consider. Class attributes are collected in n-dimensional arrays. This means that the performance of a k-NN classifier varies depending on how quickly it can scan through, apply functions to, and reduce numerous, potentially large arrays. The UCI website contains several examples of such datasets; an interesting example is the Pen-Based Recognition of Handwritten Digits. Lichman, M. (2013). UCI Machine Learning Repository. Irvine, CA: University of California, School of Information and Computer Science. This dataset contains two disjointed collections: pendigits.tra , a collection of 7494 instances with known class labels which will be used as training set;pendigits.tes , a collection of 3498 instances (with known labels, but not used by the classifier) which will be used as test set to assess the accuracy of the predictions made. Class features for each instance are represented via one-dimensional arrays of 16 integers, representing X-Y coordinate pairs in a 100×100 space, of the digital sampling of the handwritten digits. The average number of instances per class in the training set is 749.4 with a standard deviation of 30.00. Due to its compute-heavy features, k-NN has limited industry application compared to other machine-learning methods. In this paper, we will analyze an alternative implementation of the k-NN, using the array-processing power of kdb+. kdb+ has powerful built-in functions designed and optimized for tables and lists. Together with qSQL, these functions can be used to great effect for machine-learning purposes, especially with compute-heavy implementations like k-NN. All tests were run using kdb+ version 3.5 2017.06.15, on a Virtual Machine with four allocated 4.2GHz cores. Code used can be found at kxcontrib/wp-knn. Loading the dataset in q¶ Once downloaded, this is how the dataset looks in a text editor: Figure 1: CSV dataset The last number on the right-hand side of each line is the class label, and the other sixteen numbers are the class attributes; 16 numbers representing the 8 Cartesian coordinates sampled from each handwritten digit. We start by loading both test and training sets into a q session: loadSet:{ n:16; / # columns c:(`$'n#.Q.a),`class; / column names t:"ic" where n,1; / types x set`class xkey flip c! (t; ",") 0: ` sv`pendigits,x } q)loadSet each `tra`tes; q) tes class| a b c d e f g h i j k l m n o p -----| ------------------------------------------------------------ 8 | 88 92 2 99 16 66 94 37 70 0 0 24 42 65 100 100 8 | 80 100 18 98 60 66 100 29 42 0 0 23 42 61 56 98 8 | 0 94 9 57 20 19 7 0 20 36 70 68 100 100 18 92 9 | 95 82 71 100 27 77 77 73 100 80 93 42 56 13 0 0 9 | 68 100 6 88 47 75 87 82 85 56 100 29 75 6 0 0 .. 
q)tra class| a b c d e f g h i j k l m n o p -----| ------------------------------------------------------------ 8 | 47 100 27 81 57 37 26 0 0 23 56 53 100 90 40 98 2 | 0 89 27 100 42 75 29 45 15 15 37 0 69 2 100 6 1 | 0 57 31 68 72 90 100 100 76 75 50 51 28 26 16 0 4 | 0 100 7 92 5 68 19 45 86 34 100 45 74 23 67 0 1 | 0 67 49 83 100 100 81 80 60 60 40 40 33 20 47 0 .. For convenience, the two resulting dictionaries are flipped into tables and keyed on the class attribute so that we can later leverage qSQL and some of its powerful features. Keying at this stage is done for display purposes only. Rows taken from the sets will be flipped back into dictionaries and the class label will be dropped while computing the distance metric. The column names do not carry any information and so the first 16 letters of the alphabet are chosen for the 16 integers representing the class attributes; while the class label, stored as a character, is assigned a mnemonic tag class . Calculating distance metric¶ As mentioned previously, in a k-NN classifier the distance metric between instances is the distance between their feature arrays. In our dataset, the instances are rows of the tra and tes tables, and their attributes are the columns. To better explain this, we demonstrate with two instances from the training and test set: q) show tra1:1#tra class| a b c d e f g h i j k l m n o p -----| ------------------------------------------------------------ 8 | 47 100 27 81 57 37 26 0 0 23 56 53 100 90 40 98 q) show tes1:1#tes class| a b c d e f g h i j k l m n o p -----| ------------------------------------------------------------ 8 | 88 92 2 99 16 66 94 37 70 0 0 24 42 65 100 100 Figure 2-1: tra1 and tes1 point plot Figure 2-2: tra1 and tes1 visual approximation Both instances belong to the class 8 , as per their class labels. However, this is not clear by just looking at the plotted points and the class column is not used by the classifier, which will instead calculate the distance between matching columns of the two instances. That is, calculating how far the columns a , b , …, p of tes1 are from their counterparts in tra1 . While only an arbitrary measure, the classifier will use these distances to identify the nearest neighbor/s in the training set and make a prediction. In q, this will be achieved using a binary function whose arguments can be two tables. Using the right iterators, this function is applied column by column, returning one table that stores the result of each iteration in the columns a , b , …, p . A major benefit of this approach is that it relieves the developer from the burden of looping and indexing lists when doing point-point computation. The metric that will be used to determine the distance between the feature points of two instances is the Manhattan distance: It is calculated as the sum of the absolute difference of the Cartesian coordinates of the two points. 
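In symbols (the formula, presumably rendered as an image in the original, is reconstructed here from that description): for two instances with features $x_1,\dots,x_{16}$ and $y_1,\dots,y_{16}$, the Manhattan distance is $d(x,y)=\sum_{i=1}^{16}\lvert x_i-y_i\rvert$.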
Using a Cartesian distance metric is intuitive and convenient as the columns in our set represent X or Y coordinates: q)dist:{abs x-y} q)tra1 dist' tes1 class| a b c d e f g h i j k l m n o p -----| --------------------------------------------- 8 | 41 8 25 18 41 29 68 37 70 23 56 29 58 25 60 2 Now that the resulting table represents the distance metric between each attribute of the two instances, we can sum all the values and obtain the distance between the two instances in the feature space: q)sums each tra1 dist' tes1 class| a b c d e f g h i j k l m n o p -----| ----------------------------------------------------------- 8 | 41 49 74 92 133 162 230 267 337 360 416 445 503 528 588 590 The q keyword sums adds up the columns from the left to the right starting at a . Thus, the last column, p , holds the total: 590, which represents the Manhattan distance between tra1 and tes1 . Expanding to run this against the whole training set, tra , will return the Manhattan distances between tes1 and all the instances in tra . Calculating the distance metric is possibly the heaviest computing step of a k-NN classifier. We can use \ts to compare the performance between the use of Each Left and Each Right, displaying the total execution time after many iterations (5000 in the example below was enough to show a difference). As both data sets are keyed on class , and tra contains more than one instance, a change of paradigm and iterator is necessary: - Un-key and remove the class column fromtes1 - Update the iterator so that dist gets applied to all rows of thetra table: q)\ts:5000 tra dist\: 1_flip 0!tes1 203770 5626544 q)\ts:5000 (1_flip 0!tes1) dist/: tra 167611 5626544 Keeping tes1 as the left argument while using the Each Right iterator makes the execution a little more time-efficient due to how tes1 and tra are serialized and how tra is indexed. Additionally, Each Right makes the order of the operations clearer: we are calculating the distance between the left argument (validation instance) and each row on the table in the right argument (training set). In q, lambda calculus is supported and functions are “first-class citizens”: q){sums each (1_x) dist/: tra} flip 0!tes1 class| a b c d e f g h i j k l m n o p -----| -------------------------------------------------------------- 8 | 41 49 74 92 133 162 230 267 337 360 416 445 503 528 588 590 2 | 88 91 116 117 143 152 217 225 280 295 332 356 383 446 446 540 1 | 88 123 152 183 239 263 269 332 338 413 463 490 504 544 628 728 4 | 88 96 101 108 119 121 196 204 220 254 354 375 407 449 482 582 1 | 88 113 160 176 260 294 307 350 360 420 460 476 485 530 583 683 6 | 12 20 106 106 139 147 224 234 304 320 357 381 412 461 541 621 4 | 88 96 97 124 134 165 174 176 206 277 350 423 446 462 496 596 .. Note there is no argument validation done within the lambda (this has minimal memory footprint and compute cost). The x argument must be a dictionary with the same keys as the table tra , class column excluded. Assigning the operations to calculate the distance metric to a function (dist ) is a convenient approach for non-complex metrics and testing, and it can be changed on the command line. 
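For example (an illustrative sketch of ours, not taken from the paper), the component metric could be swapped at the q prompt without touching the rest of the code – say, a squared-difference component in place of the absolute difference:
q)dist:{(x-y)*x-y}   / hypothetical alternative; the paper continues with dist:{abs x-y}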
However, it is executed 16 times each row in d , which makes it worth exploring if dropping dist results in better performance: q) \ts:250 {[d;t] sums each t dist/: d}[tra;] raze delete class from tes1 3427 2315424 q)// 13.34ms per run q)\ts:250 {[d;t] sums each abs t -/: d}[tra;] raze delete class from tes1 3002 2314720 q)// 12.00ms per run As mentioned earlier, q is an array-processing language. If we take advantage of this by extracting the columns from the table and doing the arithmetic on them, we may gain performance by removing a layer of indirection. Let’s test converting tra into vectors (flip value flip value ) before applying the distance metric: q)\ts:250 {[d;t] flip `class`dst!(exec class from d; sum each abs t -/: flip value flip value d)} [tra;] raze delete class from tes1 2297 2783072 q)// 9.18ms per run We have identified a performant approach and can store the lambda as apply_dist_manh : apply_dist_manh:{[d;t] dist:sum each abs t -/: flip value flip value d; flip `class`dst!(exec class from d; dist) } The test instance can be passed as a dictionary removing the class column; or as a table using the iterator Each after removing the column class . The returned result will be a list of tables if the parameter is passed as a table of length>1, otherwise, a single table. q) apply_dist_manh[tra;]each delete class from tes1 +`class`dst!("821416405098597339225158640481857093549275633632543101453206.. q) apply_dist_manh[tra]each 2#delete class from tes1 +`class`dst!("821416405098597339225158640481857093549275633632543101453206.. +`class`dst!("821416405098597339225158640481857093549275633632543101453206.. q) apply_dist_manh[tra;]raze delete class from tes1 class dst ------------ 8 590 2 540 .. The performance gained by dropping the function dist , and converting d and t into vectors before calculating the distance, will become crucial when using more complex metrics as examined below in Benchmarks. K-Nearest Neighbors and prediction¶ The classifier is almost complete. As the column p now holds the distance metric between the test instance and each training instance, to validate its behavior we shall find if there are any instances of tra where dst <590. Nearest Neighbor k=1¶ If k=1 the prediction is the class of the nearest neighbor – the instance with the smallest distance in the feature space. As all distance metrics are held in the column dst , we want to “select the class label of the row from the result of applying the distance metric where the distance is the minimum distance in the whole table”. In qSQL: q)select Prediction:class, dst from apply_dist_manh[tra;]raze[delete class from tes1] where dst=min dst Prediction dst ------------------- 8 66 q)select from tra where i in exec i from apply_dist_manh[tra;]raze[delete class from tes1] where dst=min dst class| a b c d e f g h i j k l m n o p -----| ---------------------------------------------- 8 | 76 85 0 100 16 63 81 31 69 0 6 25 34 64 100 95 Querying the virtual column i , we can find the row index of the nearest neighbor used for the prediction. Plotting this neighbor we see some points now overlap, while the distance between non-overlapping points is definitely minimal compared to the previous example (Figure 2-2): Figure 3: tes1 and its nearest neighbor Another approach to finding the k-nearest neighbor is to sort the result of apply_dist_manh in ascending order by dst . This way, the first k rows are the k-nearest neighbors. Taking only the first row is equivalent to selecting the nearest neighbor. 
q)select Prediction:class from 1#`dst xasc apply_dist_manh[tra;]raze delete class from tes1 Prediction ---------- 8 Or instead limit to the first row by using the row index: q)select Prediction:class from `dst xasc apply_dist_manh[tra;]raze[delete class from tes1] where i<1 Prediction ---------- 8 Sorting dst has the side effect of applying the sorted attribute to the column. This information allows kdb+ to use a binary search algorithm on that column. k>1¶ For k>1, the prediction is instead the predominant class among the k-nearest neighbors. Tweaking the above example so that 1 is abstracted into the parameter k : q)k:3 q)select Neighbor:class,dst from k#`dst xasc apply_dist_manh[tra;]raze delete class from tes1 Neighbor dst ------------- 8 66 8 70 8 75 q)k:5 q)select Neighbor:class,dst from k#`dst xasc apply_dist_manh[tra;]raze delete class from tes1 Neighbor dst ------------- 8 66 8 70 8 75 8 92 8 94 At this point, knowing which results are the k-nearest neighbors, the classifier can be updated to have class “prediction” functionality. The steps needed to build the k-NN circle table can be summarized into a function get_nn : q)get_nn:{[k;d] select class,dst from k#`dst xasc d} Prediction test¶ The next step is to make a prediction based on the classes identified in the k-NN circle. This will be done by counting the number of occurrences of each class among the k-nearest neighbors and picking the one with the highest count. As the k-NN circle returned by get_nn is a table, the qSQL fby keyword can be used to apply the aggregating keyword count to the virtual column i , counting how many rows for each class are in the k-NN circle table, and compare it with the highest count: predict:{ 1#select Prediction:class from x where ((count;i)fby class)=max(count;i)fby class } For k>1, it is possible that fby can return more than one instance, should there not be a prevailing class. Given that fby returns entries in the same order they are aggregated, class labels are returned in the same order they are found. Thus, designing the classifier to take only the first row of the results has the side effect of defaulting the behavior of predict to k=1. Consider this example: foo1:{ select Prediction:class from x where ((count;i)fby class)=max (count;i)fby class } foo2:{ 1#select Prediction:class from x where ((count;i)fby class)=max (count;i)fby class } q)dt:([]class:"28833"; dst:20 21 31 50 60) // dummy table, random values q)(foo1;foo2)@\: dt +(,`Prediction)!,"8833" // foo1, no clear class +(,`Prediction)!,,"8" // foo2, default to k=1; take the nearest of the tied classes Let’s now test predict with k=5 and tes1 : q) predict get_nn[5;] apply_dist_manh[tra;]raze delete class from tes1 Prediction ---------- 8 Spot on! Accuracy checks¶ Now we can feed the classifier with the whole test set, enriching the result of predict with a column Test , which is the class label of the test instance, and a column Hit , which is only true when prediction and class label match: apply_dist: apply_dist_manh test_harness:{[d;k;t] select Test:t`class, Hit:Prediction=' t`class from predict get_nn[k] apply_dist[d] raze delete class from t } Running with k=5¶ q) R5:test_harness[tra;5;] peach 0!tes q) R5 +`Test`Hit!(,"8";,1b) +`Test`Hit!(,"8";,1b) +`Test`Hit!(,"8";,1b) +`Test`Hit!(,"9";,1b) .. As the result of test_harness is a list of predictions, it needs to be razed into a table to extract the overall accuracy stats.
The accuracy measure is the number of hits divided by the number of predictions made for the corresponding validation class: q)select Accuracy:avg Hit by Test from raze R5 Test| Accuracy ----| --------- 0 | 0.9614325 1 | 0.9505495 2 | 0.9945055 3 | 0.9940476 4 | 0.978022 5 | 0.9761194 6 | 1 7 | 0.9532967 8 | 0.9970238 9 | 0.9672619 Running with k=3¶ q)R3:test_harness[tra;3;] peach 0!tes q)select Accuracy:avg Hit by Test from raze R3 Test| Accuracy ----| --------- 0 | 0.9641873 1 | 0.9587912 2 | 0.9917582 3 | 0.9940476 4 | 0.9807692 5 | 0.9701493 6 | 1 7 | 0.967033 8 | 0.9970238 9 | 0.9583333 Running with k=1¶ q)R1:test_harness[tra;1;] peach 0!tes q)select Accuracy:avg Hit by Test from raze R1 Test| Accuracy ----| --------- 0 | 0.9614325 1 | 0.9505495 2 | 0.9917582 3 | 0.9940476 4 | 0.978022 5 | 0.961194 6 | 1 7 | 0.956044 8 | 0.9940476 9 | 0.9583333 q)select Accuracy: avg Hit from raze R1 Accuracy --------- 0.974271 Further approaches¶ Use secondary threads¶ Testing and validation phases will benefit significantly from the use of secondary threads in kdb+, applied through the use of the peach keyword: -s 0 (0 secondary threads) – Run time: ~13s q)\ts:1000 test_harness[tra;1;] each 0!tes 13395217 2617696 q)// ~13.4s -s 4 (4 secondary threads) – Run time: ~4s q)\ts:1000 test_harness[tra;1;] peach 0!tes 3951224 33712 q)// ~3.9s The following sections will make use of four secondary threads when benchmarking. Euclidean or Manhattan distance?¶ The Euclidean distance metric can be intuitively implemented as: apply_dist_eucl:{[d;t] dist:{sqrt sum x xexp 2}each t -/: flip value flip value d; flip `class`dst!(exec class from d; dist) } However, the implementation can be further optimized by squaring each difference instead of using xexp : q)\ts:1000 r1:{[d;t] {x xexp 2}each t -/: flip value flip value d}[tra;] raze delete class from tes1 42296 4782304 q)// Slower and requires more memory q)\ts:1000 r2:{[d;t] {x*x} t -/: flip value flip value d}[tra;] raze delete class from tes1 4511 3241920 The function xexp has two caveats: - for an exponent of 2 it is faster not to use xexp and, instead, multiply the base by itself - it returns a float; as our dataset uses integers, every cell in the result set will increase in size by 4 bytes q)min over r1=r2 1b // Same values q)r1~r2 0b // Not the same objects q)exec distinct t from meta r1 ,"f" q)exec distinct t from meta r2 ,"i" // {x*x} preserves the datatype q)-22!'(r1;r2) 966903 487287 // Different size, xexp returns a 64bit datatype Choosing the optimal implementation, we can benchmark against the full test set: q)apply_dist: apply_dist_eucl q)\ts R1:test_harness[tra;1;] peach 0!tes 4720 33856 q)// ~1 second slower than the Manhattan distance benchmark q)select Accuracy: avg Hit from raze R1 Accuracy --------- 0.9774157 q)select Accuracy:avg Hit by Test from raze R1 Test| Accuracy ----| --------- 0 | 0.9752066 1 | 0.9587912 2 | 0.9945055 3 | 0.9910714 4 | 0.9752747 5 | 0.9701493 6 | 1 7 | 0.956044 8 | 0.9970238 9 | 0.9583333 Benchmarks¶ For the purpose of this benchmark, get_nn could be adjusted. Its implementation was to reduce the output of apply_dist to a table of two columns, sorted on the distance metric. However, if we wanted to benchmark multiple values of k , get_nn would sort the distance metric tables on each k iteration, adding unnecessary compute time. With that in mind, it can be replaced by: {[k;d] k#\:`dst xasc d} Where k can now be a list, but d is only sorted once.
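To see why passing a list of counts works (a small illustration of ours), Take Each Left returns one prefix per left-hand count, so the table need only be sorted once:
q)1 3 5#\:"abcdef"
,"a"
"abc"
"abcde"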
Changing test_harness to leverage this optimization: test_harness:{[d;k;t] R:apply_dist[d] raze delete class from t; select Test:t`class, Hit:Prediction=' t`class,k from raze predict each k#\:`dst xasc R } Manhattan distance: q)apply_dist:apply_dist_manh // Manhattan Distance q)// 2 cores, 4 secondary threads (-s 4) q)\ts show select Accuracy:avg Hit by k from raze test_harness[tra;1+til 10]peach 0!tes k | Accuracy --| --------- 1 | 0.974271 2 | 0.974271 3 | 0.9779874 4 | 0.9779874 5 | 0.9768439 6 | 0.9777015 7 | 0.976558 8 | 0.9762722 9 | 0.9734134 10| 0.9745569 4892 4198224 q)4892%count tes 1.398513 q)// ~1.4 ms average to classify one instance q)// 4 cores, 8 secondary threads (-s 8) q)\ts show select Accuracy:avg Hit by k from raze test_harness[tra;1+til 10]peach 0!tes k | Accuracy --| --------- 1 | 0.974271 2 | 0.974271 3 | 0.9779874 4 | 0.9779874 5 | 0.9768439 6 | 0.9777015 7 | 0.976558 8 | 0.9762722 9 | 0.9734134 10| 0.9745569 2975 4198224 q)2975%count tes 0.850486 q)// .8 ms average to classify one instance Euclidean distance: q)apply_dist:apply_dist_eucl // Euclidean Distance q)// 2 cores, 4 secondary threads (-s 4) q)\ts show select Accuracy: avg Hit by k from raze test_harness[tra;1+til 10]peach 0!tes k | Accuracy --| --------- 1 | 0.9774157 2 | 0.9774157 3 | 0.9782733 4 | 0.9788451 5 | 0.9771298 6 | 0.9777015 7 | 0.9759863 8 | 0.9768439 9 | 0.9757004 10| 0.9759863 6717 4196416 q)6717%count tes 1.92 // ~1.9ms average to classify one instance // 4 cores, 8 secondary threads (-s 8) q)\ts show select Accuracy:avg Hit by k from raze test_harness[tra;1+til 10]peach 0!tes k | Accuracy --| --------- 1 | 0.9774157 2 | 0.9774157 3 | 0.9782733 4 | 0.9788451 5 | 0.9771298 6 | 0.9777015 7 | 0.9759863 8 | 0.9768439 9 | 0.9757004 10| 0.9759863 3959 4200144 q)3959%count tes 1.13 q)// ~1.1ms average to classify one instance Figure 4: Accuracy comparison between Euclidean and Manhattan Conclusions¶ In this paper, we saw how trivial it is to implement a k-NN classification algorithm with kdb+. Using tables and qSQL it can be implemented with three select statements at most, as shown in the util library at kxcontrib/wp-knn. We also briefly saw how to use iterators to optimize the classification time, and how data structures can influence performance comparing tables and vectors. Benchmarking this lazy implementation, with a random dataset available on the UCI website and using the Euclidean distance metric showed an average prediction accuracy of ~97.7%. The classification time can vary greatly, based on the number of cores and secondary threads used. With 2 cores and 4 secondary threads (-s 4 ) the classification time of a single instance after optimization of the code was ~1.9ms per instance and the total validation time decreased significantly when using 4 cores and 8 secondary threads (-s 8 ), showing how kdb+ can be used to great effect for machine-learning purposes, even with heavy-compute implementations such as the k-NN. Author¶ Emanuele Melis works for KX as kdb+ consultant. Currently based in the UK, he has been involved in designing, developing and maintaining solutions for equities data at a world-leading financial institution. Keen on machine learning, Emanuele has delivered talks and presentations on pattern-recognition implementations using kdb+.
xbar ¶ Round down x xbar y xbar[x;y] Where x is a non-negative numeric atomy is numeric or temporal returns y rounded down to the nearest multiple of x . xbar is a multithreaded primitive. q)3 xbar til 16 0 0 0 3 3 3 6 6 6 9 9 9 12 12 12 15 q)2.5 xbar til 16 0 0 0 2.5 2.5 2.5 5 5 5 7.5 7.5 7.5 10 10 10 12.5 q)5 xbar 11:00 + 0 2 3 5 7 11 13 11:00 11:00 11:00 11:05 11:05 11:10 11:10 Interval bars are useful in aggregation queries. To get last price and total size in 10-minute bars: q)select last price, sum size by 10 xbar time.minute from trade where sym=`IBM minute| price size ------| ----------- 09:30 | 55.32 90094 09:40 | 54.99 48726 09:50 | 54.93 36511 10:00 | 55.23 35768 ... Group symbols by closing price: q)select sym by 5 xbar close from daily where date=last date close| sym -----| ---------------------- 25 | `sym$`AIG`DOW`GOOG`PEP,... 30 | `sym$,`AAPL,... 45 | `sym$`HPQ`ORCL,... ... You can use bin to group at irregular intervals. q)x:`s#10:00+00:00 00:08 00:13 00:27 00:30 00:36 00:39 00:50 q)select count i by x x bin time.minute from ([]time:`s#10:00:00+asc 100?3600) minute| x ------| -- 10:00 | 8 10:08 | 13 10:13 | 24 10:27 | 4 10:30 | 9 10:36 | 3 10:39 | 19 10:50 | 20 A month is (internally) the count of months since 2000, so you can use 3 xbar to calculate quarters. q)`date$3 xbar `month$2019.11.19 / beginning of a quarter 2019.10.01 q)`date$3+3 xbar `month$2019.11.19 / beginning of next quarter 2020.01.01 q)-1+`date$3+3 xbar `month$2019.11.19 / end of that quarter 2019.12.31 Duplicate keys or column names Duplicate keys in a dictionary or duplicate column names in a table will cause sorts and grades to return unpredictable results. Implicit iteration¶ xbar is an atomic function. It applies to dictionaries and keyed tables q)(3;4 5)xbar(10;20 -30) 9 20 -30 q)k:`k xkey update k:`abc`def`ghi from t:flip d:`a`b!(10 -21 3;4 5 -6) q)3 xbar d a| 9 -21 3 b| 3 3 -6 q)3 xbar k k | a b ---| ------ abc| 9 3 def| -21 3 ghi| 3 -6 Domain and range¶ The following shows the resulting output type given the input type of x and y . The character representation of the datatypes referenced can be found here . xbar| b g x h i j e f c s p m d z n u v t ----| ----------------------------------- b | i . i i i j f f i . p m d z n u v t g | . . . . . . . . . . . . . . . . . . x | i . i i i j f f i . p m d z n u v t h | i . i i i j f f i . p m d z n u v t i | i . i i i j f f i . p m d z n u v t j | j . j j j j f f j . p m d z n u v t e | e . e e e e f f e . p m d z n u v t f | f . f f f f f f f . f f z z f f f f c | . . . . . . f f . . p m d z n u v t s | . . . . . . . . . . . . . . . . . . p | p . p p p p f f p . . . . . . . . . m | m . m m m m f f m . . . . . . . . . d | d . d d d d z z d . . . . . . . . . z | z . z z z z z z z . . . . . . . . . n | j . j j j j f f j . p m d z n u v t u | u . u u u u f f u . . . . . . . . . v | v . v v v v f f v . . . . . . . . . t | t . t t t t f f t . . . . . . . . . For example, rounding down timespans to the nearest multiple of a long will produce a timespan. q)2 xbar 00:00:00.000000001 00:00:00.000000002 00:00:00.000000013 0D00:00:00.000000000 0D00:00:00.000000002 0D00:00:00.000000012 q)type 2 xbar 00:00:00.000000001 00:00:00.000000002 00:00:00.000000013 16h The possible range of output types are ijfpmdznuvte . xgroup ¶ Groups a table by values in selected columns x xgroup y xgroup[x;y] Where y is a table passed by valuex is a symbol atom or vector of column names iny returns y grouped by x . 
It is equivalent to doing a select … by on y , except that all the remaining columns are grouped without having to be listed explicitly. q)`a`b xgroup ([]a:0 0 1 1 2;b:`a`a`c`d`e;c:til 5) a b| c ---| --- 0 a| 0 1 1 c| ,2 1 d| ,3 2 e| ,4 q)\l sp.q q)meta sp / s and p are both columns of sp c | t f a ---| ----- s | s s p | s p qty| i q)`p xgroup sp / group by column p p | s qty --| ------------------------------- p1| `s$`s1`s2 300 300 p2| `s$`s1`s2`s3`s4 200 400 200 200 p3| `s$,`s1 ,400 p4| `s$`s1`s4 200 300 p5| `s$`s4`s1 100 400 p6| `s$,`s1 ,100 q)select s,qty by p from sp / equivalent select statement p | s qty --| ------------------------------- p1| `s$`s1`s2 300 300 p2| `s$`s1`s2`s3`s4 200 400 200 200 p3| `s$,`s1 ,400 p4| `s$`s1`s4 200 300 p5| `s$`s4`s1 100 400 p6| `s$,`s1 ,100 q)ungroup `p xgroup sp / ungroup flattens the groups p s qty --------- p1 s1 300 p1 s2 300 p2 s1 200 p2 s2 400 p2 s3 200 p2 s4 200 p3 s1 400 .. Duplicate keys or column names Duplicate keys in a dictionary or duplicate column names in a table will cause sorts and grades to return unpredictable results. xrank ¶ Group by value x xrank y xrank[x;y] Where x is a long atomy is of sortable type returns for each item in y the bucket into which it falls, represented as a long from 0 to x-1 . If the total number of items is evenly divisible by x , then each bucket will have the same number of items; otherwise some bucket sizes will differ by 1 dispersed throughout the result. xrank is right-uniform. q)4 xrank til 8 / equal size buckets 0 0 1 1 2 2 3 3 q)4 xrank til 9 / 1 bucket size differs 0 0 0 1 1 2 2 3 3 q)7 xrank til 9 / multiple bucket sizes differ 0 0 1 2 3 3 4 5 6 q) q)3 xrank 1 37 5 4 0 3 / outlier 37 does not get its own bucket 0 2 2 1 0 1 q)3 xrank 1 7 5 4 0 3 / same as above 0 2 2 1 0 1 Example using stock data: q)show t:flip `val`name!((20?20);(20?(`MSFT`ORCL`CSCO))) val name -------- 17 MSFT 1 CSCO 14 CSCO 13 ORCL 13 ORCL 9 ORCL ... q)select Min:min val,Max:max val,Count:count i by bucket:4 xrank val from t bucket| Min Max Count ------| ------------- 0 | 0 7 5 1 | 9 12 5 2 | 13 15 5 3 | 15 17 5 Duplicate keys in a dictionary or duplicate column names in a table will cause sorts and grades to return unpredictable results.
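To see how the bucket sizes that differ by 1 are dispersed when the count is not evenly divisible (a quick check, not part of the reference page):
q)count each group 4 xrank til 10
0| 3
1| 2
2| 3
3| 2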
// @kind function // @category automl // @desc The application of AutoML on training and testing data, // applying cross validation and hyperparameter searching methods across a // range of machine learning models, with the option to save outputs. // @param graph {dictionary} Fully connected graph nodes and edges following // the structure outlined in `graph/Automl_Graph.png` // @param features {dictionary|table} Unkeyed tabular feature data or a // dictionary outlining how to retrieve the data in accordance with // `.ml.i.loadDataset` // @param target {dictionary|any[]} Target vector of any type or a dictionary // outlining how to retrieve the target vector in accordance with // `.ml.i.loadDataset` // @param ftype {symbol} Feature extraction type (`nlp/`normal/`fresh) // @param ptype {symbol} Problem type being solved (`reg/`class) // @param params {dictionary|char[]|::} One of the following: // 1. Path relative to `.automl.path` pointing to a user defined JSON file // for modifying default parameters // 2. Dictionary containing the default behaviours to be overwritten // 3. Null (::) indicating to run AutoML using default parameters // @return {dictionary} Configuration produced within the current run of AutoML // along with a prediction function which can be used to make predictions // using the best model produced fit:{[graph;features;target;ftype;ptype;params] runParams:`featureExtractionType`problemType`startDate`startTime! (ftype;ptype;.z.D;.z.T); // Retrieve default parameters parsed at startup and append necessary // information for further parameter retrieval modelName:enlist[`savedModelName]!enlist`$problemDict`modelName; configPath:$[type[params]in 99 10 -11h; enlist[`configPath]!enlist params; params~(::); ()!(); '"Unsupported input type for 'params'" ]; automlConfig:paramDict[`general],paramDict[ftype],modelName; automlConfig:automlConfig,configPath,runParams; // Default = accept data from process. 
Overwritten if dictionary input features:$[99h=type features;features;`typ`data!(`process;features)]; target:$[99h=type target;target;`typ`data!(`process;target)]; graph:.ml.addCfg[graph;`automlConfig;automlConfig]; graph:.ml.addCfg[graph;`featureDataConfig;features]; graph:.ml.addCfg[graph;`targetDataConfig ;target]; graph:.ml.connectEdge[graph;`automlConfig;`output;`configuration;`input]; graph:.ml.connectEdge[graph;`featureDataConfig;`output;`featureData;`input]; graph:.ml.connectEdge[graph;`targetDataConfig;`output;`targetData;`input]; modelOutput:.ml.execPipeline .ml.createPipeline graph; modelInfo:exec from modelOutput where nodeId=`saveMeta; modelConfig:modelInfo[`outputs;`output]; predictFunc:utils.generatePredict modelConfig; `modelInfo`predict!(modelConfig;predictFunc) }[graph] // @kind function // @category automl // @desc Retrieve a previously fit AutoML model and associated workflow // to be used for predictions // @param modelDetails {dictionary} Information regarding the location of // the model and metadata within the outputs directory // @return {dictionary} The predict function (generated using // utils.generatePredict) and all relevant metadata for the model getModel:{[modelDetails] pathToOutputs:utils.modelPath modelDetails; pathToMeta:hsym`$pathToOutputs,"config/metadata"; config:utils.extractModelMeta[modelDetails;pathToMeta]; loadModel:utils.loadModel config; modelConfig:config,enlist[`bestModel]!enlist loadModel; predictFunc:utils.generatePredict modelConfig; `modelInfo`predict!(modelConfig;predictFunc) } // @kind function // @desc Delete an individual model or set of models from the output directory // @param config {dictionary} configuration outlining what models are to be // deleted, the provided input must contain `savedModelName mapping to a // string (potentially wildcarded) or a combination of `startDate`startTime // where startDate and startTime can be a date and time respectively or a // wildcarded string. // @return {::} does not return any output unless as a result of an error deleteModels:{[config] pathStem:raze path,"/outputs/"; configKey:key config; if[all `startDate`startTime in configKey; utils.deleteDateTimeModel[config;pathStem] ]; if[`savedModelName in configKey; utils.deleteNamedModel[config;pathStem] ]; } // @kind function // @category automl // @desc Generate a new JSON file for use in the application of AutoML // via command line or as an alternative to the param file in .automl.fit. // @param fileName {string|symbol} Name for generated JSON file to be // stored in 'code/customization/configuration/customConfig' // @return {::} Returns generic null on successful invocation and saves a copy // of the file 'code/customization/configuration/default.json' to the // appropriately named file newConfig:{[fileName] fileNameType:type fileName; fileName:$[10h=fileNameType; fileName; -11h=fileNameType; $[":"~first strFileName;1_;]strFileName:string fileName; '`$"fileName must be string, symbol or hsym" ]; customPath:"/code/customization/configuration/customConfig/"; fileName:raze[path],customPath,fileName; filePath:hsym`$utils.ssrWindows fileName; if[not()~key filePath; ignore:utils.ignoreWarnings; index:$[ignore=2;0;1]; $[ignore=2;{'x};ignore=1;-1;]utils.printWarnings[`configExists]index ]; defaultConfig:read0 `$path,"/code/customization/configuration/default.json"; h:hopen filePath; {x y,"\n"}[h]each defaultConfig; hclose h; } // @kind function // @category automl // @desc Run AutoML based on user provided custom JSON files. 
This // function is triggered when executing the automl.q file. Invoking the // functionality is based on the presence of an appropriately named // configuration file and presence of the run command line argument on // session startup i.e. // $ q automl.q -config myconfig.json -run // This function takes no parameters as input and does not returns any // artifacts to be used in process. Instead it executes the entirety of the // AutoML pipeline saving the report/model images/metadata to disc and exits // the process // @param testRun {boolean} Is the run being completed a test or not, running // in test mode results in an 'exit 1' from the process to indicate that the // test failed, otherwise for debugging purposes the process is left 'open' // to allow a user to drill down into any potential issues. runCommandLine:{[testRun] // update graphDebug behaviour such that command line run fails loudly .ml.graphDebug:1b; ptype:`$problemDict`problemType; ftype:`$problemDict`featureExtractionType; dataRetrieval:`$problemDict`dataRetrievalMethod; errorMessage:"`problemType,`featureExtractionType and `dataRetrievalMethods", " must all be fully defined"; if[any(raze ptype,ftype,raze dataRetrieval)=\:`;'errorMessage]; data:utils.getCommandLineData dataRetrieval; errorFunction:{[err] -1"The following error occurred '",err,"'";exit 1}; automlRun:$[testRun; .[fit[;;ftype;ptype;::];data`features`target;errorFunction]; fit[;;ftype;ptype;::] . data`features`target]; automlRun } // @kind function // @category Utility // @desc Update print warning severity level // @param warningLevel {long} 0, 1 or 2 long denoting how severely warnings are // to be handled. // - 0 = Ignore warnings completely and continue evaluation // - 1 = Highlight to a user that a warning was being flagged but continue // - 2 = Exit evaluation of AutoML highlighting to the user why this happened // @return {::} Update the global utils.ignoreWarnings with new level updateIgnoreWarnings:{[warningLevel] if[not warningLevel in til 3; '"Warning severity level must a long 0, 1 or 2." ]; utils.ignoreWarnings::warningLevel } // @kind function // @category Utility // @desc Update logging and printing states // @return {::} Change the boolean representation of utils.logging // and .automl.utils.printing respectively updateLogging :{utils.logging ::not utils.logging} updatePrinting:{utils.printing::not utils.printing} ================================================================================ FILE: ml_automl_code_commandLine_cli.q SIZE: 1,389 characters ================================================================================ // code/commandLine/cli.q - Command line input // Copyright (c) 2021 Kx Systems Inc // // Retrieve data from config file to build paramDict and problemDict. \d .automl // @kind description // @name pathGeneration // @desc If a user had defined that a configuration file should be used on // command line using the -config command line argument, this section will // retrieve the custom config from either the folder: // .automl.path,"/code/customization/configuration/customConfig/" // or the current directory. If no config command line argument is provided // the default JSON file will be used. 
cli.path:$[`config in key commandLineInput; cli.i.checkCustom commandLineInput`config; path,"/code/customization/configuration/default.json" ] // @kind description // @name systemConfig // @desc Parse the JSON file into a q dictionary and retrieve all configuration // information required for the application of AutoML in both command line // and non command line mode: // 'paramDict' -> all the AutoML parameters for customizing a run i.e. // 'seed'/'testingSize' etc. // 'problemDict' -> instructions regarding how the framework is to retrieve // data and name models cli.input:.j.k raze read0`$cli.path paramTypes:`general`fresh`normal`nlp paramDict:paramTypes!cli.i.parseParameters[cli.input]each paramTypes problemDict:cli.input`problemDetails ================================================================================ FILE: ml_automl_code_commandLine_utils.q SIZE: 2,427 characters ================================================================================ // code/commandLine/utils.q - Command line utility functions // Copyright (c) 2021 Kx Systems Inc // // Utility functions for the handling of command line arguments \d .automl // @kind function // @category cliUtility // @desc Retrieve the path to a custom JSON file to be used on command // line or as the final parameter to the .automl.run function. This file must // exist in either the users defined path relative to 'pwd' or in // "/code/customization/configuration/customConfig/" // @param fileName {string} JSON file to be retrieved or path to this file // @return {string} Full path to the JSON file if it exists or an error // indicating that the file could not be found cli.i.checkCustom:{[fileName] fileName:raze fileName; filePath:path,"/code/customization/configuration/customConfig/",fileName; $[not()~key hsym`$filePath; :filePath; not()~key hsym`$filePath:"./",fileName; :filePath; 'fileName," does not exist in current directory or '",path, "/code/configuration/customConfig/'" ] } // @kind function // @category cliUtility // @desc Parse the contents of the 'problemParameters' sections of the // JSON file used to define command line input and convert to an appropriate // kdb+ type // @param cliInput {string} The parsed content of the JSON file using .j.k // which have yet to be transformed into their final kdb+ type // @param sectionType {symbol} Name of the section within the // 'problemParameters' section to be parsed // @returns {dictionary} Mapping of parameters required by AutoML to an // assigned value cast appropriately cli.i.parseParameters:{[cliInput;sectionType] section:cliInput[`problemParameters;sectionType]; cli.i.convertParameters each section } // @kind function // @category cliUtility // @desc Main parsing function for the JSON parsing functionality this // applies the appropriate conversion logic to the value provided based on a // user assigned type // @param param {dictionary} Mapping of parameters required by specific // sections of AutoML to their value and associated type // @returns {dictionary} Mapping of parameters to their appropriate kdb+ type // converted values cli.i.convertParameters:{[param] $["symbol"~param`type; `$param`value; "lambda"~param`type; get param`value; "string"~param`type; param`value; (`$param`type)$param`value ] } ================================================================================ FILE: ml_automl_code_customization_check.q SIZE: 2,562 characters ================================================================================ // code/customization/check.q - Check and load 
optional functionality // Copyright (c) 2021 Kx Systems Inc // // This file includes the logic for requirement checks and loading of optional // functionality within the framework, namely dependencies for deep learning // or NLP models etc. \d .automl // @kind function // @category check // @desc Check if keras model can be loaded into the process // @return {boolean} 1b if keras can be loaded 0b otherwise check.keras:{ if[0~checkimport 0; backend:csym .p.import[`keras.backend][`:backend][]`; if[(backend~`tensorflow)&not checkimport 4;:1b]; if[(backend~`theano )&not checkimport 5;:1b]; ]; :0b } // Import checks and statements
-1"[down]loading citibike data"; b:"http://s3.amazonaws.com/tripdata/" m1:.ut.sseq[1] . 2014.09 2016.12m m1:m1 where m1 within (sd;ed) f1:,[;"-citibike-tripdata"] each string[m1] except\: "." .ut.download[b;;".zip";.ut.unzip] f1; m2:.ut.sseq[1] . 2017.01 2017.12m m2:m2 where m2 within (sd;ed) f2:,[;"-citibike-tripdata"] each string[m2] except\: "." .ut.download[b;;".csv.zip";.ut.unzip] f2; / data since 2018 has an extra column / m3:.ut.sseq[1] . 2018.01m,-1+"m"$.z.D / f3:,[;"_citibikenyc_tripdata"] each string[m3] except\: "." / -1"[down]loading citibike data"; / .ut.download[b;;".csv.zip";.ut.unzip] f3; process:{[month;f] -1"parsing ", string f; t:.Q.id ("IPPH*EEH*EEISHC";1#",") 0: f; t:lower[cols t] xcol t; -1"splaying tripdata"; .Q.dpft[`:citibike;month;`bikeid] `tripdata set t; } R:6371 / radius of earth in km PI:acos -1 radian:{[deg]deg*PI%180} haversine:{[lat0;lon0;lat1;lon1] a:a*a:sin .5*radian lat1-lat0; b:b*b:sin .5*radian lon1-lon0; a+:b*cos[radian lat0]*cos[radian lat1]; d:6371*2f*.qml.atan2[sqrt a;sqrt 1f-a]; d} -1"checking if downloads need splaying"; months:m1,m2 files:f1,f2 w:til count files if[not ()~key `:citibike;system"l citibike";w:where not months in month;system"cd ../"] months[w] process' `$(f1,f2)[w],\:".csv"; -1"loading citibike database"; \l citibike ================================================================================ FILE: funq_cloud9.q SIZE: 279 characters ================================================================================ cloud9.f:("sample-small.txt";"sample-medium.txt";"sample-large.txt") 2 cloud9.b:"http://lintool.github.io/Cloud9/docs/exercises/" -1"[down]loading cloud9 network graph"; .ut.download[cloud9.b;;"";""] cloud9.f; cloud9.l:flip raze {x[0],/:1_ x} each "J"$"\t" vs/: read0 `$cloud9.f ================================================================================ FILE: funq_cossim.q SIZE: 251 characters ================================================================================ \c 40 100 \l funq.q \l iris.q / cosine similarity (distance) X:.ml.normalize iris.X flip C:.ml.skmeans[X] over .ml.forgy[3] X / spherical k-means show m:.ml.mode each iris.y I:.ml.cgroup[.ml.cosdist;X;C] / classify avg iris.y=.ut.ugrp m!I / accuracy ================================================================================ FILE: funq_decisiontree.q SIZE: 6,442 characters ================================================================================ \c 20 100 \l funq.q \l iris.q \l weather.q \l winequality.q / http://www.cise.ufl.edu/~ddd/cap6635/Fall-97/Short-papers/2.htm / http://www.saedsayad.com/decision_tree.htm / Paper_3-A_comparative_study_of_decision_tree_ID3_and_C4.5.pdf / https://www.jair.org/media/279/live-279-1538-jair.pdf / http://www.ams.org/publicoutreach/feature-column/fc-2014-12 / http://support.sas.com/documentation/cdl/en/statug/68162/HTML/default/viewer.htm#statug_hpsplit_details06.htm -1"load weather data, remove the day column and move Play to front"; show t:weather.t -1"use the id3 algorithm to build a decision tree"; -1 .ml.ptree[0] tr:.ml.id3[();::] t; `:tree.dot 0: .ml.pgraph tr -1"the tree is built with triplets."; -1"the first value is the decision feature,"; -1"and the second value is operator to use on the feature"; -1"and the third value is a dictionary representing the leaves"; -1"we can then use the (p)redict (d)ecission (t)ree function to classify our data"; avg t.Play=p:.ml.pdt[tr] t / accuracy -1"since the test and training data are the same, it is no surprise we have 100% accuracy"; -1".ml.pdt does not 
fail on missing features. it digs deeper into the tree"; .ut.assert[.71428571428571431] avg t.Play=p:.ml.pdt[.ml.id3[();::] (1#`Outlook) _ t] t -1"id3 only handles discrete features. c4.5 handles continues features"; -1".ml.q45 implements many of the features of c4.5 including:"; -1"* information gain normalized by split info"; -1"* handling of continuous features"; -1"* use of Minumum Description Length Principal (MDL) "; -1" to penalize features with many distinct continuous values"; -1"* pre-prunes branches that create branches with too few leaves"; -1"* post-prunes branches that overfit by given confidence value"; -1"we can test this feature by changing humidity into a continuous variable"; show s:@[t;`Humidity;:;85 90 78 96 80 70 65 95 70 80 70 90 75 80f] -1"we can see how id3 creates a bushy tree"; -1 .ml.ptree[0] .ml.id3[();::] s; -1"while q45 picks a single split value"; z:@[{.qml.nicdf x};.0125;2.241403]; -1 .ml.ptree[0] tr:.ml.prune[.ml.perr[z]] .ml.q45[();::] s; .ut.assert[1f] avg s.Play=p:.ml.pdt[tr] s / accuracy -1"we can still handle null values by using the remaining features"; .ut.assert[`Yes] .ml.pdt[tr] d:`Outlook`Temperature`Humidity`Wind!(`Rain;`Hot;85f;`) -1"we can even can handle nulls in the training data by propagating them down the tree"; s:update Temperature:` from s where Humidity=70f -1 .ml.ptree[0] tr:.ml.q45[();::] s; .ut.assert[`No] .ml.pdt[tr] d -1 "we can also use the Gini impurity instead of entropy (faster with similar behavior)"; -1 .ml.ptree[0] tr:.ml.dt[.ml.gr;.ml.ogr;.ml.wgini;();::] t; d:`Outlook`Temperature`Humidity`Wind!(`Rain;`Hot;`High;`) / remove null .ut.assert[`No] .ml.pdt[tr] d -1 "we can also create an aid tree when the target is numeric"; -1 .ml.ptree[0] tr:.ml.aid[(1#`minsl)!1#3;::] update "e"$`Yes=Play from t; / regression tree .ut.assert[.2] .ml.pdt[tr] d -1 "we can also create a thaid tree for classification"; -1 .ml.ptree[0] tr:.ml.thaid[(1#`minsl)!1#3;::] t; / classification tree .ut.assert[`Yes] .ml.pdt[tr] d -1 "we can now split the iris data into training and test batches (w/ stratification)"; w:`train`test!3 1 show d:.ut.part[w;iris.t.species] iris.t -1 "note that stratification can work on any type of list or table"; .ut.part[w;;iris.t] count[iris.t]?5; .ut.part[w;select species from iris.t] iris.t; -1 "next we confirm relative frequencies of species are the same"; .ut.assert[1b] .ml.identical value count each group d.train.species -1 "then create a classification tree"; -1 .ml.ptree[0] tr:.ml.ct[();::] `species xcols d`train; -1 "testing the tree on the test set produces an accuracy of:"; avg d.test.species=p:.ml.pdt[tr] d`test -1 "we can save the decision tree into graphviz compatible format"; `:tree.dot 0: .ml.pgraph tr; -1 "using graphviz to convert the .dot file into a png"; @[system;"dot -Tpng -o tree.png tree.dot";0N!]; -1 "we can predict iris petal lengths with a regression tree"; -1 "first we need to one-hot encode the species"; t:"f"$.ut.onehot iris.t -1 "then split the data into training and test batches" show d:.ut.part[w;0N?] 
t -1 "and generate a regression tree"; -1 .ml.ptree[0] tr:.ml.rt[();::] `plength xcols d`train; -1 "we now compute the root mean square error (rmse)"; .ml.rms d.test.plength-p:.ml.pdt[tr] d`test -1 "using breiman algorithm, compute pruning alphas"; dtf:.ml.ct[();::] ef:.ml.wmisc / http://mlwiki.org/index.php/Cost-Complexity_Pruning t:([]z:`b`b`b`b`w`w`w`w`w`w`b`b`w`w`b`b) t:t,'([]x:1 2 3 4 1 2 3 4 1 2 3 4 1 2 3 4) t:t,'([]y:1 1 1 1 2 2 2 2 3 3 3 3 4 4 4 4 ) -1 .ml.ptree[0] tr:dtf t; .ut.assert[0 0.125 0.125 0.25] first atr:flip .ml.dtmina[ef] scan (0f;tr) -1 "we then pick the alpha (and therefore subtree) with cross validation"; b:sqrt (1_a,0w)*a:atr 0 / geometric mean I:.ut.part[(k:10)#1;0N?] til count t show e:avg each t[`z][I]=p:.ml.dtcv[dtf;ef;b;t] .ml.kfold I -1 .ml.ptree[0] atr[1] 0N!.ml.imax 0N!avg e; -1 "the old (deprecated) interface splits the data before iterating"; .ut.assert[e] avg each ts[;`z]=p:.ml.dtxv[dtf;ef;b;ts:t@I] peach til k -1 "returning to the iris data, we can grow and prune that too"; -1 .ml.ptree[0] tr:dtf t:iris.t; .ut.assert[0 .01 .02 .02 .04 .88 1f] 3*first atr:flip .ml.dtmina[ef] scan (0f;tr) b:sqrt (1_a,0w)*a:atr 0 / geometric mean I:.ut.part[(k:10)#1;0N?] til count t show e:avg each t[`species][I]=p:.ml.dtcv[dtf;ef;b;t] .ml.kfold I -1 .ml.ptree[0] atr[1] 0N!.ml.imax 0N!avg e; -1 "or even grow and prune a regression tree with wine quality data"; d:.ut.part[`train`test!1 1;0N?] winequality.red.t dtf:.ml.rt[();::] ef:.ml.wmse -1 "the fully grown tree has more than 200 leaves!"; .ut.assert[1b] 200<0N!count .ml.leaves tr:dtf d`train -1 "we can improve this by performing k-fold cross validation"; -1 "first we find the list of critical alphas"; atr:flip .ml.dtmina[ef] scan (0f;tr) b:sqrt (1_a,0w)*a:atr 0 / geometric mean I:.ut.part[(k:5)#1;0N?] til count t:d`train -1 "then we compute the accuracy of each of these alphas with k-fold cv"; show e:avg each e*e:t[`quality][I]-p:(.ml.dtcv[dtf;ef;b;t]) .ml.kfold I -1 "finally, we pick the tree whose alpha had the min error"; -1 .ml.ptree[0] btr:atr[1] 0N!.ml.imin 0N!avg e; -1 "the pruned tree has less than 25 leaves"; .ut.assert[1b] 25>0N!count .ml.leaves btr -1 "and an rms less than .73"; .ut.assert[1b] .73>0N!.ml.rms d.test.quality - .ml.pdt[btr] d`test ================================================================================ FILE: funq_dji.q SIZE: 258 characters ================================================================================ dji.f:"dow_jones_index" dji.b:"http://archive.ics.uci.edu/ml/machine-learning-databases/" dji.b,:"00312/" -1"[down]loading dji data set"; .ut.download[dji.b;;".zip";.ut.unzip] dji.f; dji.t:("HSDEEEEJFFJEEFHF";1#",")0: ssr[;"$";""] each read0 `$dji.f,".data" ================================================================================ FILE: funq_em.q SIZE: 4,252 characters ================================================================================ \c 40 100 \l funq.q \l mnist.q \l iris.q / expectation maximization (EM) / binomial example / http://www.nature.com/nbt/journal/v26/n8/full/nbt1406.html n:10 x:"f"$sum each (1000110101b;1111011111b;1011111011b;1010001100b;0111011101b) THETA:.6 .5 / initial coefficients lf:.ml.binl[n] / likelihood function mf:.ml.wbinmle[n;0] / parameter maximization function phi:2#1f%2f / coins are picked with equal probability .ml.em[1b;lf;mf;x] . pT:(phi;flip enlist THETA) (.ml.em[1b;lf;mf;x]//) pT / call until convergence / which flips came from which THETA? pick maximum log likelihood
Streaming analytics with kdb+: Detecting card counters in Blackjack¶

With the growth and acceleration of the Internet of Things, data collection is increasingly being performed by sensors and smart devices. Sensors are able to detect just about any physical element from a multitude of machines, including mobile phones, vehicles, appliances and meters. Airlines are currently using smart devices to prevent failures, producing as much as 40 terabytes of data per hour per flight. Consuming and analyzing the massive amounts of data transmitted from sensing devices is considered the next Big Data challenge for businesses.

A key problem in processing large quantities of data in real time is the detection of event patterns, and this is why streaming analytics, also known as Event Stream Processing (ESP), is becoming a mainstream solution in IoT. ESP is computing that turns incoming data into more useful information, providing better insight into what is happening. One of the early adopters of this type of technology was the financial-services industry, where it is used to identify opportunities in the market for traders and/or algorithmic trading systems. ESP software has been hugely impactful on the capital-markets industry, helping to identify opportunities or threats in a faster way, while removing emotion from the decision making. Anecdotally, Wall Street banks have been known to teach their traders how to card count in Blackjack, and famous traders like Blair Hull and Ed Thorp have translated card-counting techniques into financial-markets success.

Blackjack is the most widely played casino game in the world. It generally takes up more tables in the pit area, employs more dealers, attracts more players and generates more revenue than any other table game. Unlike most casino games where the only factor that determines a win or a loss is luck, Blackjack is a game where skill plays a big part. Card counting is a strategy used in Blackjack to determine whether the next hand is likely to give a probabilistic advantage to the player or to the dealer. It can be used to decrease the casino’s house edge and allows the player to bet more with less risk and minimize losses during unfavorable counts. In a typical hand of Blackjack, the house has between .5% and 3% advantage over the player. However, by card counting, a player can have a 1%, 2% or 3% advantage over the house.

There are a number of measures used to protect casinos against card counters. These countermeasures constrain the card counters that may be playing in the casino at any given time but heavily tax the casino’s efficiency, costing the casino industry millions of dollars every year. Examples of these countermeasures are shuffling more frequently, reducing the number of hands played, and not permitting mid-game entry.

The purpose of this paper is to highlight a use case of kdb+ with ESPs to detect card counters in the game of Blackjack in real time. kdb+ offers an unrivalled performance advantage when it comes to capturing, analyzing and storing massive amounts of data in a very short space of time, making it the data-storage technology of choice for many financial institutions across the globe, and more recently other industries such as telecommunications and pharma.

All tests were run using kdb+ version 3.5 (2017.04.10)

Implementation¶

The data-capture technology required for this software is already available in most casinos: in-built card scanners and RFID chips track information at the tables.
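Purely as an illustration of the kind of data involved (the paper does not specify its table schema, so the names below are hypothetical), each sensor reading could land in kdb+ as a row of a simple event table, pushed to the tickerplant with the standard .u.upd call:

/ hypothetical schema, not taken from the paper
handHistory:([]time:`timestamp$();tbl:`symbol$();player:`symbol$();card:`symbol$();bet:`float$())
/ a feedhandler would convert each scanner/RFID reading to this form, e.g.
/ .u.upd[`handHistory;(.z.p;`bj01;`playerA;`K;0n)]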
The diagram below illustrates how this information might flow into the ESP in kdb+. The information from the card scanners and RFID chips is filtered into a standard kdb+ tick architecture: a feedhandler converts the data to kdb+ format, and a tickerplant writes to a log file for HDB writedown and publishes to a chained tickerplant, where the data is enriched before going to the real-time database and the ESP. The ESP compares each player’s cards and bets against precompiled card-counting strategies (currently 23 known strategies: see Appendix), determining whether the player is card counting and, if so, which strategy they are using. The casino would then be notified via a front-end GUI, along with a degree of certainty. Analysis can be performed by humans on the real-time and historical databases to find patterns and determine new strategies to detect card counters, feeding this back into the ESP.

Using the Bellagio Casino in Las Vegas as an example, with 80 Blackjack tables and assuming three players occupy all tables, an estimated 200,000 hands would be played each day. The comparison of these hands to the 23 card-counting strategies listed in the appendix would generate close to 14 million ticks per day. This figure would increase even further when applied to the online gaming industry, which has similar issues with card-counting bots.

Blackjack simulator¶

In order to test kdb+’s ability to detect card counting in real time we first required a dataset that would mimic what we would expect to see in a casino. Previous analysis in this field used Monte Carlo simulation, which is a mathematical technique used to model the probability of different outcomes in a process that cannot easily be predicted. Instead, a Blackjack simulator was developed to act as a “dealer”. The Blackjack simulator, which is entirely written in q, has the capability of emulating the casino experience at any Blackjack table in the world, including any deck size, deck penetration, shuffle frequency and table size. This reduced the limitations of the testing, providing us with a real-time testing model for a real-world card-counting detection algorithm, and allowed us to create large datasets for multiple scenarios in a very short space of time.

The table below from Casino Operations Management by Jim Kilby indicates the average number of hands played at a Blackjack table per hour. We have also included the average observed times to play the same number of hands using the Blackjack simulator and the card-counting algorithms.

| players | hands per hour | time taken in kdb+ (seconds) |
|---|---|---|
| 1 | 209 | 1.741 |
| 2 | 139 | 1.827 |
| 3 | 105 | 2.083 |
| 4 | 84 | 2.284 |
| 5 | 70 | 2.781 |
| 6 | 60 | 3.064 |
| 7 | 52 | 3.420 |

To ensure the quality of our data the simulator must deal cards in a random manner. Although kdb+ generates random numbers via the Roll operator, it is seeded; this means every time a new game starts the first player would get the exact same hands, so we added logic to update the seed parameter with a new figure on every startup.

// update seed parameter
q)system "S ",string[`float$.z.p];
// Define possible cards
q)cards:`A`K`Q`J`10`9`8`7`6`5`4`3`2;
// Define deck size
q)ds:6;
// Shuffle cards
q)deck:{(neg count x)?x}(ds*52)#cards;
q)deck
`8`4`8`9`K`4`J`K`4`6`Q`7`2`10`6`8`9`Q`Q`6`9`Q`9`A`5`2`J`4`5`7`8`8..

Once the server ("dealer") receives a connection from a player, the player is added to the game: the server takes their handle number and username and sends them a request to place a bet.
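The game transcripts that follow report a running hand count for each player. Purely as an illustrative sketch (the paper does not show this part of its implementation, so the helper below is an assumption), such a count could be computed from a hand of card symbols, treating an Ace as 11 and downgrading it to 1 while the hand would otherwise be bust:

/ hypothetical helper, not taken from the paper
cardVal:`A`K`Q`J`10`9`8`7`6`5`4`3`2!11 10 10 10 10 9 8 7 6 5 4 3 2
handCount:{[hand]
  c:sum cardVal hand;            / total with every Ace worth 11
  a:sum hand=`A;                 / Aces available to downgrade
  while[(c>21)&a>0;c-:10;a-:1];  / count an Ace as 1 while the hand is bust
  c}
q)handCount[`6`2]                / matches the count of 8 in the transcript below
8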
When all players have placed their bets the server will then deal the cards out in order of connection time, and return to the first player to request a next step i.e. hit, stick, split or double. q)stake[100] "Bet has been placed by caolanr" "Your first card is 6" "Dealers first card is 9" "Your second card is 2" "Dealer's second card is dealt face down" "**************************************" "Your hand is 6,2" "Your hand count is 8" "**************************************" "Hit or stick?" When all clients have played their hands, it moves on to the dealer process, which will continue to hit until its hand count is 17 or over. After this the server compares its count to any player left in the game indicating to each whether they won or lost, their profit and a summary of the hand. The server then checks if any new players have joined the table and prompts all players to place a bet for the next hand. q)hit[] "Hit by caolanr" "caolanr got a J" "caolanr's count is now 21" "caolanr is sticking on 21" "**************************************" "Everyone has played their hand, now it's the dealers turn" "Dealer has 9,J" "Dealers hand count is 19" "**************************************" "You win!" "caolanr's profit is $100!" "**************************************" " Results table for the round;" player name cards cnt dealer dealerCnt bet profit ------------------------------------------------------- 1 caolanr `J`10 20 9 J 19 200 2 caolanr `6`2`3`J 21 9 J 19 200 "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~" "~~~~~~~~~~~~ Game over ~~~~~~~~~~~~~~~" "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~" "The hand has commenced" "Please place your bets via the stake[] function" The next step was to create algorithms to act as players so that we could generate mass amounts of data to run statistical analysis on. To ensure the quality of the datasets we created several algorithms, some that used card counting of varying complexity and some that used randomly generated play. One of the random players, for example, doubles his bet when he wins and places the same bet if he loses. bet:{ // Get the bet placed by user from last round b:last exec bet from results where name=.z.u; // Get the profit from the last round wl:last exec profit from results where name=.z.u; // If player did not win bet same amount, if they won double the bet $[wl=0;b;b*2] } The card-counting algorithms are outlined in the next section. Card-counting algorithms¶ All card-counting methods require sound comprehension of basic strategy, which is a set of rules providing the player with the optimal way to play each hand based on the player’s first two cards and the dealers up card, for every possible card combination. Basic strategy can reduce the house edge to as little as .5%. If the player does not know how to execute the options available properly, counting cards will not be of use. With basic strategy, a number of permutations exist depending on the player’s hand type and the dealers shown card. The different hand types are; - Hard: any two-card total which does not include an Ace - Soft: any two-card total which includes an Ace - Pair: two of the same cards The card-counting algorithms described below will use basic strategy to decide whether they should hit (H), stick (S), double (D) or split (SP), by performing a look-up in the corresponding dictionaries. 
q)hard h | TWO THREE FOUR FIVE SIX SEVEN EIGHT NINE TEN ACE --| ------------------------------------------------ 3 | H H H H H H H H H H 4 | H H H H H H H H H H 5 | H H H H H H H H H H 6 | H H H H H H H H H H 7 | H H H H H H H H H H 8 | H H H D D H H H H H 9 | D D D D D H H H H H 10| D D D D D D D D H H 11| D D D D D D D D D D 12| H H S S S H H H H H 13| S S S S S H H H H H 14| S S S S S H H H H H 15| S S S S S H H H H H 16| S S S S S H H H H H 17| S S S S S S S S S S 18| S S S S S S S S S S 19| S S S S S S S S S S 20| S S S S S S S S S S 21| S S S S S S S S S S q)soft h | TWO THREE FOUR FIVE SIX SEVEN EIGHT NINE TEN ACE --| ------------------------------------------------ 13| H H D D D H H H H H 14| H H D D D H H H H H 15| H H D D D H H H H H 16| H H D D D H H H H H 17| D D D D D H H H H H 18| S D D D D S S H H S 19| S S S S D S S S S S 20| S S S S S S S S S S 21| S S S S S S S S S S q)pair h | TWO THREE FOUR FIVE SIX SEVEN EIGHT NINE TEN ACE --| ------------------------------------------------ 2 | SP SP SP SP SP SP H H H H 3 | SP SP SP SP SP SP SP H H H 4 | H H SP SP SP H H H H H 5 | D D D D D D D D H H 6 | SP SP SP SP SP SP H H H H 7 | SP SP SP SP SP SP SP H S H 8 | SP SP SP SP SP SP SP SP SP SP 9 | SP SP SP SP SP S SP SP S S 10| S S S S S S S S S S 11| SP SP SP SP SP SP SP SP SP SP The key on the left-hand side indicates the players total card value, and the key at the top is the dealers up card. Basic card-counting player¶ The Hi-Lo system is one of the most popular card-counting strategies and the easiest to learn. When using the Hi-Lo system, every card value in the deck is assigned a number which can be seen in the table below. card 2 3 4 5 6 7 8 9 10 A value 1 1 1 1 1 0 0 0 -1 -1 As cards are dealt out, a card-counting player must do the arithmetic according to each card. The count must begin after the deck is shuffled with the first card that is dealt. The larger the number becomes the more high-valued cards remain in the deck, and the player should increase their bet. If the count is negative then many of the 10 valued cards and Aces have been dealt, and the player should bet the table minimum. The basic card-counting algorithm will use the below function to determine the count using the Hi-Lo system. basicCount:`2`3`4`5`6`7`8`9`10`J`Q`K`A!1 1 1 1 1 0 0 0 -1 -1 -1 -1 -1 theCount:{ // Get all player cards from results table pc:raze exec cards from results; // Get dealer cards by round dc:raze exec dealer from select first dealer by round from results; // Get a list of all cards played c:pc,dc; // Get the running count runCount:sum basicCount[c]; // Get current deck size, here we assume 6 decks are used deckSize:6*52; // Calculate the true count runCount%(deckSize-count c)%52 } As can be seen it is using the true count, which is the running count divided by the number of decks remaining. A true count is designed to give the player a more accurate representation of the deck and how favorable it is. For example, a running count of +10 is much better if there are two decks remaining as opposed to five decks remaining. Intermediate card-counting player¶ The Zen Card-Counting system was created by Arnold Snyder, detailed in his book Blackbelt in Blackjack. What makes it more complex compared to the previous card-counting method is that the Zen Count is a multi-level system. Some cards are counted as two points and others as one point. It is much more efficient but at the same time more difficult to master and as such, it requires more practice. 
As this is a multi-level system, certain cards can have a value of ±1 and ±2, as shown in the table below.

card  2 3 4 5 6 7 8 9 10  A
value 1 1 2 2 2 1 0 0 -2 -1

The Zen Count has some similarities to the Hi-Lo system, but it’s a little more complicated. They provide similar betting correlations, but the Zen Count provides a better estimate of changes to basic strategy as it relates to whether or not to take insurance, which is a side bet that the dealer has Blackjack if their up card is an Ace. Exact differences in the counting systems can be seen in the Appendix.

Advanced card-counting player¶

In addition to basic strategy, variation is what separates the good card counters from the professionals. Basic strategy explains the best possible move for you to make on average. However, as the count increases or decreases, some of the moves basic strategy tells you to make may no longer be the correct decision. Consider the following scenario within the advanced card-counting algorithm:

- True count is less than or equal to -1
- The player’s hand sum is 14
- The dealer’s first card is a 5

Basic strategy suggests a player should stick; however, the variation strategy opts to double, due to the higher probability of obtaining a low card. Behind the scenes, the advanced algorithm, along with the Zen Card-Counting system, uses the simple lookup dictionary of the basic strategy to determine initially whether to hit, stick, double or split. However, instead of returning this result immediately, it then identifies its hand type (hard/soft/pair) so that it can check within the variation functions whether or not, given the current count, another result should be chosen. Below is an example of the variation function with a ‘hard’ hand type.

// Variation with hard cards
hardVAR:{[player;dealer;decision;trueCount]
  // Add player's cards together
  c:(+)@/player;
  // Round trueCount figure
  trueCount:ceiling trueCount;
  // If any of the below match, return D (double) instead
  $[(trueCount;c;dealer)in(-2 10 9;0 9 3;1 11 11; 2 9 2;3 8 6;6 8 5;6 9 7);
    `D;
    decision]
  }

// If player does not have an Ace and they don’t have a pair,
// execute hardVAR function
if[(not any 11 in playerCards)and not playerCards[0]~playerCards[1];
  hardVAR[playerCards;dealerCard;ret;trueCount]
  ]

Once all algorithms had been created, three card-counting algorithms with varying levels of complexity and three that used randomly-generated play, the next step was to run various scenarios using the Blackjack simulator to create our datasets. These ranged from six players using randomly-generated play to a single card counter at the table, with different levels of deck sizes, deck penetration and shuffle frequency. The datasets were then analyzed and a detection algorithm was developed.

Detection algorithm¶

Any card counter must use a sufficient minimum-to-maximum bet spread to ensure they win enough money to make their time at the Blackjack table worthwhile. As the count increases, so should the player’s bet, as this reduces the house’s edge. Therefore, the detection algorithm will be checking the bet spreads of every player at the table against each of the card-counting strategies. kdb+ has inbuilt covariance (cov) and correlation (cor) functions which can be used to measure the strength of the relationship between two sets of random variables.
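As a minimal sketch of this idea (not the paper's detection code; the sample vectors below are made up for illustration), a player's bet sizes can be compared with the count implied by a given strategy using the built-in cov and cor keywords, with the r-squared value obtained by squaring the correlation:

q)bets:100 100 200 400 400 200 100f    / per-hand bets for one player (made-up)
q)counts:0 1 2 4 5 2 0f                / the strategy's true count on the same hands (made-up)
q)bets cov counts                      / covariance
q)r:bets cor counts                    / correlation
q)r*r                                  / r-squared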
The algorithm will keep the true count for all of the counting strategies defined within it and compare each strategy to each player by plotting the pairs of covariances against each other, calculating the correlation and their r-squared values.

Card counter vs random players¶

In Figure 1 below, we compare how each player’s bet varies with the running count. From our testing it was easy to distinguish the card counter from the randomly-generated players due to the positive covariance trend. The detection algorithm is not only able to identify if a player is counting cards but also which counting method they are using. We also ran a simulation using two players, one using random play and one card counting, to determine whether the detection algorithm would be able to establish which was which. The results can be seen in the graphs below.

Card-counting strategy vs random player¶

As can be seen in Figure 2 below, player A’s betting patterns have a low correlation of 0.31 when compared with a specific card-counting method and therefore it can be determined that the player is not using this particular strategy.

Card-counting strategy vs card counter¶

In Figure 3, player B has a high correlation of 0.98, which indicates a very strong relationship between the detection algorithm and their strategy. The r-squared value of 0.98 also signifies that the data is closely fitted to the regression line.

Detection-oriented ESP is focused on identifying combinations of event patterns or situations. The current detection system has been pre-programmed with a number of different strategies that it will be able to detect using linear regressions, as shown by the detection algorithm vs player charts. The results of the regressions are then fed back to a human to determine what decision should be made based on the results. Velocity and veracity are two of the major challenges associated with real-time event processing. Within any fraud-detection system, it is imperative that any anomalies are identified quickly and that the accuracy is extremely high to limit the number of false positives.

Conclusion¶

With the universal growth of sensors and smart devices there is an increasing challenge to analyze an ever-growing stream of data in real time. The ability to react quickly to changing trends and deliver up-to-date business intelligence can be a decisive factor for a company’s success or failure. A key problem in real-time processing is the detection of event patterns in data streams. One solution to this is Event Stream Processing (ESP). This white paper focused on the idea of ESP, using the game of Blackjack as a proxy. The purpose of the paper was to highlight the importance of proactive monitoring and operational intelligence by providing real-time alerts and insight into pertinent information, enabling a casino to operate smarter, faster and more efficiently.

Firstly, we showed how an ESP could be implemented in a real-world scenario using the kdb+ tick architecture. We were able to build a Blackjack dealer simulator in kdb+ and, with the help of kdb+’s inbuilt Roll operator and seed parameter, ensure the randomness of the cards being dealt. Several card-counting algorithms were then created, along with randomly-generated-play algorithms, to allow us to create millions of rows of testing data. After running analyses on the data created, we were then able to develop a card-counting detection algorithm to handle the task at hand with a high degree of accuracy.
Although we are able to update the ESP with new card-counting algorithms with ease once known, the detection system could be further developed by leveraging Machine Learning to configure potential card counting strategies as they occur. ESP has evolved from an emerging technology to an essential platform of various industry verticals. The technology's most consistent growth has been in banking, serving fraud detection, algorithmic trading and surveillance. There has also been considerable growth in other industries including healthcare, telecommunications, manufacturing, utilities and aerospace. Authors¶ Caolan Rafferty works for KX as a kdb+ consultant. Based in Hong Kong, he maintains an eFx trading platform at a major investment bank. He has developed a range of applications for some of the world’s largest financial institutions. Caolan also helped in building the data-science training program within First Derivatives. Krishan Subherwal works for KX as a kdb+ consultant and has developed data and analytics systems in a range of asset classes for some of the world’s largest financial institutions. Currently based in London, Krishan is working with an investment-management firm within their Data Engineering team. Appendix – Card counting strategies¶ The table below lists card-counting strategies with their values for each card as well as their betting correlation, playing efficiency and insurance correlation. strategy A 2 3 4 5 6 7 8 9 10 BC PE IC ------------------------------------------------------------------------ Canfield Expert 0 0 1 1 1 1 1 0 -1 -1 .87 .63 .76 Canfield Master 0 1 1 2 2 2 1 0 -1 -2 .92 .67 .85 Hi-Lo -1 1 1 1 1 1 0 0 0 -1 .97 .51 .76 Hi-Opt I 0 0 1 1 1 1 0 0 0 -1 .88 .61 .85 Hi-Opt II 0 1 1 2 2 1 1 0 0 -2 .91 .67 .91 KISS 2 0 0/1 1 1 1 1 0 0 0 -1 .90 .62 .87 KISS 3 -1 0/1 1 1 1 1 1 0 0 -1 .98 .56 .78 K-O -1 1 1 1 1 1 1 0 0 -1 .98 .55 .78 Mentor -1 1 2 2 2 2 1 0 0 -1 .97 .62 .80 Omega II 0 1 1 2 2 2 1 0 -1 -2 .92 .67 .85 Red Seven -1 1 1 1 1 1 0/1 0 0 -1 .98 .54 .78 REKO -1 1 1 1 1 1 1 0 0 -1 .98 .55 .78 Revere Adv. +/- 0 1 1 1 1 1 0 0 -1 -1 .89 .59 .76 Revere Point Count -2 1 2 2 2 2 1 0 0 -2 .99 .55 .78 Revere RAPC -4 2 3 3 4 3 2 0 -1 -3 1.00 .53 .71 Revere 14 Count 0 2 2 3 4 2 1 0 -2 -3 .92 .65 .82 Silver Fox -1 1 1 1 1 1 1 0 -1 -1 .96 .53 .69 UBZ 2 -1 1 2 2 2 2 1 0 0 -2 .97 .62 .84 Uston Adv. +/- -1 0 1 1 1 1 1 0 0 -1 .95 .55 .76 Uston APC 0 1 2 2 3 2 2 1 -1 -3 .91 .69 .90 Uston SS -2 2 2 2 3 2 1 0 -1 -2 .99 .54 .73 Wong Halve -1 0.5 1 1 1.5 1 0.5 0 -0.5 -1 .99 .56 .72 Zen Count -1 1 1 2 2 2 1 0 0 -2 .96 .63 .85 - Betting Correlation (BC) - the correlation between card point values and the effect of removal of cards. It is used to predict how well a counting system predicts good betting situations and can approach 1.00 (100% correlation). - Playing Efficiency (PE) - indicates how well a counting system handles changes in playing strategy. - Insurance Correlation (IC) - the correlation between card point values and the value of cards in Insurance situation. A point value of -9 for tens and +4 for all other cards would be perfect for predicting if an Insurance bet should be placed. - 0/1 - indicates the value is either 0 or 1 depending on the suit of the card. Source: www.qfit.com
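For reference, any row of the table above can be encoded in the same dictionary form as the basicCount mapping shown earlier (a sketch; the names hiLo and zen are ours, and the value in the 10 column is applied to every ten-valued card):

/ per-card point values for two of the strategies listed above
hiLo:`A`2`3`4`5`6`7`8`9`10`J`Q`K!-1 1 1 1 1 1 0 0 0 -1 -1 -1 -1
zen:`A`2`3`4`5`6`7`8`9`10`J`Q`K!-1 1 1 2 2 2 1 0 0 -2 -2 -2 -2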
/- open handle to log file openlog:{[lgfile] lgfileexists:type key lgfile; /- check if log file is present on disk .lg.o[`openlog; $[lgfileexists; "opening log file : "; "creating new log file : "],string lgfile]; /- create log file if[not lgfileexists; .[set;(lgfile;());{[lgf;err] .lg.e[`openlog;"cannot create new log file : ",string[lgf]," : ", err]}[lgfile]]]; /- backup upd & redefine for counting updold:`. `upd; @[`.;`upd;:;{[t;x] .u.icounts[t]+:count x;}]; /- set pub and log count .u.i:.u.j:@[-11!;lgfile;-11!(-2;lgfile)]; /- restore upd @[`.;`upd;:;updold]; /- check if log file is corrupt if[0<=type .u.i; .lg.e[`openlog;"log file : ",(string lgfile)," is corrupt. Please remove and restart."]]; /- open handle to logfile hopen lgfile } /- subscribe to tickerplant and refresh tickerplant settings subscribe:{[] s:.sub.getsubscriptionhandles[`;.ctp.tickerplantname;()!()]; if[count s; subproc:first s; .ctp.tph:subproc`w; /- get tickerplant date - default to today's date refreshtp @[tph;".u.d";.z.D]; .lg.o[`subscribe;"subscribing to ", string subproc`procname]; r:.sub.subscribe[subscribeto;subscribesyms;schema;replay;subproc]; if[`d in key r;.u.d::r[`d]]; if[(`icounts in key r) & (not createlogfile); /- dict r contains icounts & not using own logfile subtabs:$[subscribeto~`;key r`icounts;subscribeto],(); .u.jcounts::.u.icounts::$[0=count r`icounts;()!();subtabs!enlist [r`icounts]subtabs]; ] ]; } /- write to tickerplant log writetolog:{[t;x] /- if x is not a table, make it a table if[not 98h=type x;x:flip cols[value t]!(),/:x]; .u.l enlist (`upd;t;x); .u.j+:1; } /- tick by tick publish tickpub:{[t;x] .ps.publish[t;x]; .u.i:.u.j; .u.icounts[t]+:count x; } /- batch publish batchpub:{[t;x] insert[t;x]; .u.jcounts[t]+:count x; } /- publish to subscribers publishalltables:{[] pubtables:$[any null .ctp.subscribeto;tables[`.];.ctp.subscribeto],(); .ps.publish'[pubtables;value each pubtables]; cleartables[pubtables]; /- update .u.i with .u.j .u.i:.u.j; .u.icounts:.u.jcounts; } /- dictionary containing tablename!schema tableschemas:()!() /- clear each table and reapply attributes cleartables:{[t] /- restore default table schemas, removes data. 
@[`.;t;:;tableschemas t]; } /- create tickerplant log file name createlogfilename:{[d] ` sv (.ctp.logdir;`$string[.proc.procname],"_",string d) } /- called at end of day to refresh tickerplant settings refreshtp:{[d] /- close .u.l if opened if[@[value;`.u.l;0]; @[hclose;.u.l;()]]; /- reset log and publish count .u.i:.u.j:0; .u.icounts::.u.jcounts::(`symbol$())!0#0,(); /- create log file if required if[createlogfile; .u.L:createlogfilename[d]; if[clearlogonsubscription;clearlog .u.L]; ]; /- log file handle .u.l:$[createlogfile;openlog .u.L;1i]; /- set date .u.d:d; } /- returns true if tickerplant is not connected notpconnected:{[] 0 = count select from .sub.SUBSCRIPTIONS where procname in .ctp.tickerplantname, active} /- redefine .z.pc to detect loss of tickerplant connection .dotz.set[`.z.pc;{[x;y]if[.ctp.tph=y; .lg.e[`.z.pc;"lost connection to tickerplant : ",string .ctp.tickerplantname];exit 0]; x@y}[@[value;.dotz.getcommand[`.z.pc];{{;}}]]] /- define upd based on user settings upd:$[createlogfile; $[pubinterval;{[t;x] writetolog[t;x];batchpub[t;x];};{[t;x] writetolog[t;x];tickpub[t;x];}]; $[pubinterval;batchpub;tickpub]]; /- ctp sub method, returns logfile, i and icounts as well as schema sub:{[subtabs;subsyms] r:(`schema`icounts`i`logfile`d)!(); /- get schema & subscribe r[`schema]:$[-11h=type subtabs;first;::] .u.sub\:[subtabs,();subsyms]; /- add icounts if subscribing to all syms if[subscribesyms~`;r[`icounts]:.u.icounts]; /- if logfile, add logfile & i if[createlogfile;r[`i]:.u.i;r[`logfile]:.u.L]; /- add date r[`d]:.u.d; r } \d .u /- publishes all tables then clears them, pass on .u.end to subscribers end:{[d] .lg.o[`end;"end of day invoked"]; /- publish and clear all the tables .ctp.publishalltables[]; /- roll over the log you need a new log for next days data .ctp.refreshtp[d+1]; /- push endofday messages to subscribers (neg union[@[value;(`.stpps.allsubhandles;`);()]; @[{union/[(value x)[;;0]]};`.u.w;()]])@\:(`.u.end;d) } \d . 
/- set upd function in the top level name space, provided it isn't already defined if[not `upd in key `.; upd:.ctp.upd]; /- pubsub must be initialised sooner to enable tickerplant replay publishing to work .ps.initialise[]; /- check if tickerplant is available and if not exit with error .servers.startupdepnamecycles[.ctp.tickerplantname;.ctp.tpconnsleep;.ctp.tpcheckcycles]; /- subscribe to tickerplant .ctp.subscribe[]; /- add subscribed table schemas to .ctp.tableschemas, used in cleartables .ctp.tableschemas:{x!(0#)@'value@'x} (),$[any null .ctp.subscribeto;tables[`.];.ctp.subscribeto]; /- set timer for batch update publishing if[.ctp.pubinterval; .timer.rep[.proc.cp[];0Wp;.ctp.pubinterval;(`.ctp.publishalltables;`);1h;"Publishes batch updates to subscribers";1b]]; ================================================================================ FILE: TorQ_code_processes_compression.q SIZE: 723 characters ================================================================================ \d .cmp inputcsv:@[value;`inputcsv;.proc.getconfigfile["compressionconfig.csv"]]; // compression config file to use hdbpath:@[value;`hdbpath;`:hdb] // hdb directory to compress maxage:@[value;`maxage;365] // the maximum date range of partitions to scan exitonfinish:@[value;`exitonfinish;1b] // exit the process when compression is complete if[not count key hsym .cmp.hdbpath; .lg.e[`compression; err:"invalid hdb path ",(string .cmp.hdbpath)];'err]; /- run the compression .cmp.compressmaxage[hsym .cmp.hdbpath;.cmp.inputcsv;.cmp.maxage] if[exitonfinish; .lg.o[`compression; "finished compression"]; exit 0] ================================================================================ FILE: TorQ_code_processes_discovery.q SIZE: 2,104 characters ================================================================================ // Discovery service to allow lookup of clients // Discovery service attempts to connect to each process at start up // after that, each process should attempt to connect back to the discovery service // Discovery service only gives out information on registered services - it doesn't really need to have connected to them // The reason for having a connection is just to get the attributes. 
// initialise connections .servers.startup[] // subscriptions - handles to list of required proc types subs:(`int$())!() register:{ // add the new handle .servers.addw .z.w; // If there already was an entry for the same host:port as the supplied handle, close it and delete the entry // this is to handle the case where the discovery service connects out, then the process connects back in on a timer if[count toclose:exec i from .servers.SERVERS where not w=.z.w,hpup in exec hpup from .servers.SERVERS where w=.z.w; .servers.removerows toclose]; // publish the updates new:select proctype,procname,hpup,attributes from .servers.SERVERS where w=.z.w; (neg ((where ((first new`proctype) in/: subs) or subs~\:enlist`ALL) inter key .z.W) except .z.w)@\:(`.servers.procupdate;new); } // get a list of services getservices:{[proctypes;subscribe] .servers.cleanup[]; if[subscribe; subs[.z.w]:proctypes,()]; distinct select procname,proctype,hpup,attributes from .servers.SERVERS where proctype in ?[(proctypes~`ALL) or proctypes~enlist`ALL;proctype;proctypes],not proctype=`discovery} // add each handle @[.servers.addw;;{.lg.e[`discovery;x]}] each exec w from .servers.SERVERS where .dotz.liveh w, not hpup in (exec hpup from .servers.nontorqprocesstab); // try to make each server connect back in / (neg exec w from .servers.SERVERS where .dotz.liveh w)@\:"@[value;(`.servers.autodiscovery;`);()]"; (neg exec w from .servers.SERVERS where .dotz.liveh w,not hpup in exec hpup from .servers.nontorqprocesstab)@\:(`.servers.autodiscovery;`); // modify .z.pc - drop items out of the subscription dictionary .dotz.set[`.z.pc;{subs::(enlist y) _ subs; x@y}@[value;.dotz.getcommand[`.z.pc];{;}]] ================================================================================ FILE: TorQ_code_processes_dqc.q SIZE: 15,402 characters ================================================================================ / - default parameters \d .dqe configcsv:@[value;`.dqe.configcsv;first .proc.getconfigfile["dqcconfig.csv"]]; // loading up the config csv file dqcdbdir:@[value;`dqcdbdir;`:dqcdb]; // location of dqcdb database detailcsv:@[value;`.dqe.detailcsv;first .proc.getconfigfile["dqedetail.csv"]]; // csv file that contains information regarding dqc checks utctime:@[value;`utctime;1b]; // define whether the process is on UTC time or not partitiontype:@[value;`partitiontype;`date]; // set type of partition (defaults to `date) writedownperiod:@[value;`writedownperiod;0D01:00:00]; // dqc periodically writes down to dqcdb, writedownperiod determines the period between writedowns .servers.CONNECTIONS:distinct .servers.CONNECTIONS,`tickerplant`rdb`hdb`dqe`dqedb`dqcdb // set to only the processes it needs getpartition:@[value;`getpartition; // determines the partition value {{@[value;`.dqe.currentpartition; (`date^partitiontype)$(.z.D,.z.d).dqe.utctime]}}]; detailcsv:@[value;`.dqe.detailcsv;first .proc.getconfigfile["dqedetail.csv"]]; // location of description of functions testing:@[value;`.dqe.testing;0b]; // testing varible for unit tests, default to 0b compcounter:([id:`long$()]counter:`long$();procs:();results:()); // table that results return to when a comparison is being performed
bind:{[sess;customDict] defaultKeys:`dn`cred`mech; defaultVals:```; defaultDict:defaultKeys!defaultVals; if[customDict~(::);customDict:()!()]; if[99h<>type customDict;'"customDict must be (::) or a dictionary"]; updDict:defaultDict,customDict; bindSession:.ldap.bind_s[sess;;;]. updDict defaultKeys; bindSession } login:{[user;pass] / validate login attempt incache:.ldap.cache user; / get user from inputs dict:`dn`cred!(.ldap.buildDN user;pass); if[incache`blocked; if[null blocktime; / if null blocktime then user is blocked .ldap.out"authentication attempts for user ",dict[`bind_dn]," are blocked"; :0b]; $[.z.p<bt:incache[`time]+.ldap.blocktime; / block user if blocktime has not elapsed [.ldap.out"authentication attempts for user ",dict[`bind_dn]," are blocked until ",string bt; :0b]; update attempts:0, blocked:0b from `.ldap.cache where user=user]; ]; authorised:$[all ( / check if previously used details match incache[`success]; / previous attempt was a success incache[`time]>.z.p-.ldap.checktime; / previous attemp occured within checktime period incache[`pass]~np:md5 pass / same password was used ); enlist[`ReturnCode]!enlist 0i; .[.ldap.bind;(.ldap.sessionID;dict);enlist[`ReturnCode]!enlist -2i] / attempt authentication ]; `.ldap.cache upsert (user;np;`$.ldap.server;.ldap.port;.z.p; $[0=authorised[`ReturnCode];0;1+0^incache`attempts] ;authorised[`ReturnCode]~0i;0b); / upsert details of current attempt $[authorised[`ReturnCode]~0i; / display authentication status message .ldap.out"successfully authenticated user ",; .ldap.err"failed to authenticate user ",.ldap.err2string[authorised[`ReturnCode]],] dict[`dn]; if[.ldap.checklimit<=.ldap.cache[user]`attempts; / if attempt limit reached then block user .[`.ldap.cache;(user;`blocked);:;1b]; .ldap.out"limit reached, user ",dict[`dn]," has been locked out"]; :authorised[`ReturnCode]~0i; }; if[enabled; libfile:hsym ` sv lib,`so; / file containing ldap library if[()~key libfile; / check ldap library file exists .lg.e[`ldap;"cannot find library file: ",1_string libfile]]; initialise hsym .ldap.lib; / initialise ldap library .dotz.set[`.z.pw;{all(.ldap.login;x).\:(y;z)}@[value;.dotz.getcommand[`.z.pw];{{[x;y]1b}}]]; / redefine .z.pw ]; ================================================================================ FILE: TorQ_code_handlers_logusage.q SIZE: 6,818 characters ================================================================================ / log external (.z.p* & .z.exit) usage of a kdb+ session // based on logusage.q from code.kx // http://code.kx.com/wsvn/code/contrib/simon/dotz/ // Modifications : // usage table is stored in memory // Data is written to file as ASCII text // Added a new LEVEL - LEVEL 0 = nothing; 1=errors only; 2 = + open and queries; 3 = log queries before execution also \d .usage // table to store usage info usage:@[value;`usage;([]time:`timestamp$();id:`long$();timer:`long$();zcmd:`symbol$();proctype:`symbol$(); procname:`symbol$(); status:`char$();a:`int$();u:`symbol$();w:`int$();cmd:();mem:();sz:`long$();error:())] // Check if the process has been initialised correctly if[not @[value;`.proc.loaded;0b]; '"environment is not initialised correctly to load this script"] // Flags and variables enabled:@[value;`enabled;1b] // whether logging is enabled logtodisk:@[value;`logtodisk;1b] // whether to log to disk or not logtomemory:@[value;`logtomemory;1b] // write query logs to memory ignore:@[value;`ignore;1b] // check the ignore list for functions to ignore ignorelist:@[value;`ignorelist;(`upd;"upd")] // the list of 
functions to ignore flushinterval:@[value;`flushinterval;0D00:30:00] // default value for how often to flush the in-memory logs flushtime:@[value;`flushtime;0D03] // default value for how long to persist the in-memory logs suppressalias:@[value;`suppressalias;0b] // whether to suppress the log file alias creation logtimestamp:@[value;`logtimestamp;{[x] {[].proc.cd[]}}] // function to generate the log file timestamp suffix logroll:@[value;`logroll;1b] // whether to automatically roll the log file LEVEL:@[value;`LEVEL;3] // Log level id:@[value;`id;0j] nextid:{:id+::1} // A handle to the log file logh:@[value;`logh;0] // write a query log message, direct to stdout if running in finspace write:{ $[.finspace.enabled; @[neg 1;format x;()]; if[logtodisk;@[neg logh;format x;()]]]; if[logtomemory; `.usage.usage upsert x]; ext[x]} // extension function to extend the logging e.g. publish the log message ext:{[x]} // format the string to be written to the file format:$[`jsonlogs in key .proc.params; {.j.j (`p`id`time`zcmd`proctype`procname`type`ip`user`handle`txtc`meminfo`length`errorcheck`level)!x,`USAGE}; {"|" sv -3!'x} ]; // flush out some of the in-memory stats flushusage:{[flushtime] delete from `.usage.usage where time<.proc.cp[] - flushtime;} createlog:{[logdir;logname;timestamp;suppressalias] basename:"usage_",(string logname),"_",(string timestamp),".log"; // Close the current log handle if there is one if[logh; @[hclose;logh;()]]; // Open the file .lg.o[`usage;"creating usage log file ",lf:logdir,"/",basename]; logh::hopen hsym`$lf; // Create an alias if[not suppressalias; .proc.createalias[logdir;basename;"usage_",(string logname),".log"]]; } // read in a log file readlog:{[file] // Remove leading backtick from symbol columns, convert a and w columns back to integers update zcmd:`$1 _' string zcmd, procname:`$1 _' string procname, proctype:`$1 _' string proctype, u:`$1 _' string u, a:"I"$-1 _' a, w:"I"$-1 _' w from // Read in file @[{update "J"$'" " vs' mem from flip (cols .usage.usage)!("PJJSSSC*S***JS";"|")0:x};hsym`$file;{'"failed to read log file : ",x}]} // roll the logs // inmemorypersist = the number rolllog:{[logdir;logname;timestamp;suppressalias;persisttime] if[logtodisk; createlog[logdir;logname;timestamp;suppressalias]]; flushusage[persisttime]} rolllogauto:{rolllog[getenv`KDBLOG;.proc.procname;logtimestamp[];.usage.suppressalias;.usage.flushtime]} // Get the memory info - we don't want to log the physical memory each time meminfo:{5#system"w"} logDirect:{[id;zcmd;endp;result;arg;startp] / log complete action if[LEVEL>1;write(startp;id;`long$.001*endp-startp;zcmd;.proc.proctype;.proc.procname;"c";.z.a;.z.u;.z.w;.dotz.txtC[zcmd;arg];meminfo[];0Nj;"")];result} logBefore:{[id;zcmd;arg;startp] / log non-time info before execution if[LEVEL>2;write(startp;id;0Nj;zcmd;.proc.proctype;.proc.procname;"b";.z.a;.z.u;.z.w;.dotz.txtC[zcmd;arg];meminfo[];0Nj;"")];} logAfter:{[id;zcmd;endp;result;arg;startp] / fill in time info after execution if[LEVEL>1;write(endp;id;`long$.001*endp-startp;zcmd;.proc.proctype;.proc.procname;"c";.z.a;.z.u;.z.w;.dotz.txtC[zcmd;arg];meminfo[];-22!result;"")];result} logError:{[id;zcmd;endp;arg;startp;error] / fill in error info if[LEVEL>0;write(endp;id;`long$.001*endp-startp;zcmd;.proc.proctype;.proc.procname;"e";.z.a;.z.u;.z.w;.dotz.txtC[zcmd;arg];meminfo[];0Nj;error)];'error} p0:{[x;y;z;a]logDirect[nextid[];`pw;.proc.cp[];y[z;a];(z;"***");.proc.cp[]]} p1:{logDirect[nextid[];x;.proc.cp[];y z;z;.proc.cp[]]} 
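// p2 below logs the call before execution (when LEVEL>2), runs it under error trap
// via @[y;z;...] and then records either the completed call (logAfter) or the error
// (logError) against the same id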
p2:{id:nextid[];logBefore[id;x;z;.proc.cp[]];logAfter[id;x;.proc.cp[];@[y;z;logError[id;x;.proc.cp[];z;start;]];z;start:.proc.cp[]]} // Added to allow certain functions to be excluded from logging p3:{if[ignore; if[0h=type z;if[any first[z]~/:ignorelist; :y@z]]]; p2[x;y;z]} if[enabled; // Create a log file rolllogauto[]; // If the timer is enabled, and logrolling is set to true, try to log the roll file on a daily basis if[logroll; $[@[value;`.timer.enabled;0b]; [.lg.o[`init;"adding timer function to roll usage logs on a daily schedule starting at ",string `timestamp$(.proc.cd[]+1)+00:00]; .timer.rep[`timestamp$.proc.cd[]+00:00;0Wp;1D;(`.usage.rolllogauto;`);0h;"roll query logs";1b]]; .lg.e[`init;".usage.logroll is set to true, but timer functionality is not loaded - cannot roll usage logs"]]]; if[flushtime>0; $[@[value;`.timer.enabled;0b]; [.lg.o[`init;"adding timer function to flush in-memory usage logs with interval: ",string flushinterval]; .timer.repeat[.proc.cp[];0Wp;flushinterval;(`.usage.flushusage;flushtime);"flush in memory usage logs"]]; .lg.e[`init;".usage.flushtime is greater than 0, but timer functionality is not loaded - cannot flush in memory tables"]]];
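With logtomemory enabled, the in-memory .usage.usage table defined above can be queried directly for monitoring. A minimal sketch, not part of the original script; it relies only on the columns and the status codes ("b" before, "c" complete, "e" error) written by the logging functions above:

/ errors captured by logError
select time, u, zcmd, cmd, error from .usage.usage where status="e"

/ completed calls, slowest first; timer is in microseconds
/ (`long$.001 * timestamp difference, per logDirect/logAfter)
`timer xdesc select time, u, zcmd, timer, cmd from .usage.usage where status="c"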
max , maxs , mmax ¶ max ¶ Maximum max x max[x] Where x is a non-symbol sortable list, returns the maximum of its items. The maximum of an atom is itself. Nulls are ignored, except that if the items of x are all nulls, the result is negative infinity. q)max 2 5 7 1 3 7 q)max "genie" "n" q)max 0N 5 0N 1 3 / nulls are ignored 5 q)max 0N 0N / negative infinity if all null -0W q)select max price by sym from t / use in a select statement max is an aggregate function. It is equivalent to |/ . domain: b g x h i j e f c s p m d z n u v t range: b . x h i j e f c . p m d z n u v t max is a multithreaded primitive. maxs ¶ Maximums maxs x maxs[x] Where x is a non-symbol sortable list, returns the running maximums of its prefixes. Nulls are ignored, except that initial nulls are returned as negative infinity. q)maxs 2 5 7 1 3 2 5 7 7 7 q)maxs "genie" "ggnnn" q)maxs 0N 5 0N 1 3 / initial nulls return negative infinity -0W 5 5 5 5 maxs is a uniform function. It is equivalent to |\ . domain: b g x h i j e f c s p m d z n u v t range: b . x h i j e f c . p m d z n u v t mmax ¶ Moving maximums x mmax y mmax[x;y] Where x is a positive int atomy is a non-symbol sortable list returns the x -item moving maximums of y , with nulls after the first replaced by the preceding maximum. The first x items of the result are the maximums of the items so far, and thereafter the result is the moving maximum. q)3 mmax 2 7 1 3 5 2 8 2 7 7 7 5 5 8 q)3 mmax 0N -3 -2 0N 1 0 / initial null returns negative infinity -0W -3 -2 -2 1 1 / remaining nulls replaced by preceding max mmax is a uniform function. Domain and range: b g x h i j e f c s p m d z n u v t ---------------------------------------- b | b g x h i j e f c s p m d z n u v t g | . . . . . . . . . . . . . . . . . . x | b g x h i j e f c s p m d z n u v t h | b g x h i j e f c s p m d z n u v t i | b g x h i j e f c s p m d z n u v t j | b g x h i j e f c s p m d z n u v t e | . . . . . . . . . . . . . . . . . . f | . . . . . . . . . . . . . . . . . . c | . . . . . . . . . . . . . . . . . . s | . . . . . . . . . . . . . . . . . . p | . . . . . . . . . . . . . . . . . . m | . . . . . . . . . . . . . . . . . . d | . . . . . . . . . . . . . . . . . . z | . . . . . . . . . . . . . . . . . . n | . . . . . . . . . . . . . . . . . . u | . . . . . . . . . . . . . . . . . . v | . . . . . . . . . . . . . . . . . . t | . . . . . . . . . . . . . . . . . . Range: bcdefghijmnpstuvxz Implicit iteration¶ max , maxs , and mmax apply to dictionaries and tables. q)max`a`b!(10 21 3;4 5 6) 10 21 6 q)max flip`a`b!(10 21 3;4 5 6) a| 21 b| 6 q)maxs`a`b!(10 21 3;4 5 6) a| 10 21 3 b| 10 21 6 q)maxs flip`a`b!(10 21 3;4 5 6) a b ---- 10 4 21 5 21 6 q)2 mmax flip`a`b!(10 21 3;4 5 6) a b ---- 10 4 21 5 21 6 q)2 mmax`a`b!(10 21 3;4 5 6) a| 10 21 3 b| 10 21 6 q)2 mmax ([k:`abc`def`ghi]a:10 21 3;b:4 5 6) k | a b ---| ---- abc| 10 4 def| 21 5 ghi| 21 6 Aggregating nulls¶ avg , min , max and sum are special: they ignore nulls, in order to be similar to SQL92. But for nested x these functions preserve the nulls. q)max (1 2;0N 4) 1 4 md5 ¶ Message Digest hash md5 x md5[x] Where x is a string, returns as a bytestream its MD5 (Message-Digest algorithm 5) hash. q)md5 "this is a not so secret message" 0x6cf192c1938b79012c323fa30e62787e MD5 is a widely used, Internet standard (RFC 1321), hash function that computes a 128-bit hash, commonly used to check the integrity of files. It is not recommended for serious cryptographic protection, for which strong hashes should be used. 
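Since identical input always yields the same digest, md5 can be used to spot-check that text read back from disk is unchanged. A minimal sketch; the file name is hypothetical, and razing the read0 output drops line breaks, so only the characters are compared:

q)digest:md5 "this is a not so secret message"
q)digest ~ md5 raze read0 `:msg.txt    / hypothetical file; 1b only if its text matches exactly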
med ¶ Median med x med[x] Where x is a numeric list returns its median. q)med 10 34 23 123 5 56 28.5 q)select med price by sym from trade where date=2001.10.10,sym in`AAPL`LEH med is an aggregate function, equivalent to {avg x (iasc x)@floor .5*-1 0+count x,:()} Domain and range¶ domain: b g x h i j e f c s p m d z n u v t range: f . f f f f f f f . f f f f f f f f Implicit iteration¶ med applies to dictionaries and tables. q)k:`k xkey update k:`abc`def`ghi from t:flip d:`a`b!(10 -21 3;4 5 -6) q)med d 7 -8 -1.5 q)med t a| 3 b| -6 q)med k a| 3 b| -6 Partitions and segments¶ med signals a part error when running a median over partitions, or segments. (Since V3.5 2017.01.18; from V3.0 it signalled a rank error.) This is deliberate, as previously med was returning median of medians for such cases. This should now be explicitly coded as a cascading select. select med price by sym from select price, sym from trade where date within 2001.10.10 2001.10.11, sym in `AAPL`LEH meta ¶ Metadata for a table meta x meta[x] Where x is a - table in memory or memory mapped (by value or reference) - filesymbol for a splayed table returns a table keyed by column name, with columns: c column name t data type f foreign key (enums) a attribute q)\l trade.q q)show meta trade c | t f a -----| ----- time | t sym | s price| f size | i q)show meta `trade c | t f a -----| ----- time | t sym | s price| f size | i q)`sym xasc`trade; / sort by sym thereby setting the `s attribute q)show meta trade c | t f a -----| ----- time | t sym | s s price| f size | i The t column denotes the column type. A lower-case letter indicates atomic entry and an upper-case letter indicates a list. q)show u:([] code:`F1; vr:(enlist 2.3)) code vr -------- F1 2.3 q)meta u c | t f a ----| ----- code| s vr | f q)show v:([] code:`F2; vr:(enlist (5.4; 43.2))) code vr ------------- F2 5.4 43.2 q)meta v c | t f a ----| ----- code| s vr | F The result of meta does not tell you whether a table in memory can be splayed, only the first item in each column is examined A splayed table with a symbol column needs its corresponding sym list. KDB+ 4.0 2020.10.02 Copyright (C) 1993-2020 Kx Systems m64/ 12()core 65536MB sjt mackenzie.local 127.0.0.1 EXPIRE .. q)load `:db/sym / required for meta to describe db/tr `sym q)meta `:db/tr c | t f a -----| ----- date | d time | u vol | j inst | s price| f Loading (memory mapping) a database handles this. ❯ q db KDB+ 4.0 2020.10.02 Copyright (C) 1993-2020 Kx Systems m64/ 12()core 65536MB sjt mackenzie.local 127.0.0.1 EXPIRE 2021.05.27 [email protected] #59875 q)\v `s#`sym`tr q)meta tr c | t f a -----| ----- date | d time | u vol | j inst | s price| f min , mins , mmin ¶ Minimum/s min ¶ Minimum min x min[x] Where x is a non-symbol sortable list, returns its minimum. The minimum of an atom is itself. Nulls are ignored, except that if the argument has only nulls, the result is infinity. q)min 2 5 7 1 3 1 q)min "genie" "e" q)min 0N 5 0N 1 3 / nulls are ignored 1 q)min 0N 0N / infinity if all null 0W q)select min price by sym from t / use in a select statement min is an aggregate function, equivalent to &/ . min is a multithreaded primitive. mins ¶ Minimums mins x mins[x] Where x is a non-symbol sortable list, returns the running minimums of the prefixes. Nulls are ignored, except that initial nulls are returned as infinity. q)mins 2 5 7 1 3 2 2 2 1 1 q)mins "genie" "geeee" q)mins 0N 5 0N 1 3 / initial nulls return infinity 0W 5 5 1 1 mins is a uniform function, equivalent to &\ . 
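As with max and maxs, the running minimum composes naturally with update ... by for per-group lows. A minimal sketch on a made-up table, not from the reference page:

q)t2:([]sym:`A`A`A`B`B;price:10.2 10.1 10.4 99.5 99.1)
q)update low:mins price by sym from t2    / low column: 10.2 10.1 10.1 for A, 99.5 99.1 for B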
mmin ¶

Moving minimums

x mmin y    mmin[x;y]

Where y is a non-symbol sortable list and x is a

- positive int atom, returns the x-item moving minimums of y, with nulls treated as the minimum value; the first x items of the result are the minimums of the terms so far, and thereafter the result is the moving minimum
- 0 or a negative int, returns y

q)3 mmin 0N -3 -2 1 -0W 0
0N 0N 0N -3 -0W -0W
q)3 mmin 0N -3 -2 1 0N -0W / null is the minimum value
0N 0N 0N -3 0N 0N

mmin is a uniform function.

Domain and range¶

min and mins:

domain: b g x h i j e f c s p m d z n u v t
range:  b . x h i j e f c . p m d z n u v t

mmin:

   b g x h i j e f c s p m d z n u v t
----------------------------------------
b | b g x h i j e f c s p m d z n u v t
g | . . . . . . . . . . . . . . . . . .
x | b g x h i j e f c s p m d z n u v t
h | b g x h i j e f c s p m d z n u v t
i | b g x h i j e f c s p m d z n u v t
j | b g x h i j e f c s p m d z n u v t
e | . . . . . . . . . . . . . . . . . .
f | . . . . . . . . . . . . . . . . . .
c | . . . . . . . . . . . . . . . . . .
s | . . . . . . . . . . . . . . . . . .
p | . . . . . . . . . . . . . . . . . .
m | . . . . . . . . . . . . . . . . . .
d | . . . . . . . . . . . . . . . . . .
z | . . . . . . . . . . . . . . . . . .
n | . . . . . . . . . . . . . . . . . .
u | . . . . . . . . . . . . . . . . . .
v | . . . . . . . . . . . . . . . . . .
t | . . . . . . . . . . . . . . . . . .

Range: bcdefghijmnpstuvxz

Implicit iteration¶

min, mins, and mmin apply to dictionaries and tables.

q)k:`k xkey update k:`abc`def`ghi from t:flip d:`a`b!(10 21 3;4 5 6)
q)min d
4 5 3
q)min t
a| 3
b| 4
q)min k
a| 3
b| 4
q)mins t
a  b
----
10 4
10 4
3  4
q)2 mmin k
k  | a  b
---| ----
abc| 10 4
def| 10 4
ghi| 3  5

Aggregating nulls¶

avg, min, max and sum are special: they ignore nulls, in order to be similar to SQL92. But for nested x these functions preserve the nulls.

q)min (1 2;0N 4)
0N 2

$ Matrix Multiply, mmu ¶

Matrix multiply, dot product

x mmu y    mmu[x;y]
x$y        $[x;y]

Where x and y are both float vectors or matrixes, returns their matrix- or dot-product.

count y must match

- count x where x is a vector
- count first x where x is a matrix

q)a:2 4#2 4 8 3 5 6 0 7f
q)b:4 3#"f"$til 12
q)a mmu b
87 104 121
81 99  117
q)c:3 3#2 4 8 3 5 6 0 7 1f
q)1=c mmu inv c
100b
010b
001b
q)(1 2 3f;4 5 6f)$(7 8f;9 10f;11 12f)
58  64
139 154
q)1 2 3f$4 5 6f /dot product of two vectors
32f

Working in parallel¶

Use secondary threads via peach.

q)mmu[;b]peach a
87 104 121
81 99  117
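A common pattern built on mmu is forming cross-products of a data matrix, for example as the first step of a least-squares fit via the normal equations. A minimal sketch; the design matrix X and observations y below are made-up data, not from the reference page:

q)X:flip (1 1 1 1f; 1 2 3 4f)   / 4x2 design matrix: a column of 1s and one regressor
q)y:3 5 7 9f                    / observations, constructed so that y = 1 + 2*x exactly
q)XtX:(flip X) mmu X            / 2x2 cross-product matrix
q)Xty:(flip X) mmu y            / matrix-vector product gives a 2-item vector
q)inv[XtX] mmu Xty              / ~ 1 2f: intercept and slope recovered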
List programs¶ From GeeksforGeeks Python Programming Examples Follow links to the originals for more details on the problem and Python solutions. Interchange first and last elements in a list¶ >>> lis = [12, 35, 9, 56, 24] >>> lis[0], lis[-1] = lis[-1], lis[0] >>> lis [24, 35, 9, 56, 12] -1 is not an index in q, so we need the end index: count[x]-1 . This gives equivalent q expressions. q)lis:12 35 9 56 24 q)lis[fl]:lis reverse fl:0,count[lis]-1 q)lis 24 35 9 56 12 Swap two items in a list¶ def swapPositions(list, pos1, pos2): list[pos1], list[pos2] = list[pos2], list[pos1] return list >>> swapPositions([23, 65, 19, 90], 0, 2) [19, 65, 23, 90] swapPositions:{@[x;y,z;:;x z,y]} q)swapPositions[23 65 19 90;0;2] 19 65 23 90 Functional Amend @ lets us specify the list, the indexes to amend, the function to apply (in this case Assign : ) and the replacement values. Functional Amend can modify a persisted list at selected indexes without reading the entire list into memory – very efficient for long lists. Functional Amend is used here to apply Assign (: ) at selected indexes of the list. The effect is simply to replace selected items. Other operators – or functions – can be used instead of Assign. Remove Nth occurrence of the given word¶ def RemoveIthWord(lst, word, N): newList = [] count = 0 for i in lst: if(i == word): count = count + 1 if(count != N): newList.append(i) else: newList.append(i) return newList >>> RemoveIthWord(["geeks", "for", "geeks"], "geeks", 2) ['geeks', 'for'] >>> RemoveIthWord(["can", "you", "can", "a", "can", "?"], "can", 1) ['you', 'can', 'a', 'can', '?'] RemoveIthWord:{[lst;wrd;i] lst (til count lst) except (sums lst~\:wrd)?i} q)RemoveIthWord[("geeks";"for";"geeks");"geeks";2] "geeks" "for" q)RemoveIthWord[("can";"you";"can";"a";"can";"?");"can";1] "you" "can" "a" "can" "?" In q, til count x returns all the indexes of list x . (So x til count x is always x .) lst~\:wrd flags the items of lst that match wrd . We just need to find where the i th flag occurs and omit it from the indexes. If item exists in a list¶ >>> 4 in [ 1, 6, 3, 5, 3, 4 ] True q)4 in 1 6 3 5 3 4 1b Similarly whether a list is an item in a list of lists. >>> [1, 1, 1, 2] in [[1, 1, 1, 2], [2, 3, 4], [1, 2, 3], [4, 5, 6]] True q)1 1 1 2 in (1 1 1 2; 2 3 4; 1 2 3; 4 5 6) 1b Clear a list¶ >>> lst = [1, 2, 3] >>> del lst[:] >>> lst [] q)lst:1 2 3 / initialize list q)lst:0#lst / take 0 items q)lst q) Clearing a list means removing all its items while retaining its datatype. 0# is perfect for this. Reverse a list¶ >>> [ele for ele in reversed([10, 11, 12, 13, 14, 15])] [15, 14, 13, 12, 11, 10] q)reverse 10 11 12 13 14 15 15 14 13 12 11 10 Count occurrences of an item in a list¶ >>> [8, 6, 8, 10, 8, 20, 10, 8, 8].count(8) 5 q)sum 8 6 8 10 8 20 10 8 8 = 8 5i Just as you were taught in school, = tests equality. Like other q operators, iteration is implicit, so below 8 6 8 10 8 20 10 8 8 = 8 returns a list of flags: 101010011b . Which we sum. 
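Returning to the note under Swap two items in a list that operators or functions other than Assign can be supplied to Amend At: a minimal sketch, with a made-up list and indexes for illustration.

q)@[23 65 19 90; 0 2; +; 100]    / add 100 at indexes 0 and 2
123 65 119 90
q)@[23 65 19 90; 0 2; neg]       / or apply a unary function at those indexes
-23 65 -19 90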
Second-largest number in a list¶ >>> sorted([10, 20, 4, 45, 99])[-2] 45 q)(desc 10 20 4 45 99)1 45 N largest items from a list¶ >>> sorted([4, 5, 1, 2, 9], reverse = True)[0:2] [9, 5] >>> sorted([81, 52, 45, 10, 3, 2, 96] , reverse = True)[0:3] [96, 81, 52] q)2#desc 4 5 1 2 9 9 5 q)3#desc 81 52 45 10 3 2 96 96 81 52 Even numbers from a list¶ >>> [num for num in [2, 7, 5, 64, 14] if num % 2 == 0] [2, 64, 14] q){x where 0=x mod 2} 2 7 5 64 14 2 64 14 Odd numbers in a range¶ >>> [num for num in range(4,15) if num % 2] [5, 7, 9, 11, 13] q)range:{x+til y-x-1} q){x where x mod 2} range[4;15] 5 7 9 11 13 15 Count even and odd numbers in a list¶ >>> lst = [10, 21, 4, 45, 66, 93, 11] >>> odd = sum([num % 2 for num in lst]) >>> [len(lst)-odd, odd] [3, 4] q)lst: 10 21 4 45 66 93 11 q)odd:sum lst mod 2 q)(count[lst]-odd),odd 3 4 Positive items of a list¶ >>> [num for num in [12, -7, 5, 64, -14] if num>0] [12, 5, 64] q){x where x>0} 12 -7 5 64 -14 12 5 64 Remove multiple items from a list¶ The examples given in the linked page show two problems. The first is to remove from one list all items that are also items of another. >>> [item for item in [12, 15, 3, 10] if not item in [12, 3]] [15, 10] q)12 15 3 10 except 12 3 15 10 The second is to remove items from a range of indexes. def removeRange(lst, bgn, end): del lst[bgn:end] return lst >>> removeRange([11, 5, 17, 18, 23, 50], 1, 5) [11, 50] range:{x+til y-x-1} removeRange:{x(til count x)except range[y;z-1]} til count x gives all the indexes of x. (So x til count x is always x .) q)removeRange[11 5 17 18 23 50;1;5] 11 50 Remove empty tuples from a list¶ >>> tuples = [(), ('ram','15','8'), (), ('laxman', 'sita'), ('krishna', 'akbar', '45'), ('', ''), ()] >>> [t for t in tuples if t] [('ram', '15', '8'), ('laxman', 'sita'), ('krishna', 'akbar', '45'), ('', '')] q)tuples:(();("ram";"15";"8");();("laxman";"sita");("krishna";"akbar";"45");("";"");()) q)tuples where 0<count each tuples ("ram";"15";"8") ("laxman";"sita") ("krishna";"akbar";"45") ("";"") Duplicates from a list of integers¶ >>> lst = [10, 20, 30, 20, 20, 30, 40, 50, -20, 60, 60, -20, -20] >>> frq = [lst.count(itm) for itm in lst] >>> itms = dict(list(zip(lst,frq))).items() >>> [itm[0] for itm in itms if itm[1]>1] [20, 30, -20, 60] q)lst: 10 20 30 20 20 30 40 50 -20 60 60 -20 -20 q)where 1<count each group lst 20 30 -20 60 The q solution follows the Python: group returns a dictionary. Its keys are the unique values of the list, its values the indexes where they appear. q)group 10 20 30 20 20 30 40 50 -20 60 60 -20 -20 10 | ,0 20 | 1 3 4 30 | 2 5 40 | ,6 50 | ,7 -20| 8 11 12 60 | 9 10 count each replaces the values with their lengths; then 1< with flags. q)1<count each group 10 20 30 20 20 30 40 50 -20 60 60 -20 -20 10 | 0 20 | 1 30 | 1 40 | 0 50 | 0 -20| 1 60 | 1 Finally, where , applied to a dictionary of flags, returns the flagged indexes (keys). Cumulative sum of a list¶ >>> import numpy as np >>> np.cumsum([10, 20, 30, 40, 50]) array([ 10, 30, 60, 100, 150]) q)sums 10 20 30 40 50 10 30 60 100 150 Break a list into chunks of size N¶ >>> lst = ['geeks','for','geeks','like','geeky','nerdy','geek','love','questions','words','life'] >>> n = 4 >>> [lst[i * n:(i + 1) * n] for i in range((len(lst) + n - 1) // n )] [['geeks', 'for', 'geeks', 'like'], ['geeky', 'nerdy', 'geek', 'love'], ['questions', 'words', 'life']] Q has a keyword for this. 
q)lst:("geeks";"for";"geeks";"like";"geeky";"nerdy";"geek") q)lst,:("love";"questions";"words";"life") q)4 cut lst ("geeks";"for";"geeks";"like") ("geeky";"nerdy";"geek";"love") ("questions";"words";"life") Sort values of one list by values of another¶ >>> list1 = ["a", "b", "c", "d", "e", "f", "g", "h", "i"] >>> list2 = [ 0, 1, 1, 0, 1, 2, 2, 0, 1] >>> [x for _, x in sorted(zip(list2,list1))] ['a', 'd', 'h', 'b', 'c', 'e', 'i', 'f', 'g'] q)l1:"abcdefghi" q)l2:0 1 1 0 1 2 2 0 1 q)l1 iasc l2 "adhbceifg" Keyword iasc grades a list, returning the indexes that would put it in ascending order. But the list lengths must match: q)l3:"geeksforgeeks" q)l4:0 1 10 1 2 2 0 1 q)(count[l4]#l3)iasc l4 "gkreesgfo" Remove empty list from list¶ >>> lst = [5, 6, [], 3, [], [], 9] >>> [itm for itm in lst if itm != []] [5, 6, 3, 9] q)lst: (5; 6; (); 3; (); (); 9) q)lst where not lst~\:() 5 6 3 9 Incremental range initialization in matrix¶ >>> r, c, rang = [4, 3, 5] >>> [[rang * c * y + rang * x for x in range(c)] for y in range(r)] [[0, 5, 10], [15, 20, 25], [30, 35, 40], [45, 50, 55]] q)rc:4 3; rang:5 q)rang*rc#til prd rc 0 5 10 15 20 25 30 35 40 45 50 55 The product of 4 3 is 12. til gives us the first 12 integers and 4 3# arranges them as a 4×3 matrix. It remains only to multiply by 5. Occurrence counter in list of records¶ >>> from collections import Counter >>> lst = [('Gfg',1),('Gfg',2),('Gfg',3),('Gfg',1),('Gfg',2),('is',1),('is',2)] >>> res = {} >>> for key,val in lst: ... res[key] = [val] if key not in res else res[key] + [val] ... >>> {key: dict(Counter(val)) for key, val in res.items()} {'Gfg': {1: 2, 2: 2, 3: 1}, 'is': {1: 1, 2: 1}} q)lst:((`Gfg;1); (`Gfg;2); (`Gfg;3); (`Gfg;1); (`Gfg;2); (`is;1); (`is;2)) q){key[g]!(count'')group each y value g:group x}. flip lst Gfg| 1 2 3!2 2 1 is | 1 2!1 1 Flipping the list produces two lists: symbols and integers. q)flip lst Gfg Gfg Gfg Gfg Gfg is is 1 2 3 1 2 1 2 Passed by Apply (. ), they appear in the lambda as x and y respectively. Grouping the symbols returns a dictionary. Its values are lists of indexes into lst . q){[x;y]group x}. flip lst Gfg| 0 1 2 3 4 is | 5 6 Applying y (the list of integers) to these indexes gives us two lists of integers. q){y value group x}. flip lst 1 2 3 1 2 1 2 We use (count'')group each to get a frequency-count dictionary for each list. q){(count'')group each y value group x}. flip lst 1 2 3!2 2 1 1 2!1 1 It remains only to compose them as the values in a dictionary with the symbols as keys. Group similar value list to dictionary¶ >>> l1 = [4, 4, 4, 5, 5, 6, 6, 6, 6] >>> l2 = ['G', 'f', 'g', 'i', 's', 'b', 'e', 's', 't'] >>> {key : [l2[idx] ... for idx in range(len(l2)) if l1[idx]== key] ... for key in set(l1)} {4: ['G', 'f', 'g'], 5: ['i', 's'], 6: ['b', 'e', 's', 't']} q)l1:4 4 4 5 5 6 6 6 6 q)l2:"Gfgisbest" q)l2 group l1 4| "Gfg" 5| "is" 6| "best" See Duplicates from a list of integers for how group works. Reverse sort matrix row by Kth column¶ >>> lst = [['Manjeet', 65], ['Akshat', 42], ['Akash', 38], ['Nikhil', 192]] >>> sorted(lst, key = lambda ele: ele[1], reverse = True) [['Nikhil', 192], ['Manjeet', 65], ['Akshat', 42], ['Akash', 38]] q)lst:((`Manjeet;65); (`Akshat;42); (`Akash;38); (`Nikhil;192)) q)lst idesc lst[;1] `Nikhil 192 `Manjeet 65 `Akshat 42 `Akash 38 lst is a list of tuples, so lst[;1] is a list of the second element of each tuple. q)lst[;1] 65 42 38 192 idesc grades a list: returns the indexes that would put it into sorted order. 
q)idesc lst[;1] 3 0 1 2 Remove record if Nth column is K¶ >>> lst = [(5, 7), (6, 7, 8), (7, 8, 10), (7, 1)] >>> [itm for itm in lst if itm[1] != 7] [(7, 8, 10), (7, 1)] q)lst:((5 7); (6 7 8); (7 8 10); (7 1)) q)lst where lst[;1]<>7 7 8 10 7 1 Pairs with sum equal to K in tuple list¶ >>> prs = [(4, 5), (6, 7), (3, 6), (1, 2), (1, 8)] >>> [pr for pr in prs if 9 == sum(pr)] [(4, 5), (3, 6), (1, 8)] q)prs:((4 5); (6 7); (3 6); (1 2); (1 8)) q)prs where 9 = sum each prs 4 5 3 6 1 8 Merge consecutive empty strings¶ >>> lst = ['Gfg', '', '', '', 'is', '', '', 'best', ''] >>> [lst[i] for i in range(0, len(lst)-1) if (i==0)or(len(lst[i])>0)or(len(lst[i-1])>0)] ['Gfg', '', 'is', '', 'best'] q)lst:("Gfg"; ""; ""; ""; "is"; ""; ""; "best") q)lst where not(and)prior(count each lst)=0 "Gfg" "" "is" "" "best" (count each lst)=0 flags the empty strings. (and)prior flags empty strings preceded by another empty string. A more efficient Python solution would also count the string lengths once only. >>> r = range(0, len(lst)-1) >>> flags = [len(lst[i])>0 for i in r] >>> [lst[i] for i in r if (i==0) or flags[i] or flags [i-1]] ['Gfg', '', 'is', '', 'best'] Flattening a list¶ from collections.abc import Iterable def flatten(param): for item in param: if isinstance(item, Iterable): yield from flatten(item) else: yield item >>> lst = [1, 2, 4, [5432, 34, 232, 345], [123, [543, 45]], 56] >>> list(flatten(lst)) [1, 2, 4, 5432, 34, 232, 345, 123, 543, 45, 56] In q, we can make use of converge q)lst:(1; 2; 4; (5432; 34; 232; 345); (123; (543; 45)); 56) q)raze over lst 1 2 4 5432 34 232 345 123 543 45 56 Numeric sort in mixed-pair string list¶ >>> lst = ["Manjeet 5", "Akshat 7", "Akash 6", "Nikhil 10"] >>> sorted(lst, reverse = True, key = lambda ele: int(ele.split()[1])) ['Nikhil 10', 'Akshat 7', 'Akash 6', 'Manjeet 5'] q)lst:("Manjeet 5";"Akshat 7"; "Akash 6"; "Nikhil 10") q)lst idesc first(" I";" ")0: lst "Nikhil 10" "Akshat 7" "Akash 6" "Manjeet 5" This form of the File Text operator 0: interprets delimited character strings, most commonly from CSVs. First even number in list¶ def firstEven(lst): for ele in lst: if not ele % 2: return ele >>> lst = [43, 9, 6, 72, 8, 11] >>> firstEven(lst) 6 q)lst:43 9 6 72 8 11 q)first lst where 0 = lst mod 2 6 The naïve q solution computes the modulo of each item in the list. This may be all right for a short list, but a long list wants an algorithm that stops at an even number. Start with a lambda: {lst[y],1+y} . The reference to y tells us it is a binary function, with default argument names x and y . There is no reference to x so we know its result depends only on its second argument, y . In fact, it returns another pair: the value of lst at y , and the next index, y+1 . Projecting Apply onto the lambda gives us a unary function that takes a pair as its argument. q) .[{lst[y],1+y};] 1 0 43 1 Using the Do form of the Scan iterator to apply it twice q)2 .[{lst[y],1+y};]\1 0 1 0 43 1 9 2 we see the initial state (1 0 ) followed by the first two items of lst paired with their (origin-1) indexes. The Do form of the iterator uses an integer to specify the number of iterations. In the While form of the iterator, we replace the integer with a test function. Iteration continues until the test function returns zero. q){first[x]mod 2} .[{lst[y],1+y};]\1 0 1 0 43 1 9 2 6 3 The Over iterator performs the same computation as Scan, but returns only the last pair. From which we select the first item. 
q)first{first[x]mod 2} .[{lst[y],1+y};]/1 0 6 Storing elements greater than K as dictionary¶ >>> lst = [12, 44, 56, 34, 67, 98, 34] >>> {idx: ele for idx, ele in enumerate(lst) if ele > 50} {2: 56, 4: 67, 5: 98} q)lst: 12 44 56 34 67 98 34 q){i!x i:where x>50} lst 2| 56 4| 67 5| 98 Remove duplicate words from strings in list¶ lst = ['gfg, best, gfg', 'I, am, I', 'two, two, three' ] >>> [set(strs.split(", ")) for strs in lst] [{'best', 'gfg'}, {'I', 'am'}, {'three', 'two'}] q)lst:("gfg, best, gfg"; "I, am, I"; "two, two, three") q){distinct", "vs x}each lst "gfg" "best" ,"I" "am" "two" "three" ", "vs splits a string by the delimiter ", " ; distinct returns the unique items of a list; each applies the lambda to each string. Difference of list keeping duplicates¶ >>> L1 = [4, 5, 7, 4, 3] >>> L2 = [7, 3, 4] >>> [L1.pop(L1.index(idx)) for idx in L2] >>> L1 [5, 4] q)L1: 4 5 7 4 3 q)L2: 7 3 4 q)L1 (til count L1) except L1?L2 5 4 til count L1 returns all the indexes of L1 . (So L1 til count L1 would match L1 .) L1?L2 finds the first occurrences of the items of L2 in L1 , which get removed from the list of indexes of L1. Pairs with multiple similar values in dictionary¶ >>> lst = [{'Gfg' : 1, 'is' : 2}, {'Gfg' : 2, 'is' : 2}, {'Gfg' : 1, 'is' : 2}] >>> [sub for sub in lst if len([ele for ele in lst if ele['Gfg'] == sub['Gfg']]) > 1] [{'Gfg': 1, 'is': 2}, {'Gfg': 1, 'is': 2}] q)lst: ((`Gfg`is!1 2); (`Gfg`is!2 2); (`Gfg`is!1 2)) q)lst where lst in lst where (lst?lst)<>til count lst Gfg is ------ 1 2 1 2 (x?x)<>til count x flags the duplicate items of x . The complete expression returns them. In q, a list of dictionaries with the same keys is a table. Identify election winner¶ >>> votes = ['john','johnny','jackie','johnny','john','jackie','jamie','jamie', ... 'john','johnny','jamie','johnny','john'] >>> from collections import Counter >>> c = Counter(votes) >>> m = max(c.values()) >>> winners = [n for n, v in c.items() if v == m] >>> sorted([[n, len(n)] for n in winners], reverse=True)[0][0] 'johnny' q)votes:`john`johnny`jackie`johnny`john`jackie`jamie`jamie`john`johnny`jamie`johnny`john q)ce:count each q)first {x idesc ce x} where {x=max x} ce group string votes "johnny" The q solution follows the same strategy as the Python. We group the votes by candidate and count them. q)ce group string votes "john" | 4 "johnny"| 4 "jackie"| 2 "jamie" | 3 Then where {x=max x} selects the keys with the maximum value. q)where {x=max x} ce group string votes "john" "johnny" It remains only to sort the winners’ names in descending order of length and select the first. Group anagrams¶ >>> s = 'cat dog tac god act' >>> w = s.split(' ') >>> a = [''.join(sorted(wrd)) for wrd in w] >>> ' '.join([y for x,y in sorted(zip(a, w))]) 'act cat tac dog god' q)s:"cat dog tac god act" q)" " sv {x iasc asc each x} " " vs s "cat tac act dog god" In Python, [y for x,y in sorted(zip(a, w))] sorts the words by their alphabetized versions. In q, {x iasc asc each x} does it. In both solutions the rest of the code splits and reforms the input and output strings. The Python sort returns the anagrams in alpha order in each group. 
To achieve the same in q, suffix each alphabetized word with the original: q)" " sv {x iasc {asc[x],x} each x}" " vs s "act cat tac dog god" Size of largest subset of anagrams¶ >>> words = ["ant", "magenta", "magnate", "tan", "gnamate"] >>> from collections import Counter >>> max(Counter([''.join(sorted(w)) for w in words]).values()) 3 q)words:string `ant`magenta`magnate`tan`gnamate q)max count each group asc each words 3 The q solution implements the Python method. Sort each string: anagrams match. q)asc each words `s#"ant" `s#"aaegmnt" `s#"aaegmnt" `s#"ant" `s#"aaegmnt" Group for a frequency count and find the maximum value.
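One possible extension in the same spirit, not part of the original list of problems, returns the members of the largest anagram group rather than just its size:

q){x raze g where max[a]=a:count each g:group asc each x} words
"magenta"
"magnate"
"gnamate"

Here g maps each alphabetized spelling to the indexes of its anagrams, a counts them, and the index lists for the most frequent spelling are razed and used to index back into words.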
// @private // @kind function // @category hyperparameterUtility // @desc Uniform number generator // @param randomType {symbol} Type of random search, denoting the namespace // to use // @param low {long} Lower bound // @param high {long} Higher bound // @param paramType {char} Type of parameter, e.g. "i", "f", etc // @param params {number[]} Parameters // @return {number[]} Uniform numbers hp.i.uniform:{[randomType;low;high;paramType;params] if[high<low;'"upper bound must be greater than lower bound"]; hp.i[randomType][`uniform][low;high;paramType;params] } // @private // @kind function // @category hyperparameterUtility // @desc Generate list of log uniform numbers // @param randomType {symbol} Type of random search, denoting the namespace // to use // @param low {number} Lower bound as power of 10 // @param high {number} Higher bound as power of 10 // @param paramType {char} Type of parameter, e.g. "i", "f", etc // @param params {number[]} Parameters // @return {number[]} Log uniform numbers hp.i.logUniform:xexp[10]hp.i.uniform:: // @private // @kind function // @category hyperparameterUtility // @desc Random uniform generator // @param low {number} Lower bound as power of 10 // @param high {number} Higher bound as power of 10 // @param paramType {char} Type of parameter, e.g. "i", "f", etc // @param n {long} Number of hyperparameter sets // @return {number[]} Random uniform numbers hp.i.random.uniform:{[low;high;paramType;n] low+n?paramType$high-low } // @private // @kind function // @category hyperparameterUtility // @desc Sobol uniform generator // @param low {number} Lower bound as power of 10 // @param high {number} Higher bound as power of 10 // @param paramType {char} Type of parameter, e.g. "i", "f", etc // @param sequence {float[]} Sobol sequence // @return {number[]} Uniform numbers from sobol sequence hp.i.sobol.uniform:{[low;high;paramType;sequence] paramType$low+(high-low)*sequence } ================================================================================ FILE: ml_ml_xval_xval.q SIZE: 20,884 characters ================================================================================ // xval/xval.q - Cross validation // Copyright (c) 2021 Kx Systems Inc // // Cross validation, grid/random/Sobol-random hyperparameter search and multi- // processing procedures \d .ml // @kind function // @category xv // @desc Cross validation for ascending indices split into k-folds // @param k {int} Number of folds // @param n {int} Number of repetitions // @param features {any[][]} Matrix of features // @param target {any[]} Vector of targets // @param function {fn} Function which takes data as input // @return {any} Output of function applied to each of the k-folds xv.kfSplit:xv.i.applyIdx xv.i.idxR . xv.i`splitIdx`groupIdx // @kind function // @category xv // @desc Cross validation for randomized non-repeating indices split // into k-folds // @param k {int} Number of folds // @param n {int} Number of repetitions // @param features {any[][]} Matrix of features // @param target {any[]} Vector of targets // @param function {fn} Function which takes data as input // @return {any} Output of function applied to each of the k-folds xv.kfShuff:xv.i.applyIdx xv.i.idxN . 
xv.i`shuffIdx`groupIdx // @kind function // @category xv // @desc Stratified k-fold cross validation with an approximately equal // distribution of classes per fold // @param k {int} Number of folds // @param n {int} Number of repetitions // @param features {any[][]} Matrix of features // @param target {any[]} Vector of targets // @param function {fn} Function which takes data as input // @return {any} Output of function applied to each of the k-folds xv.kfStrat:xv.i.applyIdx xv.i.idxN . xv.i`stratIdx`groupIdx // @kind function // @category xv // @desc Roll-forward cross validation procedure // @param k {int} Number of folds // @param n {int} Number of repetitions // @param features {any[][]} Matrix of features // @param target {any[]} Vector of targets // @param function {fn} Function which takes data as input // @return {any} Output of function applied to each of the chained // iterations xv.tsRolls:xv.i.applyIdx xv.i.idxR . xv.i`splitIdx`tsRollsIdx // @kind function // @category xv // @desc Chain-forward cross validation procedure // @param k {int} Number of folds // @param n {int} Number of repetitions // @param features {any[][]} Matrix of features // @param target {any[]} Vector of targets // @param function {fn} Function which takes data as input // @return {any} Output of function applied to each of the chained // iterations xv.tsChain:xv.i.applyIdx xv.i.idxR . xv.i`splitIdx`tsChainIdx // @kind function // @category xv // @desc Percentage split cross validation procedure // @param pc {float} (0-1) representing the percentage of validation data // @param n {int} Number of repetitions // @param features {any[][]} Matrix of features // @param target {any[]} Vector of targets // @param function {fn} Function which takes data as input // @return {any} Output of function applied to each of the k-folds xv.pcSplit:xv.i.applyIdx{[pc;n;features;target] split:{[pc;x;y;z](x;y)@\:/:(0,floor n*1-pc)_til n:count y}; n#split[pc;features;target] } // @kind function // @category xv // @desc Monte-Carlo cross validation using randomized non-repeating // indices // @param pc {float} (0-1) representing the percentage of validation data // @param n {int} Number of repetitions // @param features {any[][]} Matrix of features // @param target {any[]} Vector of targets // @param function {fn} Function which takes data as input // @return {any} Output of function applied to each of the k-folds xv.mcSplit:xv.i.applyIdx{[pc;n;features;target] split:{[pc;x;y;z](x;y)@\:/:(0,floor count[y]*1-pc)_{neg[n]?n:count x}y}; n#split[pc;features;target] } // @kind function // @category xv // @desc Default scoring function used in conjunction with .ml.xv/gs/rs // methods // @param function {fn} Takes empty list, parameters and data as input // @param p {dictionary} Hyperparameters // @param data {any[][]} ((xtrain;xtest);(ytrain;ytest)) format // @return {float[]} Scores outputted by function applied to p and data xv.fitScore:{[function;p;data] fitFunc:function[][p]`:fit; scoreFunc:.[fitFunc;numpyArray each data 0]`:score; .[scoreFunc;numpyArray each data 1]` } // Hyperparameter search procedures // @kind function // @category gs // @desc Cross validated parameter grid search applied to data with // ascending split indices // @param k {int} Number of folds // @param n {int} Number of repetitions // @param features {any[][]} Matrix of features // @param target {any[]} Vector of targets // @param function {fn} Function that takes parameters and data as input // and returns a score // @param p {dictionary} Dictionary of 
hyperparameters // @param tstTyp {float} Size of the holdout set used in a fitted grid // search, where the best model is fit to the holdout set. If 0 the function // will return scores for each fold for the given hyperparameters. If // negative the data will be shuffled prior to designation of the holdout set // @return {table|list} Scores for hyperparameter sets on // each of the k folds for all values of h and additionally returns the best // hyperparameters and score on the holdout set for 0 < h <=1. gs.kfSplit:hp.i.search hp.i.xvScore[hp.i.gsGen;xv.kfSplit] // @kind function // @category gs // @desc Cross validated parameter grid search applied to data with // shuffled split indices // @param k {int} Number of folds // @param n {int} Number of repetitions // @param features {any[][]} Matrix of features // @param target {any[]} Vector of targets // @param function {fn} Function that takes parameters and data as input // and returns a score // @param p {dictionary} Dictionary of hyperparameters // @param tstTyp {float} Size of the holdout set used in a fitted grid // search, where the best model is fit to the holdout set. If 0 the function // will return scores for each fold for the given hyperparameters. If // negative the data will be shuffled prior to designation of the holdout set // @return {table|list} Scores for hyperparameter sets on each of // the k folds for all values of h and additionally returns the best // hyperparameters and score on the holdout set for 0 < h <=1. gs.kfShuff:hp.i.search hp.i.xvScore[hp.i.gsGen;xv.kfShuff] // @kind function // @category gs // @desc Cross validated parameter grid search applied to data with an // equi-distributions of targets per fold // @param k {int} Number of folds // @param n {int} Number of repetitions // @param features {any[][]} Matrix of features // @param target {any[]} Vector of targets // @param function {fn} Function that takes parameters and data as input // and returns a score // @param p {dictionary} Dictionary of hyperparameters // @param tstTyp {float} Size of the holdout set used in a fitted grid // search, where the best model is fit to the holdout set. If 0 the function // will return scores for each fold for the given hyperparameters. If // negative the data will be shuffled prior to designation of the holdout set // @return {table|list} Scores for hyperparameter sets on each of // the k folds for all values of h and additionally returns the best // hyperparameters and score on the holdout set for 0 < h <=1. gs.kfStrat:hp.i.search hp.i.xvScore[hp.i.gsGen;xv.kfStrat] // @kind function // @category gs // @desc Cross validated parameter grid search applied to roll forward // time-series sets // @param k {int} Number of folds // @param n {int} Number of repetitions // @param features {any[][]} Matrix of features // @param target {any[]} Vector of targets // @param function {fn} Function that takes parameters and data as input // and returns a score // @param p {dictionary} Dictionary of hyperparameters // @param tstTyp {float} Size of the holdout set used in a fitted grid // search, where the best model is fit to the holdout set. If 0 the function // will return scores for each fold for the given hyperparameters. If // negative the data will be shuffled prior to designation of the holdout set // @return {table|list} Scores for hyperparameter sets on each of // the k folds for all values of h and additionally returns the best // hyperparameters and score on the holdout set for 0 < h <=1. 
gs.tsRolls:hp.i.search hp.i.xvScore[hp.i.gsGen;xv.tsRolls] // @kind function // @category gs // @desc Cross validated parameter grid search applied to chain forward // time-series sets // @param k {int} Number of folds // @param n {int} Number of repetitions // @param features {any[][]} Matrix of features // @param target {any[]} Vector of targets // @param function {fn} Function that takes parameters and data as input // and returns a score // @param p {dictionary} Dictionary of hyperparameters // @param tstTyp {float} Size of the holdout set used in a fitted grid // search, where the best model is fit to the holdout set. If 0 the function // will return scores for each fold for the given hyperparameters. If // negative the data will be shuffled prior to designation of the holdout set // @return {table|list} Scores for hyperparameter sets on each of // the k folds for all values of h and additionally returns the best // hyperparameters and score on the holdout set for 0 < h <=1. gs.tsChain:hp.i.search hp.i.xvScore[hp.i.gsGen;xv.tsChain] // @kind function // @category gs // @desc Cross validated parameter grid search applied to percentage // split dataset // @param pc {float} (0-1) representing percentage of validation data // @param n {int} Number of repetitions // @param features {any[][]} Matrix of features // @param target {any[]} Vector of targets // @param function {fn} Function that takes parameters and data as input // and returns a score // @param p {dictionary} Dictionary of hyperparameters // @param tstTyp {float} Size of the holdout set used in a fitted grid // search, where the best model is fit to the holdout set. If 0 the function // will return scores for each fold for the given hyperparameters. If // negative the data will be shuffled prior to designation of the holdout set // @return {table|list} Scores for hyperparameter sets on each of // the k folds for all values of h and additionally returns the best // hyperparameters and score on the holdout set for 0 < h <=1. gs.pcSplit:hp.i.search hp.i.xvScore[hp.i.gsGen;xv.pcSplit] // @kind function // @category gs // @desc Cross validated parameter grid search applied to randomly // shuffled data and validated on a percentage holdout set // @param pc {float} (0-1) representing percentage of validation data // @param n {int} Number of repetitions // @param features {any[][]} Matrix of features // @param target {any[]} Vector of targets // @param function {fn} Function that takes parameters and data as input // and returns a score // @param p {dictionary} Dictionary of hyperparameters // @param tstTyp {float} Size of the holdout set used in a fitted grid // search, where the best model is fit to the holdout set. If 0 the function // will return scores for each fold for the given hyperparameters. If // negative the data will be shuffled prior to designation of the holdout set // @return {table|list} Scores for hyperparameter sets on each of // the k folds for all values of h and additionally returns the best // hyperparameters and score on the holdout set for 0 < h <=1. gs.mcSplit:hp.i.search hp.i.xvScore[hp.i.gsGen;xv.mcSplit]
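The grid-search entry points above all share the argument order documented in their comments: [k or pc; n; features; target; function; p; tstTyp]. A minimal, illustrative sketch of a custom scoring function follows; the toy data, the constant-predictor idea and the assumption that each hyperparameter set arrives as a dictionary of single values are assumptions for illustration, not toolkit requirements:

/ score a "model" that always predicts the constant p`c, against whichever
/ target vector is nested last in the data argument (this holds for the
/ train/test nesting described in the .ml.xv.fitScore comment above)
constScore:{[p;data] neg avg abs p[`c]-last last data}   / negate the error so higher is better

features:flip enlist til 100                   / hypothetical 100x1 feature matrix
target:100?1f                                  / hypothetical target vector
params:enlist[`c]!enlist 0 0.25 0.5 1.0        / candidate values for the one hyperparameter
.ml.gs.kfSplit[5;1;features;target;constScore;params;0]  / tstTyp=0: scores per fold, no holdout fit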
Alternative in-memory layouts¶ Prior to kdb+, as veterans will remember, some schemas used nested data per symbol. The `g# attribute allowed us to move away from those more complicated designs and queries to long flat tables with fast access via the group attribute. There is however a third layout for in-memory data, using a dictionary of symbols!tables, which might be relevant to your particular use case. q)/Load some dummy data from nyse taq q)/and store as symbols!tables to demonstrate in-memory usage q)\ts t:(`u#sym)!{[x;y]update time:`s#time from select from x where sym=y}[select from trade where date=last date;]each sym 3896 4346890128 q)count each t / simple count per sym A | 14195 AA | 88962 AA.PR | 25 AADR | 13 AAIT | 8 AAL | 42609 AALC.P | 392 AAMC | 711 AAME | 154 AAN | 6698 ... q)sum count each t / total row count for trades 30035729 q)meta t`GOOG / to get GOOG trade, we just do t`GOOG c | t f a -----| ----- date | d sym | s time | t s ex | c cond | C size | i price| e stop | b corr | i seq | j cts | c trf | c q)last each t`GOOG`CSCO / get last trades date sym time ex cond size price stop corr seq cts trf ---------------------------------------------------------------------------- 2014.01.15 GOOG 19:56:10.575 D "@ TI" 78 1146.5 0 0 2279567 N Q 2014.01.15 CSCO 19:47:39.458 P "@FTI" 37 22.8 0 0 2180880 N q)(t[`GOOG`CSCO])asof\:(enlist`time)!enlist 09:30t / last trade for GOOG and CSCO as of 09:30 date sym ex cond size price stop corr seq cts trf -------------------------------------------------------------- 2014.01.15 GOOG Q "@FTI" 50 1152.01 0 0 5831 N 2014.01.15 CSCO Q "T " 1268 22.53 0 0 14380 N etc. q)\ts last each value t / last trade for every symbol 11 3165104 q)/ vwap for whole day for all symbols in 5 minute bins q)\ts raze {0!select first sym,size wavg price by 5 xbar time.minute from x} each value t 942 21631792 q)/ Use multiple secondary threads for queries! e.g. using 4 threads - almost linear scaling. q)\ts raze {0!select first sym,size wavg price by 5 xbar time.minute from x} peach value t 269 21002352 q)/ vwaps for a selection of symbols q)sym where sym like "GO*" `GOF`GOGO`GOL`GOLD`GOM`GOMO`GOOD`GOOD.N`GOOD.O`GOOD.P`GOOG`GORO`GOV`GOVT q)\ts raze {0!select first sym,size wavg price by 5 xbar time.minute from x} peach t sym where sym like "GO*" 1 9776 q)/ Set default schema q)t:(`u#enlist`)!enlist flip`time`sym`price`size!(`s#`timespan$();`symbol$();`float$();`int$()) q)t`BADSYM / non-existent symbol lookup uses prototype from first element of dict time sym price size ------------------- q)/ upd function for rdb to receive data from ticker plant and upsert into dicts of syms!tables. q)/ Allows log file playback by creating flips from value list. q)upd:{[t;d]if[not type d;d:flip(cols value[t]`)!d;];@[t;key g;,;d value g:group d`sym];} q)/ end-of-day persist to hdb q)\ts trade:raze t asc key[t] except ` / re-organize data to flat layout, dropping the ` entry 426 1477313216 q).Q.dpft[`:db;2007.07.23;`sym;`trade] / save the re-organized flat layout q)/ At end of day, if you're short on memory and need to avoid going q)/ through the above for flat layout to save, you can do the following q)/ primeSym get the unique vector of symbols used across the tables, q)/ and verifies that they all exist in path/sym file q)primeSym:{[path;dict](` sv path,`sym)?{distinct x,{distinct x,distinct y}/[(enlist 0#`),y where 11h=type each y:value flip y]}/[(enlist 0#`),value dict];} q)/ dpfdot saves each table enumerating and appending them to disk one table at a time. 
q)dpfdot:{[d;p;f;tname]t:value tname;primeSym[d;t];t:k!t k iasc k:key t;{[d;t;colnames]@[d;colnames;;]'[@[count[t]#(,);0;:;:];{$[11h=type x;`sym?x;x]}each t@\:colnames];}[d:.Q.par[d;p;tname];value t]each colnames:cols first t;@[;f;`p#]@[d;`.d;:;f,colnames except f];} q)\ts dpfdot[`:db;2014.01.14;`sym;`t] / t is a dict of tables, i.e. syms!tables. 30 MM trades, 7869 syms, saved to ssd 3444 179274224 Asynchronous Callbacks¶ Overview¶ The construct of an asynchronous remote call with callback is not built into interprocess communication (IPC) syntax in q, but it is not difficult to implement. Callback implementation is straightforward if you understand basic IPC in kdb+. Basics: Interprocess Communication Q for Mortals: §11.6 Interprocess Communication Here are some points to keep in mind. First, be sure to employ async calls on both the client and the server; otherwise, a deadlock can ensue. For example, due to the single-threaded nature of q, if the client makes a synch call, the attempt to call back to the client from the server function blocks because the original synch call is still being processed and will consequently wait forever. (Recall that an async call uses neg h where h is an open connection handle.) Second, it is safest to make remote calls with the IPC form that calls a function by name. One such approach is q)(neg h) (`proc; arg1;..;argn ; `callback) Here `proc is a symbol representing the name of the “remote” function to be called, arg1 , … , argn are the data arguments to be passed to the remote calculation and `callback is a symbol containing the name of the client function for proc to call back. If the remote function takes no argument, pass :: as its argument. Next, ensure that the “remote” function on the server is expecting the name of the callback routine as one of its arguments. For example, a call of the form given in the previous paragraph assumes that proc has the signature, q)proc:{[arg1; ;argn ; callname] } Finally, in the remote function, obtain the open handle of the calling process from the system variable .z.w . Use this link back to the caller to invoke the callback function. Examples¶ These examples use 0N! to force its argument to the console (i.e., stdout) and then sinks its result to avoid duplicate display in some circumstances. Unary function¶ In the simplest case, the client makes an asynchronous call to a unary “remote” function on the server, passing the name of a unary function in its workspace for the remote function to call once it completes. For those who know about such things, the callback represents a continuation for the remote function. Create a kdb+ instance to act as a server, by listening on a port and defining a proc on the server as, q)\p 5000 / listen on port 5000 q)serverFunc:{0N!x;} / server function q)proc:{serverFunc x; h:.z.w; (neg h) (y; 43)} / function for client to call In this case, the data for proc is passed in the implicit parameter x and the callback function name is passed in the implicit parameter y . Here the expression serverFunc x stands for the actual calculations performed on the server. Now execute the following on the client. Note that the sample communication handle assumes that the server process is listening on port 5000 on the same machine as the client. Substitute your actual values. q)clientFunc:{0N!x;} q)h:hopen `::5000 q)(neg h) (`proc; 42; `clientFunc) ... 
q)hclose h

This makes an async call to the remote function proc, passing it the argument 42 and the symbol `clientFunc representing the name of the callback function. The result is that 42 is displayed on the server and then 43 is displayed on the client.

Function with multiple parameters¶

If you need to call a remote function that has more than two data parameters, you cannot use implicit parameters on the server as above. You can either define explicit parameters or encapsulate the arguments in a list. We show the latter here.

Define the following on the server.

q)\p 5000 / listen on port 5000
q)add3:{x+y+z} / server function
q)proc3:{0N!r:add3 . x; (neg .z.w) (y; r)} / function for client to call

Here the data for proc3 is passed as a list in the implicit parameter x, while the callback function name is passed as y. Note the use of . (Apply) to evaluate a non-unary function on a list of arguments.

Now execute the following on the client.

q)clientFunc:{0N!x;}
q)h:hopen `::5000
q)(neg h) (`proc3; 1 2 3; `clientFunc)
...
q)hclose h

This expression makes an async call to the remote function proc3, passing it the list argument 1 2 3 and the symbol `clientFunc representing the name of the callback function. The result is that 6 is displayed on the server and then 6 is displayed on the client.

Function wrapper¶

An arbitrary function on the server does not have the appropriate signature to accept a callback. This example shows a simple wrapper function that permits any reasonable multivalent function to be called asynchronously with its result returned to the caller.

Define the following on the server.

q)\p 5000 / listen on port 5000
q)add3:{x+y+z} / server function
q)marshal:{(neg .z.w) (z; (value x) . y)} / function for client to call

Here the function marshal expects the name of a non-unary function in the first parameter, an argument list for the wrapped function in the second argument and the name of the callback function used to pass back the result in the third argument.

Now execute the following on the client.

q)clientFunc:{0N!x;}
q)h:hopen `::5000
q)(neg h) (`marshal; `add3; 1 2 3; `clientFunc)
...
q)hclose h

This expression makes an async call to the remote function marshal, asking it to invoke the remote function add3 with list argument 1 2 3 and to pass the result back to clientFunc. The result is that the list is summed on the server and then 6 is displayed on the client.

Anonymous functions¶

It is also possible to send a function to be executed remotely on the server using an alternative form of IPC. In this case, nothing needs to be defined on the server in advance. Here we show an example sending an anonymous function that returns its value to the client.

Start a kdb+ server listening on a chosen port, e.g.

$ q -p 5000

Execute the following on the client.

q)clientFunc:{0N!x;}
q)h:hopen `::5000
q)(neg h) ({(neg .z.w) (z; x*y)}; 6; 7; `clientFunc)

This expression makes an async call sending:

- a function that multiplies two arguments and returns the result with a callback
- the arguments 6 and 7
- and the name of clientFunc for the callback

The result is that 6 and 7 are multiplied on the server and then 42 is displayed on the client.

Warning

Give careful consideration before using this style of IPC in a production environment, as a client can bring down an unprotected server. A kdb+ server can be protected by authorising which services are permitted to run.
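One common way to provide the protection mentioned in the warning is to validate incoming messages in the message handlers before evaluating them. The sketch below is illustrative only: the whitelist, the handler body and the choice to simply log and drop bad requests are assumptions, not part of the original article, and a production deployment would normally combine this with user authentication (.z.pw or the -u command-line option).

/ on the server: only allow calls by name to a whitelist of functions
allowed:`proc`proc3`marshal
.z.ps:{$[(0h=type x) and first[x] in allowed;          / async messages such as (`proc;42;`clientFunc)
  value x;                                             / evaluate the call
  0N!"rejected async request on handle ",string .z.w]} / anything else (lambdas, strings) is dropped

With this in place the earlier calls by name still work, while the anonymous-function form above is rejected because its first item is a lambda rather than a whitelisted symbol; .z.pg can be guarded in the same way for synchronous calls.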
\d .u jcounts:(`symbol$())!0#0,(); icounts:(`symbol$())!0#0,(); / set up dictionary for per table counts ld:{if[not type key L::`$(-10_string L),string x;.[L;();:;()]];i::j::@[-11!;L;i::-11!(-2;L)];jcounts::icounts;if[0 < type i;-2 (string L)," is a corrupt log. Truncate to length ",(string last i)," and restart";exit 1];hopen L}; tick:{init[];if[not min(`time`sym~2#key flip value@)each t;'`timesym];@[;`sym;`g#]each t;d::.eodtime.d;if[l::count y;L::`$":",y,"/",x,10#".";l::ld d]}; endofday:{end d;d+:1;icounts::(`symbol$())!0#0,();if[.z.p>.eodtime.nextroll:.eodtime.getroll[.z.p];system"t 0";'"next roll is in the past"];.eodtime.dailyadj:.eodtime.getdailyadjustment[];if[l;hclose l;l::0(`.u.ld;d)]}; ts:{if[.eodtime.nextroll < x;if[d<("d"$x)-1;system"t 0";'"more than one day?"];endofday[]]}; if[system"t"; .dotz.set[`.z.ts;{pub'[t;value each t];@[`.;t;@[;`sym;`g#]0#];i::j;icounts::jcounts;ts .z.p}]; upd:{[t;x] if[not -12=type first first x; if[.z.p>.eodtime.nextroll;.z.ts[] ]; /a:"n"$a; a:.z.p+.eodtime.dailyadj; x:$[0>type first x; a,x; (enlist(count first x)#a),x ] ]; t insert x; jcounts[t]+::count first x; if[l;l enlist (`upd;t;x);j+:1]; } ]; if[not system"t";system"t 1000"; .dotz.set[`.z.ts;{ts .z.p}]; upd:{[t;x]ts .z.p; a:.z.p+.eodtime.dailyadj; if[not -12=type first first x; /a:"n"$a; x:$[0>type first x; a,x; (enlist(count first x)#a),x ] ]; f:key flip value t;pub[t;$[0>type first x;enlist f!x;flip f!x]];if[l;l enlist (`upd;t;x);i+:1;icounts[t]+::count first x];}]; \d . src:$["/" in src;(1 + last src ss "/") _ src; src]; / if src contains directory path, remove it .u.tick[src;ssr[$[count .proc.params`tplogdir;raze .proc.params`tplogdir;""];"\\";"/"]]; \ globals used .u.w - dictionary of tables->(handle;syms) .u.i - msg count in log file .u.j - total msg count (log file plus those held in buffer) .u.t - table names .u.L - tp log filename, e.g. `:./sym2008.09.11 .u.l - handle to tp log file .u.d - date /test >q tick.q >q tick/ssl.q /run >q tick.q sym . 
-p 5010 /tick >q tick/r.q :5010 -p 5011 /rdb >q sym -p 5012 /hdb >q tick/ssl.q sym :5010 /feed ================================================================================ FILE: TorQ_code_profiler_top.q SIZE: 1,096 characters ================================================================================ // top.q by kx - https://code.kx.com/q/kb/profiler/ if[`l64<>.z.o; -2"error: linux 64-bit is required";exit 1] if[4>.z.K;-2"error: KDB+ 4.0 or higher required, currently using KDB+ ", .Q.f[1;.z.K];exit 1] if[2=count .z.x;(`$"::",.z.x 0).z.i;system"l ",.z.x 1;exit 0] if[1<>count .z.x;-2"usage: q ",string[.z.f]," script.q|pid";exit 1] qcmd:$[""~getenv[`QCMD];"q ";getenv[`QCMD]," "] i:0p;T:([name:();file:();line:();col:();text:()]total:0#0;self:0#0) pct:{.01*"j"$1e4*x};nm:{[n;f;l;c;t] -20 sublist$[""~n;$[""~f;t;f,":",string[l],":",string c];n]} top:{x xdesc `total`self xcols 0!update total:pct total%sum self,pct self%sum self,name:nm'[name;file;line;col;text]from T} .dotz.set[`.z.ts;{@[{t:``pos _ select from .Q.prf0 p where not .Q.fqk each file; T+:select total:1 by name,file,line,col,text from t; if[count T;T[last[t]`name`file`line`col`text;`self]+:1]; if[00:00:01<.z.p-i;i::.z.p;1"\033c";show top`self]};::;{system"t 0";'x}]}] $[null p:"I"$.z.x 0;[system"p 0W";.dotz.set[`.z.pg;{p::x;system"p 0";system"t 10"}];system qcmd," "sv string[(.z.f;"j"$system"p")],.z.x];system"t 10"] ================================================================================ FILE: TorQ_code_rdb_apidetails.q SIZE: 435 characters ================================================================================ // Add to the api functions \d .api if[not`add in key `.api;add:{[name;public;descrip;params;return]}] add[`.rdb.moveandclear;1b;"Move a variable (table) from one namespace to another, deleting its contents. Useful during the end-of-day roll down for tables you do not want to save to the HDB";"[symbol: the namespace to move the table from; symbol:the namespace to move the variable to; symbol: the name of the variable]";"null"] ================================================================================ FILE: TorQ_code_rdb_endofperiod.q SIZE: 223 characters ================================================================================ /-End of period function in the top level namespace endofperiod:{[currp;nextp;data] .lg.o[`endofperiod;"Received endofperiod. currentperiod, nextperiod and data are ",(string currp),", ", (string nextp),", ", .Q.s1 data]}; ================================================================================ FILE: TorQ_code_rdb_rdbstandard.q SIZE: 672 characters ================================================================================ // Get the relevant RDB attributes .proc.getattributes:{`partition`tables!(.rdb.getpartition[],();tables[])} \d .rdb /- Move a table from one namespace to another /- this could be used in the end-of-day function to move the heartbeat and logmsg /- tables out of the top level namespace before the save down, then move them /- back when done. moveandclear:{[fromNS;toNS;tab] if[tab in key fromNS; set[` sv (toNS;tab);0#fromNS tab]; eval(!;enlist fromNS;();0b;enlist enlist tab)]} upd:@[value;`upd;{insert}]; //value of upd \d . 
/-set the upd function in the top level namespace upd:.rdb.upd .u.end:{[d] .rdb.endofday[d;()!()]}; ================================================================================ FILE: TorQ_code_segmentedtickerplant_pubsub.q SIZE: 902 characters ================================================================================ // Get pubsub common code .proc.loadf[getenv[`KDBCODE],"/common/pubsub.q"]; // Define UPD and ZTS wrapper functions // Check for end of day/period and call inner UPD function .stpps.upd.def:{[t;x] if[.stplg.nextendUTC<now:.z.p;.stplg.checkends now]; // Type check allows update messages to contain multiple tables/data $[0h<type t;.stplg.updmsg'[t;x;now+.eodtime.dailyadj];.stplg.updmsg[t;x;now+.eodtime.dailyadj]]; .stplg.seqnum+:1; }; // Don't check for period/day end if process is chained STP .stpps.upd.chained:{[t;x] now:.z.p; $[0h<type t;.stplg.updmsg'[t;x;now+.eodtime.dailyadj];.stplg.updmsg[t;x;now+.eodtime.dailyadj]]; .stplg.seqnum+:1; }; // Call inner ZTS function and check for end of day/period .stpps.zts.def:{ .stplg.ts now:.z.p; .stplg.checkends now }; // Don't check for period/day end if process is chained STP .stpps.zts.chained:{ .stplg.ts now:.z.p }; ================================================================================ FILE: TorQ_code_segmentedtickerplant_sctp.q SIZE: 2,520 characters ================================================================================ \d .sctp chainedtp:@[value;`chainedtp;0b]; // switches between STP and SCTP codebase loggingmode:@[value;`loggingmode;`none]; // [none|create|parent] determines whether SCTP creates its own logs, uses STP logs or does neither tickerplantname:@[value;`tickerplantname;`stp1]; // tickerplant name to try and make a connection to tpconnsleep:@[value;`tpconnsleep;10]; // number of seconds between attempts to connect to source tickerplant tpcheckcycles:@[value;`tpcheckcycles;0W]; // number of times the process will check for an available tickerplant subscribeto:@[value;`subscribeto;`]; // list of tables to subscribe for subscribesyms:@[value;`subscribesyms;`]; // list of syms to subscription to replay:@[value;`replay;0b]; // replay the tickerplant log file schema:@[value;`schema;1b]; // retrieve schema from tickerplant // subscribe to segmented tickerplant subscribe:{[] s:.sub.getsubscriptionhandles[`;tickerplantname;()!()]; if[count s; subproc:first s; `.sctp.tph set subproc`w; // get tickerplant date - default to today's date .lg.o[`subscribe;"subscribing to ", string subproc`procname]; r:.sub.subscribe[subscribeto;subscribesyms;schema;replay;subproc]; if[`d in key r;.u.d::r[`d]]; if[(`icounts in key r) & (loggingmode<>`create); // dict r contains icounts & not using own logfile subtabs:$[subscribeto~`;key r`icounts;subscribeto],(); .u.jcounts::.u.icounts::$[0=count r`icounts;()!();subtabs!enlist [r`icounts]subtabs]; ] ]; } // Initialise chained STP init:{ // Load in timer and subscription code and set top-level end of day/period functions .proc.loadf[getenv[`KDBCODE],"/common/timer.q"]; .proc.loadf[getenv[`KDBCODE],"/common/subscriptions.q"]; `endofperiod set {[x;y;z] .stplg.endofperiod[x;y;z]}; `endofday set {[x;y] .stplg.endofday[x;y]}; // Initialise connections and subscribe to main STP .servers.startupdepnamecycles[.sctp.tickerplantname;.sctp.tpconnsleep;.sctp.tpcheckcycles]; .sctp.subscribe[]; }; \d . 
// Make the SCTP die if the main STP dies .dotz.set[`.z.pc;{[f;x] @[f;x;()]; if[.sctp.chainedtp; if[.sctp.tph=x; .lg.e[`.z.pc;"lost connection to tickerplant : ",string .sctp.tickerplantname];exit 1]] }@[value;.dotz.getcommand[`.z.pc];{{}}]]; // Extract data from incoming table as a list upd:{[t;x] x:value flip x; .u.upd[t;x] } ================================================================================ FILE: TorQ_code_segmentedtickerplant_stplog.q SIZE: 12,346 characters ================================================================================ // Utilites for periodic tp logging in stp process // Live logs and handles to logs for each table currlog:([tbl:`symbol$()]logname:`symbol$();handle:`int$()) // View of log file handles for faster lookups loghandles::exec tbl!handle from currlog \d .stplg // Name of error log file errorlogname:@[value;`.stplg.errorlogname;`segmentederrorlogfile] // Create stp log directory // Log structure `:stplogs/date/tabname_time createdld:{[name;date] if[not count dir:hsym .stplg.kdbtplog;.lg.e[`stp;"log directory not defined"];exit 1]; .os.md dir; .os.md .stplg.dldir:` sv dir,`$raze/[string name,"_",date]; }; // Functions to generate log names in one of five modes
/ Converts a timestamp in the specified timezone into another specified timezone / NOTE: This conversion is done via UTC so will be slower than converting to/from UTC / @param timestamp (Timestamp|TimestampList) The timestamps to convert / @param sourceTimezone (Symbol) The timezone that the specified timestamps are currently in / @param targetTimezone (Symbol) The timezone to convert to / @throws InvalidSourceTimezoneException If the timezone specified is not present in the configuration / @throws InvalidTargetTimezoneException If the timezone specified is not present in the configuration / @see .tz.timezones / @see .tz.utcToTimezone / @see .tz.timezoneToUtc .tz.timezoneToTimezone:{[timestamp; sourceTimezone; targetTimezone] if[not sourceTimezone in .tz.timezones`timezoneID; '"InvalidSourceTimezoneException"; ]; if[not targetTimezone in .tz.timezones`timezoneID; '"InvalidTargetTimezoneException"; ]; :.tz.utcToTimezone[;targetTimezone] .tz.timezoneToUtc[timestamp; sourceTimezone]; }; / Loads the timezone CSV file into memory / @see .tz.cfg.csvTypes / @see .tz.csvSrcPath / @see .tz.timezones .tz.i.loadTimezoneCsv:{ timezones:.csv.load[.tz.cfg.csvTypes; .tz.csvSrcPath]; timezones:update gmtOffset:.convert.msToTimespan 1000*gmtOffset from timezones; timezones:update localDateTime:gmtDateTime+gmtOffset from timezones; timezones:update `g#timezoneID from `gmtDateTime xasc timezones; .log.if.info "Timezone Conversion configuration loaded [ Timezone Count: ",string[count timezones]," ]"; .tz.timezones:timezones; }; ================================================================================ FILE: kdb-common_src_util.q SIZE: 6,700 characters ================================================================================ // Utility Functions // Copyright (c) 2014 - 2018 Sport Trades Ltd // Documentation: https://github.com/BuaBook/kdb-common/wiki/util.q .require.lib each `type`time; / We define the use of the system command argument "-e" to also define if the / process is started in debug mode or not. For kdb >= 3.5, only 1 now means / debug mode / @returns (Boolean) If the current process is in debug mode or not .util.inDebugMode:{ :1i = system "e" }; / @returns (Boolean) True if the process is bound to a port, false if not .util.isListening:{ `boolean$system"p" }; / Simple wrapper around the system command. Throws an exception if the command fails / @throws SystemCallFailedException If the system command does not complete successfully .util.system:{[cmd] .log.if.debug "Running system command: \"",cmd,"\""; @[system;cmd;{.log.if.error "System call failed: ",x; '"SystemCallFailedException"}] }; / Rounds floats to the specified precision / @param p (Integer) The precision to round to / @param x (Real|Float) The value to round / @returns (Real|Float) The rounded value .util.round:{[p;x](`long$n*x)%n:prd p#10}; / Round integers to the specified number of significant figures / @param p (Integer) The precision to round to / @param x (Short|Integer|Long|Real|Float) The value to round / @returns (Short|Integer|Long|Real|Float) The rounded value returned in the same type as provided / @see .util.round .util.roundSigFig:{[p;x] dec:string[x]?"."; srcType:.Q.t abs type x; if[p <= dec; n:prd (dec - p)#10; :srcType $ (`long$x % n) * n; ]; :srcType $ .util.round[p - dec; x]; }; / Extended version of the standard trim function. 
As well as removing spaces, it also removes / new line and tab characters / @param str (String) The string to trim / @returns (String) The string with characters trimmed .util.trim:{[str] :{y _ x}/[str;(first;{ -1*-1+y-last x }[;count str])@\:where not any str =/:(" ";"\n";"\t";"\r")]; }; / Useful for dictionaries with symbols and / or string in them .util.zeroFill:{@[x;where not abs[type each $[.Q.qt x;cols x;x]]in 2 10 11h;0b^]}; / Improved version of null to also detect empty lists and dictionaries / @returns (Boolean) If the specified object is null or empty .util.isEmpty:{ :(all/) null x; }; / Pivot function / @param t (Table) The table to pivot. NOTE: Should be unkeyed and contain no enumerated columns / @param c (Symbol) The column to pick for the pivot. Each distinct value of this column will be used as a column in the pivot / @param r (Symbol|SymbolList) The columns that will form the rows of the pivot. Can have multiple here / @param d (Symbol) The column of data that is pivoted / @returns (Table) The pivoted data .util.pivot:{[t;c;r;d] colData:?[t;();();(distinct;c)]; pvCols: {[t;c;r;cd;d] :r xkey ?[t;enlist (=;c;$[.type.isSymbol cd;enlist;::] cd);0b;(r,.type.ensureSymbol cd)!(r,d)]; }[t;c;r;;d] each colData; :(,'/) pvCols; }; / Unenumerates any enumerated columns of the specified table / @param t (Table) Table to process. NOTE: Should be unkeyed / @returns (Table) The same table with any enumerated columns unenumerated .util.unenumerate:{[t] enumCols:where .type.isEnumeration each .Q.V t; if[.util.isEmpty enumCols; :t; ]; :@[t;enumCols;get]; }; / Renames columns in the specified table / @param t (Table) / @param oldC (Symbol|SymbolList) Existing column(s) in table to rename / @param newC (Symbol|SymbolList) Column name(s) to replace with / @throws InvalidColumnToRenameException If any of the existing columns specified do not exist .util.renameColumn:{[t;oldC;newC] if[not .type.isTable t; '"IllegalArgumentException"; ]; tCols:cols t; if[not all oldC in tCols; '"InvalidColumnToRenameException"; ]; selectCols:@[tCols;tCols?oldC;:;newC]!tCols; :?[t;();0b;selectCols]; }; / Modified .Q.s to not obey the console height and width limits as specified / by system"c". NOTE: For tables, the console height and width limits will / still apply to list-type cells / @see .Q.S k).util.showNoLimit:{ :$[(::)~x;"";`/:$[10h=@r:@[.Q.S[2#0Wi-1;0];x;::];,-3!x;r]]; }; / Modified .Q.s to allow output to be tabbed by the specified number of tabs. Useful for / formatting of log output / @see .Q.s .util.showTabbed:{[tabCount;x] if[not .type.isString x; x:.Q.s x; ]; sep:"\r\n" where "\r\n" in x; tabs:raze tabCount#enlist "\t"; :tabs,(sep,tabs) sv sep vs x; }; / NOTE: This function only works for in-memory tables in the root namespace / @param tbls (SymbolList) Optional parameter. If specified, will return row counts only for specified tables / @returns (Dict) Root namespace tables and the count of each of them .util.getTableCounts:{[tbls] $[.util.isEmpty tbls; tbls:tables[]; tbls:tables[] inter (),tbls ]; :tbls!count each get each tbls; }; / Removes all data from the specified root namespace table / @param x (Symbol) The table to clear / @throws InvalidTableException If the table does not exist in the root namespace .util.clearTable:{ if[not x in tables[]; '"InvalidTableException"; ]; set[x; 0#get x]; }; / String find and replace. 
If multiple 'find' arguments are supplied the equivalent number of / replace arguments must also be specified / @param startString (String) The string to find and replace within / @param find (String|StringList) The string or strings to find / @param replace (String|StringList) The string or strings to replace with .util.findAndReplace:{[startString;find;replace] :(ssr/)[startString; find; replace]; }; / Garbage collection via .Q.gc with timing and logging with regard to the amount of memory to returned to the OS / @returns (Dict) The difference in memory values (from .Q.w) before and after the garbage collection / @see .Q.w / @see .Q.gc .util.gc:{ beforeStats:.Q.w[]; gcStartTime:.time.now[]; .log.if.info "Running garbage collection"; .Q.gc[]; diffStats:beforeStats - .Q.w[]; $[0f = diffStats`heap; .log.if.info "Garbage collection complete. No memory returned to OS"; / else .log.if.info "Garbage collection complete [ Returned to OS (from heap): ",string[.util.round[2;] %[;1024*1024] diffStats`heap]," MB ] [ Time: ",string[.time.now[] - gcStartTime]," ]" ]; :diffStats; }; / @returns (Boolean) True if the OpenSSL libraries have been loaded into the kdb+ process, false otherwise .util.isTlsAvailable:{ sslStatus:@[-26!; (::); { (`TLS_NOT_AVAILABLE; x) }]; :not `TLS_NOT_AVAILABLE~first sslStatus; }; ================================================================================ FILE: kdb-common_src_wsc.q SIZE: 2,266 characters ================================================================================ // WebSocket Client Library // Copyright (c) 2020 Jaskirat Rajasansir // Documentation: https://github.com/BuaBook/kdb-common/wiki/wsc.q .require.lib each `type`ns`http; / If true, all new WebSocket connections created will be logged to the '.ipc.outbound' table. On library init, / the 'ipc' library will be loaded. .wsc.cfg.logToIpc:1b; / The valid URL schemes to attempt a WebSocket connection to .wsc.cfg.validUrlSchemes:`ws`wss; .wsc.init:{ if[.wsc.cfg.logToIpc; .require.lib`ipc; ]; }; / Create a WebSocket connection to the specified URL / @param url (String) The target server to create a WebSocket connection to / @returns (Integer) A valid handle to communicate with the target server / @throws ZWsHandlerNotSetException If '.z.ws' is not set prior to calling this function / @throws InvalidWebSocketUrlException If the URL does not being with 'ws://' or 'wss://' / @throws WebSocketConnectionFailedException If the connection fails / @see .http.i.getUrlDetails / @see .http.i.buildRequest / @see .http.i.send / @see .ipc.outbound .wsc.connect:{[url] if[not .type.isString url; '"InvalidArgumentException"; ]; if[not .ns.isSet `.z.ws; .log.if.error "'.z.ws' handler function must be set prior to opening any outbound WebSocket"; '"ZWsHandlerNotSetException"; ]; schemePrefixes:string[.wsc.cfg.validUrlSchemes],\:"://*"; if[not any url like/: schemePrefixes; .log.if.error "Invalid URL scheme specified. Must be one of: ",", " sv schemePrefixes; '"InvalidWebSocketUrlException"; ]; .log.if.info "Attempting to connect to ",url," via WebSocket"; urlParts:.http.i.getUrlDetails url; wsResp:.http.i.send[urlParts; .http.i.buildRequest[`GET; urlParts; ()!(); ""]]; handle:first wsResp; if[null handle; .log.if.error "Failed to connect to ",url," via WebSocket. 
Error: ",last wsResp; '"WebSocketConnectionFailedException"; ]; .log.if.info "Connected to ",url," via WebSocket [ Handle: ",string[handle]," ]"; .log.if.debug "WebSocket response:\n",last wsResp; if[.wsc.cfg.logToIpc; `.ipc.outbound upsert (handle; `$raze urlParts`scheme`baseUrl`path; .time.now[]); ]; :handle; }; ================================================================================ FILE: kdb_q_authz_ro_authz_ro.q SIZE: 5,289 characters ================================================================================ /// // Default authorization (authz) handlers for q (.z.ps / .z.pg). // Only useful if used in conjunction with authentication (authn) handlers! // i.e. : .z.pw / .z.ac // The use of setters / getters for global variables facilitates namespace aliasing. // List of users who will get their parse trees evaluated with // the full power of "eval". // Takes precedence over roUsers. .finos.authz_ro.priv.rwUsers:enlist .z.u
/// // Cut a path into a list of directory names and file name // @param path as a string. // @return A list in the form (dir1;dir2;...;file). E.g. "aa/bb/cc" -> ("aa";"bb";"cc") .finos.dep.splitPath:{[path] path:"",path; if[0=count path; :()]; match:path ss .finos.dep.pathSeparators; enlist[first[match]#path],1_/:match cut path}; .finos.dep.joinPath:{[paths] paths:"",/:paths; .finos.dep.pathSeparator sv paths}; { if[.z.K<4.0; //check for existence, such that user can override with a more accurate path, e.g. without resolving symlinks if[()~key`.finos.dep.root; .finos.dep.root:.finos.dep.cutPath[first -3#value{}][0]]; .finos.dep.priv.currentFile:.finos.dep.joinPath(.finos.dep.root;"finos_init.q"); ]; path:.finos.dep.cutPath[.finos.dep.currentFile[]][0]; system"l ",.finos.dep.joinPath(path;"dep";"include.q"); .finos.dep.include"dep/dep.q"; .finos.dep.regModule["finos/kdb";"1.0";path;"";""]; .finos.dep.list["finos/kdb";`loaded]:1b; if[.z.K<4.0; .finos.dep.priv.currentFile:string .z.f; ]; }[]; ================================================================================ FILE: kdb_q_html_html.q SIZE: 1,010 characters ================================================================================ // This table breaks the default CSV renderer. // t:([]a:1 2 3;b:(("foo";"bar");enlist"baz";("quux";`quuux;"quuuux"))) // 0N!.h.tx[`csv]t; // The code below checks for nested types and catenates entries with a delimiter. // Delimiter for multi-valued table cells. .finos.html.compoundDelim:"/" .finos.html.nestedListToStringVec:{[compoundCol] {[rowList] $[0>type rowList ;string rowList ;.finos.html.compoundDelim sv{$[10h=type x;x;string x]} each rowList]}each compoundCol} .finos.html.stringifyCompoundCols:{[tableVal] // Get the name of nested cols that need converting. Leave string columns alone. nestedCols:exec c from meta tableVal where t in\: (" ",.Q.A except "C"); // Functional form of "update" to yield table that can // be converted to CSV. ![tableVal;();0b;nestedCols!flip(count[nestedCols]#`.finos.html.nestedListToStringVec;nestedCols)]} // Plug in this more lenient CSV renderer as the default. .h.tx[`csv]:{.q.csv 0: .finos.html.stringifyCompoundCols x} ================================================================================ FILE: kdb_q_inithook_inithook.q SIZE: 10,519 characters ================================================================================ //Probably this should NOT be replaced by a finos logging function, //unless it supports contexts such that this can be its own context. //The reason is that log might be initialized and redirected to a file, //and log initialization itself might be done in an inithook. //That would cause a break in where the log goes (stdout vs log file) //which can make support tasks more difficult. 
.finos.init.log:{-1 string[.z.P]," .finos.init ",x}; .finos.init.showState:{ .finos.init.log "\nhooks:\n",.Q.s[.finos.init.priv.hooks],"services:",.Q.s .finos.init.priv.services; }; .finos.init.add2:{[requires;funName;provides;userParams] requires: (`$()),requires; provides: (`$()),provides; if[not -11h = type funName; '".finos.init.add2 expected type -11, found ",string[type funName],": ",.Q.s1[funName]]; .finos.init.priv.addDependency[requires;funName;provides]; if[0 < count exec fun from .finos.init.priv.hooks where fun = funName; .qcommon.priv.basicLogError ".finos.init.add: Tried to register a duplicate hook: ",.Q.s1 funName; '"duplicate_hook"]; `.finos.init.priv.hooks upsert (funName;requires;provides;userParams); .finos.init.priv.finished::0b; .finos.init.priv.scheduleExecute[]; }; .finos.init.priv.defaultUserParams:()!(); .finos.init.add:{[requires;funName;provides] .finos.init.add2[requires;funName;provides;.finos.init.priv.defaultUserParams]}; .finos.init.before:{[funName] //use on the provides list to force an inithook to run before another if[not funName in exec fun from .finos.init.priv.hooks; '".finos.init.before invalid inithook name: ",.Q.s1 funName]; newCond:`$".finos.init.before:",string[funName]; .finos.init.priv.hooks[funName;`requires]:distinct .finos.init.priv.hooks[funName;`requires],newCond; newCond}; .finos.init.after:{[funName] //use on the requires list to force an inithook to run after another if[not funName in exec fun from .finos.init.priv.hooks; '".finos.init.after: invalid inithook name: ",.Q.s1 funName]; newCond:`$".finos.init.after:",string[funName]; .finos.init.priv.hooks[funName;`provides]:distinct .finos.init.priv.hooks[funName;`provides],newCond; newCond}; .finos.init.provide:{[service] .finos.init.priv.services: distinct .finos.init.priv.services,service; .finos.init.priv.dependency.addProviderDependency[service]; .finos.init.priv.scheduleExecute[]; }; .finos.init.setGlobal:{[name;val] name set val; .finos.init.provide[name]; }; .finos.init.getTimeout:{.finos.init.priv.initTimeout}; .finos.init.setTimeout:{ if[not type[x] in -16 -17 -18 -19h ; '".finos.init.setTimeout expects time or timespan"]; .finos.init.priv.initTimeout:x; }; .finos.init.setDefaultUserParams:{[newUserParams] if[not type[newUserParams]=99h; '".finos.init.setDefaultUserParams expects a dictionary"]; .finos.init.priv.defaultUserParams:newUserParams; }; /******************************************************************************* /* Private functions and variables /******************************************************************************* .finos.init.priv.hooks: ([fun: `$()] requires: (); provides: (); userParams:()); .finos.init.priv.stat:([fun:`$()] elapsedTime:`timespan$()); .finos.init.priv.services: `$(); .finos.init.priv.finished: 1b; .finos.init.priv.debugRun: 0b; .finos.init.priv.initTimeout: 00:01; / Use this very carefully! .finos.init.priv.delete:{[hookNames] delete from `.finos.init.priv.hooks where fun in hookNames; }; //Can be overwritten by user. However there is only one of this, so if you end up fighting over it, //you are using the inithook API wrong. .finos.init.customStart:{}; .finos.init.priv.start:{ .finos.init.customStart[]; .finos.init.log "Initial hooks:\n",(.Q.s .finos.init.priv.hooks); }; //Can be overwritten by user. However there is only one of this, so if you end up fighting over it, //you are using the inithook API wrong. 
.finos.init.customEnd:{}; //these should be in util .finos.util.trp:{[fun;params;errorHandler] -105!(fun;params;errorHandler)}; .finos.util.try2:{[fun;params;errorHandler] .finos.util.trp[fun;params;{[errorHandler;e;t] -2"Error: ",e," Backtrace:\n",.Q.sbt t; errorHandler[e]}[errorHandler]]}; //Can be overwritten by user. .finos.init.errorHandler:{[hook;e] .finos.init.log:"Inithook ",.Q.s1[hook`fun]," died on error: ",e; exit 1; }; .finos.init.priv.executeOne:{ if[.finos.init.priv.finished; :0b]; if[0 = count .finos.init.priv.hooks; .finos.init.log "All hooks executed."; .finos.init.priv.finished:1b; .finos.init.customEnd[]; :0b]; hooks: select fun,provides,userParams from .finos.init.priv.hooks where not any each requires in\: () union/ provides, all each requires in\: .finos.init.priv.services; if[0 = count hooks; $[.finos.init.priv.debugRun; .finos.init.log "WARNING: Runnable hooks executed and can't progress! Check remaining hooks with .finos.init.state[]"; .finos.timer.addRelativeTimer[{.finos.init.priv.checkFinished[]};.finos.init.priv.initTimeout] ]; :0b ]; hook: first hooks; hookName: hook[`fun]; .finos.init.log "Executing ", string hookName; start:.z.P; res:$[.finos.init.priv.debugRun; (`success;hookName[]); .finos.util.try2[{(`success;value[x][])};enlist hookName;.finos.init.errorHandler[hook]] ]; end:.z.P; .finos.init.priv.stat[hookName;`elapsedTime]:end-start; delete from `.finos.init.priv.hooks where fun = hookName; if[`success=first res; .finos.init.priv.services: distinct .finos.init.priv.services,hook[`provides]; ]; 1b}; .finos.init.priv.timer:0Ni; .finos.init.priv.execute:{ while[.finos.init.priv.executeOne[]]; .finos.init.priv.timer:0Ni; }; .finos.init.priv.scheduleExecute:{ if[not null .finos.init.priv.timer; :(::)]; .finos.init.priv.timer:.finos.timer.addRelativeTimer[{.finos.init.priv.execute[x]};0]; }; .finos.init.debug:{ .include.handleErrors:0b; .finos.util.try2:{[fun;params;errorHandler].[fun;params]}; .finos.init.priv.debugRun::1b; .finos.init.priv.execute[]; }; //Can be overwritten by user. .finos.init.customTimeout:{}; .finos.init.priv.checkFinished:{ .finos.init.customTimeout[]; if[(not .finos.init.priv.finished) and (not .finos.init.priv.debugRun); notProvided:(distinct raze exec requires from .finos.init.priv.hooks)except .finos.init.priv.services,raze exec provides from .finos.init.priv.hooks; msg: "ERROR: Init hooks not finished within ", (string .finos.init.priv.initTimeout), "ms!\n", "Waiting: ",.Q.s1[exec fun from .finos.init.priv.hooks],$[0<count notProvided;" Services not provided: ",.Q.s1[notProvided];""]; .finos.init.log msg; .finos.init.state[]; .alarm.dev.critical[`inithooksNoProgress;`;msg]; exit 1]; }; //monitoring dependencies of the inithooks .finos.init.priv.dependency.provideCount:(`$())!`int$(); .finos.init.priv.dependency.edges:([] from:`$(); to:`$()); .finos.init.priv.dependency.nodes:([name: `$()] label: `$(); nodeType: `$()); .finos.init.priv.dependency.escapeDot:{ `$ ssr[; ".";"_"] string x}; .finos.init.priv.addDependency:{[requires;funName;provides] requiresEscaped:.finos.init.priv.dependency.escapeDot each requires; funNameEscaped:.finos.init.priv.dependency.escapeDot[funName]; providesEscaped:.finos.init.priv.dependency.escapeDot each provides; .finos.init.priv.dependency.nodes[funNameEscaped]:(funName;`function); //preconditions `.finos.init.priv.dependency.edges insert flip flip(requiresEscaped;funNameEscaped); .finos.init.priv.dependency.nodes[providesEscaped]:flip flip(requires;`condition);
Permissions with kdb+¶ kdb+ processes often contain sensitive, proprietary information in the form of data or proprietary code. Thus, it is important to restrict the access to this information. kdb+ offers a number of in-built access functions. This paper discusses various methods in which a permissioning and entitlements system can be implemented in kdb+ by extending these in-built functions, allowing access to sensitive information to be controlled and restricted, exposing data to some clients but not to others. Commercial-grade products KX offers commercial-grade products to manage entitlements as well as other aspects of administration for kdb+. While this paper attempts to shed some light on the various approaches available to developers wishing to implement a permissioning system in kdb+, the approach presented here is merely intended as a starting point, and as such it should not be considered secure. Some workarounds to the system described here are discussed in the paper. Tests performed using kdb+ 3.0 (2013.04.05) Restricting access to a kdb+ server¶ The first step in securing and permissioning a kdb+ server is to control who can and cannot connect to it. This is done using a combination of: - The –u command-line option - The .z.pw callback –u command-line option¶ If specified, the –u command-line option is the first check a kdb+ process will make when a user tries to connect. At startup, the –u option should point to a password file which maps usernames to passwords. The passwords can be stored either as plaintext or as an MD5 hash. When given a string, the md5 keyword in kdb+ returns the hash of that string; when storing this value in the password file, the first two characters should be stripped. For example, the following two password files are equivalent: $ cat users user1:password $ cat users_encrypted user1:5f4dcc3b5aa765d61d8327deb882cf99 q)md5 "password" 0x5f4dcc3b5aa765d61d8327deb882cf99 When kdb+ is started with the –u option, any connecting user must specify a username and a password. If these do not match what is in the password file, then the user will not be allowed to access the server. After the user successfully gains access to the process, the –u option implements further restrictions: the user can only access files that are under the root directory of kdb+ server i.e. the directory in which the server was started. Consider the following directory structure: |-- file1.q `-- start_dir `-- file2.q If we start the server in the start_dir directory and the –u option is specified, connecting clients will have access to file2.q , since it is under the root directory, but file1.q will be restricted. If –U is used in place of –u , the username/password check remains but the filesystem restriction is lifted. 
Server1 (-u ): $ q -p 5001 -u ../passwordfiles/users_encrypted KDB+ 3.0 2013.04.05 Copyright (C) 1993-2013 Kx Systems l32/ 1()core 502MB tommartin debian-image 127.0.1.1 PLAY 2013.07.04 Server2 (-U ): $ q -p 6001 -U ../passwordfiles/users_encrypted KDB+ 3.0 2013.04.05 Copyright (C) 1993-2013 Kx Systems l32/ 1()core 502MB tommartin debian-image 127.0.1.1 PLAY 2013.07.04 Client: q)h:hopen`:localhost:5001 'access q)h:hopen`:localhost:5001:user1:pwd 'access q)//connect to the server which has filesystem restrictions q)h:hopen`:localhost:5001:user1:password q)//file2.q loads successfully as it’s under the root q)h(system;"l file2.q") q)//file1.q produces an 'access error q)h(system;"l ../file1.q") 'access q)//connect to server with no filesystem restrictions q)h:hopen`:localhost:6001:user1:password q)//file1.q loads successfully q)h(system;"l ../file1.q") .z.pw ¶ The .z.pw callback is called immediately after successful -u/-U authentication (if specified at startup; otherwise .z.pw is the first authentication check done by a kdb+ process). It allows for further customizations of the authentication process. For instance, this callback could be used to call out to an external LDAP server against which a connecting user could be validated. kdb+ can also be integrated with Kerberos, but this is outside the scope of this paper. The .z.pw callback takes two arguments: a username and a password. If the validation check passes, 1b is returned and the user is granted access. Otherwise, 0b is returned and access to the server is denied. In an unrestricted process, this callback will always return 1b . .z.pw:{[u;p] 1b} Rather than using -u with a password file, we could instead maintain a table of users on our kdb+ server and use the .z.pw callback to validate connecting clients. First, define a simple table which stores users and their passwords. .perm.users:([user:`$()] password:()) Passwords can be stored in various ways, including plaintext, as a straight MD5 hash, or as an MD5 hash with added salt. For salt, we could just take a combination of the username and the specified password, apply an MD5 hash to it and use that as the stored password. q).perm.toString:{[x] $[10h=abs type x;x;string x]} q).perm.encrypt:{[u;p] md5 raze .perm.toString p,u} q).perm.add:{[u;p] `.perm.users upsert (u;.perm.encrypt[u;p]);} q).perm.add[;`password] each `user1`user2`user3; q).perm.users user | password -----| ---------------------------------- user1| 0x9022daebd17737ba0bd9cd4732ea66b6 user2| 0x6538d48739b8cb51beca1c7f65152d7f user3| 0xa757abc2c49f29cfd98bd5480b6fcdde Inside the .z.pw callback, we add some logic that does a lookup on the users table and retrieves the password. It compares the stored password with the password supplied by the client and grants access if they match. .z.pw:{[user;pwd] $[.perm.encrypt[user;pwd]~.perm.users[user][`password];1b;0b]} User classes¶ Restricting access is only the first step towards implementing a permissioning system in kdb+. Once a user has connected, we can control and restrict what the user can do. To achieve this, we split users into three distinct user classes: - Users can only execute certain stored procedures that are defined on the server. - Powerusers have more privileges than ordinary users. They can write free-form queries, but cannot write to the database unless they are executing a stored procedure. - Superusers can execute any code they wish.
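Before moving on to the per-class restrictions, note that the table-driven .z.pw check above looks the same from the client's point of view as the -u check did. The following is a minimal sketch (not one of the original examples), assuming the server defining .perm.users and .z.pw is listening on port 5001 and that user1 was added with password `password as above:

q)h:hopen`:localhost:5001:user1:password  / .perm.encrypt[user;pwd] matches the stored hash, .z.pw returns 1b
q)hopen`:localhost:5001:user1:wrongpwd    / hash mismatch, .z.pw returns 0b and the connection is refused
'access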
While queries can be executed synchronously (where the client expects a response and blocks until it receives one) or asynchronously (client expects no response), for the purposes of this paper we will restrict asynchronous queries (routed through the .z.ps handler) to superusers and instead focus on synchronous queries (routed through .z.pg ). Permissioning system schematic With this in mind, we re-define the users table to have an extra class column which indicates which class a user belongs to. q).perm.users:([user:`$()] class:`$(); password:()) q).perm.add:{[u;c;p] `.perm.users upsert (u;c;.perm.encrypt [u;p]);} q).perm.addUser:{[u;p] .perm.add[u;`user;p]} q).perm.addPoweruser:{[u;p] .perm.add[u;`poweruser;p]} q).perm.addSuperuser:{[u;p] .perm.add[u;`superuser;p]} q).perm.getClass:{[u] .perm.users[u][`class]} q).perm.isSU:{[u] `superuser~.perm.getClass[u]} q).perm.isPU:{[u] `poweruser~.perm.getClass[u]} q).perm.addUser[`user1;`password] q).perm.addPoweruser[`poweruser1;`password] q).perm.addSuperuser[`superuser1;`password] q).perm.users user | class password ----------| -------------------------------------------- user1 | user 0x9022daebd17737ba0bd9cd4732ea66b6 poweruser1| poweruser 0x1e948f5d3b634d15d91cfbaaa955e399 superuser1| superuser 0x9f233f505811d3fbdb2ee7a9bf5aa581 Having granted access to a user, we override the synchronous message handler .z.pg in order to determine what class the user belongs to and then act accordingly. .z.pg:{[query] user:.z.u; class:.perm.getClass[user]; $[class~`superuser; value query; class~`poweruser; .perm.pg.poweruser[user;query]; .perm.pg.user[user;query]] } Superusers¶ The most straightforward queries to validate are those pertaining to superusers. Since these users can execute any kind of query, no additional logic is required and we can simply evaluate the query. For the other two classes, we need to add logic to validate the query and if necessary, block it. Users¶ For users belonging to the ordinary-user class, the validation logic is relatively straightforward. Since these users can only execute predefined stored procedures, it is easy to identify when these users have attempted a restricted query. First, we define a stored procedure wrapper function, the arguments of which will be the stored procedure name and the arguments to pass it. Using a wrapper function provides a single point of entry for ordinary users and simplifies the validation logic. We also define a dictionary that maps stored-procedure names to the users who have permission to execute them. The wrapper function will do a lookup against this dictionary to see if the stored procedure exists and if the user has the necessary entitlements to execute the stored procedure. .perm.sprocs:()!() .perm.addSproc:{[s] .perm.sprocs,:enlist[s]!enlist enlist`} .perm.grantSproc:{[s;u] @[`.perm.sprocs;s;union;u];} .perm.revokeSproc:{[s;u] @[`.perm.sprocs;s;except;u];} .perm.parse:{[x] if[-10h=type x;x:enlist x]; $[10h=type x;parse x; x]} //Stored procedure wrapper function - Single point of entry .perm.executeSproc:{[s;params] user:.z.u; if[not s in key .perm.sprocs;'string[s]," is not a valid stored procedure"]; if[(not .perm.isSU user) and not user in .perm.sprocs[s]; '"You do not have permission to execute this stored procedure"]; f:$[1=count (value value s)[1];@;.]; f[s;params] } The validation logic is thus reduced to checking whether or not the user is calling the wrapper function. 
.perm.pg.user:{[user;query] em:"You only have permission to execute stored procedures: "; em,:".perm.executeSproc[sprocName;(list;of;params)]"; if[not ".perm.executeSproc"~.perm.toString first .perm.parse query;'em];value query} As a demonstration, we will define and register a stored procedure on a server and try to execute it from a client process. Server: getVWAP:{[s;ivl] select vwap:size wavg price by sym, bucket:ivl xbar time.minute from trade where sym in s } q).perm.addSproc[`getVWAP] q)//Sproc is registered, but no users have permission to execute it q).perm.sprocs getVWAP| Client: q)h:hopen`:localhost:5001:user1:password q)//try to execute a raw query q)h"select count i by sym from trade" 'You only have permission to execute stored procedures: .perm.executeSproc[sprocName;(list;of;params)] q)//try to execute a sproc that does not exist q)h".perm.executeSproc[`getVWAPP;(`AAPL;5)]" 'getVWAPP is not a valid stored procedure At this stage, the stored procedure exists on the server, but no users have permission to execute it. We grant permission to user1 on the server side: Server: q).perm.grantSproc[`getVWAP;`user1] On the client side, user1 can execute the stored procedure successfully: Client: q)h".perm.executeSproc[`getVWAP;(`AAPL;5)]" sym bucket| vwap -----------| -------- AAPL 09:00 | 440.8216 AAPL 09:05 | 440.8516 AAPL 09:10 | 440.9229 AAPL 09:15 | 440.9324 AAPL 09:20 | 440.9074 AAPL 09:25 | 440.8243 AAPL 09:30 | 440.7459 AAPL 09:35 | 440.6386 AAPL 09:40 | 440.6522 AAPL 09:45 | 440.5254 .. Powerusers¶ While the user and superuser classes have relatively simple validation logic, the poweruser class is slightly more complex. Like ordinary users, powerusers have the ability to execute stored procedures. They can also write raw, freeform queries, but we will add some additional logic here to enforce table- specific permissions, meaning a poweruser may be able to select from table A, but not from table B etc. Finally, we will enforce read-only entitlements on all powerusers. In order to properly enforce these restrictions, we need to parse and classify every query a poweruser attempts to execute. For the purposes of this paper, we will restrict this to classifying the various table operations (select , delete , insert , update and upsert ), though a fully-functional permissioning system would expand this to classify every type of query. The parse keyword in q can be used to generate a parse tree from a string query, allowing you to see its functional form. We use this to classify each type of query. For example, consider the following select statement. select open:first price, high:max price, low:min price, close:last price by sym from trade where date=2013.05.15 We can wrap this in a string and then parse it to see its functional form q)parse"select open:first price,high:max price,low:min price,close:last price by sym from trade where date=2013.05.15" ? `trade ,,(=;`date;2013.05.15) (,`sym)!,`sym `open`high`low`close!((*:;`price);(max;`price);(min;`price);(last;`pri ce)) Generally we could classify a select statement by saying: - It has 5 items - The first item is ? However, there are optional 5th and 6th arguments to a functional select statement. The fifth argument is used to select the first or last rows from a table, while the 6th argument allows you to extract rows from the table based on indexes. Our classification function for select statements is: .perm.is.select:{[x] (count[x] in 5 6 7) and (?)~first x} We make no distinction between select statements and exec statements. 
This function will return 1b for both. q)s: " open:first price, high:max price, low:min price, close:last price" q)s,: " by sym from trade where date=2013.05.15" q).perm.is.select parse "select",s 1b q).perm.is.select parse "exec",s 1b q).perm.is.select parse "update price+10 from trade" 0b While this logic successfully identifies any select statements, it’s also possible to view a table by simply typing its name. To incorporate this into our classification function, we first need to write some helper functions which will return a list of every table defined in a kdb+ session. //identify whether a variable name is a namespace .perm.isNamespace:{[x] if[-11h~type x;x:value x]; if[not 99h~type x;:0b]; (1#x)~enlist[`]!enlist(::) } //Recursively retrieve a list of every table in a namespace .perm.nsTables:{[ns] if[ns~`.;:system"a ."]; if[not .perm.isNamespace[ns];:()]; raze(` sv' ns,/:system"a ",string ns),.z.s'[` sv' ns,/:system"v ",string ns] } //Get a list of every table in every namespace .perm.allTables:{[] raze .perm.nsTables each `$".",/:string each `,key[`]} q).perm.allTables[] ,`.o.TI q)t:([]a:1 2 3) q).perm.allTables[] `t`.o.TI q).a.t:([]a:1 2 3) q).perm.allTables[] `t`.o.TI`.a.t Our select statement classification thus becomes: .perm.is.select:{[x] (any x~/: .perm.allTables[]) or (count[x] in 5 6 7) and (?)~first x } Expanding this to classify all table operations: .perm.is.select:{[x] (any x~/: .perm.allTables[]) or (count[x] in 5 6 7) and (?)~first x } .perm.is.update:{[x] (5=count x) and ((!)~first x) and 99h=type last x} .perm.is.delete:{[x] (5=count x) and ((!)~first x) and 11h=type last x} .perm.is.insert:{[x] (insert)~first x} .perm.is.upsert:{[x] (.[;();,;])~first x} We also define two utility functions which indicate whether an incoming query is a table operation, and what type of table operation it is: .perm.isTableQuery:{[x] any (value each `.perm.is,/:1_key[.perm.is])@\:x} .perm.getQueryType:{[x] f:`.perm.is,/:g:1_key[.perm.is]; first g where ((value each f)@\:x) } q).perm.getQueryType parse"select from trade" `select q).perm.getQueryType parse"update price:price%10 from trade" `update q).perm.getQueryType parse"delete from trade where size=0" `delete q).perm.getQueryType parse"`trade upsert (.z.t;`AAPL;440.1234;500000;`NYSE)" `upsert q).perm.getQueryType parse"`trade insert (.z.t;`AAPL;440.1234;500000;`NYSE)" `insert Now that the logic is in place to classify incoming table operations, we can add functionality to our permissioning system which allows us to grant table-specific and operation-specific entitlements to users. We maintain a table of table names and the types of operations each user is allowed to execute on that table. .perm.tables:([]table:`$();user:`$();permission:`$()) .perm.queries:`select`update`upsert`insert`delete; .perm.grant:{[t;u;p] if[not p in .perm.queries;'"Not a valid table operation"]; `.perm.tables insert (t;u;p); } .perm.revoke:{[t;u;p] delete from `.perm.tables where table=t,user=u,permission=p; } .perm.grantAll:{[t;u] .perm.grant[t;u;] each .perm.queries; } .perm.getUserPerms:{[t;u] exec distinct permission from .perm.tables where table=t, user=u } Then for our validation logic, we identify which table is being queried and what type of operation is being executed. We do a lookup on our permissions table to see if the user is allowed to attempt this particular operation on this particular table and if not we block the query. 
.perm.validateTableQuery:{[user;query] table:first $[-11h~type query;query;query 1]; p:.perm.getUserPerms[table;user]; qt:.perm.getQueryType[query]; if[not qt in p;'"You do not have ",string[qt]," permission on ",string[table]]; eval query } Our poweruser validation function becomes: .perm.pg.poweruser:{[user;query] if[".perm.executeSproc"~.perm.toString first .perm.parse query; :value query]; if[.perm.isTableQuery q:.perm.parse query; :.perm.validateTableQuery[user;q]] } Server: q).perm.grant[`quote;`poweruser1;`select] q).perm.grantAll[`trade;`poweruser1] q).perm.tables table user permission --------------------------- quote poweruser1 select trade poweruser1 select trade poweruser1 update trade poweruser1 upsert trade poweruser1 insert trade poweruser1 delete Client: q)h:hopen`:localhost:5001:poweruser1:password q)h"select from .perm.users" 'You do not have select permission on .perm.users q)//type table name is equivalent to select q)h".perm.users" 'You do not have select permission on .perm.users q)1#h"select from quote" time sym bid ask bsize asize ex ---------------------------------------------------- 09:00:00.863 AMZN 259.455 259.4499 100000 90000 NYSE q)1#h"quote" time sym bid ask bsize asize ex ---------------------------------------------------- 09:00:00.863 AMZN 259.455 259.4499 100000 90000 NYSE q)h"update mid:(bid+ask)%2 from quote" 'You do not have update permission on quote q)1#h"update vwap:size wavg price by sym from trade" time sym price size ex vwap ----------------------------------------------- 09:00:05.878 GOOG 875.2613 190000 BATS 876.9627 Reference: reval for read-only access Protecting proprietary code¶ kdb+ processes often contain a large amount of proprietary code that is exposed to all users that connect to it. Simply typing the name of a function will display its definition. Q scripts can be compiled into binary objects using the \_ scriptname.q system command. This creates the file scriptname.q_ . When this file is loaded into a q session, all code contained in the script is obscured. However like using the –b option to enforce write-only access, this solution hides the function definitions from every single user. Instead, we might prefer to be selective in who can and cannot see the definition of particular functions. On the kdb+ server, we maintain a list of functions/variables which we wish to obscure. To prevent users from seeing their definition, we must first analyze the various ways in which the definition of a function can be displayed in a kdb+ process. 1) Typing the name of the function q)getVWAP {[s;ivl] select vwap:size wavg price by sym, bucket:ivl xbar time.minute from trade where sym in s} 2) Using the value keyword on a function passed by reference q)value `getVWAP {[s;ivl] select vwap:size wavg price by sym, bucket:ivl xbar time.minute from trade where sym in s} 3) Using the value keyword on a function passed explicitly q)value getVWAP 0xa0a1a281a30a040005 `s`ivl `symbol$() ``trade (,`vwap)!,(wavg;`size;`price) `sym`bucket!(`sym;(k){x*y div x:$[16h=abs[@x];"j"$x;x]};`ivl;`time.minute)) ,(in;`sym;`s) ? "{[s;ivl] select vwap:size wavg price by sym, bucket:ivl xbar time.minute from trade where sym in s}" 4) Using the value keyword twice on a function passed by reference q)value value `getVWAP 0xa0a1a281a30a040005 `s`ivl `symbol$() ``trade (,`vwap)!,(wavg;`size;`price) `sym`bucket!(`sym;(k){x*y div x:$[16h=abs[@x];"j"$x;x]};`ivl;`time.minute)) ,(in;`sym;`s) ? 
"{[s;ivl] select vwap:size wavg price by sym, bucket:ivl xbar time.minute from trade where sym in s}" This is by no means an exhaustive list. For instance, (4) above could also be achieved using the following: q){@[value;x]}/[2;`getVWAP] However, for the purposes of the paper we will just use the four means described above. We then define the following which allow us to determine if a client is attempting to view restricted code: .perm.hiddenFuncs:(); .perm.hideFunction:{`.perm.hiddenFuncs?x;} .perm.hidden:{[query] vv:{(x;(value;x); (value;enlist x); (value;(value;enlist x)))} if[any .perm.parse[query] ~/: raze vv each .perm.hiddenFuncs; '"You don't have permission to view this function/variable"] } It would be beneficial to hide entire namespaces from clients. For instance, all of our validation logic is stored in the .perm namespace, and this is certainly something we would want to hide from clients. //Get all variables in a namespace .perm.nsFuncs:{[ns] if[ns~`.;:system"f ."]; if[not .perm.isNamespace[ns];:()]; raze(` sv' ns,/:system"f ",string ns),.z.s'[` sv' ns,/:system"v ",string ns] } .perm.hideNamespace:{[ns].perm.hideFunction each ns,.perm.nsFuncs[ns]} We can add this additional check to our poweruser validation function: .perm.pg.poweruser:{[user;query] if[".perm.executeSproc"~.perm.toString first .perm.parse query; :value query]; if[.perm.isTableQuery q:.perm.parse[query]; :.perm.validateTableQuery[user;q]]; .perm.hidden query; .perm.readOnly query } Server: q).perm.hideNamespace[`.perm] q).perm.hideFunction[`getVWAP] Client: q)h".perm.addUser" 'You don't have permission to view this function/variable q)h".perm.pg.poweruser" 'You don't have permission to view this function/variable q)h"getVWAP" 'You don't have permission to view this function/variable q)h"getOHLC" {[s] select open:first price, high:max price, low:min price, close:last price by sym from trade where sym in s} Code injection¶ While the approach presented in this paper so far has outlined various methods of restricting users from executing particular queries, it is not immune to circumvention. By formatting queries in certain ways, a user can bypass the parsing logic to execute a query that they do not have permission to execute. One such method would be to add a leading semi-colon to the query. q)h"select from .perm.users" 'You do not have select permission on .perm.users q)h";select from .perm.users" user | class password ----------| -------------------------------------------- user1 | user 0x9022daebd17737ba0bd9cd4732ea66b6 poweruser1| poweruser 0x1e948f5d3b634d15d91cfbaaa955e399 superuser1| superuser 0x9f233f505811d3fbdb2ee7a9bf5aa581 Parsing this query can shed some light on how it can be blocked: q)parse";select from .perm.users" ";" :: (?;`.perm.users;();0b;()) q)parse";;;;select from .perm.users" ";" :: :: :: :: (?;`.perm.users;();0b;()) To stop this form of injection, we can block any queries whose parse tree contains the generic null (::) . .perm.blockInjection:{[query] if[any (::)~/:.perm.parse query; '"Invalid Query"] } We add this function to our poweruser validation function: .perm.pg.poweruser:{[user;query] if[".perm.executeSproc"~.perm.toString first .perm.parse query; :value query]; if[.perm.isTableQuery q:.perm.parse[query]; :.perm.validateTableQuery[user;q]]; .perm.hidden query; .perm.blockInjection query; .perm.readOnly query } On the client, attempts to circumvent the restrictions are blocked. 
q)h".perm.users" 'You do not have select permission on .perm.users q)h";.perm.users" 'Invalid Query q)h";;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;.perm.users" 'Invalid Query This has the knock-on effect of blocking all niladic functions that are called with the form function[] . As a workaround, functions that need no arguments should be called as function[`] . Another potential backdoor is the stored-procedure wrapper function. Since we allow write access when executing stored procedures, users can use code injection to update global variables on the server. For instance, instead of calling h".perm.executeSproc[`getVWAP;(`AAPL;5)]" user1 could instead call h".perm.executeSproc[[.perm.addSuperuser[`user1;`password];`getVWAP];(`AAPL;5)]" user1 has thus become a superuser and now has the ability to execute anything on the server. q)h"delete from `trade" 'You can only execute stored procedures:.perm.executeSproc[sp;(x;y;z)] q)h".perm.executeSproc[[.perm.addSuperuser[`user1;`password];`getVWAP] ;(`AAPL;5)]" sym bucket| vwap -----------| -------- AAPL 09:00 | 440.8216 AAPL 09:05 | 440.8516 AAPL 09:10 | 440.9229 AAPL 09:15 | 440.9324 AAPL 09:20 | 440.9074 AAPL 09:25 | 440.8243 .. q)h"delete from `trade" `trade q)h"trade" time sym price size ex ---------------------- The solution is to analyze the arguments the user is passing to .perm.executeSproc and determine if the user is attempting to write to the process, and if so reject the query. To do this, we create a slightly modified version of .perm.readOnly which suppresses all errors except for the noupdate error (i.e. the error that is signalled when attempting to writing to the process). .perm.readOnlyNoError:{[x] res:first{[q;exe]$[exe;@[value;q;{(`error;x)}];()]}[x;]peach 10b; if[(2=count res) and `error~first res; if[last[res]~"noupdate";'"You do not have write access"]] } Our validation functions then become .perm.pg.user:{[user;query] if[not".perm.executeSproc"~.perm.toString first q:.perm.parse query; '"You can only execute stored procedures:.perm.executeSproc[sp;(x;y;z)]"]; .perm.readOnlyNoError'[(eval;)each 1_q]; value query } .perm.pg.poweruser:{[user;query] if[".perm.executeSproc"~.perm.toString first q:.perm.parse query; .perm.readOnlyNoError'[(eval;)each 1_q]; :value query]; if[.perm.isTableQuery q; :.perm.validateTableQuery[user;q]]; .perm.hidden query; .perm.blockInjection query; .perm.readOnly query } The attempt to elevate permissions is now blocked. q)h".perm.executeSproc[`getVWAP;(`AAPL;5)]" sym bucket| vwap -----------| -------- AAPL 09:00 | 440.8216 AAPL 09:05 | 440.8516 AAPL 09:10 | 440.9229 AAPL 09:15 | 440.9324 AAPL 09:20 | 440.9074 AAPL 09:25 | 440.8243 .. q)h".perm.executeSproc[[.perm.addSuperuser[`user1;`password];`getVWAP] ;(`AAPL;5)]" 'You do not have write access Restricting HTTP queries¶ So far, this paper has just discussed how to restrict and control access from a client process that connects via IPC. However, it is also possible to connect to a kdb+ process via HTTP. While the .z.pw callback works equally for both IPC and HTTP connections, HTTP queries are not routed through the .z.pg message handler. Rather, they are handled by .z.ph . As defined in q.k , .z.ph is responsible for composing the HTML webpage, executing the query, and formatting the results into a HTML table. 
q.k definition: k).z.ph:{x:uh$[@x;x;*x];$[~#x;hy[`htm]fram[$.z.f;x]("?";"?",*x:$."\\v");x~,"?";hp@{hb["?",x]x}'$."\\v";"?["~2#x;hp jx["J"$2_x]R;"?"=*x;@[{hp jx[0]R::1_x};x;he];"?"in x;@[{hy[t]@`/:tx[t:`$-3#n#x]@.(1+n:x?"?")_x};x;he];#r:@[1::;`$":",p:HOME,"/",x;""];hy[`$(1+x?".")_x]"c"$r;hn["404 Not Found";`txt]p,": not found"]} Translated to q: .z.ph:{[x] x:.h.uh $[type x;x;first x]; $[not count x; //1 .h.hy[`htm;.h.fram[string .z.f;x;("?";"?",first x:string system"v")]]; x~enlist "?"; //2 .h.hp[{.h.hb["?",x; x]}each string system"v"]; "?["~2#x; //3 .h.hp[.h.jx["J"$2_x; .h.R]]; "?"=first x; //4 @[{.h.hp[.h.jx[0;.h.R::1_x]]};x;.h.he]; "?" in x; //5 @[{.h.ht[t;] ` sv .h.tx[t:`$-3#n#x;value (1+n:x?"?")_x]};x;.h.he]; count r:@[1:;`$":",p:.h.HOME,"/",x;""]; //6 .h.hy[`$(1+x?".")_x; "c"$r]; .h.hn["404 Not Found";`txt;p,": not found"]] //7 } .z.ph consists of seven branches, each of which is numbered above. We will focus mostly on branches 1 and 4. Branch 1 is responsible for composing the HTML page, populating the left-hand pane with a list of all variables in the root namespace. It then executes the first of these variables. In an unprotected kdb+ process, there is no problem with this, since every user has access to every variable. In a permissioned system however, the user may not have permission to view this variable. Rather than having an error display each time the user accesses the URL, we can instead define a specific variable that is loaded when the URL is accessed and which all users will be able to view. .perm.h.open:"kdb+ permissions" Branch 1 then changes from this: .h.hy[`htm;.h.fram[string .z.f;x;("?";"?",first x:string system"v")]]; To this: .h.hy[`htm;.h.fram[string .z.f;x;("?";"?.perm.h.open")]]; Branch 4 is responsible for handling incoming queries (anything beginning with ? ). Inside this branch, the function .h.jx is called. This is responsible for executing the incoming query. q).h.jx k){[j;x]x:. x;$[$[.Q.qt[x];(N:(*."\\C")-4)<n:#x;0]; (" "/:ha'["?[",/:$(0;0|j-N),|&\(n-N;j+N); $`home`up`down`end],,($n),"[",($j),"]";"");()],hc'.Q.S[."\\C";j]x} The very first statement in .h.jx is x:. x , which is the k-equivalent of x:value x . To restrict the execution of queries that come into the process via HTTP, it is just a matter of replicating in .h.jx the logic we previously inserted into .z.pg . We define a new function, .h.jx2 , which takes two additional arguments: user and class. We add our validation from .z.pg here, including the override for .perm.h.open : superuser queries, and requests for .perm.h.open itself, are evaluated directly, while the other classes are routed through their validation functions. k).h.jx2:{[j;x;u;c] x:$[(c=`superuser)|x~".perm.h.open"; . x; c=`poweruser; .perm.pg.poweruser[u;x]; .perm.pg.user[u;x]]; $[$[.Q.qt[x];(N:(*."\\C")-4)<n:#x;0]; (" "/:.h.ha'["?[",/:$(0;0|j-N),|&\(n-N;j+N);$`home`up`down`end],,($n),"[",($j),"]";""); ()],.h.hc'.Q.S[."\\C";j]x} In .z.ph we then replace any references to .h.jx with .h.jx2 , adding in our extra user and class arguments. .z.ph:{[x] .perm.h.user:.z.u; .perm.h.open:"kdb+ permissions"; .perm.h.class: .perm.getClass[.perm.h.user]; x:.h.uh $[type x;x;first x]; $[not count x; //1 .h.hy[`htm;.h.fram[string .z.f;x;("?";"?.perm.h.open")]]; x~enlist "?"; //2 .h.hp[{.h.hb["?",x; x]}each string system"v"]; "?["~2#x; //3 .h.hp[.h.jx2["J"$2_x; .h.R; .perm.h.user;.perm.h.class]]; "?"=first x; //4 @[{.h.hp[.h.jx2[0;.h.R::1_x;.perm.h.user;.perm.h.class]]};x;.h.he]; "?"
in x; //5 @[{.h.ht[t;] ` sv .h.tx[t:`$-3#n#x;value (1+n:x?"?")_x]};x;.h.he]; count r:@[1:;`$":",p:.h.HOME,"/",x;""]; //6 .h.hy[`$(1+x?".")_x; "c"$r]; .h.hn["404 Not Found";`txt;p,": not found"]] //7 } Logging client activity¶ As stated previously, the restrictions described in this paper are not watertight. If so inclined, an industrious user could potentially find a workaround for the restrictions that have been imposed. The development of a permissioning system is a gradual process, with holes being patched as they are identified. To help with this process, all client activity on a kdb+ process should be logged so that if a user does manage to circumvent the system there will be a record of it. There are two separate logs we wish to maintain: - Access – who accessed the system? - Query – what commands did they execute? To store this information, we need to define two new tables. .perm.queryLog:([] time:`timestamp$(); handle:`int$(); user:`$(); class:`$(); hostname:`$(); ip:`$(); query:(); valid:`boolean$(); error:() ) .perm.accessLog:([] time:`timestamp$(); handle:`int$(); user:`$(); class:`$(); hostname:`$(); ip:`$(); state:`$();error:() ) .perm.queryLog - will keep a record of all queries entered on the system, including when they were executed, who executed them, whether they were valid queries and if not, why they failed. .perm.accessLog - keeps a record of all attempts to access the server. We then define some utility functions that will help populate the tables. .perm.getIP:{[] `$"."sv string `int$0x0 vs .z.a} .perm.logQuery:{[q;valid;err] ip:.perm.getIP[]; cls:.perm.getClass[.z.u]; `.perm.queryLog insert (.z.P;.z.w;.z.u;cls;.z.h;ip;q;valid;enlist err) } .perm.logValidQuery:{[q] .perm.logQuery[q;1b;""]} .perm.logInvalidQuery:{[q;err] .perm.logQuery[q;0b;err]} .perm.logAccess:{[hdl;u;state;msg] ip:.perm.getIP[]; cls:.perm.getClass[u]; `.perm.accessLog insert (.z.P;hdl;u;cls;.z.h;ip;state;enlist msg) } .perm.blockAccess:{[usr;msg].perm.logAccess[.z.w;usr;`block; msg]; 0b} .perm.grantAccess:{[usr] .perm.logAccess[.z.w;usr;`connect;""]; 1b} Then it is just a matter of making some adjustments to our message handlers: .z.pw , .z.pg , and .z.ph . For .z.pw , we want to log the cases where access has been denied. This happens if the requested username is not valid, or if the password supplied by the user does not match what is stored in the users table. .z.pw:{[user;pwd] $[not user in key .perm.users; .perm.blockAccess[user;"User does not exist"]; not .perm.encrypt[user;pwd]~.perm.users[user][`password]; .perm.blockAccess[user;"Password Authentication Failed"]; .perm.grantAccess user] } Client: q)h:hopen`:localhost:5001:POWERUSER1:password 'access q)h:hopen`:localhost:5001:poweruser1:PASSWORD 'access q)h:hopen`:localhost:5001:poweruser1:password Server: q)select user,hostname,ip,state,error from .perm.accessLog user hostname ip state error -------------------------------------------------------------------------- POWERUSER1 debian-image 127.0.0.1 block "User does not exist" POWERUSER1 debian-image 127.0.0.1 block "User does not exist" poweruser1 debian-image 127.0.0.1 block "Password Authentication Failed" poweruser1 debian-image 127.0.0.1 block "Password Authentication Failed" poweruser1 debian-image 127.0.0.1 connect "" Retries To resolve compatibility issues between the various versions, kdb+ attempts the IPC handshake twice if the authentication fails, so any errors will be duplicated in our logging table. To log client queries, we need to make changes to .z.pg and .z.ph . 
First, we rename .z.pg and .z.ph to .perm.zpg and .perm.zph respectively. We then use .z.pg and .z.ph as wrapper functions around our original handlers, allowing us to catch and log any errors. .z.pg:{[query] res:@[.perm.zpg; query; {[x;y].perm.logInvalidQuery[x;y];'y}[query;]]; .perm.logValidQuery query; res } When a HTTP call results in an error, it doesn’t signal the error in the usual way, so we can’t use protected evaluation to trap the error. Rather, it generates a HTTP response displaying the error. HTTP/1.1 400 Bad Request Content-Type: text/plain Connection: close Content-Length: N 'Error message This means we can use pattern matching to identify when HTTP calls result in errors. .z.ph:{[query] res:.perm.zph query; if[res like "HTTP/1.1 400 Bad Request*"; .perm.logInvalidQuery[1_first query;errMsg:5_"\r\n" vs res]; :res]; .perm.logValidQuery[1_first query]; res } Client: q)h:hopen`:localhost:5001:poweruser1:password q)h"select from .perm.users" 'You do not have select permission on .perm.users q)h"a:1" 'You do not have write access q)h"select from trade" 'You do not have select permission on trade q)h"select from quote" time sym bid ask bsize asize ex ------------------------------------------------------ 09:00:00.863 AMZN 266.0749 266.0697 100000 90000 NYSE 09:00:02.416 MSFT 34.41319 34.41266 80000 80000 BATS 09:00:03.440 AAPL 449.5043 449.4989 30000 110000 BATS 09:00:03.959 MSFT 34.41286 34.41274 110000 110000 BATS 09:00:04.340 AMZN 266.0636 266.0593 20000 150000 NYSE .. Server: q)select user,query,valid,error from .perm.queryLog user query valid error -------------------------------------------------------------------------- ------------------- poweruser1 "select from .perm.users" 0 "You do not have select permission on .perm.users" poweruser1 "a:1" 0 "You do not have write access" poweruser1 "select from trade" 0 "You do not have select permission on trade" poweruser1 "select from quote" 1 "" Conclusion¶ This paper was an introduction to permissioning in kdb+ without using LDAP or any other external entitlements system. In order to pass a security audit, access to this data should be controlled and logged to ensure that only those who are entitled to view the information are able to do so. We have described a number of methods of securing a kdb+ process. We examined the concept of splitting clients into separate groups or classes, each with different permission levels. We examined how to block write access on a kdb+ process, and how to restrict certain users from viewing proprietary code. While the system described in the paper offers a broad coverage, including blocking some forms of code injection, it is not intended to be complete. While the approach outlined in this paper solely used q code to implement a permissioning system, there is scope to extend this to incorporate external protocols such as LDAP, Kerberos or Single Sign-On, allowing kdb+ to be fully integrated with a firm’s authentication infrastructure. One should also consider out-of-the-box solutions like KX Control which, as well as handling permissioning, also delivers a well-defined framework for process workflow, scheduling, audit trails and system alerts. Author¶ Tom Martin is a senior kdb+ consultant for KX who has built kdb+ systems for some of the world’s leading financial institutions. Tom is currently based in London, where he works on FX auto-hedging and client algos at a top-tier investment bank.
Appendix E - Goofys¶

Goofys is an open-source Linux client that arbitrates between AWS S3 storage and a standard Linux AWS EC2 instance. It presents a POSIX file system layer to kdb+ using the FUSE layer. It is distributed in binary form for RHEL/CentOS and others, or can be built from source. Limitations of the POSIX support are that hard links, symlinks and appends are not supported.

| function | latency (mSec) | function | latency (mSec) |
|---|---|---|---|
| hclose hopen | 0.468 | ();,;2 3 | DNF |
| hcount | 0.405 | read1 | 0.487 |

Metadata operational latencies - mSecs (headlines)

Summary¶

Operational latency is high. The natural streaming throughput seems to hover around 130 MB/sec, or approximately a quarter of the EBS rate. The solution thrashes at 16 processes of streaming reads. Metadata latency figures are in the order of 100-200× higher than those of EBS. The compressed tests show that the bottleneck is per-thread read speed: with compressed data, the effective (decompressed) read rates improve considerably over the uncompressed case.

Appendix D – MapR-FS¶

MapR is qualified with kdb+. It offers full POSIX semantics, including through the NFS interface.

MapR is a commercial implementation of the Apache Hadoop open-source stack. Solutions such as MapR-FS were originally driven by the need to support Hadoop clusters alongside high-performance file-system capabilities. In this regard, MapR improved on the original HDFS implementation found in Hadoop distributions. MapR-FS is a core component of their stack. MapR AMIs are freely available on the Amazon marketplace.

We installed version 6.0a1 of MapR, using the CloudFormation templates published in EC2. We used the BYOL licensing model, with an evaluation enterprise license. We tested just the enterprise version of the NFS service, as we were not able to test the POSIX FUSE client at the time we went to press.

The reasons for considering something like MapR include:

- Already being familiar with and using MapR in your enterprise, so this may already be a candidate or use case when considering AWS.
- You would like to read and write HDB structured data into the same file-system service as is used to store unstructured data written/read using the HDFS RESTful APIs. This may offer the ability to consolidate or run Hadoop and kdb+ analytics independently of each other in your organization while sharing the same file-system infrastructure.

Locking semantics on files passed muster during testing, although region or file locking on shared files across multiple hosts was not thoroughly tested for the purposes of this report.

| function | latency (mSec) | function | latency (mSec) |
|---|---|---|---|
| hclose hopen | 0.447 | ();,;2 3 | 6.77 |
| hcount | 0.484 | read1 | 0.768 |

Metadata operational latencies - mSecs (headlines)

Summary¶

The operational latency of this solution is significantly lower than that seen with EFS and Storage Gateway, which is good for an underlying NFS protocol, but it is beaten by WekaIO Matrix. By way of contrast, however, this solution scales very well horizontally and vertically when looking at the accumulated throughput numbers. It also appears to do very well with random reads; however, there we are likely to be hitting server-side caches in a significant way, so mileage will vary. We plan to look at the POSIX MapR client in the future.

Appendix H - ObjectiveFS¶

ObjectiveFS is qualified with kdb+.

ObjectiveFS is a commercial Linux client/kernel package.
It arbitrates between S3 storage (each S3 bucket is presented as a FS) and each server running ObjectiveFS. ObjectiveFS supports S3-compatible object stores (e.g. IBM COS, AWS S3, Google Cloud Storage, etc) and Microsoft Azure. It presents a POSIX file system layer to kdb+. This is distinct from the EFS NFS service from AWS, which is defined independently from the S3 service. With this approach, you pay storage fees only for the S3 element, alongside a usage fee for ObjectiveFS. ObjectiveFS contains a pluggable driver, which allows for multithreaded readers to be implemented in kernel mode. This gives an increase in the concurrency of the reading of S3 data. ObjectiveFS would be installed on each kdb+ node accessing the S3 bucket containing the HDB data. A kdb+ node can be a cloud instance or a local server. ObjectiveFS is qualified with kdb+. ObjectiveFS achieves significantly better performance than EFS. It also has significantly better metadata operation latency than all of the EFS and open source S3 gateway products. It provides snapshots and client-side encryption. ObjectiveFS also scales aggregate bandwidth as more kdb+ nodes use the same S3 bucket. It scales up close to linearly for reads, as the number of reader nodes increase, since Amazon automatically partitions a bucket across service nodes, as needed to support higher request rates. The results below were generated on ObjectiveFS V5.3.1 from December 2017. Results from the newest V6.8 will be published soon. This shows that the read rates from the S3 buckets scale well when the number of nodes increases. This is more noticeable than the read rate seen when measuring the throughput on one node with varying numbers of kdb+ processes. Here it remains around the 260 MB/sec mark irrespective of the number of kdb+ processes reading. If you select the use of instance local SSD storage as a cache, this can accelerate reads of recent data. The instance local cache is written around for writes, as these go direct to the S3 bucket. But any re-reads of this data would be cached on local disk, local to that node. In other words, the same data on multiple client nodes of ObjectiveFS would each be copies of the same data. The cache may be filled and would be expired in a form of LRU expiry based on the access time of a file. For a single node, the read rate from disk cache is: | function | latency (mSec) | function | latency (mSec) | |---|---|---|---| hclose hopen | 0.162 | ();,;2 3 | 0.175 | hcount | 0.088 | read1 | 0.177 | ObjectiveFS metadata operational latencies - mSecs (headlines) Note that ObjectiveFS encrypts and compresses the S3 objects using its own private keys plus your project’s public key. This will require a valid license and functioning software for the length of time you use this solution in a production setting. Summary¶ This is a simple and elegant solution for the retention of old data on a slower, lower cost S3 archive, which can be replicated by AWS, geographically or within availability zones. It magnifies the generically very low S3 read rates by moving a “parallelizing” logic layer into a kernel driver, and away from the FUSE layer. It then multithreads the read tasks. It requires the addition of the ObjectiveFS package on each node running kdb+ and then the linking of that system to the target S3 bucket. This is a very simple process to install, and very easy to set up. 
For solutions requiring higher throughput and lower latencies, you can consider the use of their local caching on instances with internal SSD drives, allowing you to reload and cache, at runtime, the most recent and most latency-sensitive data. This cache can be pre-loaded according to a site-specific recipe, and could cover, for example, the most recent market data written back to cache, even though it was originally written to S3.

Like some of the other solutions tested, ObjectiveFS does not use the kernel block cache. Instead, it uses its own memory cache mechanism, whose size is defined either as a percentage of RAM or as a fixed amount. This allocation is made dynamically. Attention should therefore be paid to cases where a kdb+ writer (e.g. an RDB or a TP write-down) is growing its private heap space dynamically, as this could extend beyond the available space at runtime. Reducing the size of the ObjectiveFS memory cache and using the disk cache would mitigate this.

Appendix J – Quobyte¶

Quobyte is functionally qualified with kdb+.

Quobyte offers a shared-namespace solution based on either locally-provisioned or EBS-style storage. It leverages an erasure-coding model across the nodes of a Quobyte cluster.

| test | result |
|---|---|
| throughput | Multiple-thread reads saturated the ingest bandwidth of each r4.4xlarge instance running kdb+. |
| fileops attributes | Test results to follow, please check back at code.kx.com for full results. |

Appendix F - S3FS¶

S3FS is an open-source Linux client software layer that arbitrates between the AWS S3 storage layer and each AWS EC2 instance. It presents a POSIX file system layer to kdb+. S3FS uses the Linux user-land FUSE layer. By default, it uses the POSIX handle mapped as an S3 object in a one-to-one map. It does not use the kernel cache buffer, nor does it use its own caching model by default.

Due to S3’s eventual-consistency limitations, file creation with S3FS can occasionally fail. Metadata operations with this FS are slow. The append function, although supported, is not usable in a production setting due to the massive latency involved. With multiple kdb+ processes reading, the S3FS service effectively stalled.

| function | latency (mSec) | function | latency (mSec) |
|---|---|---|---|
| hclose hopen | 7.57 | ();,;2 3 | 91.1 |
| hcount | 10.18 | read1 | 12.64 |

Metadata operational latencies - mSecs (headlines)

Appendix G - S3QL¶

S3QL is perhaps the least-referenced open-source S3 gateway package, and from a vanilla RHEL 7.3 build we had to add a significant number of packages to get the utility compiled and installed. S3QL is written in Python. Significant additions are required to build S3QL, namely llfuse, Python3, Cython, Python-pip, EPEL and SQLite.

S3QL uses the Python bindings (llfuse) to the Linux user-mode kernel FUSE layer. By default, it uses the POSIX handle mapped as an S3 object in a one-to-one map. S3QL supports only one node sharing one subset (directory) tree of one S3 bucket. There is no sharing in this model.

Several code exceptions/faults were seen in Python subroutines of the mkfs.s3ql utility during initial testing, so, due to time pressures, we will revisit this later. Although the process exceptions are probably due to a build error, and plainly the product does work, this does highlight that the build process was unusually complex, owing to the number of dependencies on other open-source components. This may be a factor in the decision process for selecting solutions.
Appendix I – WekaIO Matrix¶ WekaIO Matrix is qualified with kdb+. WekaIO Matrix is a commercial product from WekaIO. Version 3.1.2 was used for testing. Matrix uses a VFS driver, enabling Weka to support POSIX semantics with lockless queues for I/O. The WekaIO POSIX system has the same runtime semantics as a local Linux file system. Matrix provides distributed data protection based on a proprietary form of erasure coding. Files are broken up into chunks and spread across nodes (or EC2 instances) of the designated Matrix cluster (minimum cluster size is six nodes = four data + two parity). The data for each chunk of the file is mapped into an erasure-coded stripe/chunk that is stored on the node’s direct-attached SSD. EC2 instances must have local SATA or NVMe based SSDs for storage. With Matrix, we would anticipate kdb+ to be run in one of two ways. Firstly, it can run on the server nodes of the Matrix cluster, sharing the same namespace and same compute components. This eliminates the need to create an independent file-system infrastructure under EC2. Secondly, the kdb+ clients can run on clients of the Matrix cluster, the client/server protocol elements being included as part of the Matrix solution, being installed on both server and client nodes. One nice feature is that WekaIO tiers its namespace with S3, and includes operator selectable tiering rules, and can be based on age of file and time in cache, and so on. The performance is at its best when running from the cluster’s erasure-coded SSD tier, exhibiting good metadata operational latency. This product, like others using the same design model, does require server and client nodes to dedicate one or more cores (vCPU) to the file-system function. These dedicated cores run at 100% of capability on that core. This needs to be catered for in your core sizing calculations for kdb+, if you are running directly on the cluster. When forcing the cluster to read from the data expired to S3, we see these results: | function | latency (mSec) | function | latency (mSec) | |---|---|---|---| hclose hopen | 0.555 | ();,;2 3 | 3.5 | hcount | 0.049 | read1 | 0.078 | WekaIO Matrix metadata operational latencies - mSecs (headlines) Summary¶ Streaming reads running in concert across multiple nodes of the cluster achieve 4.6 GB/sec transfer rates, as measured across eight nodes running kdb+, and on one file system. What is interesting here is to observe there is no decline in scaling rate between one and eight nodes. This tested cluster had twelve nodes, running within that a 4+2 data protection across these nodes, each of instance type r3.8xlarge (based on the older Intel Ivy Bridge chipset), chosen for its modest SSD disks and not for its latest CPU/mem speeds. Streaming throughput on one client node is 1029 MB/sec representing wire speed when considered as a client node. This indicates that the data is injected to the host running kdb+ from all of the Matrix nodes whilst still constructing sequential data from the remaining active nodes in the cluster, across the same network. Metadata operational latency: whilst noticeably worse than EBS, is one or two orders of magnitude better than EFS and Storage Gateway and all of the open source products. For the S3 tier, a single kdb+ thread on one node will stream reads at 555 MB/sec. This rises to 1596 MB/sec across eight nodes, continuing to scale, but not linearly. For eight processes and eight nodes throughput maximizes at a reasonable 1251 MB/sec. 
In a real-world setting, you are likely to see a blended figure improve with hits coming from the SSDs. The other elements that distinguish this solution from others are “block-like” low operational latencies for some metadata functions, and good aggregate throughputs for small random reads with kdb+.

For setup and installation, a configuration tool guides users through the cluster configuration, and it is pre-configured to build out a cluster of standard r3- or i3-series EC2 instances. The tool has options for both standard and expert users. It also provides users with performance and cost information based on the options that have been chosen.

Access a table from an MDB file via ODBC¶

Install the ODBC client driver for kdb+. Install the Microsoft ODBC driver for Microsoft Access. Using the installed driver, an MDB file can be opened with the following example command:

q)h:.odbc.open "driver=Microsoft Access Driver (*.mdb, *.accdb);dbq=C:\\mydb.mdb"

The name of the driver may differ between versions; alter the command above to reflect the driver name installed.

Use .odbc.tables to list the tables.

q).odbc.tables h
`aa`bb`cc`dd`ii`nn

Use .odbc.eval to evaluate SQL commands via ODBC.

q).odbc.eval[h;"select * from aa"]

An alternative to querying through SQL is to load the entire database into kdb+ via the .odbc.load command, where the data can then be queried using kdb+ directly, as sketched below.
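The following is a minimal sketch of that bulk-load route, assuming .odbc.load accepts the same connection string as .odbc.open above (check the installed ODBC client's documentation for the exact calling convention):

q)/ load every table from the MDB into the current kdb+ session
q).odbc.load "driver=Microsoft Access Driver (*.mdb, *.accdb);dbq=C:\\mydb.mdb"
q)\a                  / the Access tables should now be listed as in-memory q tables
`aa`bb`cc`dd`ii`nn
q)select from aa      / and can be queried with qSQL directly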
//replace callback function .finos.timer.replaceCallback:{[tid;func] if[not type[tid] in -6 -7h; '"Expecting a integer id in .finos.timer.replaceCallback."]; if[not tid in exec id from .finos.timer.priv.timers; '"invalid timer ID"]; .finos.timer.priv.validateCallback[func]; .finos.timer.priv.timers[tid;`func]:.finos.timer.priv.wrapCallbackByName func; }; //insert a new timer .finos.timer.priv.addTimer:{[func;when;period] if[not null when; when:.finos.timer.priv.toTimestamp when]; if[not null period; period:.finos.timer.priv.toTimespan period]; .finos.timer.priv.validateCallback[func]; id:.finos.timer.priv.idcount+1; if[not .finos.timer.defaultCatchUpMode in .finos.timer.priv.validCatchUpModes; '`$".finos.timer.defaultCatchUpMode has invalid value ",.Q.s1[.finos.timer.defaultCatchUpMode],", should be one of ",.Q.s1 .finos.timer.priv.validCatchUpModes; ]; t:`id`when`func`period`catchUpMode!(id;when;func;period;.finos.timer.defaultCatchUpMode); `.finos.timer.priv.timers upsert t; .finos.timer.priv.idcount+:1; .finos.timer.priv.setSystemT[]; id}; .finos.timer.priv.NANOSINMILLI:1000*1000j; .finos.timer.priv.toTimespan:{ $[-16h~t:type x; //timespan x; t in -6 -7h; //int, long = milliseconds `timespan$x*.finos.timer.priv.NANOSINMILLI; t in -17 -18 -19h; //minute, second, time `timespan$x; '`$"cannot convert to timespan: ",.Q.s1 x]}; .finos.timer.priv.toTimestamp:{ $[-12h~t:type x; //timestamp x; -15h~t; //datetime `timestamp$x; t in -6 -7 -16 -17 -18 -19h; /int, long, timespan, minute, second, time (`timestamp$.z.D)+.finos.timer.priv.toTimespan x; '`$"cannot convert to timestamp: ",.Q.s1 x]}; /// // Add a periodic timer with the specified start time. // @param func The function to run // @param when The first invocation time (timestamp) // @param period The timer period (time or timespan) // @return Timer handle .finos.timer.addPeriodicTimerWithStartTime:{[func;when;period] .finos.timer.priv.addTimer[func;when;period]}; /// // Add a timer that runs once at the specified time. If the time is in the past, the function is run immediately after returning from currently running functions. // @param func The function to run // @param when The invocation time (timestamp) // @return Timer handle .finos.timer.addAbsoluteTimer:{[func;when] .finos.timer.priv.addTimer[func;when;0Nn]}; /// // Add a timer that runs once at the specified time. If the time is in the past, the function is not run. // @param func The function to run // @param when The invocation time (timestamp) // @return Timer handle .finos.timer.addAbsoluteTimerFuture:{[func;when] $[.z.P<when:.finos.timer.priv.toTimestamp when;.finos.timer.priv.addTimer[func;when;0Nn];0N]}; /// // Add a periodic timer with the specified start time of day. If the time is in the future, it is run today, if it is in the past, it is run tomorrow. // @param func The function to run // @param startTime The first invocation time of day (time or timespan) // @param period The timer period (time or timespan) // @return Timer handle .finos.timer.addTimeOfDayTimer:{[func;startTime;period] firstTrigger:$[.z.T < startTime; .z.D+startTime; (.z.D+1)+startTime]; .finos.timer.addPeriodicTimerWithStartTime[func;firstTrigger;period]}; .finos.timer.priv.relativeToTimestamp:{.z.P+.finos.timer.priv.toTimespan x}; // Add a timer that runs once after a specified delay. 
// @param func The function to run // @param delay The time after which the timer runs (time or timespan) // @return Timer handle .finos.timer.addRelativeTimer:{[func;delay] .finos.timer.priv.addTimer[func;.finos.timer.priv.relativeToTimestamp delay;0Nn]}; // Add a periodic timer. // @param func The function to run // @param period The timer period (time or timespan) // @return Timer handle .finos.timer.addPeriodicTimer:{[func;period] .finos.timer.priv.addTimer[func;.finos.timer.priv.relativeToTimestamp period;period]}; // Remove a previously added timer. // @param tid Timer handle returned by one of the addXXTimer functions. .finos.timer.removeTimer:{[tid] if[not type[tid] in -6 -7h; '"Expecting an integer id"]; delete from `.finos.timer.priv.timers where id=tid; }; // Change the frequency of a periodic timer or make a previously one-shot timer periodic. // @param tid Timer handle returned by one of the addXXTimer functions. // @param period The new timer period (time or timespan) .finos.timer.adjustPeriodicFrequency:{[tid;newperiod] if[not type[tid] in -6 -7h; '"Expecting an integer id"]; if[not tid in exec id from .finos.timer.priv.timers; '"invalid timer ID"]; .finos.timer.priv.timers[tid;`period]:.finos.timer.priv.toTimespan newperiod; }; // Change the catch up mode of a periodic timer. // @param tid Timer handle returned by one of the addXXTimer functions. // @param mode One of the valid values for [[.finos.timer.defaultCatchUpMode]]. .finos.timer.setCatchUpMode:{[tid;mode] if[not type[tid] in -6 -7h; '"Expecting an integer id"]; if[not type[mode]=-11h; '"Expecting a symbol mode"]; if[not mode in .finos.timer.priv.validCatchUpModes; '`$"mode must be one of ",.Q.s1 .finos.timer.priv.validCatchUpModes]; if[not tid in exec id from .finos.timer.priv.timers; '"invalid timer ID"]; .finos.timer.priv.timers[tid;`catchUpMode]:mode; }; // Get the table of all timers. .finos.timer.list:{.finos.timer.priv.timers}; { //the "main" function restoreOld:0b; if[not ()~key `.z.ts; if[()~key `.finos.timer.priv.oldZts; //don't overwrite if this script is reloaded period:system"t"; restoreOld:period>0; //if period=0, timer is disabled so it shouldn't run ]; ]; if[restoreOld; .finos.timer.priv.oldZts:.z.ts; ]; //invokes expired timers, reschedules periodic timers //and resets \t for next expiration .z.ts:{ now:.z.P; toRun:`when xasc select from .finos.timer.priv.timers where when<=now; .finos.timer.priv.runCallback each 0!toRun; .finos.timer.priv.setSystemT[];}; if[restoreOld; .finos.timer.addPeriodicTimer[.finos.timer.priv.oldZts;period]; ]; }[]; ================================================================================ FILE: kdb_q_unzip_unzip.q SIZE: 25,707 characters ================================================================================ .finos.dep.include"../util/util.q" // Utilities // Read bytes from either a file or a byte vector. // @param x hsym or bytes // @param y offset // @param z length // @return z bytes from x, starting at y .finos.unzip.priv.bytes:{$[-11h=t:type x;read1(x;y;z);4h=t;x y+til z;'`type]} // Count bytes from either a file or a byte vector. // @param x hsym or bytes // @return count of bytes in x .finos.unzip.priv.bcount:{$[-11h=t:type x;hcount;count]x} // Split a subsection of data into fields. // Starts from offset and takes sum fields entries, splitting them according. // to fields. // fields is a dictionary of field names and widths. 
// @param x fields // @param y offset // @param z data // @return the split subsection of the vector .finos.unzip.priv.split:{(key x)!(get sums prev x)cut z y+til sum x} // Parse byte(s) into a "number" (i.e. byte, short, int, or long, depending on the length). // @param x byte or bytes // @return byte, short, int, or long .finos.unzip.priv.parseNum:.finos.util.compose({$[1=count x;first;0x00 sv]x};reverse); // Parse byte(s) into bits; N.B. output is reversed to make flag dicts more natural. // @param x byte or bytes // @return bool vector .finos.unzip.priv.parseBits:.finos.util.compose(reverse;0b vs;.finos.unzip.priv.parseNum); // Parse bytes into a (global) unix timestamp. // @param x bytes // @return timestamp .finos.unzip.priv.parseUnixTime:.finos.util.compose(.finos.util.timestamp_from_epoch;.finos.unzip.priv.parseNum); // Parse a range of data with a header. // parser is a function of three arguments: // Its first argument will be (data;extra); extra is passed as :: if not // included. // Its second argument will be the starting index of the record to extract. // Its third argument will be the raw headers of the record, split and // labeled according to fields. // It should return (record;next index). // parser will be called until it returns next index equal to length. // @param x (parser;fields;extra) // @param y data // @param z length // @return parsed records // @see .finos.unzip.priv.split .finos.unzip.priv.parse:{ if[2=count x; x,:(::); ]; f:{ $[ (z 1)=z 2; z; [ h:.finos.unzip.priv.split[x 1;z 1]y; a:x[0][(y;x 2);(z 1)+sum x 1]h; (raze(first z;enlist a 0);a 1;z 2)]]}; 1_first f[x][y]over(enlist(enlist`)!enlist(::);0;z)} // Constants // Flag names for central directory & file data .finos.unzip.priv.flags:.finos.util.list( `encrypted_file; `compression_option_1; `compression_option_2; `data_descriptor; `enhanced_deflation; `compressed_patched_data; `strong_encryption; `unused_7; `unused_8; `unused_9; `unused_10; `language_encoding; `reserved_12; `mask_header_values; `reserved_14; `reserved_15; ) // Flag names for internal file attributes .finos.unzip.priv.flags_iat:.finos.util.list( `text; `reserved_01; `control_field_records_precede_logical_records; `unused_03; `unused_04; `unused_05; `unused_06; `unused_07; `unused_08; `unused_09; `unused_10; `unused_11; `unused_12; `unused_13; `unused_14; `unused_15; ) // Flag names for extra field 0x5455 (extended timestamp) .finos.unzip.priv.flags_xfd_0x5455:.finos.util.list( `mtime; `atime; `ctime; `reserved_3; `reserved_4; `reserved_5; `reserved_6; `reserved_7; )
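The byte-access helpers defined at the top of this file (.finos.unzip.priv.bytes and .finos.unzip.priv.bcount) are the only place where the file-versus-vector distinction is handled; everything downstream operates on plain byte vectors. A brief illustration of the vector branch (an hsym argument would instead route through read1 and hcount):

q).finos.unzip.priv.bytes[0x504b03040a00;2;3]  / 3 bytes starting at offset 2
0x03040a
q).finos.unzip.priv.bcount 0x504b03040a00      / total byte count
6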
/ help.q 2011.06.07T13:44:00.971 \d .help DIR:TXT:()!() display:{if[not 10h=abs type x;x:string x];$[1=count i:where(key DIR)like x,"*";-1 each TXT[(key DIR)[i]];show DIR];} fetch:{if[not 10h=abs type x;x:string x];$[1=count i:where(key DIR)like x,"*";1_raze"\n",'TXT[(key DIR)[first i]];DIR]} TXT,:(enlist`adverb)!enlist( "' eachboth each"; "/ [x]over over(:+*&|,) [x]do/while"; "\\ [x]scan scan(:+*&|,) [x]do\\while"; "': [x]prior prior(:-%)"; "/: eachright sv(i:i/:I s:`/:S C:c/:L) j:0x40/:X i:0x0/:X"; "\\: eachleft vs(I:i\\:i S:`\\:s L:c\\:C) X:0x40\\:j X:0x0\\:i" ) DIR,:(enlist`adverb)!enlist`$"adverbs/operators" TXT,:(enlist`attributes)!enlist( "example overhead "; "`s#2 2 3 sorted 0 "; "`u#2 4 5 unique 16*u "; "`p#2 2 1 parted (4*u;16*u;4*u+1) "; "`g#2 1 2 grouped (4*u;16*u;4*u+1;4*n)"; ""; "The byte overheads use n(number of elements) and u(number of uniques)"; "`u is for unique lists."; "`p and `g are for lists with a lot of repetition."; ""; "`s#, `u# and `g# are preserved on append in memory (if possible)"; "only `s# is preserved on append to disk" ) DIR,:(enlist`attributes)!enlist`$"data attributes" TXT,:(enlist`cmdline)!enlist( "q [f] [-b] [-c r c] [-C r c] [-g 0|1] [-l] [-L][-o N] [-p N] [-P N] [-q]"; " [-r :H:P] [-s N] [-t N] [-T N] [-u|U F] [-w N] [-W N] [-z 0|1]"; ""; "f load script (*.q, *.k, *.s), file or directory"; ""; "-b block client write access "; "-c r c console maxRows maxCols"; "-C r c http display maxRows maxCols "; "-g 1 enable immediate garbage collect"; "-l log updates to filesystem "; "-L as -l, but sync logging"; "-o N offset hours (from GMT: affects .z.Z)"; "-p N port kdbc(/jdbc/odbc) http(html xml txt csv)"; "-p -N port multithreaded kdbc"; "-P N printdigits, default 7, 0=>all"; "-q quiet, no startup banner text"; "-r :H:P replicate from :host:port "; "-s N secondary processes for parallel execution"; "-t N timer milliseconds"; "-T N timeout seconds(applies to all client queries)"; "-u F usr:pwd file, no access above start directory"; "-u 1 disable system escapes"; "-U F as -u, but no file restrictions"; "-w N workspace MB limit (default: 2*RAM)"; "-W N week offset, default 2, 0=>saturday"; "-z B \"D\" uses 0:mm/dd/yyyy or 1:dd/mm/yyyy, default 0" ) DIR,:(enlist`cmdline)!enlist`$"command line parameters" TXT,:(enlist`data)!enlist( "char-size--type-literal--------------q---------sql--------java-------.net--- "; "b 1 1 0b boolean Boolean boolean "; "x 1 4 0x0 byte Byte byte "; "h 2 5 0h short smallint Short int16 "; "i 4 6 0 int int Integer int32 "; "j 8 7 0j long bigint Long int64 "; "e 4 8 0e real real Float single "; "f 8 9 0.0 float float Double double "; "c 1 10 \" \" char Character char"; "s . 11 ` symbol varchar String string "; "p 8 12 dateDtimespan timestamp"; "m 4 13 2000.01m month"; "d 4 14 2000.01.01 date date Date "; "z 8 15 dateTtime datetime timestamp Timestamp DateTime"; "n 8 16 0D00:00:00.000000000 timespan"; "u 4 17 00:00 minute"; "v 4 18 00:00:00 second"; "t 4 19 00:00:00.000 time time Time TimeSpan "; "* 4 20.. `s$` enum"; " 98 table"; " 99 dict"; " 100 lambda"; " 101 unary prim"; " 102 binary prim"; " 103 ternary(operator)"; " 104 projection"; " 105 composition"; " 106 f'"; " 107 f/"; " 108 f\\"; " 109 f':"; " 110 f/:"; " 111 f\\:"; " 112 dynamic load"; ""; "the nested types are 77+t (e.g. 78 is boolean. 96 is time.)"; ""; "`char$data `CHAR$string"; ""; "The int, float, char and symbol literal nulls are: 0N 0n \" \" `."; "The rest use type extensions, e.g. 0Nd. 
No null for boolean or byte."; "0Wd 0Wz 0Wt placeholder infinite dates/times/datetimes (no math)"; ""; "dict:`a`b!.. table:([]x:..;y:..) or +`x`y!.."; "date.(datetime dd mm month timestamp uu week year) / .z.d"; "datetime.(date dd hh minute mm month second ss time timespan timestamp uu week year) / .z.z"; "time.(hh minute mm second ss timespan uu) milliseconds=time mod 1000 / .z.t"; "timespan.(hh minute mm second ss time uu) / .z.n"; "timestamp.(date datetime dd hh minute mm month second ss time timespan uu week year) / .z.p" ) DIR,:(enlist`data)!enlist`$"data types" TXT,:(enlist`define)!enlist( "Dyad------------D-Amend---------Monad-----------M-amend------"; "v:y .[`v;();:;y]"; "v+:y .[`v;();+;y] v-: .[`v;();-:]"; "v[i]+:y .[`v;,i;+;y] v[i]-: .[`v;,i;-:]"; "v[i;j]+:y .[`v;(i;j);+;y] v[i;j]-: .[`v;(i;j);-:]"; ""; "@[v;i;d;y] is .[v;,i;d;y] @[v;i;m] is .[v;,i;m]"; ""; "{[a;b;c] ...} function definition"; " x y z default parameters"; " d:... local variable"; " d::.. global variable "; ""; "control(debug: ctrl-c stop)"; " $[c;t;f] conditional"; " ?[c;t;f] boolean conditional"; " if[c; ... ]"; " do[n; ... ]"; " while[c; ...]"; " / ... comment"; " : ... return(resume)"; " ' ... signal"; ""; "trap signals with .[f;(x;y;z);v] and @[f;x;v]"; "or .[f;(x;y;z);g] and @[f;x;g] "; "where v is the value to be returned on error "; "or g is a monadic function called with error text" ) DIR,:(enlist`define)!enlist`$"assign, define, control and debug" TXT,:(enlist`dotz)!enlist( ".z.a ip-address "; ".z.b dependencies (more information than \\b)"; ".z.d utc date"; ".z.D local date"; ".z.exit callback on exit "; ".z.f startup file"; ".z.h hostname"; ".z.i pid"; ".z.k kdb+ releasedate "; ".z.K kdb+ major version"; ".z.l license information (;expirydate;updatedate;;;)"; ".z.n utc timespan "; ".z.N local timespan"; ".z.o OS "; ".z.p utc timetamp"; ".z.P local timetamp "; ".z.pc[h] close, h handle (already closed)"; ".z.pg[x] get"; ".z.ph[x] http get"; ".z.pi[x] input (qcon)"; ".z.po[h] open, h handle "; ".z.pp[x] http post"; ".z.ps[x] set"; ".z.pw[u;p] validate user and password"; ".z.q in quiet mode (no console)"; ".z.s self, current function definition"; ".z.t utc time"; ".z.T local time"; ".z.ts[x] timer expression (called every \\t)"; ".z.u userid "; ".z.vs[v;i] value set"; ".z.w handle (0 for console, handle to remote for KIPC)"; ".z.W openHandles!vectorOfMessageSizes (oldest first)"; ".z.x command line parameters (argc..)"; ".z.z utc timestamp"; ".z.Z local timestamp" ) DIR,:(enlist`dotz)!enlist`$".z locale contents " TXT,:(enlist`errors)!enlist( "runtime errors"; "error--------example-----explanation"; "access attempt to read files above directory, run system commands or failed usr/pwd"; "accp tried to accept an incoming tcp/ip connection but failed to do so"; "arch attempt to load file of wrong endian format"; "assign cos:12 attempt to reuse a reserved word"; "badtail incomplete transaction at end of logfile, get good (count;length) with -11!(-2;`:file)"; "cast `sym$`xxx attempt to enumerate invalid value (`xxx not in sym in example) "; "conn too many incoming connections (1022 max)"; "d8 the log had a partial transaction at the end but q couldn't truncate the file."; "domain !-1 out of domain"; "elim more than 57 distinct enumerations "; "from select a b badly formed select statement"; "glim `g# limit, kdb+ currently limited to 99 concurrent `g#'s "; "hwr handle write error, can't write inside a peach"; "insert attempt to insert a record with a key that already exists "; "length ()+!1 incompatible 
lengths"; "limit 0W#2 tried to generate a list longer than 2,000,000,000 "; "loop a::a dependency or transitive closure loop"; "mismatch columns that can't be aligned for R,R or K,K "; "Mlim more than 999 nested columns in splayed tables"; "nyi not yet implemented"; "noamend can't change global state inside an amend"; "noupdate update not allowed when using negative port number"; "os operating system error"; "parse invalid syntax"; "part something wrong with the partitions in the hdb"; "pl peach can't handle parallel lambda's (2.3 only)"; "Q7 nyi op on file nested array"; "rank +[2;3;4] invalid rank or valence"; "rb encountered a problem whilst doing a blocking read"; "s-fail `s#2 1 cannot apply `s# to data (not ascending values) "; "splay nyi op on splayed table"; "stack {.z.s[]}[] ran out of stack space"; "stop \t user interrupt(ctrl-c) or time limit (-T)"; "stype '42 invalid type used to signal"; "trunc the log had a partial transaction at the end but q couldn't truncate the file."; "type til 2.2 wrong type"; "u-fail `u#1 1 cannot apply `u# to data (not unique values)"; "unmappable when saving partitioned data, each column must be mappable"; "value no value"; "vd1 attempted multithread update"; "view trying to re-assign a view to something else"; "wsfull malloc failed. ran out of swap (or addressability on 32bit). or hit -w limit."; "XXX value error (XXX undefined) "; ""; "system (file and ipc) errors"; "XXX:YYY XXX is from kdb+, YYY from the OS"; "XXX from addr, close, conn, p(from -p), snd, rcv or (invalid) filename (read0`:invalidname.txt)"; ""; "parse errors (execute or load)"; "[/(/{/]/)/}/\" open ([{ or \""; "branch a branch(if;do;while;$[.;.;.]) more than 255 byte codes away"; "char invalid character"; "constants too many constants (max 96)"; "globals too many global variables (32 max)"; "locals too many local variables (24 max)"; "params too many parameters (8 max)"; ""; "license errors"; "core too many cores"; "exp expiry date passed"; "host unlicensed host"; "k4.lic k4.lic file not found, check QHOME/QLIC"; "os unlicensed OS"; "srv attempt to use client-only license in server mode "; "upd attempt to use version of kdb+ more recent than update date"; "user unlicensed user"; "wha invalid system date" ) DIR,:(enlist`errors)!enlist`$"error messages" TXT,:(enlist`save)!enlist( "tables can be written as a single file or spread across a directory, e.g."; "`:trade set x / write as single file "; "`:trade/ set x / write across a directory "; "trade:get`:trade / read "; "trade:get`:trade/ / map columns on demand"; ""; "tables splayed across a directory must be fully enumerated(no varchar) and not keyed." 
) DIR,:(enlist`save)!enlist`$"save/load tables" TXT,:(enlist`syscmd)!enlist( "\\1 filename redirect stdout"; "\\2 filename redirect stderr"; "\\a tables"; "\\b views (see also .z.b)"; "\\B invalid dependencies"; "\\c [23 79] console height,width"; "\\C [36 2000] browser height,width"; "\\d [d] q directory [go to]"; "\\e [0|1] error trap clients"; "\\f [d] functions [directory]"; "\\l [f] load script (or dir:files splays parts scripts)"; "\\o [0N] offset from gmt"; "\\p [i] port (0 turns off)"; "\\P [7] print digits(0-all)"; "\\r old new unix mv "; "\\s number of secondary processes (query only) "; "\\S [-314159] seed"; "\\t [i] timer [x] milliseconds (1st fire after delay)"; "\\t expr time expression "; "\\T [i] timeout [x] seconds "; "\\u reload the user:pswd file specified with -u"; "\\v [d] variables [directory]"; "\\w workspace(M0 sum of allocs from M1 bytes;M1 mapped anon bytes;M2 peak of M1;M3 mapped files bytes)"; " (max set by -w, 0 => unlimited) - see .Q.w[]"; "\\w 0 count symbols defined, symbol space used (bytes)"; "\\W [2] week offset(sat..fri)"; "\\x .z.p? expunge .z.p? value (ie reset to default)"; "\\z [0] \"D\"$ uses mm/dd/yyyy or dd/mm/yyyy"; "\\cd [d] O/S directory [go to]"; "\\_ is readonly (cmdline -b)"; "\\_ f.q create runtime script f.q_ from f.q (or f.k_ from f.k) "; "\\[other] O/S execute"; "\\\\ exit"; "\\ (escape suspension, or switch language mode)"; "ctrl-c (stop)" ) DIR,:(enlist`syscmd)!enlist`$"system commands" TXT,:(enlist`temporal)!enlist( "`timestamp$x ~ 2009.11.05D20:39:35.614334000 ~ \"p\"$x ~ x.timestamp"; "`datetime$x ~ 2009.11.05T20:39:35.614 ~ \"z\"$x ~ x.datetime"; "`year$x ~ 2009 ~ x.year"; "`month$x ~ 2009.11m ~ \"m\"$x ~ x.month"; "`mm$`date$x ~ 11 ~ x.mm"; "`week$x ~ 2009.11.02 ~ x.week"; "`date$x ~ 2009.11.05 ~ \"d\"$x ~ x.date"; "`dd$x ~ 5 ~ x.dd"; "`hh$x ~ 20 ~ x.hh"; "`minute$x ~ 20:39 ~ \"u\"$x ~ x.minute"; "`mm$`time$x ~ 39 ~ x.mm"; "`uu$x ~ 39 ~ x.uu"; "`second$x ~ 20:39:35 ~ \"v\"$x ~ x.second"; "`ss$x ~ 35 ~ x.ss"; "`time$x ~ 20:39:35.614 ~ \"t\"$x ~ x.time"; "`timespan$x ~ 0D20:39:35.614334000 ~ \"n\"$x ~ x.timespan" ) DIR,:(enlist`temporal)!enlist`$"temporal - date & time casts" TXT,:(enlist`verbs)!enlist( "verb-infix-------prefix"; "s:x gets :x idem"; "i+i plus +l flip"; "i-i minus -i neg"; "i*i times *l first"; "f%f divide %f reciprocal"; "a&a and &B where"; "a|a or |l reverse"; "a^a fill ^a null"; "a=a equal =l group"; "a<a less <l iasc <s(hopen)"; "a>a more >l idesc >i(hclose)"; "c$a cast s$ $a string h$a \"C\"$C `$C"; "l,l cat ,x enlist"; "i#l take #l count"; "i_l drop _a floor sc(lower)"; "x~x match ~a not ~s(hdelete)"; "l!l xkey !d key !i (s;();S):!s"; "A?a find ?l distinct rand([n]?bxhijefcs)"; "x@i at s@ @x type trap amend(:+-*%&|,)"; "x.l dot s. .d value .sCL trap dmend(:+-*%&|,)"; "A bin a;a in A;a within(a;a);sC like C;sC ss sC"; "{sqrt log exp sin cos tan asin acos atan}f"; "last sum prd min max avg wsum wavg xbar"; "exit getenv"; ""; "dependency::expression (when not in function definition)" ) DIR,:(enlist`verbs)!enlist`$"verbs/functions"
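Once help.q is loaded, the directory built up above can be browsed from the q prompt. A brief illustrative session, grounded in the display and fetch definitions at the top of the file (printed output elided here):

q).help.display`         / no unique match, so the topic directory is shown
q).help.display`errors   / print the error-message reference to the console
q)s:.help.fetch`syscmd   / the same text returned as a single string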
// Test trade batches batch1:(10?`4;10?100.0;10?100i;10#0b;10?.Q.A;10?.Q.A;10#`buy); batch2:(1?`4;1?100.0;1?100i;1#0b;1?.Q.A;1?.Q.A;1#`buy); // Local trade table schema trade:flip `time`sym`price`size`stop`cond`ex`side!"PSFIBCCS" $\: (); // Local upd function upd:{[t;x] t insert x}; ================================================================================ FILE: TorQ_tests_stp_chainedeod_settings.q SIZE: 669 characters ================================================================================ // IPC connection parameters .servers.CONNECTIONS:`rdb`segmentedtickerplant`segmentedchainedtickerplant; .servers.USERPASS:`admin:admin; // Paths to process CSV and test STP log directory processcsv:getenv[`KDBTESTS],"/stp/chainedeod/process.csv"; tplogdir:getenv[`KDBTPLOG]; // Count number of tplog dirs for a given proc // eg counttplogs[`sctptest1] counttplogs:{[procname;tplogdir] sum system["ls ",tplogdir] like string[procname],"*" }[;tplogdir]; // Function projections (using functions from helperfunctions.q) startproc:startorstopproc["start";;processcsv]; stopproc:startorstopproc["stop";;processcsv]; // Flag to save tests to disk .k4.savetodisk:1b; ================================================================================ FILE: TorQ_tests_stp_chainedeodlognone_settings.q SIZE: 846 characters ================================================================================ // IPC connection parameters .servers.CONNECTIONS:`rdb`segmentedtickerplant`segmentedchainedtickerplant; .servers.USERPASS:`admin:admin; // Paths to process CSV and test STP log directory processcsv:getenv[`KDBTESTS],"/stp/chainedeod/process.csv"; tplogdir:getenv[`KDBTPLOG]; // Test updates testtrade:((5#`GOOG),5?`4;10?100.0;10?100i;10#0b;10?.Q.A;10?.Q.A;10#`buy); testquote:(10?`4;(5?50.0),50+5?50.0;10?100.0;10?100i;10?100i;10?.Q.A;10?.Q.A;10#`3); // Count number of tplog dirs for a given proc // eg counttplogs[`sctptest1] counttplogs:{[procname;tplogdir] sum system["ls ",tplogdir] like string[procname],"*" }[;tplogdir]; // Function projections (using functions from helperfunctions.q) startproc:startorstopproc["start";;processcsv]; stopproc:startorstopproc["stop";;processcsv]; // Flag to save tests to disk .k4.savetodisk:1b; ================================================================================ FILE: TorQ_tests_stp_chainedlogmodes_settings.q SIZE: 766 characters ================================================================================ // IPC connection parameters .servers.CONNECTIONS:`rdb`segmentedtickerplant`segmentedchainedtickerplant; .servers.USERPASS:`admin:admin; // Test trade batches testtrade:((5#`GOOG),5?`4;10?100.0;10?100i;10#0b;10?.Q.A;10?.Q.A;10#`buy); testquote:(10?`4;(5?50.0),50+5?50.0;10?100.0;10?100i;10?100i;10?.Q.A;10?.Q.A;10#`3); // Paths to process CSV and test STP log directory processcsv:getenv[`KDBTESTS],"/stp/chainedlogmodes/process.csv"; testlogdb:"logmodeslog"; // Function projections (using functions from helperfunctions.q) startproc:startorstopproc["start";;processcsv]; stopproc:startorstopproc["stop";;processcsv]; // End period function to send to subs endp:{[x;y;z] .tst.endp:@[{1+value x};`.tst.endp;0]}; // Flag to save tests to disk .k4.savetodisk:1b; ================================================================================ FILE: TorQ_tests_stp_chainedperiodend_settings.q SIZE: 596 characters ================================================================================ // IPC connection parameters 
.servers.CONNECTIONS:`rdb`segmentedtickerplant`segmentedchainedtickerplant; .servers.USERPASS:`admin:admin; // Paths to process CSV and test STP log directory processcsv:getenv[`KDBTESTS],"/stp/chainedperiodend/process.csv"; testlogdb:"testlog"; // Test updates testtrade:((5#`GOOG),5?`4;10?100.0;10?100i;10#0b;10?.Q.A;10?.Q.A;10#`buy); testquote:(10?`4;(5?50.0),50+5?50.0;10?100.0;10?100i;10?100i;10?.Q.A;10?.Q.A;10#`3); // End period function to send to subs endp:{[x;y;z] .tst.endp:@[{1+value x};`.tst.endp;0]}; // Flag to save tests to disk .k4.savetodisk:1b; ================================================================================ FILE: TorQ_tests_stp_chainedrecovery_settings.q SIZE: 745 characters ================================================================================ // IPC connection parameters .servers.CONNECTIONS:`rdb`segmentedtickerplant`tickerplant`segmentedchainedtickerplant; .servers.USERPASS:`admin:admin; // Paths to process CSV and test STP log directory processcsv:getenv[`KDBTESTS],"/stp/chainedrecovery/process.csv"; stptestlogs:getenv[`KDBTESTS],"/stp/chainedrecovery/testlog"; stporiglogs:getenv[`KDBTPLOG]; teststpdb:"teststplog"; testsctpdb:"testsctplog"; // Test updates testtrade:((5#`GOOG),5?`4;10?100.0;10?100i;10#0b;10?.Q.A;10?.Q.A;10#`buy); testquote:(10?`4;(5?50.0),50+5?50.0;10?100.0;10?100i;10?100i;10?.Q.A;10?.Q.A;10#`3); // Function projections (using functions from helperfunctions.q) startproc:startorstopproc["start";;processcsv]; stopproc:startorstopproc["stop";;processcsv]; ================================================================================ FILE: TorQ_tests_stp_chainedstp_settings.q SIZE: 1,036 characters ================================================================================ // IPC connection parameters .servers.CONNECTIONS:`rdb`segmentedtickerplant`segmentedchainedtickerplant; .servers.USERPASS:`admin:admin; // Test trade batches testtrade:((5#`GOOG),5?`4;10?100.0;10?100i;10#0b;10?.Q.A;10?.Q.A;10#`buy); testquote:(10?`4;(5?50.0),50+5?50.0;10?100.0;10?100i;10?100i;10?.Q.A;10?.Q.A;10#`3); // Paths to process CSV and test STP log directory processcsv:getenv[`KDBTESTS],"/stp/chainedstp/process.csv"; stptestlogs:getenv[`KDBTESTS],"/stp/chainedstp/testlog"; stporiglogs:getenv[`KDBTPLOG]; teststpdb:"teststpdb"; testsctpdb:"testsctpdb"; // Local trade table schema trade:flip `time`sym`price`size`stop`cond`ex`side!"PSFIBCCS" $\: (); quote:flip `time`sym`bid`ask`bsize`asize`mode`ex`src!"PSFFJJCCS" $\: (); // Local upd and error log function upd:{[t;x] t insert x}; upderr:{[t;x].tst.err:x}; // Function projections (using functions from helperfunctions.q) startproc:startorstopproc["start";;processcsv]; stopproc:startorstopproc["stop";;processcsv]; // Flag to save tests to disk .k4.savetodisk:1b; ================================================================================ FILE: TorQ_tests_stp_custommode_settings.q SIZE: 893 characters ================================================================================ // IPC connection parameters .servers.CONNECTIONS:`rdb`segmentedtickerplant; .servers.USERPASS:`admin:admin; // Paths to process CSV and test/default STP log directory tstlogs:getenv[`KDBTESTS],"/stp/custommode/tstlogs"; deflogs:getenv[`KDBTPLOG]; // Trade and quote schemas trade:flip `time`sym`price`size`stop`cond`ex`side!"PSFIBCCS" $\: (); quote:flip `time`sym`bid`ask`bsize`asize`mode`ex`src!"PSFFJJCCS" $\: (); // Define upd functions for local tables and errors upd:{[t;x] t insert x}; upderr:{[t;x] .tst.err:x}; // Couple of 
pre-defined strings db:"stp1_",string .z.d; proc:"stp1_"; liketabs:string[`segmentederrorlogfile`periodic`quote`stpmeta`heartbeat] ,\: "*"; liketabs:@[liketabs;0 1 2 4;{y,x}[;proc]]; // Test trade and quote updates testtrade:(10?`4;10?100.0;10?100i;10#0b;10?.Q.A;10?.Q.A;10#`buy); testquote:(10?`4;10?100.0;10?100.0;10?100i;10?100i;10?.Q.A;10?.Q.A;10#`3); ================================================================================ FILE: TorQ_tests_stp_eod_settings.q SIZE: 373 characters ================================================================================ // IPC connection parameters .servers.CONNECTIONS:`rdb`segmentedtickerplant; .servers.USERPASS:`admin:admin; // Paths to process CSV and test STP log directory processcsv:getenv[`KDBTESTS],"/stp/eod/process.csv"; // Function projections (using functions from helperfunctions.q) startproc:startorstopproc["start";;processcsv]; stopproc:startorstopproc["stop";;processcsv]; ================================================================================ FILE: TorQ_tests_stp_exit_settings.q SIZE: 419 characters ================================================================================ // IPC connection parameters .servers.CONNECTIONS:`rdb`segmentedtickerplant; .servers.USERPASS:`admin:admin; // Paths to process CSV, strings to move the sub CSV around processcsv:getenv[`KDBTESTS],"/stp/exit/process.csv"; tstlogs:"stpex"; tstlogsdir:hsym `$getenv[`KDBTPLOG],"/",tstlogs,"_",string .z.d; // Function projections (using functions from helperfunctions.q) startproc:startorstopproc["start";;processcsv]; ================================================================================ FILE: TorQ_tests_stp_housekeeping_settings.q SIZE: 461 characters ================================================================================ // IPC connection parameters .servers.CONNECTIONS:`housekeeping; .servers.USERPASS:`admin:admin; // Paths testlogs:getenv[`KDBTESTS],"/stp/housekeeping/logs/logs"; copylogs:getenv[`KDBTESTS],"/stp/housekeeping/logs/copy"; copytar:getenv[`KDBTESTS],"/stp/housekeeping/logs/copy.tar.gz"; extrtar:getenv[`KDBTESTS],"/stp/housekeeping/logs/home"; copystr:"cp -r ",testlogs," ",copylogs; tarstr:"tar -xvf ",copytar," -C ",getenv[`KDBTESTS],"/stp/housekeeping/logs"; ================================================================================ FILE: TorQ_tests_stp_idbdefault_settings.q SIZE: 720 characters ================================================================================ // IPC connection parameters .servers.CONNECTIONS:`segmentedtickerplant`wdb`hdb`idb`gateway`sort; .servers.USERPASS:`admin:admin; // Paths to process CSV and test STP log directory processcsv:getenv[`KDBTESTS],"/stp/idbdefault/process.csv"; wdbdir:hsym `$getenv[`KDBTESTS],"/stp/idbdefault/tempwdb/"; hdbdir:hsym `$getenv[`KDBTESTS],"/stp/idbdefault/temphdb/"; testlogdb:"testlog"; // Test updates testtrade:((5#`GOOG),5?`4;10?100.0;10?100i;10#0b;10?.Q.A;10?.Q.A;10#`buy); testquote:(10?`4;(5?50.0),50+5?50.0;10?100.0;10?100i;10?100i;10?.Q.A;10?.Q.A;10#`3); // Function projections (using functions from helperfunctions.q) startproc:startorstopproc["start";;processcsv]; stopproc:startorstopproc["stop";;processcsv]; ================================================================================ FILE: TorQ_tests_stp_idbpartbyenum_settings.q SIZE: 729 characters ================================================================================ // IPC connection parameters .servers.CONNECTIONS:`segmentedtickerplant`wdb`hdb`idb`gateway`sort; 
.servers.USERPASS:`admin:admin; // Paths to process CSV and test STP log directory processcsv:getenv[`KDBTESTS],"/stp/idbpartbyenum/process.csv"; wdbdir:hsym `$getenv[`KDBTESTS],"/stp/idbpartbyenum/tempwdb/"; hdbdir:hsym `$getenv[`KDBTESTS],"/stp/idbpartbyenum/temphdb/"; testlogdb:"testlog"; // Test updates testtrade:((5#`GOOG),5?`4;10?100.0;10?100i;10#0b;10?.Q.A;10?.Q.A;10#`buy); testquote:(10?`4;(5?50.0),50+5?50.0;10?100.0;10?100i;10?100i;10?.Q.A;10?.Q.A;10#`3); // Function projections (using functions from helperfunctions.q) startproc:startorstopproc["start";;processcsv]; stopproc:startorstopproc["stop";;processcsv]; ================================================================================ FILE: TorQ_tests_stp_periodend_settings.q SIZE: 490 characters ================================================================================ // IPC connection parameters .servers.CONNECTIONS:`rdb`segmentedtickerplant; .servers.USERPASS:`admin:admin; // Paths to process CSV and test STP log directory processcsv:getenv[`KDBTESTS],"/stp/periodend/process.csv";
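The endp callbacks defined in the settings files above all use the same protected-evaluation idiom to maintain a call counter in .tst.endp. A minimal sketch of that idiom in isolation (the variable name is taken from the settings above):

q)@[{1+value x};`.tst.endp;0]   / `.tst.endp undefined: value signals, so the trap returns 0
0
q).tst.endp:0
q)@[{1+value x};`.tst.endp;0]   / once defined, each call returns the incremented count
1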
// @private // @kind function // @category optimizationUtility // @desc Calculate the vector norm, used in calculation of the gradient // norm at position k. Default behaviour is to use the maximum value of the // gradient, this can be overwritten by a user, this is in line with the // default python implementation. // @param gradVals {number[]} Vector of calculated gradient values // @param ord {long} Order of norm (0W = max; -0W = min) // @return {float} Gradient norm based on the input gradient i.vecNorm:{[gradVals;ord] if[-7h<>type ord;'"ord must be +/- infinity or a long atom"]; $[0W~ord;max abs gradVals; -0W~ord;min abs gradVals; sum[abs[gradVals]xexp ord]xexp 1%ord ] } // Stopping conditions // @private // @kind function // @category optimizationUtility // @desc Evaluate if the optimization function has reached a condition // which should result in the optimization algorithm being stopped // @param dict {dictionary} Optimization function returns // @param params {dictionary} Parameters controlling non default optimization // behaviour // @return {boolean} Indication as to if the optimization has met one of it's // stopping conditions i.stopOptimize:{[dict;params] // Is the function evaluation at k an improvement on k-1? check1:dict[`fk]<dict`fkPrev; // Has x[k] returned a non valid return? check2:not any null dict`xk; // Have the maximum number of iterations been met? check3:params[`optimIter]>dict`idx; // Is the gradient at position k below the accepted tolerance check4:params[`gtol]<dict`gnorm; check1&check2&check3&check4 } // @private // @kind function // @category optimizationUtility // @desc Evaluate if the wolfe condition search has reached a condition // which should result in the optimization algorithm being stopped // @param dict {dictionary} Optimization function returns // @param params {dictionary} Parameters controlling non default optimization // behaviour // @return {boolean} Indication as to if the optimization has met one of it's // stopping conditions i.stopWolfe:{[dict;params] dict[`idx]<params`wolfeIter } // @private // @kind function // @category optimizationUtility // @desc Evaluate if the alpha condition 'zoom' has reached a condition // which should result in the optimization algorithm being stopped // @param dict {dictionary} Optimization function returns // @param params {dictionary} Parameters controlling non default optimization // behaviour // @return {boolean} Indication as to if the optimization has met one of it's // stopping conditions i.stopZoom:{[dict;params] dict[`idx]<params`zoomIter } // Function + derivative evaluation at x[k]+p[k]*alpha[k] // @private // @kind function // @category optimizationUtility // @desc Evaluate the objective function at the position x[k]+step size // @param func {fn} The objective function to be minimized // @param pk {float} Step direction // @param alpha {float} Size of the step to be applied // @param xk {number[]} Parameter values at position k // @param args {dictionary|number[]} Function arguments that do not change per // iteration // @returns {float} Function evaluated at at the position x[k] + step size i.phi:{[func;pk;alpha;xk;args] xk+:alpha*pk; i.funcEval[func;xk;args] } // @private // @kind function // @category optimizationUtility // @desc Evaluate the derivative of the objective function at // the position x[k] + step size // @param func {fn} The objective function to be minimized // @param eps {float} The absolute step size used for numerical approximation // of the jacobian via forward differences // 
@param pk {float} Step direction // @param alpha {float} Size of the step to be applied // @param xk {number[]} Parameter values at position k // @param args {dictionary|number[]} Function arguments that do not change per // iteration // @returns {dictionary} Gradient and value of scalar derivative i.derPhi:{[func;eps;pk;alpha;xk;args] // Increment xk by a small step size xk+:alpha*pk; // Get gradient at the new position gval:i.grad[func;xk;args;eps]; derval:gval mmu pk; `grad`derval!(gval;derval) } // Minimization functions // @private // @kind function // @category optimizationUtility // @desc Find the minimizing solution for a cubic polynomial which // passes through the points (a,fa), (b,fb) and (c,fc) with a derivative of // the objective function calculated as fpa. This follows the python // implementation outlined here // https://github.com/scipy/scipy/blob/v1.5.0/scipy/optimize/linesearch.py#L482 // @param a {float} Position a // @param fa {float} Objective function evaluated at a // @param fpa {float} Derivative of the objective function evaluated at 'a' // @param b {float} Position b // @param fb {float} Objective function evaluated at b // @param c {float} Position c // @param fc {float} Objective function evaluated at c // @returns {number[]} Minimized parameter set as a solution for the cubic // polynomial i.cubicMin:{[a;fa;fpa;b;fb;c;fc] bDiff:b-a; cDiff:c-a; denom:(bDiff*cDiff)xexp 2*(bDiff-cDiff); d1:2 2#0f; d1[0]:(1 -1)*xexp[;2]each(bDiff;cDiff); d1[1]:(-1 1)*xexp[;3]each(cDiff;bDiff); AB:d1 mmu(fb-fa-fpa*bDiff;fc-fa-fpa*cDiff); AB%:denom; radical:AB[1]*AB[1]-3*AB[0]*fpa; a+(neg[AB[1]]+sqrt(radical))%(3*AB[0]) } // @private // @kind function // @category optimizationUtility // @desc Find the minimizing solution for a quadratic polynomial which // passes through the points (a,fa) and (b,fb) with a derivative of the // objective function calculated as fpa. 
This follows the python // implementation outlined here // https://github.com/scipy/scipy/blob/v1.5.0/scipy/optimize/linesearch.py#L516 // @param a {float} Position a // @param fa {float} Objective function evaluated at a // @param fpa {float} Derivative of the objective function evaluated at a // @param b {float} Position b // @param fb {float} Objective function evaluated at b // @returns {number[]} Minimized parameter set as a solution for the quadratic // polynomial i.quadMin:{[a;fa;fpa;b;fb] bDiff:b-a; B:(fb-fa-fpa*bDiff)%(bDiff*bDiff); a-fpa%(2*B) } // Gradient + function evaluation // @private // @kind function // @category optimizationUtility // @desc Calculation of the gradient of the objective function for all // parameters of x incremented individually by epsilon // @param func {fn} The objective function to be minimized // @param xk {number[]} Parameter values at position k // @param args {dictionary|number[]} Function arguments that do not change per // iteration // @param eps {float} The absolute step size used for numerical approximation // of the jacobian via forward differences // @returns {dictionary} Gradient of function at position k i.grad:{[func;xk;args;eps] fk:i.funcEval[func;xk;args]; i.gradEval[fk;func;xk;args;eps]each til count xk } // @private // @kind function // @category optimizationUtility // @desc Calculation of the gradient of the objective function for a // single parameter set x where one of the indices has been incremented by // epsilon // @param func {fn} The objective function to be minimized // @param xk {number[]} Parameter values at position k // @param args {dictionary|number[]} Function arguments that do not change per // iteration // @param eps {float} The absolute step size used for numerical approximation // of the jacobian via forward differences // @returns {dictionary} Gradient of function at position k with an individual // variable x incremented by epsilon i.gradEval:{[fk;func;xk;args;eps;idx] if[(::)~fk;fk:i.funcEval[func;xk;args]]; // Increment function optimisation values by epsilon xk[idx]+:eps; // Evaluate the gradient (i.funcEval[func;xk;args]-fk)%eps } // @private // @kind function // @category optimizationUtility // @desc Evaluate the objective function at position x[k] with relevant // additional arguments accounted for // @param {fn} The objective function to be minimized // @param xk {number[]} Parameter values at position k // @param args {dictionary|number[]} Function arguments that do not change per // iteration // @returns {float} The objective function evaluated at the appropriate // location i.funcEval:{[func;xk;args] $[any args~/:((::);());func xk;func[xk;args]] } // Parameter dictionary // @private // @kind function // @category optimizationUtility // @desc Update the default behaviour of the model optimization // procedure to account for increased sensitivity to tolerance, the number // of iterations, how the gradient norm is calculated and various numerical // updates including changes to the Armijo rule and curvature for calculation // of the strong Wolfe conditions // @param dict {dictionary|(::)|()} If dict isn't empty,update the default // dictionary to include the user defined updates, otherwise use the default // dictionary // @returns {dictionary} Updated or default parameter set depending on // user input i.updDefault:{[dict] dictKeys:`norm`optimIter`gtol`geps`stepSize`c1`c2`wolfeIter`zoomIter`display; dictVals:(0W;0W;1e-4;1.49e-8;0w;1e-4;0.9;10;10;0b); returnDict:dictKeys!dictVals; if[99h<>type 
dict;dict:()!()]; i.wolfeParamCheck[returnDict,dict] } // @private // @kind function // @category optimizationUtility // @desc Ensure that the Armijo and curvature parameters are consistent // with the expected values for calculation of the strong Wolfe conditions // @param dict {dictionary} Updated parameter dictionary containing default // information and any updated parameter information // @returns {dictionary|err} The original input dictionary or an error // suggesting that the Armijo and curvature parameters are unsuitable i.wolfeParamCheck:{[dict] check1:dict[`c1]>dict`c2; check2:any not dict[`c1`c2]within 0 1; $[check1 or check2; '"When evaluating Wolfe conditions the following must hold 0 < c1 < c2 < 1"; dict ] } // Data Formatting // @private // @kind function // @category optimizationUtility // @desc Ensure that the input parameter x at position 0 which // will be updated is in a format that is suitable for use with this // optimization procedure i.e. the data is a list of values. // @param x0 {dictionary|number|number[]} Initial values of x to be optimized // @returns {number[]} The initial values of x converted into a suitable // numerical list format i.dataFormat:{[x0] "f"$$[99h=type x0;raze value x0;0h>type x0;enlist x0;x0] } // Conditional checks for Wolfe, zoom and quadratic condition evaluation // @private // @kind function // @category optimizationUtility // @desc Ensure new values lead to improvements over the older values // @param wolfeDict {dictionary} The current iterations values for the // objective function and the derivative of the objective function evaluated // @param params {dictionary} Parameter dictionary containing the updated/ // default information used to modify the behaviour of the system as a whole // @returns {boolean} Indication as to if a further zoom is required i.wolfeCriteria1:{[wolfeDict;params] prdVal:prd wolfeDict`alpha1`derPhi0; check1:wolfeDict[`phia1]>wolfeDict[`phi0]+params[`c1]*prdVal; prevPhi:wolfeDict[`phia1]>=wolfeDict`phia0; wolfeIdx:1<wolfeDict`idx; check2:prevPhi and wolfeIdx; check1 or check2 } // @private // @kind function // @category optimizationUtility // @desc Ensure new values lead to improvements over the older values // @param wolfeDict {dictionary} The current iterations values for the // objective function and the derivative of the objective function evaluated // @param params {dictionary} Parameter dictionary containing the updated/ / default information used to modify the behaviour of the system as a whole // @returns {boolean} Indication as to if a further zoom is required i.wolfeCriteria2:{[wolfeDict;params] neg[params[`c2]*wolfeDict[`derPhi0]]>=abs wolfeDict`derPhia1 } // @private // @kind function // @category optimizationUtility // @desc Check if there is need to apply quadratic minimum calculation // @param findMin {number[]} The currently calculated minimum values // @param highLow {dictionary} Upper and lower bounds of the search space // @param cubicCheck {float} Interpolation check parameter // @param zoomDict {dictionary} Parameters to be updated as 'zoom' procedure is // applied to find the optimal value of alpha // @returns {boolean} Indication as to if the value of findMin needs to be // updated i.quadCriteria:{[findMin;highLow;cubicCheck;zoomDict] // On first iteration the initial minimum has not been calculated // as such criteria should exit early to complete the quadratic calculation if[findMin~();:1b]; check1:0=zoomDict`idx; check2:findMin>highLow[`low] -cubicCheck; 
check3:findMin<highLow[`high]+cubicCheck; check1 or check2 or check3 }
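As a quick sanity check of the interpolation step described above, the quadratic-minimum formula can be exercised on a function whose minimum is known. The snippet below is a minimal standalone sketch of that formula (with explicit parentheses), not a call to the library function itself: for f(x)=(x-3)^2 sampled at a=0 (fa=9, f'(a)=-6) and b=1 (fb=4), the interpolated minimum recovers x=3.

q)quadMinSketch:{[a;fa;fpa;b;fb]bDiff:b-a;B:((fb-fa)-fpa*bDiff)%bDiff*bDiff;a-fpa%2*B}
q)quadMinSketch[0;9f;-6f;1;4f]  / analytic minimum of (x-3)^2 is at x=3
3f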
Reference card¶ Keywords¶ By category¶ .Q.id (sanitize), .Q.res (reserved words) Operators¶ . | Apply, Index, Trap, Amend | @ | Apply At, Index At, Trap At, Amend At | |||| $ | Cast, Tok, Enumerate, Pad, mmu | |||||| ! | Dict, Enkey, Unkey, Enumeration, Flip Splayed, Display, internal, Update, Delete, lsq | |||||| ? | Find, Roll, Deal, Enum Extend, Select, Exec, Simple Exec, Vector Conditional | |||||| + - * % | Add, Subtract, Multiply, Divide | |||||| = <> ~ | Equals, Not Equals, Match | |||||| < <= >= > | Less Than, Up To, At Least, Greater Than | |||||| | & | Greater (OR), Lesser, AND | |||||| # | Take, Set Attribute | _ | Cut, Drop | : | Assign | || ^ | Fill, Coalesce | , | Join | ' | Compose | || 0: 1: 2: | File Text, File Binary, Dynamic Load | |||||| 0 ±1 ±2 ±n | write to console, stdout, stderr, handle n | Iterators¶ maps accumulators ' Each, each , Case /: Each Right / Over, over ': Each Parallel, peach \: Each Left \ Scan, scan ': Each Prior, prior Execution control¶ .[f;x;e] Trap : Return do exit $[x;y;z] Cond @[f;x;e] Trap-At ' Signal if while :[v;p1;r1;...] Pattern conditional Other¶ ` pop stack :: identity \x system cmd x . push stack generic null \ abort global amend \\ quit q set view / comment () precedence [;] expn block {} lambda ` symbol (;) list argt list ; separator `: filepath ([]..) table Attributes¶ g grouped p parted s sorted u unique Command-line options and system commands¶ | file | ||| \a | tables | \r | rename | -b | blocked | -s \s | secondary processes | \b \B | views | -S \S | random seed | -c \c | console size | -t \t | timer ticks | -C \C | HTTP size | \ts | time and space | \cd | change directory | -T \T | timeout | \d | directory | -u -U \u | usr-pwd | -e \e | error traps | -u | disable syscmds | -E \E | TLS server mode | \v | variables | \f | functions | -w \w | memory | -g \g | garbage collection | -W \W | week offset | \l | load file or directory | \x | expunge | -l -L | log sync | -z \z | date format | -o \o | UTC offset | \1 \2 | redirect | -p \p | listening port | \_ | hide q code | -P \P | display precision | \ | terminate | -q | quiet mode | \ | toggle q/k | -r \r | replicate | \\ | quit | system Command-line options, System commands, OS commands Datatypes¶ Basic datatypes n c name sz literal null inf SQL Java .Net ------------------------------------------------------------------------------------ 0 * list 1 b boolean 1 0b Boolean boolean 2 g guid 16 0Ng UUID GUID 4 x byte 1 0x00 Byte byte 5 h short 2 0h 0Nh 0Wh smallint Short int16 6 i int 4 0i 0Ni 0Wi int Integer int32 7 j long 8 0j 0Nj 0Wj bigint Long int64 0 0N 0W 8 e real 4 0e 0Ne 0We real Float single 9 f float 8 0.0 0n 0w float Double double 0f 0Nf 10 c char 1 " " " " Character char 11 s symbol ` ` varchar 12 p timestamp 8 dateDtimespan 0Np 0Wp Timestamp DateTime (RW) 13 m month 4 2000.01m 0Nm 14 d date 4 2000.01.01 0Nd 0Wd date Date 15 z datetime 8 dateTtime 0Nz 0wz timestamp Timestamp DateTime (RO) 16 n timespan 8 00:00:00.000000000 0Nn 0Wn Timespan TimeSpan 17 u minute 4 00:00 0Nu 0Wu 18 v second 4 00:00:00 0Nv 0Wv 19 t time 4 00:00:00.000 0Nt 0Wt time Time TimeSpan Columns: n short int returned by type and used for Cast, e.g. 
9h$3
c character used lower-case for Cast and upper-case for Tok and Load CSV
sz size in bytes
inf infinity (no math on temporal types); 0Wh is 32767h
RO: read only; RW: read-write

Other datatypes

| n | datatype |
|---|---|
| 20-76 | enums |
| 77 | anymap |
| 78-96 | 77+t – mapped list of lists of type t |
| 97 | nested sym enum |
| 98 | table |
| 99 | dictionary |
| 100 | lambda |
| 101 | unary primitive |
| 102 | operator |
| 103 | iterator |
| 104 | projection |
| 105 | composition |
| 106 | f' |
| 107 | f/ |
| 108 | f\ |
| 109 | f': |
| 110 | f/: |
| 111 | f\: |
| 112 | dynamic load |

Above, f is an applicable value. Nested types are 77+t (e.g. 78 is boolean, 96 is time).

Cast $ : where char is from the c column above
char$data:CHAR$string
dict:`a`b!…
table:([]x:…;y:…)
date.(year month week mm dd)
time.(minute second mm ss)
milliseconds: time mod 1000

Namespaces¶
.h (markup)¶ HTTP, markup and data conversion.
.j (JSON)¶ De/serialize as JSON.
.m (memory backed files)¶ Memory backed by files.
.Q (utils)¶ Utilities: general, environment, IPC, datatype, database, partitioned database state, segmented database state, file I/O, debugging, profiling.
.z (environment, callbacks)¶ Environment, callbacks

Release history¶
| version | date* | release note |
|---|---|---|
| 4.1 | 2024.02.12 | Changes in 4.1 |
| 4.0 | 2020.03.17 | Changes in 4.0 |
| 3.6 | 2020.02.24 | Changes in 3.6 |
| 3.5 | 2020.02.13 | Changes in 3.5 |
| 3.4 | 2019.06.03 | Changes in 3.4 |
| 3.3 | 2017.03.13 | Changes in 3.3 |
| 3.2 | 2016.12.22 | Changes in 3.2 |
| 3.1 | 2016.01.27 | Changes in 3.1 |
| 3.0 | 2014.11.26 | Changes in 3.0 |
| 2.8 | 2015.05.28 | Changes in 2.8 |
| 2.7 | 2013.06.28 | Changes in 2.7 |
| 2.6 | 2012.08.03 | Changes in 2.6 |
| 2.5 | 2012.03.31 | Changes in 2.5 |
| 2.4 | 2012.03.31 | Changes in 2.4 |

* Latest update to the distribution.

Technical white papers¶
White papers are flagged in the navigation menus.
Applications of kdb+¶ - Implementing trend indicators in kdb+ James Galligan, 2020.04 - Comparing option-pricing methods in q Deanna Morgan, 2019.10 - kdb+ in astronomy Andrew Magowan & James Neill, 2019.09 - Signal processing and q Callum Biggs, 2018.08 - Streaming analytics: detecting card counters in Blackjack Caolan Rafferty & Krishan Subherwal, 2017.05 - Surveillance techniques to effectively monitor algo and high-frequency trading Sam Stanton-Cook, Ryan Sparks, Dan O’Riordan & Rob Hodgkinson, 2014.03 - Sample aggregation engine for market depth Stephen Dempsey, 2014.01 - Market Fragmentation: a kdb+ framework for multiple liquidity sources James Corcoran, 2013.01 - Transaction-cost analysis using kdb+ Colm Earley, 2012.12 Interfaces¶ - Internet of Things with MQTT Rian Ó Cuinneagáin, 2021.06 - Publish/subscribe with the Solace event broker Himanshu Gupta, 2020.11 - C API for kdb+ Jeremy Lucid, 2018.12 - Data visualization with kdb+ using ODBC: a Tableau case study Michaela Woods, 2018.07 - kdb+ and FIX messaging Damien Barker, 2014.01 - An introduction to graphical interfaces for kdb+ using C# Michael Reynolds, 2013.05 - Common design principles for kdb+ gateways Michael McClintock, 2012.12 Managing data and systems¶ - Mass ingestion through data loaders Enda Gildea, 2020.08 - Latency and efficiency considerations for a real-time surveillance system Jason Quinn, 2019.11 - Working with sym files Paula Clarke, 2019.03 - Socket sharding with kdb+ and Linux Marcus Clarke, 2018.01 - Query Routing: a kdb+ framework for a scalable load-balanced system Kevin Holsgrove, 2015.11 - Time-series simplification in kdb+: a method for dynamically shrinking Big Data Sean Keevey & Kevin Smyth, 2015.02 - A natural query interface for distributed systems Sean Keevey, 2014.11 - Multi-partitioned kdb+ databases: an equity options case study James Hanna, 2014.04 - Temporal data: a kdb+ framework for corporate actions Sean Rodgers, 2014.03 - kdb+tick profiling for throughput optimization Ian Kilpatrick, 2014.03 - Intraday writedown solutions Colm McCarthy, 2014.03 - Compression in kdb+ Eoin Killeen, 2013.10 - Permissions with kdb+ Tom Martin, 2013.05 - Multi-threading in kdb+: performance optimizations and use cases Edward Cormack, 2013.03 - kdb+ data-management techniques Simon Mescal, 2013.01 - Order Book: a kdb+ intraday storage and access methodology Niall Coulter, 2012.04 - Disaster-recovery planning for kdb+ tick systems Stewart Robinson Machine learning¶ - NASA FDL: Analyzing social media data for disaster management Conor McCarthy, 2019.10 - NASA FDL: Predicting floods with q and machine learning Diane O’Donoghue, 2019.10 - An introduction to neural networks with kdb+ James Neill, 2019.07 - NASA FDL: Exoplanets Challenge Esperanza López Aguilera, 2018.12 - NASA FDL: Space Weather Challenge Deanna Morgan, 2018.11 - Using embedPy to apply LASSO regression Samantha Gallagher, 2018.10 - K-Nearest Neighbor classification and pattern recognition with q Emanuele Melis, 2017.07 Programming in q¶ - Iterators Conor Slattery & Stephen Taylor, 2019.03 - kdb+ query scaling Ian Lester, 2014.01 - The application of foreign keys and linked columns in kdb+ Kevin Smyth, 2013.04 - Columnar database and query optimization Ciáran Gorman, 2012.06 Help for man.q ¶ The man.q script mimics the Unix man command. 
Examples¶ man "$" / operator glyph man "enum extend" / operator name man "read0" / keyword man ".z" / namespace man "-b" / command-line option man "\\b" / system command Special pages¶ man "" / reference card man "cmdline" / command-line options man "errors" man "datatypes" man "debug" man "interfaces" man "internal" man "iterators" man "db" / database man "database" / database man "syscmds" / system commands man "wp" / White Papers Arguments to man ¶ man "--list" man "--help" Terminology¶ And the words that are used for to get this ship confused Will not be understood as they are spoken — Bob Dylan “When the ship comes in” In 2018 and 2019 the vocabulary used to describe the q language changed. Why? The q language inherited from its ancestor languages (APL, J, A+, k) syntactic terms that made the language seem more difficult to learn than it really is. So we changed them. Inheritance¶ Iverson wrote his seminal book A Programming Language at Harvard with Fred Brooks as a textbook for the world’s first computer-science courses. He adopted Heaviside’s term for higher-order functions: operator. It did not catch on. The term retains this usage in the APL language, but in other languages denotes simple functions such as addition and subtraction. Similarly Iverson’s monadic and dyadic are better known as unary and binary. In 1990 Iverson and Hui published a reboot of APL, the J programming language. Always alert to the power of metaphor, they referred to the syntactic elements of J as nouns, verbs, and adverbs, with the latter two denoting respectively functions and higher-order functions. Canadian schools used to teach more formal grammar than other English-speaking countries. Perhaps this made the metaphor seem more useful than it now is. In 2017 an informal poll in London of senior q programmers found none able to give a correct definition of adverb in either English or q. Iverson & Hui’s metaphor no longer had explanatory value. Worse, discussions with the language implementors repeatedly foundered on conflicting understandings of terms such as verb and ambivalent. The terms we had inherited for describing q were obstacles in the path of people learning the language. We set out to remove the obstacles. Revision¶ In revising, our first principle was to use familiar programming-language terms for familiar concepts. So + and & would be operators. Primitive functions with reserved words as names became keywords. We had no role for verb. Functions defined in the functional notation would be lambdas, anonymous or not. Operators, keywords, and lambdas would all be functions. Monadic, dyadic, and triadic yielded to unary, binary, and ternary. Removing verb drained the noun-verb-adverb metaphor of whatever explanatory power it once had. Many candidates were considered as replacements. Iterator was finally adopted at a conference of senior technical staff in January 2019. The isomorphism between functions, lists and dictionaries is a foundational insight of q. It underlies the syntax of Apply and Index. Defining - the application of functions - the indexing of arrays - the syntax and semantics of iterators requires an umbrella term for the valence of a function or the dimensions of an array. We follow the usage of J in denoting this as rank, despite the rank keyword having a quite different meaning. Changes¶ The following tabulates the changes in terminology. The new terms are used consistently on this site. Training materials must adopt them to ensure consistency with reference articles. 
| old | new | |---|---| | adverb | iterator | | ambivalent | variadic | | char vector | string | | dimensions | rank | | dyadic | binary | | monadic | unary | | niladic | nullary | | triadic | ternary | | valence | rank | | verb | operator | | verb | keyword | Recommendations¶ Consistency helps the reader. Variations in terminology that do not mark a distinction add to the reader’s cognitive load. This site uses the following terms consistently. If you are writing about q, we recommend you adopt them too. | deprecated | preferred | |---|---| | array | list | | element | item, list item | | indices | indexes | | input, parameter | argument | | matrices | matrixes | | output | result |
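To illustrate the sense of rank adopted above (a short sketch of our own, not from the original article): the one word now covers both the number of arguments a function takes and the depth to which a list can be indexed, which is the correspondence between Apply and Index noted earlier.

q)f:{x+y}      / a function of rank 2 (binary)
q)m:2 3#til 6  / a list of rank 2 (a matrix)
q)f[10;1]      / apply f to two arguments
11
q)m[1;2]       / index m at depth 2
5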
// Reset and try to load in a file while in segmented mode segfile:{[logfile] .replay.segmentedmode:1b; .replay.tplogfile:logfile; .replay.tplogdir:`; .replay.initandrun[]; }; // Reset and try to load in a file and a directory at the same time dirandfile:{[logfile] .replay.tplogfile:logfile; .replay.tplogdir:logfile; .replay.initandrun[]; }; // Change meta table lognames to match local testing setup localise:{[logpath] metatable:get tabpath:.Q.dd[logpath;`stpmeta]; logpaths:.Q.dd[logpath;] each `$last each exec "/" vs' string logname from metatable; tabpath set update logname:logpaths from metatable; }; ================================================================================ FILE: TorQ_tests_stp_tpvalidation_settings.q SIZE: 762 characters ================================================================================ // IPC connection parameters .servers.CONNECTIONS:`wdb`rdb`segmentedtickerplant`tickerplant; .servers.USERPASS:`admin:admin; // Paths to process CSV and test STP log directory processcsv:getenv[`KDBTESTS],"/stp/tpvalidation/process.csv"; temphdbdir:hsym `$getenv[`KDBTESTS],"/stp/tpvalidation/tmphdb/"; testlogdb:"testlog"; // Test updates testtrade:((5#`GOOG),5?`4;10?100.0;10?100i;10#0b;10?.Q.A;10?.Q.A;10#`buy); testquote:(10?`4;(5?50.0),50+5?50.0;10?100.0;10?100i;10?100i;10?.Q.A;10?.Q.A;10#`3); // Function projections (using functions from helperfunctions.q) startproc:startorstopproc["start";;processcsv]; stopproc:startorstopproc["stop";;processcsv]; // Get rid of some of the more egregious magic numbers tincrease:10 5 10 5; qincrease:10 0 10 0 10; ================================================================================ FILE: TorQ_tests_stp_tz_settings.q SIZE: 1,409 characters ================================================================================ // IPC connection parameters .servers.CONNECTIONS:`segmentedtickerplant; .servers.USERPASS:`admin:admin; // Test STP log directory testlogdb:"testlog"; // Test trade update testtrade:((5#`GOOG),5?`4;10?100.0;10?100i;10#0b;10?.Q.A;10?.Q.A;10#`buy); // Re-initialise the eodtime namespace eodinit:{ // Default offset off:0D00; // If adding a synthetic TZ, set roll time to 2 seconds from now +- any rolloffsets if[x in `custom`customoffsetplus`customoffsetminus; dt:"p"$0;doff:"n"$0; adj:("p"$1+.z.d) - .z.p + 00:00:02 + off:(`custom`customoffsetplus`customoffsetminus!(0D00;0D02;-0D02))[x]; `.tz.t upsert (x;dt;adj;doff;adj;dt); .stplg.nextendUTC:"p"$0 ]; // Re-init eodtime .eodtime.datatimezone:x; .eodtime.rolltimezone:x; .eodtime.rolltimeoffset:neg off; .eodtime.dailyadj:.eodtime.getdailyadjustment[]; .eodtime.d:.eodtime.getday[.z.p]; .eodtime.nextroll:.eodtime.getroll[.z.p]; }; eodchange:{ // change eod, no custom tz .eodtime.datatimezone:`GMT; .eodtime.rolltimezone:`GMT; // eod set to 2 seconds after stp init .eodtime.rolltimeoffset:.z.p+00:00:02-"p"$.z.d+1; .eodtime.dailyadj:.eodtime.getdailyadjustment[]; .eodtime.d:.eodtime.getday[.z.p]; .eodtime.nextroll:.eodtime.getroll[.z.p]; }; // Local trade table schema and UPD function trade:flip `time`sym`price`size`stop`cond`ex`side!"PSFIBCCS" $\: (); upd:{[t;x] t insert x}; ================================================================================ FILE: TorQ_tests_stp_upds_settings.q SIZE: 711 characters ================================================================================ // IPC connection parameters .servers.CONNECTIONS:`rdb`segmentedtickerplant; .servers.USERPASS:`admin:admin; // Paths to process CSV and test STP log directory 
processcsv:getenv[`KDBTESTS],"/stp/upds/process.csv"; stptestlogs:getenv[`KDBTESTS],"/stp/recovery/testlog"; stporiglogs:getenv[`KDBTPLOG]; testlogdb:"testlogdb"; // Test updates testtrade:((5#`GOOG),5?`4;10?100.0;10?100i;10#0b;10?.Q.A;10?.Q.A;10#`buy); testquote:(10?`4;(5?50.0),50+5?50.0;10?100.0;10?100i;10?100i;10?.Q.A;10?.Q.A;10#`3); // Function projections (using functions from helperfunctions.q) startproc:startorstopproc["start";;processcsv]; stopproc:startorstopproc["stop";;processcsv]; // Flag to save tests to disk .k4.savetodisk:0b; ================================================================================ FILE: TorQ_tests_stp_upds_testdatabase.q SIZE: 893 characters ================================================================================ quote:([] time:`timestamp$(); seqnum:`long$(); sym:`g#`symbol$(); bid:`float$(); ask:`float$(); bsize:`long$(); asize:`long$(); mode:`char$(); ex:`char$(); src:`symbol$()) trade:([]time:`timestamp$(); sym:`g#`symbol$(); price:`float$(); size:`int$(); stop:`boolean$(); cond:`char$(); ex:`char$();side:`symbol$()) quote_iex:([]time:`timestamp$(); sym:`g#`symbol$(); bid:`float$(); ask:`float$(); bsize:`long$(); asize:`long$(); mode:`char$(); ex:`char$(); srctime:`timestamp$()) trade_iex:([]time:`timestamp$(); sym:`g#`symbol$(); price:`float$(); size:`int$(); stop:`boolean$(); cond:`char$(); ex:`char$(); srctime:`timestamp$()) packets:([] time:`timestamp$(); sym:`symbol$(); src:`symbol$(); dest:`symbol$(); srcport:`long$(); destport:`long$(); seq:`long$(); ack:`long$(); win:`long$(); tsval:`long$(); tsecr:`long$(); flags:(); protocol:`symbol$(); length:`long$(); len:`long$(); data:()) ================================================================================ FILE: TorQ_tests_stp_wdb_settings.q SIZE: 869 characters ================================================================================ // IPC connection parameters .servers.CONNECTIONS:`wdb`segmentedtickerplant`tickerplant`hdb; .servers.USERPASS:`admin:admin; // Paths to process CSV and test STP log directory processcsv:getenv[`KDBTESTS],"/stp/wdb/process.csv"; wdbdir:hsym `$getenv[`KDBTESTS],"/stp/wdb/tempwdb/"; temphdbdir:hsym `$getenv[`KDBTESTS],"/stp/wdb/temphdb/"; testlogdb:"testlog"; // Test updates testtrade:((5#`GOOG),5?`4;10?100.0;10?100i;10#0b;10?.Q.A;10?.Q.A;10#`buy); testquote:(10?`4;(5?50.0),50+5?50.0;10?100.0;10?100i;10?100i;10?.Q.A;10?.Q.A;10#`3); // expected WDB folder structure folder_patterns:{"*",x,"*"}each 1_/:string ` sv/: cross[hsym each `$string til count distinct testtrade[0],testquote[0];`trade`quote]; // Function projections (using functions from helperfunctions.q) startproc:startorstopproc["start";;processcsv]; stopproc:startorstopproc["stop";;processcsv]; ================================================================================ FILE: TorQ_tests_wdb_intpartbyenum_database.q SIZE: 188 characters ================================================================================ tshort:([]enumcol:`short$(); expint:`long$()) tint: ([]enumcol:`int$(); expint:`long$()) tlong: ([]enumcol:`long$(); expint:`long$()) tsym: ([]enumcol:`symbol$(); expint:`long$()) ================================================================================ FILE: TorQ_tests_wdb_intpartbyenum_settings.q SIZE: 680 characters ================================================================================ // IPC connection parameters .servers.CONNECTIONS:`wdb`hdb`idb; .servers.USERPASS:`admin:admin; // Filepaths wdbdir:hsym `$getenv[`KDBTESTS],"/wdb/intpartbyenum/tempwdb"; 
hdbdir:hsym `$getenv[`KDBTESTS],"/wdb/intpartbyenum/temphdb"; symfile:` sv hdbdir,`sym; // Test tables with expected int partitions testtshort:([]enumcol:-0W -1 0 0N 1 0Wh; expint:0 0 0 0 1 32767); testtint: ([]enumcol:-0W -1 0 0N 1 0Wi; expint:0 0 0 0 1 2147483647); testtlong: ([]enumcol:-0W -1 0 0N 1 0W; expint:0 0 0 0 1 2147483647); testtsym: update expint:i from ([]enumcol:`a`b`c`d`e`); // All expected int partitions expints:asc distinct raze (testtshort;testtint;testtlong;testtsym)@\:`expint; ================================================================================ FILE: TorQ_tests_wdb_nullpartbyenum_database.q SIZE: 295 characters ================================================================================ quote:([]time:`timestamp$(); sym:`g#`symbol$(); bid:`float$(); ask:`float$(); bsize:`long$(); asize:`long$(); mode:`char$(); ex:`char$(); src:`symbol$()) trade:([]time:`timestamp$(); sym:`g#`symbol$(); price:`float$(); size:`int$(); stop:`boolean$(); cond:`char$(); ex:`char$();side:`symbol$()) ================================================================================ FILE: TorQ_tests_wdb_nullpartbyenum_settings.q SIZE: 958 characters ================================================================================ // IPC connection parameters .servers.CONNECTIONS:`wdb`segmentedtickerplant`tickerplant`hdb`idb`sort; .servers.USERPASS:`admin:admin; // Paths to process CSV and test STP log directory processcsv:getenv[`KDBTESTS],"/wdb/nullpartbyenum/process.csv"; wdbpartbyenumdir:hsym `$getenv[`KDBTESTS],"/wdb/nullpartbyenum/tempwdbpartbyenum/"; temphdbpartbyenumdir:hsym `$getenv[`KDBTESTS],"/wdb/nullpartbyenum/temphdbpartbyenum/"; testlogdb:"testlog"; // Test updates testtrade:((3#`GOOG),``,5?`4;10?100.0;10?100i;10#0b;10?.Q.A;10?.Q.A;10#`buy); testquote:((8?`4),``;(5?50.0),50+5?50.0;10?100.0;10?100i;10?100i;10?.Q.A;10?.Q.A;10#`3); // expected WDB folder structure folder_patterns:{"*",x,"*"}each 1_/:string ` sv/: cross[hsym each `$string til count distinct testtrade[0],testquote[0];`trade`quote]; // Function projections (using functions from helperfunctions.q) startproc:startorstopproc["start";;processcsv]; stopproc:startorstopproc["stop";;processcsv]; ================================================================================ FILE: dpy_dpy.q SIZE: 2,636 characters ================================================================================ / General object display with type and structure Copyright (c) 2016-2018 Leslie Goldsmith Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at: http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ---------------- Displays an arbitrary q object, revealing its type, rank, and shape. Use <setc> to control box drawing characters (defaults to Windows chars under Windows and ASCII chars otherwise). Use <trt> to define arbitrary character translations (e.g. to illustrate nesting using colo[u]r). 
Can be wired into <d> namespace via the following in your <q.q> file: \l dpy.q .d.e:('[dpy;value]) Then, invoke via: d)2 3 4#til 24 Author: Leslie Goldsmith \ \d .dpy enl:enlist ty:@[(58#"l"),reverse[ty],"-",upper[ty:1_-1_.Q.t],(58#"L"),(20#"-"),"AY",13#":"]77+ trm:{[t;x] x:((1<count x)&","=first x)_x;$[10h=t;1_-1_x;t in 1 5 6 8 9h;((-)last[x]in "bhief")_x;x]} pad:{(1|/count each x)$x} ts:{$[t:type x;t,count x;any 0~'i:ts each x;0;1=count distinct -1_'i;first[j],count[i],(1_-1_j:first i),(|/)last each i;0]} sp:{@["j"$(+/)0=(1+til i)mod/:-1_x;-1+i:last x:(*\)(|) -1_x;:;0]} mat:{[s;x] r:sp s;a:.Q.s2(-2+count s),//x;(a,enl count[first a]#" ")?[p=-1_-1,p;count a;p:where r+1]} trt:(::)
// @private // // @overview // Copy a file from one location to another, ensuring that the file exists // at the source location // // @todo // Update this to use the axfs OS agnostic functionality provided by Analyst // this should ensure that the functionality will operate on Windows/MacOS/Linux // // @param src {#hsym} Source file to be copied. // @param dest {#hsym} Destination file to which the file is to be copied. // @return {null} registry.util.copy.file:{[src;dest] // Expecting an individual file for copying -> should return itself // if the file exists at the correct location src:key src; if[()~src; logging.error"File expected at '",string[src],"' did not exist" ]; if[not(1=count src)&all src like":*"; logging.error"src must be an individual file not a directory" ]; if[(not all(src;dest)like":*") & not all -11h=type each (src;dest); logging.error"Both src and dest directories must be a hsym like path" ]; system sv[" "]enlist["cp"],1_/:string(src;dest) } // @private // // @overview // Copy a directory from one location to another // // @todo // Update this to use the axfs OS agnostic functionality provided by Analyst // this should ensure that the functionality will operate on Windows/MacOS/Linux // // @param src {#hsym} Source destination to be copied. // @param dest {#hsym} Destination to which to be copied. // @return {null} registry.util.copy.dir:{[src;dest] // Expecting an individual file for copying -> should return itself // if the file exists at the correct location if[(not all(src;dest)like":*") & not all -11h=type each (src;dest); logging.error"Both src and dest directories must be a hsym like path" ]; system sv[" "]enlist["cp -r"],1_/:string(src;dest) } ================================================================================ FILE: ml_ml_registry_q_main_utils_create.q SIZE: 10,382 characters ================================================================================ // create.q - Create new objects within the registry // Copyright (c) 2021 Kx Systems Inc // // @overview // Create new objects within the registry // // @category Model-Registry // @subcategory Utilities // // @end \d .ml // @private // // @overview // Create the registry folder within which models will be stored // // @todo // Update for Windows Compliance // // @param folderPath {string|null} A folder path indicating the location the // registry is to be located or generic null to place in the current // directory // @param config {dict} Any additional configuration needed for // initialising the registry (Not presently used but for later use) // // @return {dict} Updated config with registryPath added registry.util.create.registry:{[config] registryPath:config[`folderPath],"/KX_ML_REGISTRY"; if[not()~key hsym`$registryPath;logging.error"'",registryPath,"' already exists"]; system"mkdir ",$[.z.o like"w*";"";"-p "],registry.util.check.osPath registryPath; config,enlist[`registryPath]!enlist registryPath } // @private // // @overview // Create the splayed table within the registry folder which will be used // to store information about the models that are present within the registry // // @param config {dict} Any additional configuration needed for // initialising the registry (Not presently used but for later use) // // @return {dict} Updated config with modelStorePath added registry.util.create.modelStore:{[config] modelStoreKeys:`registrationTime`experimentName`modelName`uniqueID`modelType`version`description; modelStoreVals:(`timestamp$();();();`guid$();();();()); modelStoreSchema:flip 
modelStoreKeys!modelStoreVals; modelStorePath:hsym`$config[`registryPath],"/modelStore"; modelStorePath set modelStoreSchema; config,enlist[`modelStorePath]!enlist modelStorePath } // @private // // @overview // Create the base folder structure used for storage of models associated // with an experiment and models which have been generated independently // // @param config {dict} Any additional configuration needed for // initialising the registry (Not presently used but for later use) // // @return {null} registry.util.create.experimentFolders:{[config] folders:("/namedExperiments";"/unnamedExperiments"); experimentPaths:config[`registryPath],/:folders; {system"mkdir ",$[.z.o like"w*";"";"-p "],registry.util.check.osPath x }each experimentPaths; // The following is required to upload the folders to cloud vendors hiddenFiles:hsym`$experimentPaths,\:"/.hidden"; {x 0:enlist()}each hiddenFiles; } // @private // // @overview // Add a folder associated to a named experiment provided // // @param experimentName {string} Name of the experiment to be saved // @param config {dict|null} Any additional configuration needed for // initialising the experiment // // @return {dict} Updated config dictionary containing experiment path registry.util.create.experiment:{[experimentName;config] if[experimentName~"undefined";logging.error"experimentName must be defined"]; experimentString:config[`registryPath],"/namedExperiments/",experimentName; experimentPath:hsym`$experimentString; if[()~key experimentPath; system"mkdir ",$[.z.o like"w*";"";"-p "],registry.util.check.osPath experimentString ]; // The following is requred to upload the folders to cloud vendors hiddenFiles:hsym`$experimentString,"/.hidden"; {x 0:enlist()}each hiddenFiles; config,`experimentPath`experimentName!(experimentString;experimentName) } // @private // // @overview // Add all the folders associated with a specific model to the // correct location on disk // // @param config {dict} Information relating to the model // being saved, this includes version, experiment and model names // // @return {dict} Updated config dictionary containing relevant paths registry.util.create.modelFolders:{[model;modelType;config] folders:$[99h=type model; $[not (("q"~modelType)&((`predict in key[model])|(`modelInfo in key model))); ("params";"metrics";"code"),raze enlist["model/"],/:\:string[key[model]]; ("model";"params";"metrics";"code")]; ("model";"params";"metrics";"code") ]; newFolders:"/",/:folders; modelFolder:config[`experimentPath],"/",config`modelName; if[(1;0)~config`version;system"mkdir ",$[.z.o like"w*";"";"-p "], registry.util.check.osPath modelFolder]; versionFolder:modelFolder,"/",/registry.util.strVersion config`version; newFolders:versionFolder,/:newFolders; paths:enlist[versionFolder],newFolders; {system"mkdir ",$[.z.o like"w*";"";"-p "], registry.util.check.osPath x }each paths; config,(`versionPath,`$folders,\:"Path")!paths } // @private // // @overview // Generate the configuration information which is to be saved // with the model // // @param config {dict} Configuration information provided by the user // // @return {dict} A modified version of the run information // dictionary with information formatted in a structure that is more sensible // for persistence registry.util.create.config:{[config] newConfig:.ml.registry.config.default; newConfig[`registry;`description]:config`description; newConfig[`registry;`experimentInformation;`experimentName]:config`experimentName; 
modelInfo:`modelName`version`requirements`registrationTime`uniqueID; newConfig:{y[`registry;`modelInformation;z]:x z;y}[config]/[newConfig;modelInfo]; newConfig[`model;`type]:config[`modelType]; newConfig[`model;`axis]:config[`axis]; newConfig } // @private // // @overview // Generate latency configuration information which is to be saved // with the model // // @param model {any} `(dict|fn|proj)` Model retrieved from registry. // @param modelType {string} The type of model that is being saved, namely // "q"|"sklearn"|"keras"|"python" // @param data {table} Historical data for evaluating behaviour of model // @param config {dict} Configuration information provided by the user // // @return {dict} A dictionary containing information on the average // time to serve a prediction together with the standard deviation registry.util.create.latency:{[model;modelType;data] function:{[model;modelType;data] // get predict function predictor:.ml.mlops.wrap[`$modelType;model;1b]; // Latency information L:{system"sleep 0.0005";zz:enlist value x;a:.z.p;y zz;(1e-9)*.z.p-a}[;predictor] each 30#data; `avg`std!(avg L;dev L)}[model;modelType]; @[function;data;{show "unable to generate latency config due to error: ",x, " latency monitoring cannot be supported"}] } // @private // // @overview // Generate schema configuration information which is to be saved // with the model // // @param data {table} Historical data for evaluating behaviour of model // @param config {dict} Configuration information provided by the user // // @return {dict} A dictionary containing information on the schema // of the data provided to the prediction service registry.util.create.schema:{[data] // Schema information (!). (select c,t from (meta data))`c`t } // @private // // @overview // Generate nulls configuration information which is to be saved // with the model // // @param data {table} Historical data for evaluating behaviour of model // @param config {dict} Configuration information provided by the user // // @return {dict} A dictionary contianing the values for repalcement of // null values. 
registry.util.create.null:{[data] // Null information function:{med each flip mlops.infReplace x}; @[function;data;{show "unable to generate null config due to error: ",x, " null replacement cannot be supported"}] } // @private // // @overview // Generate infs configuration information which is to be saved // with the model // // @param data {table} Historical data for evaluating behaviour of model // @param config {dict} Configuration information provided by the user // // @return {dict} A dictionary contianing the values for replacement of // infinite values registry.util.create.inf:{[data] // Inf information function:{(`negInfReplace`posInfReplace)!(min;max)@\:mlops.infReplace x}; @[function;data;{show "unable to generate inf config due to error: ",x, " inf replacement cannot be supported"}] } // @private // // @overview // Generate csi configuration information which is to be saved // with the model // // @param data {table} Historical data for evaluating behaviour of model // // @return {dict} A dictionary contianing the values for replacement of // infinite values registry.util.create.csi:{[data] bins:@["j"$;(count data)&registry.config.commandLine`bins; {logging.error"Cannot convert 'bins' to an integer"}]; @[{mlops.create.binExpected[;y] each flip x}[;bins];data;{show "unable ", "to generate csi config due to error: ",x," csi monitoring cannot be ", "supported"}] } // @private // // @overview // Generate psi configuration information which is to be saved // with the model // // @param model {any} `(dict|fn|proj)` Model retrieved from registry. // @param modelType {string} The type of model that is being saved, namely // "q"|"sklearn"|"keras"|"python" // @param data {table} Historical data for evaluating behaviour of model // // @return {dict} A dictionary containing information on the average // time to serve a prediction together with the standard deviation registry.util.create.psi:{[model;modelType;data] bins:@["j"$;(count data)&registry.config.commandLine`bins; {logging.error"Cannot convert 'bins' to an integer"}]; function:{[bins;model;modelType;data] // get predict function predictor:.ml.mlops.wrap[`$modelType;model;0b]; preds:predictor data; mlops.create.binExpected[raze preds;bins] }[bins;model;modelType]; @[function;data;{show "unable to generate psi config due to error: ",x, " psi monitoring cannot be supported"}] } // @private // // @overview // Create a table within the registry folder which will be used // to store information about the metrics of the model // // @param metricPath {string} The path to the metrics file // // @return {null} registry.util.create.modelMetric:{[metricPath] modelMetricKeys:`timestamp`metricName`metricValue; modelMetricVals:(enlist 0Np;`; ::); modelMetricSchema:flip modelMetricKeys!modelMetricVals; modelMetricPath:hsym`$metricPath,"metric"; modelMetricPath set modelMetricSchema; } ================================================================================ FILE: ml_ml_registry_q_main_utils_delete.q SIZE: 2,889 characters ================================================================================ // delete.q - Delete items from the model registry and folder structure // Copyright (c) 2021 Kx Systems Inc // // @overview // Delete items from the registry // // @category Model-Registry // @subcategory Utilities // // @end \d .ml // @private // // @overview // Delete all files contained within a specified directory recursively // // @param folderPath {symbol} Folder to be deleted // // @return {null} 
registry.util.delete.folder:{[folderPath] ty:type folderPath; folderPath:hsym$[10h=ty;`$;-11h=ty;;logging.error"type"]folderPath; orderedPaths:(),{$[11h=type d:key x;raze x,.z.s each` sv/:x,/:d;d]}folderPath; hdel each desc orderedPaths; }
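A minimal usage sketch of the recursive delete above (assuming the registry code is loaded under the .ml namespace on Linux/macOS, and using a hypothetical throwaway directory /tmp/scratch). Because desc sorts child paths before their parents, files are deleted before the directories that contain them, so hdel never sees a non-empty directory.

q)`:/tmp/scratch/sub/a set 1 2 3   / write two small files
`:/tmp/scratch/sub/a
q)`:/tmp/scratch/b set til 5
`:/tmp/scratch/b
q).ml.registry.util.delete.folder "/tmp/scratch"
q)key `:/tmp/scratch               / the directory and its contents are gone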
Historical database¶ A historical database (HDB) holds data before today, and its tables would be stored on disk, being much too large to fit in memory. Each new day’s records would be added to the HDB at the end of day. Typically, large tables in the HDB (such as daily tick data) are stored splayed, i.e. each column is stored in its own file. Knowledge Base: Splayed tables Q for Mortals: §11.3 Splayed Tables Typically also, large tables are stored partitioned by date. Very large databases may be further partitioned into segments, using par.txt . These storage strategies give best efficiency for searching and retrieval. For example, the database can be written over several drives. Also, partitions can be allocated to secondary threads so that queries over a range of dates can be run in parallel. The exact set-up would be customized for each installation. For example, a simple partitioning scheme on a single disk might be as shown right. Here, the daily and master tables are small enough to be written to single files, while the trade and quote tables are splayed and partitioned by date. Sample partitioned database¶ The script KxSystems/cookbook/start/buildhdb.q builds a sample HDB. It builds a month’s random data in directory start/db . Load q, then: q)\l buildhdb.q To load the database, enter: q)\l start/db In q (actual values may vary): q)count trade 369149 q)count quote 1846241 q)t:select from trade where date=last date, sym=`IBM q)count t 1017 q)5#t date time sym price size stop cond ex --------------------------------------------------- 2013.05.31 09:30:00.004 IBM 47.38 48 0 G N 2013.05.31 09:30:01.048 IBM 47.4 56 0 9 N 2013.05.31 09:30:01.950 IBM 47.38 89 0 G N 2013.05.31 09:30:02.547 IBM 47.36 70 0 9 N 2013.05.31 09:30:03.448 IBM 47.4 72 0 N N q)select count i by date from trade date | x ----------| ----- 2013.05.01| 15271 2013.05.02| 15025 2013.05.03| 14774 2013.05.06| 14182 ... q)select cnt:count i,sum size,last price, wprice:size wavg price by 5 xbar time.minute from t minute| cnt size price wprice ------| ----------------------- 09:30 | 44 2456 47.83 47.60555 09:35 | 27 1469 47.74 47.77138 09:40 | 17 975 47.84 47.87198 09:45 | 19 1099 47.84 47.78618 ... Join trades with the most recent quote at time of trade (as-of join): q)t:select time,price from trade where date=last date,sym=`IBM q)q:select time,bid,ask from quote where date=last date,sym=`IBM q)aj[`time;t;q] time price bid ask ------------------------------ 09:30:00.004 47.38 47.12 48.01 09:30:01.048 47.4 46.91 47.88 09:30:01.950 47.38 46.72 47.99 09:30:02.547 47.36 47.33 47.46 ... Sample segmented database¶ The buildhdb.q script can be customized to build a segmented database. In practice, database segments should be on separate drives, but for illustration, the segments are here written to a single drive. Both the database root, and the location of the database segments need to be specified. For example, edit the first few lines of the script KxSystems/cookbook/start/buildhdb.q as below. dst:`:start/dbs / new database root dsp:`:/dbss / database segments directory dsx:5 / number of segments bgn:2010.01.01 / set 4 years data end:2013.12.31 ... Ensure that the directory given in dsp is the full pathname, and that it is created, writeable and empty. For Windows, dsp might be: dsp:`:c:/dbss . This example writes approximately 7GB of created data to disk. Load the modified script, which should now take a minute or so. 
This should write the partitioned data to subdirectories of the directory specified by dsp par.txt can be found within the dsp directory, which lists the disks/directories containing the data of the segmented database. /dbss/d0 /dbss/d1 /dbss/d2 /dbss/d3 /dbss/d4 Restart q, and load the segmented database: q)\l start/dbs q)(count quote), count trade 61752871 12356516 q)select cnt:count i,sum size,size wavg price from trade where date in 2012.09.17+til 5, sym=`IBM cnt size price -------------------- 4033 217537 37.35015 Interprocess communications¶ A production kdb+ system may have several kdb+ processes, possibly on several machines. These communicate via TCP/IP. Any kdb+ process can communicate with any other process as long as it is accessible on the network and is listening for connections. - a server process listens for connections and processes any requests - a client process initiates the connection and sends commands to be executed Client and server can be on the same machine or on different machines. A process can be both a client and a server. A communication can be synchronous (wait for a result to be returned) or asynchronous (no wait and no result returned). Initialize server¶ A kdb+ server is initialized by specifying the port to listen on, with either a command-line parameter or a session command. ..$ q -p 5001 / command line q)\p 5001 / session command Communication handle¶ A communication handle is a symbol that starts with : and has the form: `:[server]:port where the server is optional, and port is a port number. The server need not be given if on the same machine. Examples: `::5001 / server on same machine as client `:genie:5001 / server on machine genie `:198.168.1.56:5001 / server on given IP address `:www.example.com:5001 / server at www.example.com The function hopen starts a connection, and returns an integer connection handle. This handle is used for all subsequent client requests. q)h:hopen `::5001 q)h "3?20" 1 12 9 q)hclose h Synchronous/asynchronous¶ Where the connection handle is used as defined (it will be a positive integer), the client request is synchronous. In this case, the client waits for the result from the server before continuing execution. The result from the server is the result of the client request. Where the negative of the connection handle is used, the client request is asynchronous. In this case, the request is sent to the server, but the client does not wait or get a result from the server. This is done when a result is not required by the client. q)h:hopen `::5001 q)(neg h) "a:3?20" / send asynchronously, no result q)(neg h) "a" / again no result q)h "a" / synchronous, with result 0 17 14 Message formats¶ There are two message formats: - a string containing a q expression to be executed on the server - a list (function; arg1; arg2; ...) where the function is to be applied with the given arguments q)h "2 3 5 + 10 20 30" / send q expression 12 23 35 q)h (+;2 3 5;10 20 30) / send function and its arguments 12 23 35 If a function name is given, this is called on the server. q)h ("mydef";2 3 5;10 20 30) / call function mydef with these arguments There are examples in the Realtime Database chapter, where a process receives a data feed and posts to subscribers by calling an update function in the subscriber. HTTP connections¶ A kdb+ server can also be accessed via HTTP. To try this, run a kdb+ server on your machine with port 5001. Then, load a Web browser, and go to http://localhost:5001 . You can now see the names defined in the base context.
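The default HTTP handler will also evaluate a q expression passed after ? in the URL and return the result as a web page. For example (a minimal illustration, assuming the sample HDB built earlier is loaded in the process listening on port 5001):

http://localhost:5001/?select count i by date from trade

The browser URL-encodes the spaces; the server evaluates the expression and renders the resulting table as HTML.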
// Check if required process names all connected reqprocnamesnotconn:reqprocsnotconn[;`procname]; // Block process until all required processes are connected startupdepcyclestypename:{[requiredprocs;typeornamefunc;timeintv;cycles] n:1; //variable used to check how many cycles have passed .servers.startup[]; while[typeornamefunc requiredprocs; //check if requiredprocs are running if[n>cycles; b:((),requiredprocs)except(),exec proctype from .servers.SERVERS where .dotz.liveh w; .lg.e[`connectionreport;s:string[.proc.procname]," cannot connect to ",","sv string'[b]]; //after "cycles" times output error and exit process. 's; //signal to error out if running after initialisation ]; .os.sleep[timeintv]; n+:1; .servers.startup[]; ]; }; // Block process until all required process types are connected startupdepcycles:startupdepcyclestypename[;.servers.reqproctypesnotconn;;]; // Block process until all required process names are connected startupdepnamecycles:startupdepcyclestypename[;.servers.reqprocnamesnotconn;;]; startupdependent:startupdepcyclestypename[;.servers.reqproctypesnotconn;;0W]; pc:{[result;W] update w:0Ni,endp:.proc.cp[] from`.servers.SERVERS where w=W;cleanup[];result} .dotz.set[`.z.pc;{.servers.pc[x y;y]}value .dotz.getcommand[`.z.pc]]; if[enabled; if[DISCOVERYRETRY > 0; .timer.repeat[.proc.cp[];0Wp;DISCOVERYRETRY;(`.servers.retrydiscovery;`);"Attempt reconnections to the discovery service"]]; if[RETRY > 0; .timer.repeat[.proc.cp[];0Wp;RETRY;(`.servers.retry;`);"Attempt reconnections to closed server handles"]]]; ================================================================================ FILE: TorQ_code_handlers_writeaccess.q SIZE: 1,356 characters ================================================================================ // This is used to make data available in the process read only // Uses reval to block the write access to connecting clients // Reval is available in KDB+ 3.3 onwards // If enabled on older KDB versions this will throw an error // Write protection is only provided on string based messaging // Http access is disabled \d .readonly enabled:@[value;`enabled;0b] // whether read only is enabled \d . .lg.o[`readonly;"read only mode is ",("disabled";"enabled").readonly.enabled]; if[.readonly.enabled; // Check if the current KDB version supports blocking write access to clients if[3.3>.z.K; .lg.e[`readonly;"current KDB+ version ",(string .z.K), " does not support blocking write access,a minimum of KDB+ version 3.3 is required"] ]; // Modify the sync message handler .dotz.set[`.z.pg;{[x;y] $[10h=type y;reval(x;y); x y]}value .dotz.getcommand[`.z.pg]]; // Modify the async message handler .dotz.set[`.z.ps;{[x;y] $[10h=type y;reval(x;y); x y]}value .dotz.getcommand[`.z.ps]]; // Modify the websocket message handler .dotz.set[`.z.ws;{[x;y] $[10h=type y;reval(x;y); x y]}value .dotz.getcommand[`.z.ws]]; // Modify the http get message handler .dotz.set[`.z.ph;{[x] .h.hn["403 Forbidden";`txt;"Forbidden"]}]; // Modify the http post message handler .dotz.set[`.z.pp;{[x] .h.hn["403 Forbidden";`txt;"Forbidden"]}]; ]; ================================================================================ FILE: TorQ_code_handlers_zpsignore.q SIZE: 558 characters ================================================================================ // This is used to allow .z.ps (async) calls to not be permission checked, logged etc. 
// this can be useful as depending on how the connection is initiated, the username is not always available to check against // It should be loaded last as it globally overrides .z.ps \d .zpsignore enabled:@[value;`enabled;1b] // whether its enabled ignorelist:@[value;`ignorelist;(`upd;"upd";`.u.upd;".u.upd")] // list of functions to ignore if[enabled; .dotz.set[`.z.ps;{$[any first[y]~/:ignorelist;value y;x @ y]}[@[value;.dotz.getcommand[`.z.ps];{value}]]]] ================================================================================ FILE: TorQ_code_hdb_hdbstandard.q SIZE: 184 characters ================================================================================ // reload function reload:{ .lg.o[`reload;"reloading HDB"]; system"l ."} // Get the relevant HDB attributes .proc.getattributes:{`partition`tables!(@[value;.Q.pf;.Q.PV];tables[])} ================================================================================ FILE: TorQ_code_monitor_apidetails.q SIZE: 2,018 characters ================================================================================ /Add to API functions for process \d .api //addcheck //copyconfig add[`copyconfig;1b;"Copy row of config into checkconfig with new process";"[int:check id to be copied;symbol: new process name]";"table with additional row"]; //disablecheck add[`disablecheck;1b;"Disable check until enabled again";"[list of int:check id to be disabled]";"table with relevant checks disabled"]; //enablecheck add[`enablecheck;1b;"Enable checks";"[list of int: check id to be enabled]";"table with relevant checks enabled"]; //checkruntime add[`checkruntime;1b;"Check process has not been running over next alloted runtime,amend checkstatus accordingly";"[timespan:threshold age of check";"amended checkstatus table"] //timecheck add[`timecheck;1b;"Check if median loadtime is less than specific value";"[timespan:threshold median time value]";"table with boolean value returning true if median loadtime lower than threshold"]; //updateconfig add[`updateconfig;1b;"Add new parameter config to checkconfig table";"[int:checkid to be changed;symbol:parameter key;undefined:new parameter value]";"checkconfig table with new config added"]; //updateconfigfammet add[`updateconfigfammet;1b;"Add new parameter config to checkconfig table";"[symbol:family;symbol:metric;symbol:parameter key;undefined:new parameter value";"checkconfig table with new config added"]; //forceconfig add[`forceconfig;1b;"Force new config parameter over top of existing config without checking types";"[int:checkid;dictionary:new config"]; //currentstatus add[`currentstatus;1b;"Return only current information for each check";"[list of int: checkids to be returned]";"table of checks"]; //statusbyfam add[`statusbyfam;1b;"Return checkstatus table ordered by status then timerstatus";"[symbol:name of family of checks]";"table ordered by status and timerstatus"]; //cleartracker add[`cleartracker;1b;"Delete rows older than certain amount of time from checktracker";"[timespan:maximum age of check to be kept]";"checktracker table with removed rows"] ================================================================================ FILE: TorQ_code_monitor_checkmonitor.q SIZE: 13,374 characters ================================================================================ //Process which takes in configurable process specific checks and is called as part of monitor process //Get handle to other TorQ process specified gethandle:{exec first w from .servers.getservers[`procname;x;()!();1b;1b]} // table of check 
statuses - i.e. the result of the last run checkstatus:( [checkid:`int$()] // id of the check family:`symbol$(); // the family of checks metric:`symbol$(); // specific check process:`symbol$(); // process it was run on lastrun:`timestamp$(); // last time it was run nextrun:`timestamp$(); // next time it will be run status:`short$(); // status executiontime:`timespan$(); // time the execution took totaltime:`timespan$(); // total time- including the network transfer time+queue time on target timerstatus:`short$(); // whether the check run in the correct amount of time running:`short$(); // whether the check is currently running result:()) // error message // the table of checks to run checkconfig:( [checkid:`int$()] // id of the check family:`symbol$(); // the family of checks metric:`symbol$(); // specific check process:`symbol$(); // process it was run on query:(); // query to execute resultchecker:(); // function to run on the result params:(); // the parameters to pass to query and resultchecker period:`timespan$(); // how often to run it runtime:`timespan$(); // how long it should take to run active:`boolean$()) // whether the check is active or not // table to track the monitoring requests we have in flight // we don't have any trimming functionality for this table, we may need to add that checktracker:( [runid:`u#`int$()] // id of the run sendtime:`timestamp$(); // the time we sent the request receivetime:`timestamp$(); // the time the response was received executiontime:`timespan$(); // the time it took to run the query checkid:`int$(); // the id of the check that was run status:`short$(); // the status of the request result:()) // the result of the request // insert placeholder row to make sure result field doesn't inherit a type `checktracker insert (0Ni;0Np;0Np;0Nn;0Ni;0Nh;()); // initialise the runid to 0 runid:0i duplicateconfig:{[t] update process:raze[t `process] from ((select from t)where count each t[`process])}; readmonitoringconfig:{[file] // read in config CSV (actually pipe delimited) .lg.o[`readconfig;"reading monitoring config from ",string file:hsym file]; // read in csv file, trap error c:.[0:;(("SS****NN";enlist"|");file);{.lg.e[`readconfig;"failed to load monitoring configuration file: ",x]}]; //ungroup checks and make new row for each process // attempt to parse the params value p:{@[value;x;{[x;y;e] .lg.e[`readconfig;"failed to parse param value from config file at row ",(string y)," with definition ",x,": ",e]}[x;y]]}'[c`params;til count c]; // check each params value is a dictionary if[not all 99h=type each p; .lg.e[`readconfig;"all param values must have type dictionary. Values at rows ",(.Q.s1 where not 99h=type each p)," do not"]; ]; //ungroup checks and make new row for each process //c:update params:p from c; c:duplicateconfig[update params:p from update`$";"vs/:process from c]; addconfig c; } readstoredconfig:{[file] // read in the stored config file. 
Return true or false status // check for existence if[any null file; .lg.o[`readconfig;"supplied stored config file location is null: not attempting to read stored config"]; :0b ]; if[()~key file:hsym file; .lg.o[`readconfig;"could not find storedconfig file at ",string file]; :0b]; .lg.o[`readconfig;"reading stored config file from ",string file]; @[{addconfig get x};file;{'"failed to read stored config file: ",x} ]; 1b } saveconfig:{[file;config] // write the in-memory config to disk if[null file;:()]; .lg.o[`saveconfig;"saving stored config to ",string file:hsym file]; .[set;(file;config);{'"failed to write config file to ",(string x),": ",y}file] }
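As a usage sketch (the file path below is illustrative only), the two functions pair up to persist the in-memory configuration across restarts:

/ persist the current checkconfig table, then load it back on startup
saveconfig[`:tmp/storedchecks;checkconfig]
readstoredconfig[`:tmp/storedchecks]    / returns 1b if the file was found and loaded, 0b if the path is null or missing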
// @kind function // @category fresh // @desc K-best features: choose the K features which have the lowest // p-values and thus have been determined to be the most important features // to allow us to predict the target vector. // @param k {long} Number of features to select // @param pValues {dictionary} Output of .ml.fresh.sigFeat // @return {symbol[]} Significant features fresh.kSigFeat:{[k;pValues] key k sublist asc pValues } // @kind function // @category fresh // @desc Percentile based selection: set a percentile threshold for // p-values below which features are selected. // @param percentile {float} Percentile threshold // @param pValues {dictionary} Output of .ml.fresh.sigFeat // @return {symbol[]} Significant features fresh.percentile:{[percentile;pValues] where pValues<=fresh.feat.quantile[value pValues]percentile } ================================================================================ FILE: ml_ml_fresh_utils.q SIZE: 7,942 characters ================================================================================ // fresh/utils.q - Utility functions // Copyright (c) 2021 Kx Systems Inc // // Unitily functions used in the implimentation of FRESH \d .ml // Python imports sci_ver :"F"$"." vs cstring .p.import[`scipy][`:__version__]` sci_break:((sci_ver[0]=1)&sci_ver[1]>=5)|sci_ver[0]>1 numpy :.p.import`numpy pyStats :.p.import`scipy.stats signal :.p.import`scipy.signal stattools:.p.import`statsmodels.tsa.stattools stats_ver:"F"$"." vs cstring .p.import[`statsmodels][`:__version__]` stats_break:((stats_ver[0]=0)&stats_ver[1]>=12)|stats_ver[0]>0 // @private // @kind function // @category freshPythonUtility // @desc Compute the one-dimensional // discrete Fourier Transform for real input fresh.i.rfft:numpy`:fft.rfft // @private // @kind function // @category freshPythonUtility // @desc Return the real part of the complex argument fresh.i.real:numpy`:real // @private // @kind function // @category freshPythonUtility // @desc Return the angle of the complex argument fresh.i.angle:numpy`:angle // @private // @kind function // @category freshPythonUtility // @desc Return the imaginary part of the complex argument fresh.i.imag:numpy`:imag // @private // @kind function // @category freshPythonUtility // @desc Calculate the absolute value element-wise fresh.i.abso:numpy`:abs // @private // @kind function // @category freshPythonUtility // @desc Kolmogorov-Smirnov two-sided test statistic distribution fresh.i.ksDistrib:pyStats[$[sci_break;`:kstwo.sf;`:kstwobign.sf];<] // @private // @kind function // @category freshPythonUtility // @desc Calculate Kendall’s tau, a correlation measure for // ordinal data fresh.i.kendallTau:pyStats`:kendalltau // @private // @kind function // @category freshPythonUtility // @desc Perform a Fisher exact test on a 2x2 contingency table fresh.i.fisherExact:pyStats`:fisher_exact // @private // @kind function // @category freshPythonUtility // @desc Estimate power spectral density using Welch’s method fresh.i.welch:signal`:welch // @private // @kind function // @category freshPythonUtility // @desc Find peaks in a 1-D array with wavelet transformation fresh.i.findPeak:signal`:find_peaks_cwt // @private // @kind function // @category freshPythonUtility // @desc Calculate the autocorrelation function fresh.i.acf:stattools`:acf // @private // @kind function // @category freshPythonUtility // @desc Partial autocorrelation estimate fresh.i.pacf:stattools`:pacf // @private // @kind function // @category freshPythonUtility // @desc Augmented Dickey-Fuller unit 
root test fresh.i.adFuller:stattools`:adfuller // Python features fresh.i.pyFeat:`aggAutoCorr`augFuller`fftAggReg`fftCoeff`numCwtPeaks, `partAutoCorrelation`spktWelch // Extract utilities // @private // @kind function // @category freshUtility // @desc Create a mapping between the functions and columns on which // they are to be applied // @param map {symbol[][]} Two element list where first element is the // columns to which functions are to be applied and the second element is // the name of the function in the .ml.fresh.feat namespace to be applied // @return {symbol[]} A mapping of the functions to be applied to each column fresh.i.colMap:{[map] updFunc:flip (` sv'`.ml.fresh.feat,'map[;1];map[;0]); updFunc,'last@''2_'map } // @private // @kind function // @category freshUtility // @desc Returns features given data and function params with error handling // @param data {table} Data on which to generate features // @param funcs {dictionary} Function names with functions to execute // @param idCol {list} Columns to index // @return {table} Unexpanded list of features fresh.i.protect:{[data;funcs;idCol] {@[ {?[x;();z!z;enlist[y 0]!enlist 1_y]}[x;;z]; y; {-1"Error generating function : ",string[x 0]," with error ",y;()}[y] ]}[data;;idCol]'[key[funcs],'value funcs]}; // @private // @kind function // @category freshUtility // @desc Returns the length of each sequence // @param condition {boolean} Executed condition, e.g. data>avg data // @return {long[]} Sequence length based on condition fresh.i.getLenSeqWhere:{[condition] idx:where differ condition; (1_deltas idx,count condition)where condition idx } // @private // @kind function // @category freshUtility // @desc Find peaks within the data // @param data {number[]} Numerical data points // @param support {long} Support of the peak // @param idx {long} Current index // @return {boolean[]} 1 where peak exists fresh.i.peakFind:{[data;support;idx] neg[support]_support _min data>/:xprev\:[-1 1*idx]data } // @private // @kind function // @category freshUtility // @desc Expand results produced by FRESH // @param results {table} Table of resulting features // @param column {symbol} Column of interest // @return {table} Expanded results table fresh.i.expandResults:{[results;column] t:(`$"_"sv'string column,'cols t)xcol t:results column; ![results;();0b;enlist column],'t } // Select utilities // @private // @kind function // @category freshUtility // @desc Apply python function for Kendall’s tau // @param target {number[]} Target vector // @param feature {number[]} Feature table column // @return {float} Kendall’s tau - Close to 1 shows strong agreement, close to // -1 shows strong disagreement fresh.i.kTau:{[target;feature] fresh.i.kendallTau[target;feature][`:pvalue]` } // @private // @kind function // @category freshUtility // @desc Perform a Fisher exact test // @param target {number[]} Target vector // @param feature {number[]} Feature table column // @return {float} Results of Fisher exact test fresh.i.fisher:{[target;feature] g:group@'target value group feature; fresh.i.fisherExact[count@''@\:[g]distinct target][`:pvalue]` } // @private // @kind function // @category freshUtility // @desc Calculate the Kolmogorov-Smirnov two-sided test statistic // distribution // @param feature {number[]} Feature table column // @param target {number[]} Target vector // @return {float} Kolmogorov-Smirnov two-sided test statistic distribution fresh.i.ks:{[feature;target] d:asc each target group feature; n:count each d; k:max abs(-). 
value(1+d bin\:raze d)%n; en:prd[n]%sum n; fresh.i.ksDistrib .$[sci_break;(k;ceiling en);enlist k*sqrt en] } // @private // @kind function // @category freshUtility // @desc Pass data correctly to .ml.fresh.i.ks allowing for projection // in main function // @param target {number[]} Target vector // @param feature {number[]} Feature table column // @return {float} Kolmogorov-Smirnov two-sided test statistic distribution fresh.i.ksYX:{[target;feature] fresh.i.ks[feature;target] } // @private // @kind function // @category freshUtility // @desc Generate features for fresh feature creation // @param features {symbol|symbol[]|null} Features to remove // @return {table} Updated table of features fresh.util.featureList:{[features] noHP:`aggLinTrend`autoCorr`binnedEntropy`c3`cidCe`eRatioByChunk`fftCoeff, `indexMassQuantile`largestDev`numCrossing`numCwtPeaks`numPeaks, `partAutoCorrelation`quantile`ratioBeyondRSigma`spktWelch, `symmetricLooking`treverseAsymStat`valCount`rangeCount`changeQuant; noPY:`aggAutoCorr`fftAggreg`fftCoeff`numCwtPeaks`partAutoCorrelation, `spktWelch; noClass:`aggLinTrend`aggFuller`c3`cidCe`linTrend`mean2DerCentral, `perRecurToAllData`perRecurToAllVal`symmetricLooking`treverseAsymStat; $[(features~(::))|features~`regression; :.ml.fresh.params; features~`noHyperparameters; :update valid:0b from .ml.fresh.params where f in noHP; features~`noPython; :update valid:0b from .ml.fresh.params where f in noPY; features~`classification; :update valid:0b from .ml.fresh.params where f in noClass; (11h~abs type[features])& all ((),features) in\: key[.ml.fresh.params]`f; :update valid:0b from .ml.fresh.params where not f in ((),features); '"Params not recognized" ]; }; ================================================================================ FILE: ml_ml_graph_graph.q SIZE: 7,896 characters ================================================================================ // graph/graph.q - Graph tools // Copyright (c) 2021 Kx Systems Inc // // Create, update, and delete functionality for a graph. \d .ml // @kind function // @category graph // @desc Generate an empty graph // @return {dictionary} Structure required for the generation of a connected // graph. This includes a key for information on the nodes present within the // graph and edges outlining how the nodes within the graph are connected. 
createGraph:{[] nodeKeys:`nodeId``function`inputs`outputs; nodes:1!enlist nodeKeys!(`;::;::;::;::); edgeKeys:`destNode`destName`sourceNode`sourceName`valid; edges:2!enlist edgeKeys!(`;`;`;`;0b); `nodes`edges!(nodes;edges) } // @kind function // @category graph // @desc Add a functional node to a graph // @param graph {dictionary} Graph originally generated using .ml.createGraph // @param nodeId {symbol} Denotes the name associated with the functional node // @param node {fn} A functional node // @return {dictionary} The graph with the the new node added to the graph // structure addNode:{[graph;nodeId;node] node,:(1#`)!1#(::); if[nodeId in exec nodeId from graph`nodes;'"invalid nodeId"]; if[not``function`inputs`outputs~asc key node;'"invalid node"]; if[(::)~node`inputs;node[`inputs]:(0#`)!""]; if[-10h=type node`inputs;node[`inputs]:(1#`input)!enlist node`inputs]; if[99h<>type node`inputs;'"invalid inputs"]; if[-10h=type node`outputs; node[`outputs]:(1#`output)!enlist node`outputs; node[`function]:((1#`output)!enlist@)node[`function]::; ]; if[99h<>type node`outputs;'"invalid outputs"]; graph:@[graph;`nodes;,;update nodeId from node]; edgeKeys:`destNode`destName`sourceNode`sourceName`valid; edges:flip edgeKeys!(nodeId;key node`inputs;`;`;0b); graph:@[graph;`edges;,;edges]; graph }
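A brief usage sketch (the node name and the toy function are illustrative, not from the library docs): create an empty graph and attach one functional node with a single float input and output.

/ build an empty graph, then add a node named `square that squares its input
g:.ml.createGraph[]
g:.ml.addNode[g;`square;`function`inputs`outputs!({x*x};"f";"f")]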
//- get default time from tickerplant or table getdefaulttime:{[dict] // go to the tableproperties table if[not ` ~ configure:.checkinputs.tablepropertiesconfig[(dict`tablename),.proc.proctype;`primarytimecolumn];:configure]; timestamp:(exec from meta (dict`tablename) where t in "p")`c; if[1 < count timestamp; '`$.checkinputs.formatstring["Table has multiple time columns, please select one of the following {} for the parameter timecolumn";timestamp]]; date:(exec from meta (dict`tablename) where t in "d")`c; if[1 < count date; '`$.checkinputs.formatstring["Table has multiple date columns, please select one of the following {} for the parameter timecolumn";date]]; if[not timestamp = `;.checkinputs.tablepropertiesconfig[(dict`tablename),.proc.proctype;`primarytimecolumn]::timestamp;:timestamp]; if[not date = `;:date]; '`$.checkinputs.formatstring["Table:{tablename} does not have a default timecolumn, one must be selected using the time column parameter";dict] }; ================================================================================ FILE: TorQ_code_dataaccess_extractqueryparams.q SIZE: 7,555 characters ================================================================================ \d .eqp //- table to store arguments queryparams:`tablename`partitionfilter`attributecolumn`timefilter`instrumentfilter`columns`grouping`aggregations`filters`ordering`freeformwhere`freeformby`freeformcolumn`optimisation!(`;();`;();();();();();();();();();();1b); extractqueryparams:{[inputparams;queryparams] queryparams:extracttablename[inputparams;queryparams]; queryparams:extractpartitionfilter[inputparams;queryparams]; queryparams:extractattributecolumn[inputparams;queryparams]; queryparams:extracttimefilter[inputparams;queryparams]; queryparams:extractinstrumentfilter[inputparams;queryparams]; queryparams:extractcolumns[inputparams;queryparams]; queryparams:extractgrouping[inputparams;queryparams]; queryparams:extractaggregations[inputparams;queryparams]; queryparams:extracttimebar[inputparams;queryparams]; queryparams:extractfilters[inputparams;queryparams]; queryparams:extractordering[inputparams;queryparams]; queryparams:extractfreeformwhere[inputparams;queryparams]; queryparams:extractfreeformby[inputparams;queryparams]; queryparams:extractfreeformcolumn[inputparams;queryparams]; queryparams:jointableproperties[inputparams;queryparams]; queryparams:extractoptimisationkey[inputparams;queryparams]; queryparams:extractcolumnnaming[inputparams;queryparams]; :queryparams; }; extracttablename:{[inputparams;queryparams]@[queryparams;`tablename;:;inputparams`tablename]}; extractpartitionfilter:{[inputparams;queryparams] //If an RDB return the partitionfilters as empty if[`rdb~inputparams[`metainfo;`proctype];:@[queryparams;`partitionfilter;:;()]]; //Get the partition range function getpartrangef:.checkinputs.gettableproperty[inputparams;`getpartitionrange]; // Get the time column timecol:inputparams`timecolumn; // Get the time range function timerange:inputparams[`metainfo]`starttime`endtime; // Find the partition field partfield:.checkinputs.gettableproperty[inputparams;`partfield]; //return a list of partions to search through partrange:.dacustomfuncs.partitionrange[(inputparams`tablename);timerange;.proc.proctype;timecol]; // Return as kdb native filter partfilter:exec enlist(within;partfield;partrange)from inputparams; :@[queryparams;`partitionfilter;:;partfilter]; }; extractattributecolumn:{[inputparams;queryparams] attributecolumn:.checkinputs.gettableproperty[inputparams;`attributecolumn]; 
:@[queryparams;`attributecolumn;:;attributecolumn]; }; extracttimefilter:{[inputparams;queryparams] procmeta:inputparams`metainfo; if[-14h~type procmeta[`endtime];procmeta[`endtime]:1+procmeta[`endtime]]; timecolumn:inputparams`timecolumn; addkeys:`proctype`timefilter; :queryparams,exec addkeys!(proctype;enlist(within;timecolumn;(starttime;endtime)))from procmeta; }; extractinstrumentfilter:{[inputparams;queryparams] if[not`instruments in key inputparams;:queryparams]; instrumentcolumn:.checkinputs.gettableproperty[inputparams;`instrumentcolumn]; instruments:enlist inputparams`instruments; filterfunc:$[1=count first instruments;=;in]; instrumentfilter:enlist(filterfunc;instrumentcolumn;instruments); :@[queryparams;`instrumentfilter;:;instrumentfilter]; }; extractcolumns:{[inputparams;queryparams] if[not`columns in key inputparams;:queryparams]; columns:(),inputparams`columns; :@[queryparams;`columns;:;columns!columns]; }; extractgrouping:{[inputparams;queryparams] if[not`grouping in key inputparams;:queryparams]; grouping:(),inputparams`grouping; :@[queryparams;`grouping;:;grouping!grouping]; }; extractaggregations:{[inputparams;queryparams] if[not`aggregations in key inputparams;:queryparams]; aggregations:(!). flip(extracteachaggregation'). ungroupaggregations inputparams; :@[queryparams;`aggregations;:;aggregations]; }; ungroupaggregations:{[inputparams](key[inputparams`aggregations]where count each get inputparams`aggregations;raze inputparams`aggregations;.checkinputs.getdefaulttime[inputparams])}; extracteachaggregation:{[func;columns;deftime](`$string[func],raze .[string(),?[columns=`$((string deftime),".date");`date;columns];(::;0);upper];?[`sumsq=func;(sum;(xexp;columns;2));parse[string func],columns])}; extracttimebar:{[inputparams;queryparams] // If no key has been provided return the queryparams if[not`timebar in key inputparams;:queryparams]; // Get the timebar params as a dictionary timebar:`size`bucket`timecol!inputparams`timebar; // Convert the timebucket to it's corresponding integer value timebucket:exec size * .schema.timebarmap bucket from timebar; // Return as a kdb+ native function :@[queryparams;`timebar;:;timebar[1#`timecol]!enlist(xbarfunc;timebucket;timebar[`timecol])]; }; xbarfunc:{[timebucket;x] typ:type x; if[typ~12h;:timebucket xbar x]; if[typ in 13 14h;:timebucket xbar 0D+`date$x]; if[typ~15h;:timebucket xbar`timespan$x]; if[typ in 16 17 18 19h;:timebucket xbar`timespan$x]; '`$"timebar type error"; //- type checks in checkinputs functions should stop it reaching here }; // extract where filter parameters from input dictionary // the filters parameter is a dictionary of the form: // `sym`price`size!(enlist(=;`AAPL);((within;80 100);(not in;81 83 85));enlist(>;50)) // this is translated into a kdb parse tree for use in the where clause: // ((=;`sym;,`AAPL);(within;`price;80 100);(in[~:];`price;81 83 85);(>;`size;50)) // this function ensures symbol types are enlist for the parse tree and reorders // filters prefaced with the 'not' keyword as neeeded extractfilters:{[inputparams;queryparams] if[not`filters in key inputparams;:queryparams]; f:inputparams`filters; f:@''[f;-1+count''[f];{$[11h~abs type x;enlist x;x]}]; f:raze key[f]{$[not~first y;y[0],enlist(y 1),x,-1#y;(1#y),x,-1#y]}''get f; :@[queryparams;`filters;:;f]; }; extractordering:{[inputparams;queryparams] if[not`ordering in key inputparams;:queryparams]; go:{[x;input]if[first (input)[x]=`asc;:((input)[x;1] xasc)];if[first (input)[x]=`desc;:((input)[x;1] xdesc)];(input)[x]}; 
order:go[;inputparams`ordering] each til count inputparams`ordering; :@[queryparams;`ordering;:;order]; }; extractfreeformwhere:{[inputparams;queryparams] if[not`freeformwhere in key inputparams;:queryparams]; whereclause:parse["select from x where ",inputparams`freeformwhere][2;0]; :@[queryparams;`freeformwhere;:;whereclause]; }; extractfreeformby:{[inputparams;queryparams] if[not`freeformby in key inputparams;:queryparams]; byclause:parse["select by ",inputparams[`freeformby]," from x"][3]; :@[queryparams;`freeformby;:;byclause]; }; extractfreeformcolumn:{[inputparams;queryparams] if[not`freeformcolumn in key inputparams;:queryparams]; selectclause:parse["select ",inputparams[`freeformcolumn]," from x"][4]; :@[queryparams;`freeformcolumn;:;selectclause]; }; extractoptimisationkey:{[inputparams;queryparams] A:((1#`optimisation)!1#1b)^inputparams; :@[queryparams;`optimisation;:;A`optimisation]; }; jointableproperties:{[inputparams;queryparams]queryparams,enlist[`tableproperties]#inputparams}; //-Extract the column naming dictionary/list extractcolumnnaming:{[inputparams;queryparams] // If No argument has been supplied return an empty list (this will return default behaviour in getdata.q) if[not `renamecolumn in key inputparams;:@[queryparams;`renamecolumn;:;()!()]]; // Otherwise extract the column order list/dictionary :@[queryparams;`renamecolumn;:;@[inputparams;`renamecolumn]]; }; processpostback:{[result;postback]:postback result;}; ================================================================================ FILE: TorQ_code_dataaccess_getdata.q SIZE: 3,630 characters ================================================================================ // high level api functions for data retrieval getdata:{[inputparams] if[.proc.proctype in key inputparams;inputparams:inputparams .proc.proctype]; requestnumber:.requests.initlogger[inputparams]; // [input parameters dict] generic function acting as main access point for data retrieval if[1b~inputparams`getquery;:.dataaccess.buildquery[inputparams]]; // validate input passed to getdata usersdict:inputparams; inputparams:@[.dataaccess.checkinputs;inputparams;.requests.error[requestnumber;]]; // log success of checkinputs .lg.o[`getdata;"getdata Request Number: ",(string requestnumber)," checkinputs passed"]; // extract validated parameters from input dictionary queryparams:.eqp.extractqueryparams[inputparams;.eqp.queryparams]; // log success of eqp .lg.o[`getdata;"getdata Request Number: ",(string requestnumber)," extractqueryparams passed"]; // re-order the passed parameters to build an efficient query query:.queryorder.orderquery queryparams; // log success of queryorder .lg.o[`getdata;"getdata Request Number: ",(string requestnumber)," queryorder passed"]; // execute the queries table:raze value each query; if[(.proc.proctype=`rdb); // change defaulttime.date to date on rdb process query result if[(`$(string .checkinputs.getdefaulttime inputparams),".date") in (cols table); table:?[(cols table)<>`$(string .checkinputs.getdefaulttime[inputparams]),".date";cols table;`date] xcol table]; // adds partition column when all columns are quried from the rdb process for both keyed and unkeyed results if[(1 < count inputparams`procs) & (all (cols inputparams`tablename) in (cols table)); //get appropriate column name based on partition type colname:$[-7h~type .rdb.getpartition[];`int;`date]; //update table to include col of current partition value table:![table;();0b;enlist[colname]!(), .rdb.rdbpartition]; if[98h=type table;table:colname xcols 
table]; if[99h=type table;keycol:cols key table; table:0!table; table:colname xcols table; table:keycol xkey table]]; ]; f:{[input;x;y]y[x] input}; // order the query after it's fetched if[not 0~count (queryparams`ordering); table:f[table;;queryparams`ordering]/[1;last til count (queryparams`ordering)]]; // rename the columns result:queryparams[`renamecolumn] xcol table; // apply post-processing function if called in process or query to single process called from gateway if[(10b~in[`postprocessing`procs;key inputparams])or((1b~`postprocessing in key inputparams)and(1~count inputparams `procs)); result:.eqp.processpostback[result;inputparams`postprocessing]]; // apply sublist function if called in process or query to single process called from gateway if[(10b~`sublist`procs in key inputparams)or((1b~`sublist in key inputparams)and(1~count inputparams `procs)); result:(inputparams`sublist) sublist result]; .requests.updatelogger[requestnumber;`endtime`success!(.proc.cp[];1b)]; :result }; \d .dataaccess
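For orientation, a hypothetical call might look like the following; the parameter names are inferred from the extraction functions above and the table, times and instruments are purely illustrative:

/ illustrative only: request trade columns for two instruments over a time window
getdata `tablename`starttime`endtime`instruments`columns!(`trade;2015.01.01D09:00;2015.01.01D17:00;`AAPL`MSFT;`time`sym`price`size)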
Sample aggregation engine for market depth¶ Throughout the past decade the volume of data in the financial markets has increased substantially due to a variety of factors, including access to technology, decimalization and increased volatility. As these volumes increase it can pose significant problems for applications consuming market depth, which is summarized as the quantity of the instrument on offer at each price level. The purpose of this white paper is to describe a sample method for efficiently storing and producing views on this depth data. Due to the large data volume most applications won’t want to consume or process the full depth of book. Having the process subscribe to the full depth and calculate a top-of-book (TOB) will be computationally expensive and take up valuable processing time. Additionally, if there were multiple applications looking to consume the same data it might not make sense to repeat the effort. Offloading the work to a separate calculation engine would be appropriate since it could then calculate the required data and publish to any interested consumers. An example of this might be a mark-to-market engine that only wants a TOB price. There are other benefits to this approach and these will be outlined in the discussion section. Sample order book displaying price level and depth information for a stock Using TOB calculation as a use case, we will describe a flexible framework for approaching the problem and touch on some possible enhancements. kdb+ is highly optimized for vector operations so the framework will use this strength. For example, instead of re-running the same code multiple times, it will use vector operations and shared code where possible. We will also introduce the concept of filtering data sources so multiple downstream consumers can see different views on the same quote landscape. For the purpose of this paper the examples use FX data, which requires price aggregation across liquidity sources. However, this requirement to aggregate across venues is also increasingly becoming the case in the equity market, where the large exchanges are losing volume and liquidity is more fragmented across pools and alternative sources. For simplicity it is assumed the engine operates in a pub/sub framework where updates are received from upstream liquidity sources through a upd function and published downstream using a pub function. The upd function is a generic callback for incoming data and is called with the schema name and table of updates. The definition of these functions will be implementation-specific and is deliberately ignored for the purpose of this paper. The logic will not be applicable in every case and there are a few caveats, which will be discussed at the end. It should be noted that this paper assumes the consumers of depth information are computer programs. However, human consumers including traders and surveillance personnel also find it useful to visualize the book. While this paper does not focus on the visualization of market depth, there are visualization tools available that can assist in viewing the full depth of the order book. KX Dashboards, seen above in Figure 1, is one example of such tools. All code was run using kdb+ version 3.1 (2013.11.20). Approach¶ As the volume of incoming data increases, sorting the book on every tick becomes computationally expensive and might be unnecessary. For this reason a timer-based approach might be more appropriate, where the TOB is calculated and published periodically. 
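As a minimal sketch (not from the paper, and the interval is an assumption), the kdb+ system timer is all that is needed to drive such an approach; the engine's .z.ts handler, defined in the Implementation section, then fires periodically:

/ enable the system timer; 100ms is an illustrative interval only
\t 100
/ .z.ts will now be invoked every 100ms to calculate and publish the top of book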
The timer value will need to be calibrated according to requirements and performance considerations since it depends on multiple factors. A few examples are listed below:

- Volume of quotes being received – if the volume of quotes is quite low, it wouldn't make sense for the timer interval to be small
- Type of consumer processes – consumers may want the data only in quasi-realtime, i.e. a longer timer interval might be more appropriate
- Latency of code – if it typically takes 10ms for the timer code to run, then a timer interval of 5ms wouldn't make sense
- Number of subscriptions – as the number of subscriptions/consumers grows, the likelihood is that the timer function will take longer to run

Schemas and data structures¶

In this section, some sample schemas and data structures are defined for storing the quote data. The marketQuotes schema is displayed below; this is assumed to be the format of the data received from our upstream feeds. An assumption being made is that only the last quote per distinct price level is valid, i.e. if a EURUSD quote from FeedA at level 0 is received twice then the second quote overwrites the first. For this reason the table is keyed by the sym, src and level columns in the engine process.

marketQuotes:([]
  time:`timestamp$();
  sym:`symbol$();
  src:`symbol$();
  level:`int$();
  bid:`float$();
  ask:`float$();
  bsize:`int$();
  asize:`int$();
  bexptime:`timestamp$();
  aexptime:`timestamp$()
  )
`sym`src`level xkey `marketQuotes
quote:update bok:1b, aok:1b from marketQuotes

The second schema defined is a modified version of marketQuotes. This will be used internally and updated on every timer run. The bok and aok columns are flags indicating whether a quote is still valid and will be updated periodically. The reason for keeping two similar schemas instead of one will become clearer later.

The internal quote table is updated on every timer call and is only ever appended to, which allows the engine to take advantage of a useful feature of keyed tables in kdb+. The row index of a specific key combination doesn't change from the moment it's added (assuming rows are never removed). The following is an example using a simplified version of the marketQuotes table, mentioned above, with the same key columns. We assume the table below is the application's current snapshot of the market.

q)marketQuotes
sym    src   level| time                           bid     ask    ..
------------------| ---------------------------------------------..
EURUSD FeedA 2    | 2013.11.20D19:05:00.849247000  1.43112 1.43119..
EURUSD FeedB 2    | 2013.11.20D19:05:00.849247000  1.43113 1.4312 ..

Now another quote arrives for EURUSD, FeedA with a level of 2. This would overwrite the previous one, as highlighted below. The row index or position in the table of that key combination remains the same but the time and price columns update.

q)/ table after new quote
q)show marketQuotes upsert `time`sym`src`level`bid`ask!
    (.z.p;`EURUSD;`FeedA;2;1.43113;1.43118)
sym    src   level| time                           bid     ask    ..
------------------| ---------------------------------------------..
EURUSD FeedA 2    | 2019.09.25D02:58:03.729837000  1.43113 1.43118..
EURUSD FeedB 2    | 2013.11.20D19:05:00.849247000  1.43113 1.4312 ..

Using the behavior from the previous example it's possible to map each instrument to its corresponding row index so that it's easy to quickly extract all entries for an instrument, i.e. the EURUSD instrument would map to row numbers 0 and 1 in the previous example.
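A quick check of this stability in a q session (a sketch assuming the simplified table above; the output shown is what would be expected):

q)key[marketQuotes]?`sym`src`level!(`EURUSD;`FeedA;2)
0
q)`marketQuotes upsert `time`sym`src`level`bid`ask!(.z.p;`EURUSD;`FeedA;2;1.43113;1.43118)
`marketQuotes
q)key[marketQuotes]?`sym`src`level!(`EURUSD;`FeedA;2)
0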
This functionality could also be extended to do additional filtering, which will be touched on in more detail later. The structures below (bids and asks ) will store the row indexes of occurrences of each instrument, sorted from best to worst on each side of the market and will be updated on each timer call. The second set of structures (validbids and validasks ) is used to store the indexes of unexpired prices by instrument. By using the inter keyword, the unexpired rates can be extracted quickly and pre-sorted. asks:bids:(`u#"s"$())!() validbids:validasks:(`u#"s"$())!() The following example shows how the sorted row indexes for EURUSD are extracted. The important thing to note here is that the inter keyword preserves the order of the first list, i.e. the result of the third command results in a list of unexpired bids for EURUSD, still sorted by price. q)bids[`EURUSD] 1 3 5 8 2 6 q)validbids[`EURUSD] 1 8 3 6 q)bids[`EURUSD] inter validbids[`EURUSD] 1 3 8 6 Stream groupings¶ As mentioned previously, the engine might need to handle some additional filtering and an example was outlined where filtering was performed to remove expired quotes. To extend this, the concept of stream groupings is introduced where there are multiple subscribers in the system, for the derived data with each having different entitlements to the feeds/streams. This is a common requirement in real-world applications where institutions need to manage their pricing and counterparties. The table below is an example of how those entitlements might look. Instrument Stream group Streams ------------------------------------------- EURUSD A Feed1,Feed2,Feed3 EURUSD B Feed1,Feed4 If the engine could create groups of subscriptions (stream groups) then it could create mappings as before and use the inter function to apply them. For example if there were two stream groups (A and B) for EURUSD, then the engine could then extract the best, unexpired prices for each group. Some structures and functions to handle the maintenance of stream groups per instrument are detailed below; this will be achieved by giving each subscription a distinct group name. The structures and a description of each are as follows: | structure | description | |---|---| | symtogroup | maps an instrument to list of stream groups | | grouptosym | maps a group name back to an instrument | | streamgroups | maps a stream name to a list of feeds/sources | | streamindices | maps a stream name to a list of row indexes (corresponding to rows with the instrument and sources in the group) | symtogroup:(`u#"s"$())!() grouptosym:(`u#"s"$())!"s"$() streamgroups:(`u#"s"$())!() streamindices:(`u#"s"$())!() The function below is used to create a stream group for an instrument by instantiating the data structures. Arguments to the function are: sym instrument name strgrp stream group name strms symbol list of streams in the group A sample call to register a stream group is displayed below along with how data structures would be populated. The stream group name is appended to the instrument to ensure further uniqueness across instruments (though this might not be necessary): registerstreamgroup:{[sym;strgrp;strms] sg:` sv (sym;strgrp); if[sg in key streamgroups; :(::)]; @[`symtogroup; sym; union; sg]; @[`grouptosym; sg; :; sym]; @[`streamgroups; sg; :; strms]; @[`streamindices; sg; :; "i"$()]; } At this point, an SG1 stream group with two component feeds has been created for EURUSD. streamindices is initialized for the stream group as an empty list of integers. 
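Concretely, such a registration might look as follows (the feed names are illustrative, not from the paper):

/ register stream group SG1 for EURUSD with two component feeds
registerstreamgroup[`EURUSD;`SG1;`FeedA`FeedC]

/ after the call:
/   symtogroup[`EURUSD]        includes `EURUSD.SG1
/   grouptosym[`EURUSD.SG1]    is `EURUSD
/   streamgroups[`EURUSD.SG1]  is `FeedA`FeedC
/   streamindices[`EURUSD.SG1] is an empty list of ints, "i"$()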
It is assumed that all the stream groups are registered before any quotes are received and that the indexes will be updated whenever new quotes enter the system. As new quotes arrive, if the key combination is new to our internal quote table then the source might be part of a stream group and the engine will have to update the streamindices structure to account for this. The following function is called in this event, where the input is a table containing the new quotes. If any of the new quotes are part of a stream group then their row numbers are appended to the structure in the appropriate places:

updstreamgroups:{[tab]
  sg:raze symtogroup distinct exec sym from tab;
  s:grouptosym sg;
  .[`streamindices; (); ,'; ] sg!
    {[x;s;srcs]
      $[count r:exec row from x where sym=s, src in srcs; r; "i"$()]
      }[tab]'[s;streamgroups sg];
  }

Using the stream groupings from the previous section, the following example shows how the streamindices structure is updated for new quotes. The tab parameter is assumed to be as below (price, size and other columns have been omitted):

q)tab    / New incoming quotes table
sym    src   level row
----------------------
EURUSD FeedA 0     5
EURUSD FeedB 2     6

q)streamindices    / before
EURUSD.SG1| 0 2 3
EURUSD.SG2| 1 4

q)updstreamgroups tab

q)streamindices    / after
EURUSD.SG1| 0 2 3 5
EURUSD.SG2| 1 4 5 6

FeedA is a member of both stream groups so row index 5 is added to both groups, whereas FeedB is only a member of SG2.

Implementation¶

Quote preparation¶

Quotes are assumed to be received from upstream through a generic upd function, the definition of which is displayed below. Incoming quotes are appended to the marketQuotes schema and between each timer call quotes may be conflated due to the key columns. This will only happen if a quote is republished during the timer interval for the same instrument, source and level. This function could be modified to include custom code or table handling. On each timer call, the entire marketQuotes table will be passed into the sorting algorithm (the quoteTimer function).

upd:{[t;x]
  // .....
  if[t=`marketQuotes; t upsert x];
  // .....
  }

.z.ts:{[]
  quoteTimer[0!marketQuotes];
  marketQuotes::0#marketQuotes;
  }

After the sorting algorithm finishes, the marketQuotes table is cleared of records. This is the reason two schemas are used instead of one. The internal quote schema holds the latest quotes from each timer call and marketQuotes holds only the data between timer calls. This means that only new quotes are processed by the sorting algorithm each time.

Sorting algorithm¶

The main body of the algorithm is done in the quoteTimer function and is called each time the timer fires. The incoming data is appended to the quote table with the expiry flags set to true (unexpired). If the count of the quote table has increased then there has been a new quote key received and the engine needs to update the stream group indexes as described in the stream grouping section. The algorithm then proceeds to sort the bid and ask sides of the market separately; these are then appended to the bids and asks structures for each updated instrument. The sorting algorithm in the example is from best-to-worst (descending on bid, ascending on ask).
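To see the shape of what gets appended to bids, here is a toy illustration (not from the paper) of the exec-by-sym sort used in quoteTimer below: it returns a dictionary mapping each instrument to its row indexes, ordered best-to-worst on the bid side.

q)t:([]sym:`EURUSD`GBPUSD`EURUSD`EURUSD;bid:1.10 1.25 1.12 1.11)
q)exec i {idesc x}[bid] by sym from t
EURUSD| 2 3 0
GBPUSD| ,1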
quoteTimer:{[data] qc:count quote; `quote upsert update bok:1b, aok:1b from data; s:distinct data`sym; if[not count s; :()]; if[qc<count[quote]; updstreamgroups[qc _ update row:i from quote]; ]; bids,:exec i {idesc x}[bid] by sym from quote where sym in s; asks,:exec i {iasc x}[ask] by sym from quote where sym in s; checkexpiry[]; updsubscribers[s]; } The checkexpiry function is used to update the expiry flags on the quote table and update the valid quote structures. The row indexes of each valid quote are extracted by instrument and appended as before: checkexpiry:{[] now:.z.p; update bok:now<bexptime, aok:now<aexptime from `quote; validbids,:exec i where bok by sym from quote; validasks,:exec i where aok by sym from quote; } q)exec i where bok by sym from quote EURUSD| 0 2 5 6 8 9 12 13 14 GBPUSD| 1 3 4 7 10 11 15 At this point the algorithm should be able to extract the unexpired and sorted quotes. This will be performed by a function using the method described before. The functions below take a list of instruments and extract the best, valid quote indexes for each instrument: q)getactivebids:{[s] bids[s] inter' validbids[s]} q)getactiveasks:{[s] asks[s] inter' validasks[s]} q)getactivebids[`EURUSD`GBPUSD] 0 6 8 14 5 9 12 13 2 3 15 4 7 10 1 11 Grouping algorithm¶ The last part of the algorithm creates the output to be published downstream. It extracts the best quotes per instrument and applies the filters per stream group. Taking the list of updated instruments as an input, it extracts the list of applicable stream groups. The best valid quotes are extracted and the inter keyword is used to filter them to include only indexes that are valid for each stream group: updsubscribers:{[s] sg:raze symtogroup[s]; s:grouptosym[sg]; aix:getactiveasks[s] inter' streamindices[sg]; bix:getactivebids[s] inter' streamindices[sg]; qts:(0!quote)[`bid`ask`bsize`asize`src]; bind:{[amnts;sz;s] s first where amnts[s] >= sz} [qts[2];1000000]'[bix]; aind:{[amnts;sz;s] s first where amnts[s] >= sz} [qts[3];1000000]'[aix]; new:([] time:.z.p; sym:s; stream:sg; bid: qts[0;bind]; ask: qts[1;aind]; bsize:qts[2;bind]; asize:qts[3;aind]; bsrc: qts[4;bind]; asrc: qts[4;aind] ); pub[`quoteView; new]; }; Local variables sg and s are assigned the stream groups for the list of updated instruments and the corresponding instruments, i.e. if there are two stream groups for EURUSD in sg , there will be two instances of EURUSD in s . This ensures the two variables conform when aix and bix are defined. In these lines of code the best quotes are extracted using the getactivebids and getactiveasks functions. The row indexes returned for each instrument are then filtered to include only rows corresponding to each stream group. The two variables aix and bix then contain the sorted indexes per stream group. q)sg `EURUSD.SG1`EURUSD.SG2 q)s `EURUSD`EURUSD q)getactiveasks[s] inter' streamindices[sg] 2 5 0 5 6 Obviously the best or TOB quote would be the first element for each stream group. However, to add slightly more complexity, the engine may only require prices for sizes above 1 million. The definition of qts extracts a list of columns to be used when building the result. qts is a five-item list with each item consisting of column data. These are used to extract price, size and source information for the row indexes. 
q)qts 1.295635 1.295435 1.295835 1.295835 1.295835 1.295635 1.296035 1.296035 500000 500000 1000000 1000000 500000 500000 1000000 1000000 FeedA FeedB FeedB FeedA For each stream group, the engine has a list of sorted indexes. In the definitions of bind and aind it indexes into the lists of sizes to extract quotes with size greater than the 1-million limit. This is then applied back to the list of rows so the bind and aind variables then contain the index of the best quote per stream group. Using the example from before, with two stream groups for EURUSD and the qts table, the index of the best bid (with a size above 1 million) per stream group is extracted. q)show bind:{[amnts;sz;s] s first where amnts[s]>=sz}[qts[2];1000000]'[bix] 33 A table of output is built using these indexes with a row for each stream group. The pub function will be implementation-specific and is left undefined in this example. Discussion¶ This paper has discussed a framework for aggregating market depth using FX data as an example, as well as managing multiple subscriptions. For simplicity, the code was kept relatively succinct and is not production-ready. It would need to be optimized for each use case, but at least describes the approach and highlights the enhancements or optimizations that could be integrated. There are many benefits to the method suggested above. As discussed in the introduction, sorting the book in each engine and duplicating the work in each of them can be costly, so offloading this effort to another process makes sense. The downstream engines will then only have to listen to an incoming message instead of doing some expensive sorting. They will also be insulated from spikes in the amount of market data being published. For example, during market announcements these spikes may cause the engines to peg CPU and cause delays executing their normal functions. It will also take pressure off upstream publishers if they have to publish to one or two processes instead of multiple. Obviously there are a few caveats with this approach and some situations where it may not be appropriate. Some engines may be sensitive to latency and need quotes as soon as possible: an execution engine would be an example. The approach described in this paper would introduce an extra hop of latency between the feed and the quote hitting the execution engine. This would probably be unacceptable, though the size of that latency would vary depending on a lot of factors such as the number of input streams and number of applications subscribing. There are also a couple of assumptions as part of the approach that may not be valid in every situation. Handling of order-book data in this type of situation is examined by white paper “kdb+ and FIX messaging”. The sorting algorithm is shared across all stream groups, which might not always be the case. Adding multiple sorting algorithms would be possible but might adversely affect performance. Another assumption that was made concerned the validity of incoming market data, i.e. only the last quote per price stream (sym , src and level combination) is assumed to be valid. This is dependent upon the incoming data and might differ across applications. A number of enhancements could easily be made to the engine. For instance, the stream groups currently need to be initialized prior to quotes being received and it would be relatively easy to have these initialized dynamically when a subscriber comes online. 
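A sketch of how that might look (hypothetical, and assuming the subscriber supplies its instrument, group name and entitled streams when it connects):

/ hypothetical subscription entry point: lazily create the stream group the
/ first time a subscriber asks for it; handle management and the pub callback
/ remain implementation-specific, as elsewhere in this paper
sub:{[sym;strgrp;strms]
  registerstreamgroup[sym;strgrp;strms]   / returns immediately if the group already exists
  }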
The output mode is also fixed to be TOB above a certain size and this could be enhanced to have a dynamic amount or use a VWAP price across multiple quotes for Immediate or Cancel (IOC) type views. Multiple output layers per stream group could also be added instead of just the TOB price.

In the example given, the engine sorts only by best-to-worst price but it could easily be modified to sort on multiple criteria. Sorting could be done on multiple columns as required: price and size, price and expiry, etc. The example below describes how sorting by price and size could be undertaken:

tab:([]sym:`EURUSD;
  src:`FeedA`FeedB`FeedC`FeedD;
  bid:1.2344 1.2345 1.2343 1.2344;
  bsize:10000 5000 3000 12000
  )

q)/Quote table prior to sorting
q)tab
sym    src   bid    bsize
-------------------------
EURUSD FeedA 1.2344 10000
EURUSD FeedB 1.2345 5000
EURUSD FeedC 1.2343 3000
EURUSD FeedD 1.2344 12000

q)/Quote table after sorting
q)tab {i idesc x i:idesc y} . tab`bid`bsize
sym    src   bid    bsize
-------------------------
EURUSD FeedB 1.2345 5000
EURUSD FeedD 1.2344 12000
EURUSD FeedA 1.2344 10000
EURUSD FeedC 1.2343 3000

It is worth noting that the u attribute is applied to the majority of data structures. This is used to ensure quick lookup times on the structure keys as they grow.

Q for Mortals: §8.8 Attributes
Set Attribute

Author¶

Stephen Dempsey is senior kdb+ developer on the KX R&D team. His core responsibility has been developing the KX Platform and supporting the wide range of applications built upon it. Earlier he implemented various kdb+ applications including large-volume exchange backtesting, and eFX trading.
Play Klondike¶ Problem Play Klondike in the q session. The last thing the world needs is another program to implement Solitaire, a.k.a. Klondike. But here it is, as a case study for writing q programs. A well-understood problem domain, small but non-trivial, is a good subject for close code reading. Techniques¶ - Working with indexes - Working with nested lists - Boolean operator arguments as forms of conditional - Projection - Scattered indexing - Apply and Apply At - Map iterators: Each, Each Left and each Solution¶ Code and instructions at StephenTaylor-KX/klondike. Cards¶ Represent cards as indexes into a canonical 52-card deck. SUITS:"SHCD" NUMBERS:"A23456789TJQK" SYM:`$NUMBERS cross SUITS / card symbols SYM,:`$("[]";"__") / hidden card; empty stack HC:52 / hidden card ES:53 / empty stack SP:54 / blank space NUMBER:1+til[13]where 13#4 / card numbers SUIT:52#SUITS / card suits COLOR:"RB" SUIT in "SC" / card colors We add display symbols for hidden cards (face down) and an empty pile. Also indexes for them and for a blank space in the display. Utilities¶ Certain expressions recur often enough to be abbreviated as utilities. ce:count each le:last each tc:('[til;count]) Syntactically, ce and le are projections of the each keyword. (Compare double:2* .) tc is a composition, equivalent to the lambda {til count x} . Layout and game¶ Represent the layout as thirteen lists within a dictionary representing the game state. TURN:3 / # cards to turn STOCK:0 WASTE:1 FOUNDATION:2+til 4 TABLEAU:6+til 7 deal:{[] g:()!(); deck:-52?52; / columns: stock, waste, 4 foundations, 7 piles g[`c]:13#enlist 0#0; g[`c;TABLEAU]:(sums til 7)_ 28#deck; / tableau g[`x]:le g[`c;TABLEAU]; / exposed g[`c;STOCK]:28_ deck; g[`s]:0; / score g[`p]:0; / # passes turn g } The upper-case constants substitute for what would otherwise appear as numeric constants in the code. ‘The Cannon Test’ in “Three Principles of Coding Clarity”, Vector 26:4 q)g:deal[] q)g c | (6 8 42 21 27 13 11 22 48 26 2 10 15 16 25 45 28 23 1 24 20;44 31 35;`lon.. x | 12 0 14 46 38 33 29 s | 0 p | 0 pm| (1 7 2;1 1 10;1 11 10) q)g`c 6 8 42 21 27 13 11 22 48 26 2 10 15 16 25 45 28 23 1 24 20 44 31 35 `long$() `long$() `long$() `long$() ,12 41 0 4 30 14 32 50 34 46 3 40 7 9 38 19 18 17 36 43 33 51 39 49 47 37 5 29 The turn function is defined below. From the above, we surmise it turns TURN cards from the stock pile onto the waste pile, and returns a game dictionary. It also writes possible moves as property pm , of which more below. Entries in the game dictionary: c layout columns representing stock, waste, foundation and tableau p number of passes through the stock pm possible moves, a list of triples: # cards, from pile, to pile s score x cards exposed on the tableau Display¶ We need to interpret the game dictionary visually, showing - cards as symbols - face-down cards masked - possible moves as cards to be moved q)see g 21 [] 9D __ __ __ __ 0 4S [] [] [] [] [] [] AS [] [] [] [] [] 4C [] [] [] [] QC [] [] [] TC [] [] 9H [] 8H "_____________________" "score: 0" AS 9D TC 9H TC Stock, waste, foundation¶ The first row of the display shows - the number of cards in the stock (21) - the stock cards face down [] - the top card exposed on the waste (9D) - the four empty piles of the foundation The second row shows the number of passes through the stock (0). There follows the tableau and a line. Below the line, the score and the two possible moves: 9D or 9H to TC. 
see:{[g] / display game / stock, waste, foundations top:@[;0;HC|]ES^le g[`c]STOCK,WASTE,FOUNDATION; show (`$string count[g[`c;STOCK]],g`p),'SYM 2 7#(2#top),SP,(2_ top),7#SP; / columns show SYM {flip x[;til max ce x]} {@[x; where 0=ce x; ES,]} {[g;c] g[`c;c]|HC*not g[`c;c] in g[`x]}[g] TABLEAU; show 21#"_"; show "score: ",string g`s; show $[0=count g`pm; "No moves possible"; {[g;n;f;t] SYM first each neg[n,1]#'g[`c;f,t]}[g;].'g`pm ]; } To compose the first line, le g[`c] STOCK,WASTE,FOUNDATION finds the top card from six piles, returning nulls from empty piles. ES^ replace the nulls with the index for an empty stack. Apply At substitutes in the stock pile for anything but an empty stack. Note the three uses of projection. g[`c] STOCK,WASTE,FOUNDATION is syntactically equivalent tog[`c;STOCK,WASTE,FOUNDATION] , arguably a trivial and distracting use of projection. But we findg[`c] often indexed. The projection helps focus on what varies: the second index.- The ternary (three-argument) form of Apply At takes as third argument a unary; in this case the projection HC| . - Apply At is also projected, on its second and third arguments, again solely to separate them from the much longer expression which calculates the first argument. The projection @[;0;HC|] is a unary, and we can read it as “applyHC| to the first item of the argument”. The following line applies the card symbols SYM , composes a 2×7 table, and prefixes it with the number of cards in the stock pile, and the number of passes made. This requires little code and no comment. Tableau¶ The next line composes the display of the tableau. It is a long line and it looks intimidating. Q: Why is a line of my Java code so much easier to read than a line of q? A: Because the Java line isn’t doing very much. show SYM {flip x[;til max ce x]} {x,'(0=ce x)#'ES} {[g;c]g[`c;c]|HC*not g[`c;c]in g[`x]}[g] TABLEAU; Its work is done by three lambdas, and we quickly see that it could have been written as three lines: a:{[g;c] g[`c;c]|HC*not g[`c;c] in g[`x]}[g] TABLEAU; b:{x,'(0=ce x)#'ES} a; show SYM {flip x[;til max ce x]} b; Composing it as a single line eliminates a and b . It is a trade-off. The longer line is harder to read. On the other hand, a variable definition is a request to the the reader: Remember this for later. Eliminating the variables is a clear sign to the reader that the intermediate results are not used elsewhere. The first lambda evaluated flags the exposed cards in the tableau columns and replaces the others with the hidden-card index. Note how the implicit iteration of q’s atomic primitives do this without loops or control structures. q)g[`c;TABLEAU] in g`x ,1b 01b 001b 0001b 00001b 000001b 0000001b q)g[`c;TABLEAU]|HC*not g[`c;TABLEAU] in g`x ,12 52 0 52 52 14 52 52 52 46 52 52 52 52 38 52 52 52 52 52 33 52 52 52 52 52 52 29 Exploiting implicit iteration is a core skill in writing q. See how close the primitives get you to what you want before introducing iterators – let alone loops! The second lambda, {x,'(0=ce x)#'ES} , replaces empty piles with the empty-stack symbol. Using a boolean as the left argument of Take is a form of conditional. For a long list with few 1s in the boolean the efficiency of Apply At would be better, because it would act only where 0=ce x . {@[x; where 0=ce x; ES,]} Again, we see the ternary form of Apply At, with a projection (here ES, ) as its unary third argument. Indexing is also atomic. Simply applying SYM to this nested list of indexes produces a legible display. 
q)SYM g[`c;TABLEAU]|HC*not g[`c;TABLEAU] in g`x ,`4S `[]`AS `[]`[]`4C `[]`[]`[]`QC `[]`[]`[]`[]`TC `[]`[]`[]`[]`[]`9H `[]`[]`[]`[]`[]`[]`8H But we want better. {flip x[;til max ce x]} produces it. The flip is the transformation we want, but we cannot flip a general list, only a matrix. q)SYM {flip x[;til max ce x]} g[`c;TABLEAU]|HC*not g[`c;TABLEAU] in g`x 4S [] [] [] [] [] [] AS [] [] [] [] [] 4C [] [] [] [] QC [] [] [] TC [] [] 9H [] 8H How did we lose the backtick symbols? x[;til max ce x] changed the general list into a symbol matrix, which is displayed without backticks. Score, possible moves¶ The last part of the display is the possible moves. If there are any, they are visualized with {[g;n;f;t] SYM first each neg[n,1]#'g[`c;f,t]}[g;;;].'g`pm We have here a quaternary lambda, projected on one argument; the Apply operator; and the Each iterator. The possible moves are a list of triples. Each triple a number of cards, a from-column, and a to-column. q)g`pm 1 7 2 1 1 10 1 11 10 We see above a quaternary lambda {[g;n;f;t] ... } applied to this list of triples. But the lambda is first projected {[g;n;f;t] ... }[g;;;] locking in g as its constant first argument; so the projection is a ternary function. The ternary is then applied with the Apply and Each operators: .' . The Apply operator makes the ternary function a unary of a list of its arguments. That is, {[g;n;f;t] ... }[g;;;]. applied to the triple g[`pm;0] gets 1, 7, and 2 as arguments n , f , and t . The Each iterator applies this unary to each item (triple) in the list g`pm . That is how the lambda is applied to g and the values in g`pm . What does the lambda do? Say it is applied to 3 7 11 , i.e. move the last three cards from column 7 to column 11. neg[n,1] gives us -3 -1 and we take the last three and the last one cards respectively from g[`c;7] and g[`c;11] . The first of each of these are, respectively the card to be moved and the card to which it is to be moved. It remains only to apply SYM to see the corresponding card symbols. Possible moves¶ Two kinds of moves are possible: - to the foundation, from the waste or the tableau, a card of the same suit and next higher value to the target – and any ace may move to an empty column - to the tableau, from the waste, tableau, or foundation, a card of different color and one lower value to the target card – and any king may move to an empty column Only exposed cards may be moved. On the waste and foundation only the last card in a column is exposed. On the tableau, g`x lists exposed cards. Moving a card on the tableau moves with it all the cards below it. With the globals defined at the beginning we have all we need to establish a card’s number, color and suit. The rules above will suggest to many readers control structures and loops. They are not for us. Instead, in rpm , we list all possible moves, and select any that conform. The position of a card in the layout is given by a pair: its column index, and its position within that column. The unary top finds the positions of the top (last) cards in its argument columns, with no result items from empty columns. 
top:{(y,'i-1)where 0<i:ce x y}[g`c];  / positions of top cards

To the foundation¶

fm:{[c;m]
  cards:c ./:m[;0 1];                               / cards to move
  nof:SYM?`${(NUMBERS NUMBER x),'SUIT x}le c m[;2]; / next cards on foundation
  m where(cards=nof)or(NUMBER[cards]=1)and SUIT[cards]=SUITS FOUNDATION?m[;2]
  }[g`c] top[WASTE,TABLEAU] cross FOUNDATION;

The candidate moves to the foundation are:

q)top[WASTE,TABLEAU] cross FOUNDATION
1 2 2
1 2 3
1 2 4
1 2 5
6 0 2
6 0 3
..

That gives us a list of triples: from-column, from-index, and to-column. We pass this list to a lambda as m; and the game columns g`c as c. For the moves, the cards to be moved are thus

cards:c ./:m[;0 1];

A list is a function of its indexes.

The use here of Apply with an iterator again highlights a founding insight of q: a list is a function of its indexes. That has deep implications. Here we note only that Apply applies a function to its arguments or a list to its indexes – the syntax is the same.

m[;0 1] is a list of pairs. The first pair here is 1 2. The first result item from ./:m[;0 1] is thus c . 1 2, equivalent to c[1;2], which is to say the third card from the second column. c ./:m[;0 1] is a form of scattered indexing.

The third column of m holds the indexes of the target (foundation) columns for the candidate moves. The next code line defines nof (next on foundation), the corresponding card that could be placed on that foundation pile.

nof:SYM?`${(NUMBERS NUMBER x),'SUIT x}le c m[;2];

The first index in m[;2] is 2, the first foundation index, i.e. Spades. If the top card on column 2 were 3S, the corresponding nof would be 4S.

The definition of nof contains no Add. How is the next value obtained? The NUMBER list returns origin-1 numbers, e.g. a 1 for an ace indexes a "2" from NUMBERS.

The last line of the lambda

m where(cards=nof)or(NUMBER[cards]=1)and SUIT[cards]=SUITS FOUNDATION?m[;2]

returns items (triples) from m where either the card-to-be-moved is the next wanted on the foundation, or it is the ace of the suit for that foundation pile. (There is no need to see if the pile is empty. If the ace is available for moving, its foundation pile is empty.)

Aces next?

The last line of the lambda tests to see if a card either matches its nof or is an ace of the target column's suit. That test (ace and suit) could be omitted if the nof list included aces. How could that be arranged? (Clue: le c m[;2] returns nulls from empty piles.) Does the result read better?

To the tableau¶

The above gave us a list of possible moves to the foundation. The next section of rpm produces a list of moves to the tableau.

xit:raze TABLEAU cross'where each g[`c;TABLEAU]in g`x;  / positions exposed in tableau
tm:{[c;m]
  cards:c ./:m[;0 1];
  tgts:le c m[;2];
  m where (.[<>;COLOR(cards;tgts)]and 1=.[-]NUMBER(tgts;cards))
    or (tgts=0N)and NUMBER[cards]=13
  }[g`c] (top[WASTE,FOUNDATION],xit) cross TABLEAU;

It follows a similar method. The two main differences are

- the list of cards is the top cards from the waste and foundation piles, and also xit, the cards exposed in the tableau
- target cards must be a different color and the next-higher value, or 0N (from an empty pile) if the move card is a king

Note how Apply is used to apply operators between pairs of lists.

.[-]NUMBER(tgts;cards)  / NUMBER[tgts]-NUMBER cards

The code in rpm does a fair bit of looking up, and mapping from card IDs to suits, numbers and colors.
For example, column numbers in m[;2] are found in the list of foundation columns and mapped to suits: SUITS FOUNDATION?m[;2] The look-up is done by Find and the mapping by indexing, in this case specified simply by juxtaposition. Because indexing is atomic it maps implicitly across multiple lists, e.g. NUMBER(tgts;cards) . We finish rpm by converting the from-column, from-index, to-index triples of fm and tm to the number-of-cards, from-column, to-column triples of g`pm g[`pm]:{(ce[x y[;0]]-y[;1]),'y[;0 2]}[g`c] fm,tm; Move and turn¶ Two functions change the game state: move and turn . (turn is also called by the game constructor deal .) Both call move_ to move cards between columns. Its job is to - move one or more cards - possibly expose a card on the tableau - adjust the game score move_:{[g;n;f;t] / move n cards in g from g[`c;f] to g[`c;t] g[`c;t],:neg[n]#g[`c;f]; g[`c;f]:neg[n]_ g[`c;f]; let:le g[`c;TABLEAU]; g[`s]+:5 0@all let in g`x; / turned over tableau card? g[`x]:distinct g[`x],let; g[`s]+:$[f=WASTE; 5 10@t in FOUNDATION; f in TABLEAU; 0 10@t in FOUNDATION; f in FOUNDATION; -15; 0 ]; / score rpm g } Moving the cards is light work: g[`c;t],:neg[n]#g[`c;f]; g[`c;f]:neg[n]_ g[`c;f]; The scoring follows the original Windows implementation, as described in Wikipedia. Once the game state has changed, rpm records the new possible moves. turn ¶ The turn function has a simple move to make: three cards from the stock to the waste; fewer if there are fewer than three cards in the stock. And if there are none, to switch the stock and waste piles. turn:{[g;n] trn:0=count g[`c;STOCK]; g[`c;STOCK,WASTE]:g[`c;trn rotate STOCK,WASTE]; g[`p]+:trn; / # passes move_[g; n&count g[`c;STOCK]; STOCK; WASTE] }[;TURN] The script sets TURN as 3; other versions of Klondike use different values. As in the tableau display we used a boolean as the left argument of Take, so here we use it as the left argument of rotate , an effective conditional. move ¶ The move_ function does nothing to validate the specified move; it assumes it is valid. Validation is the job of move , which is part of the tiny user interface. It takes as arguments the game state and one or two card symbols, e.g. g:move[g] `AS / move AS to foundation g:move[g] `KH / move KH to an empty tableau pile g:move[g] `9C`TH / move 9C to TH on the tableau g:move[g] `9C`TC / move 9C to TC on the foundation move has to - validate its arguments - validate the move - call move_ To validate its arguments, it confirms the game is a dictionary with the keys expected, and that the move is one or two card symbols. if[not 99h~type g; '"not a game"]; if[not all `c`p`x`pm in key g; '"not a game"]; if[abs[type y]<>11; '"type"]; if[(type[y]>0)and 2<>count y; '"length"]; if[not all b:y in SYM; '"invalid card: "," "sv string y where not b]; To call move_ it must derive from the card symbols the number of cards to be moved, and the from- and to-columns. cards:SYM?y; / map cards to n,f,t cl:ce g`c; / column lengths f:first where cl>i:g[`c]?'first cards; / from column n:cl[f]-i[f]; / # cards to move t:$[2=count cards; first where cl>g[`c]?'cards 1; $[1=NUMBER first cards; first[FOUNDATION]+SUITS?SUIT first cards; first[TABLEAU]+first where 0=ce g[`c;TABLEAU] ] ]; But first it must validate the proposed move. That seems to call for logic expressing the rules on what can be moved where. But no. Those rules have already been applied by rpm to record the possible moves for this game state in g`pm . All move needs to do is confirm the proposed move is listed there. 
if[not(n,f,t)in g`pm; '"invalid move"]; And we are done. Example usage¶ q)see g:deal[] 21 [] 2S __ __ __ __ 0 4C [] [] [] [] [] [] 5D [] [] [] [] [] 6H [] [] [] [] KD [] [] [] 2D [] [] 3D [] JH "_____________________" "score: 0" 2S 3D 4C 5D 3D 4C q)see g:g move/(`2S`3D;`4C`5D;`3D`4C) 21 [] 9D __ __ __ __ 0 __ [] [] [] [] [] [] 5D [] [] [] [] [] 4C 6H [] [] [] [] 3D KD [] [] [] 2S 2D 3H [] [] JH "_____________________" "score: 20" 2S 3H KD q)see g:move[g] `KD 21 [] 9D __ __ __ __ 0 KD [] [] [] [] [] [] 5D [] [] [] [] [] 4C 6H 5S [] [] [] 3D [] [] [] 2S 2D 3H [] [] JH "_____________________" "score: 25" 2S 3H 5S 6H q)see g:move[g] `5S`6H 21 [] 9D __ __ __ __ 0 KD [] [] [] [] [] [] 5D [] 4D [] [] [] 4C 6H [] [] [] 3D 5S [] [] [] 2S 2D 3H [] [] JH "_____________________" "score: 30" 2S 3H 4D 5S q)see g:move[g] `4D`5S 21 [] 9D __ __ __ __ 0 KD [] [] 5H [] [] [] 5D [] [] [] [] 4C 6H [] [] [] 3D 5S [] [] [] 2S 4D 2D 3H [] [] JH "_____________________" "score: 35" 4C 5H 2S 3H q)see g:turn g 18 [] 3C __ __ __ __ 0 KD [] [] 5H [] [] [] 5D [] [] [] [] 4C 6H [] [] [] 3D 5S [] [] [] 2S 4D 2D 3H [] [] JH "_____________________" "score: 35" 3C 4D 4C 5H 2S 3H Conclusion¶ That is all it takes to implement Klondike in the q session. What is there to notice about the code? Plenty of iteration is involved, but very little is described in the code. Almost all of it is implicit in the q primitives, and the rest is specified with a handful of iterators: Each, Each Right and each . There are no do or while constructs whatsoever. There is a good deal of mapping between lists, done very readably with indexing, such as (NUMBER[cards]=1) and SUIT[cards]=SUITS FOUNDATION?m[;2] Many choices are made, but if is used only to validate arguments and signal errors. Cond appears a few times; many more choices are represented with boolean indexes or arguments to non-logical primitives, such as "RB" SUIT in "SC" 5 10@t in FOUNDATION (0=ce x)#'ES Further study¶ - Write an autoplay function that stops when the game is won or no more useful moves are possible. - Write an HTML5 interface for the game engine. - Use the Machine Learning Toolkit to train a champion Klondike player. - “Three Principles of Coding Clarity”, Vector 26:4 - Remarks on Style
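A final stand-alone illustration of the boolean-index idiom from the Conclusion, using invented values rather than a real game state:
q)t:3; FOUNDATION:2 3 4 5     / hypothetical target column and foundation columns
q)5 10@t in FOUNDATION        / index a pair with a boolean: 5 if 0b, 10 if 1b
10
q)$[t in FOUNDATION;10;5]     / the equivalent Cond, for comparison
10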
/- compares the attribute of given table to expectation given in csv attrcheck:{[tab;attribute;col] .lg.o[`dqe;"checking attributes on table ",string tab]; dictmeta:exec c!a from meta tab where c in col; dictcheck:(f col)!(f:{$[0>type x;enlist x;x]})attribute; $[dictmeta~dictcheck; (1b;"attribute of ",(","sv string(),col)," matched expectation"); (0b;"Expected attribute of column ",(","sv string(),col)," was ",(","sv string(),attribute),". Attribute of column ",(","sv string(),col)," is ",(","sv string(),value dictmeta))] } ================================================================================ FILE: TorQ_code_dqc_chkslowsub.q SIZE: 342 characters ================================================================================ \d .dqc /- Function to check for slow subscribers chkslowsub:{[threshold] .lg.o[`dqe;"Checking for slow subscribers"]; overlimit:(key .z.W) where threshold<sum each value .z.W; $[0=count overlimit; (1b;"no data queues over the limit, in ",string .proc.procname); (0b;"handle(s) ",("," sv string overlimit)," have queues")] } ================================================================================ FILE: TorQ_code_dqc_constructcheck.q SIZE: 446 characters ================================================================================ \d .dqc /- function to check for table,variable,function or view constructcheck:{[construct;chktype] chkfunct:{system x," ",string $[null y;`;y]}; dict:`table`variable`view`function!chkfunct@/:"avbf"; .lg.o[`dqe;"checking if ", (s:string construct)," ",(s2:string chktype), " exists"]; c:construct in dict[chktype][]; (c;s," ",s2," ",$[c;"exists";"missing from process"];$[chktype=`table;construct in tables[];{x~key x}construct]) } ================================================================================ FILE: TorQ_code_dqc_datechk.q SIZE: 527 characters ================================================================================ \d .dqc /- function to check that date vector contains latest date in an hdb datechk:{[] .lg.o[`datechk;"Checking if latest date in hdb is corect"]; if[not `PV in key`.Q; .lg.o[`datechk;"The directory is not partitioned"]; :(0b;"The directory is not partitioned")]; if[not `date in .Q.pf; .lg.o[`datechk;"date is not a partition field value"]; :(0b;"date is not a partition field value")]; c:(last .Q.pv)=.z.d-1+k*(k:.z.d mod 7)in 0 1; (c;"Latest date in hdb is ", $[c;"correct";"not correct"]) } ================================================================================ FILE: TorQ_code_dqc_daytoday.q SIZE: 464 characters ================================================================================ \d .dqc /- compares the value of a column in DQEDB from previous T+1 to T+2 (assuming the column has one value per day) daytoday:{[tab;cola;colb;vara;varb] listt:{?[tab;((=;cola;enlist vara);(=;colb;enlist varb);(=;.Q.pf;x));1b;()]}each -2#.Q.PV; (c;"The value of ",(string vara)," and ",(string varb),$[c:(first listt[0]`resvalue)=first listt[1]`resvalue;" matched ";" did not match "]," in the days: ",(string last .Q.PV)," and ",string first -2#.Q.PV) } ================================================================================ FILE: TorQ_code_dqc_dfilechk.q SIZE: 718 characters ================================================================================ \d .dqc /- function to check .d file. 
Sample use: .dqc.dfilechk[`trade] dfilechk:{[tname] .lg.o[`dfilechk;"Checking if two latest .d files match"]; if[not `PV in key`.Q; .lg.o[`dfilechk;"The directory is not partitioned"]; :(0b;"The directory is not partitioned")]; if[2>count .Q.PV; .lg.o[`dfilechk;"There is only one partition"]; :(1b;"There is only one partition, therefore there are no two .d files to compare")]; u:` sv'.Q.par'[`:.;-2#.Q.PV;tname],'`.d; /- check all .d files exist $[all .os.Fex each u; (c;"Two latest .d files ",$[c:(~). get each u;"";"do not "],"match"); (0b;"Two partitions are available but there are no two .d files for the given table to compare")] } ================================================================================ FILE: TorQ_code_dqc_freeform.q SIZE: 343 characters ================================================================================ \d .dqc /- takes a query as a string, tries to evaluate it and adds to results table /- whether or not it passed freeform:{[query] if[not 10h=type query; :(0b;"error: query must be sent as type string")]; if[11h=type a:@[value;query;{`error}]; c:not `error=a; :(c;query,$[c;" passed";" failed"])]; (1b;query," passed") } ================================================================================ FILE: TorQ_code_dqc_infinitychk.q SIZE: 513 characters ================================================================================ \d .dqc /- Check percentage of infinities in each of the columns of t, where the columns /- to watch are specified in colslist, and a percentage threshold thres. infinitychk:{[t;colslist;thres] .lg.o[`dqe;"checking ",string[t]," for infinities in columns ",", "sv string(),colslist]; d:({sum x in (0w;-0w;0W;-0W)}each flip tt)*100%count tt:((),colslist)#get t; $[count b:where d>thres; (0b;"Following columns above threshold: ",(", " sv string b),"."); (1b;"No columns above threshold.") ] } ================================================================================ FILE: TorQ_code_dqc_memoryusage.q SIZE: 445 characters ================================================================================ \d .dqc / - Check percentage of memory usage compared to max memory memoryusage:{[perc] .lg.o[`dqc;"checking whether the percetnage of memory usage exceeds ",(string 100*perc),"%"]; used:.Q.w[]`used; maxm:.Q.w[]`mphy; if[perc>=1; :(0b;"error: percentage is greater than or equal to 1")]; (c;"memory usage of the process ",$[c:used<perc*maxm;"does not";"does"]," exceed ",(string 100*perc),"% of maximum physical memory capacity") } ================================================================================ FILE: TorQ_code_dqc_nullchk.q SIZE: 529 characters ================================================================================ \d .dqc /- Function to check the percentage of nulls in each column from colslist of a /- table t against a threshold thres, a list of threshold percentages for each /- column. 
nullchk:{[t;colslist;thres] .lg.o[`dqc;"checking ",string[t]," for nulls in columns ",", "sv string(),colslist]; d:({sum$[0h=type x;0=count@'x;null x]}each flip tt)*100%count tt:((),colslist)#get t; $[count b:where d>thres; (0b;"Following columns above threshold: ",(", " sv string b),"."); (1b;"No columns above threshold.") ] } ================================================================================ FILE: TorQ_code_dqc_pctAvgDailyChange.q SIZE: 1,068 characters ================================================================================ \d .dqc /- Check that current result of a given function applied to a given table is /- within threshold limits of n days average taken from results table. /- Parameters: fname - name of function from dqe engine; tabname - name of /- table; rt - results table in dqedb; ndays - number of previous days to /- compute daily average; thres - threshold is a number from 0 to 1 that /- corresponds to a range from 0% to 100% pctAvgDailyChange:{[fname;tabname;rt;ndays;thres] .lg.o[`pctAvgDailyChange;"Checking daily average change"]; if[ndays>-1+count .Q.pv; :(0b;"error: number of days exceeds number of available dates")]; previous:select avg resvalue from rt where date within(-1*ndays;-1)+last date,funct=fname,table=tabname; current:select avg resvalue from rt where date=last date,funct=fname,table=tabname; c:abs[current[0;`resvalue]- previous[0;`resvalue]]<=thres*previous[0;`resvalue]; (c;"count ",$[c;"doesn't differ";"differs"]," from ",(string ndays)," days average by more than ",(string thres),"%") } ================================================================================ FILE: TorQ_code_dqc_rangechk.q SIZE: 1,021 characters ================================================================================ \d .dqc /- Check that values of specified columns colslist in table (name) tn are within /- the range defined by the tables tlower and tupper. rangechk:{[tn;colslist;tlower;tupper;thres] .lg.o[`dqc;"checking columns ",(0N!", "sv string(),colslist)," of table ",string[tn]," are within specified range"]; if[0=count colslist; :(0b; "ERROR: No columns specified in colslist.")]; tab:get tn; if[1<>sum differ count each (tab;tupper;tlower); :(0b; "ERROR: Input tables are different lengths.") ]; if[any any tupper<tlower;:(0b;"ERROR: tlower and tupper wrong way round.")]; /- exclude columns that do not have pre-defined limits colslist:((),colslist) except exec c from meta tab where t in "csSC "; tupper:colslist#tupper; tlower:colslist#tlower; /- dictionary with results by columns d:sum[tt within (tlower;tupper)]*100%count tt:colslist#tab; $[count b:where d<thres; (0b;"Following columns below threshold: ",(", " sv string b),"."); (1b;"No columns below threshold.") ] } ================================================================================ FILE: TorQ_code_dqc_refdatacheck.q SIZE: 719 characters ================================================================================ \d .dqc /- Check whether the referenced column of a table is in another column of /- another table. Takes four symbols as input, the table names and the columns /- to check. 
refdatacheck:{[tablea;tableb;cola;colb] .lg.o[`refdatacheck;"checking whether reference data is covered in the other column"]; msg:$[c:all r:tablea[cola]in tableb colb; "All data from ",(string cola)," of ",(string tablea),"exists in ",(string colb)," of ",string tableb; "The following data did not exist in ",(string colb)," of ",(string tableb),": ","," sv string tablea[cola]where not r]; .lg.o[`refdatacheck;"refdatacheck completed; All data from ",(string cola),$[c;"did";"did not"]," exist in ",string colb]; (c;msg) } ================================================================================ FILE: TorQ_code_dqc_schemacheck.q SIZE: 679 characters ================================================================================ \d .dqc /- checks that the meta of a table matches expectation schemacheck:{[tab;colname;types;forkeys;attribute] .lg.o[`dqc;"checking schema of table mathces expectation"]; origschema:0!meta tab; checkschema:([]c:colname;t:types;f:forkeys;a:attribute); $[all c:checkschema~'origschema; (1b;"Schema of ",(string tab)," matched proposed schema"); (0b;"The following columns from the schema of table ",(string tab)," did not match expectation: ",(", "sv string origschema[`c][where not c]),". Expected schema from the columns: ",(.Q.s1`type`fkey`attr!checkschema[where not c][`t`f`a]),". Actual Schema: ",.Q.s1`type`fkey`attr!origschema[where not c][`t`f`a])] } ================================================================================ FILE: TorQ_code_dqc_symfilecheck.q SIZE: 304 characters ================================================================================ \d .dqc hdbdir:@[value;`hdbdir;`hdb] /- check that the sym file exists symfilecheck:{[directory;filename] .lg.o[`dqc;"checking ",(1_string[filename])," exists in ",(1_string[directory])]; (c;"sym file named ",(string filename)," ",$[c:.os.Fex .Q.dd[directory]filename;"exists";"doesn't exist"]) } ================================================================================ FILE: TorQ_code_dqc_symfilegrowth.q SIZE: 1,405 characters ================================================================================ \d .dqc
Loading from large files¶ The Load CSV form of the File Text operator loads a CSV file into a table in memory, from which it can be serialized in various ways. If the data in the CSV file is too large to fit into memory, we need to break the large CSV file into manageable chunks and process them in sequence. Function .Q.fs (file streaming) and its variants help automate this process. .Q.fs loops over a file in conveniently-sized chunks of complete records, and applies a function to each chunk. This lets you implement a streaming algorithm to convert a large CSV file into an on-disk database without holding all the data in memory at once. Using .Q.fs ¶ Suppose our CSV file contains the following: 2019-10-03, 24.5, 24.51, 23.79, 24.13, 19087300, AMD 2019-10-03, 27.37, 27.48, 27.21, 27.37, 39386200, MSFT 2019-10-04, 24.1, 25.1, 23.95, 25.03, 17869600, AMD 2019-10-04, 27.39, 27.96, 27.37, 27.94, 82191200, MSFT 2019-10-05, 24.8, 25.24, 24.6, 25.11, 17304500, AMD 2019-10-05, 27.92, 28.11, 27.78, 27.92, 81967200, MSFT 2019-10-06, 24.66, 24.8, 23.96, 24.01, 17299800, AMD 2019-10-06, 27.76, 28, 27.65, 27.87, 36452200, MSFT If you call .Q.fs with the function 0N! , you get a list with the rows as elements: q).Q.fs[0N!]`:file.csv ("2019-10-03,24.5,24.51,23.79,24.13,19087300,AMD";"2019-10-03,27.37,27.48,27... 387 You can get a list with the columns as elements like this: q).Q.fs[{0N!("DFFFFIS";",")0:x}]`:file.csv (2019.10.03 2019.10.03 2019.10.04 2019.10.04 2019.10.05 2019.10.05 2019.10.06.. 387 Having that, the next step is to table it: q)colnames:`date`open`high`low`close`volume`sym q).Q.fs[{0N! flip colnames!("DFFFFIS";",")0:x}]`:file.csv +`date`open`high`low`close`volume`sym!(2019.10.03 2019.10.03 2019.10.04 2019... 387 And finally we can insert each row into a table q).Q.fs[{`trade insert flip colnames!("DFFFFIS";",")0:x}]`:file.csv 387 q)trade date open high low close volume sym ------------------------------------------------ 2019.10.03 24.5 24.51 23.79 24.13 19087300 AMD 2019.10.03 27.37 27.48 27.21 27.37 39386200 MSFT 2019.10.04 24.1 25.1 23.95 25.03 17869600 AMD 2019.10.04 27.39 27.96 27.37 27.94 82191200 MSFT 2019.10.05 24.8 25.24 24.6 25.11 17304500 AMD 2019.10.05 27.92 28.11 27.78 27.92 81967200 MSFT 2019.10.06 24.66 24.8 23.96 24.01 17299800 AMD 2019.10.06 27.76 28 27.65 27.87 36452200 MSFT The above sequence created the table in memory, but if it is too large to fit, we can insert the rows directly into a table on disk: q).Q.fs[{`:newfile upsert flip colnames!("DFFFFIS";",")0:x}]`:file.csv 387 q)value `:newfile date open high low close volume sym ------------------------------------------------ 2019.10.03 24.5 24.51 23.79 24.13 19087300 AMD 2019.10.03 27.37 27.48 27.21 27.37 39386200 MSFT 2019.10.04 24.1 25.1 23.95 25.03 17869600 AMD 2019.10.04 27.39 27.96 27.37 27.94 82191200 MSFT 2019.10.05 24.8 25.24 24.6 25.11 17304500 AMD 2019.10.05 27.92 28.11 27.78 27.92 81967200 MSFT 2019.10.06 24.66 24.8 23.96 24.01 17299800 AMD 2019.10.06 27.76 28 27.65 27.87 36452200 MSFT Variants of .Q.fs extend it to named pipes and control chunk size. .Q.fsn for chunk size .Q.fps , .Q.fpn for named pipes Data-loading example¶ Q makes it easy to load data from files (CSV, TXT, binary etc.) into a database. The simplest case is to read a file completely into memory and save it to a table on disk using .Q.dpft or set . However, this is not always possible and different techniques may be required, depending on how the data is presented. 
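One such technique is simply the chunked load shown above with an explicit chunk size: .Q.fsn behaves like .Q.fs but takes the chunk size in bytes as a third argument. A minimal sketch, reusing file.csv and colnames from the examples above (the 1 MB chunk size is an arbitrary choice):
q)colnames:`date`open`high`low`close`volume`sym
q).Q.fsn[{`trade insert flip colnames!("DFFFFIS";",")0:x};`:file.csv;1000000]
387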
Ideally data is presented in a form consistent with how it is stored in the database and in file sizes which can be easily read into memory all at once. Loading is fastest when the number of different writes to different database partitions is minimized. An example of this in a date-partitioned database with financial data would be a single file per date and per instrument, or a single file per date. A slightly different example might have many small files to be loaded (e.g. minutely bucketed data per date and per instrument), in which case the performance would be maximized by reading many files for the same date at once, and writing in one block to a single date partition. Unfortunately it is not always possible or is too expensive to structure the input data in a convenient way. The example below considers the techniques required to load data from multiple large CSV files. Each CSV file contains one month of trade data for all instruments, sorted by time. We want to load it into a date-partitioned database with the data parted by instrument. Assume we cannot read the full file into memory. We must - read data in chunks using .Q.fsn - append data to splayed tables using manual enumerations and upsert - re-sort and set attributes on disk when all the data is loaded - write a daily statistics table as a splayed table at the top level of the database KxSystems/cookbook/dataloader/gencsv.q Test CSV generator KxSystems/cookbook/dataloader/loader.q Full loader The loader could be made more generic, but has been kept simple to preserve clarity. Unlike other database technologies, you do not need to define the table schema before you load the data, i.e. there is no separate “create” step. The schema is defined by the format of the written data, so the schema is often defined by the data loaders. Data loader¶ A data loader should always produce ample debug information. Each step may take considerable time reading from or writing to disk; best to see what the loader is doing rather than a blank console. The following structure is fairly common for loaders. loaddata - A function to load in a chunk of data and write it out to the correct table structures - Loads data into the table partitions. The main load is done using 0: , which can take either data or the name of a file as its right argument.loaddata builds a list of partitions that it has modified during the load. final - A function to do the final tasks once the load is complete. - Used to re-sort and re-apply attributes after the main load is done. It re-sorts each partitioned table only if necessary. It uses the list of partitions built by loaddata to know which tables to modify. It creates a top-level view table (daily) from each partition it has modified. loadallfiles - The wrapper function which generates the list of files to load, loads them, then invokes final . It takes a directory as its argument, to find the files to load. Example¶ Run gencsv.q to build the raw data files. You can modify the config to change the size, location or number of files generated. 
> q gencsv.q KDB+ 4.0 2020.10.02 Copyright (C) 1993-2020 Kx Systems 2019.02.25T14:21:00.477 writing 1000000 rows to :examplecsv/trades2019_01.csv for date 2019.01.01 2019.02.25T14:21:02.392 writing 1000000 rows to :examplecsv/trades2019_01.csv for date 2019.01.02 2019.02.25T14:21:04.049 writing 1000000 rows to :examplecsv/trades2019_01.csv for date 2019.01.03 2019.02.25T14:21:05.788 writing 1000000 rows to :examplecsv/trades2019_01.csv for date 2019.01.04 2019.02.25T14:21:07.593 writing 1000000 rows to :examplecsv/trades2019_01.csv for date 2019.01.05 2019.02.25T14:21:09.295 writing 1000000 rows to :examplecsv/trades2019_01.csv for date 2019.01.06 ... 2019.02.25T14:23:30.795 writing 1000000 rows to :examplecsv/trades2019_03.csv for date 2019.03.28 2019.02.25T14:23:32.611 writing 1000000 rows to :examplecsv/trades2019_03.csv for date 2019.03.29 2019.02.25T14:23:34.404 writing 1000000 rows to :examplecsv/trades2019_03.csv for date 2019.03.30 2019.02.25T14:23:36.113 writing 1000000 rows to :examplecsv/trades2019_03.csv for date 2019.03.31 Run loader.q to load the data. You might want to modify the config at the top of the loader to change the HDB destination, compression options, and the size of the data chunks read at once. > q loader.q KDB+ 4.0 2020.10.02 Copyright (C) 1993-2020 Kx Systems 2019.02.25T14:24:54.201 **** LOADING :examplecsv/trades2019_01.csv **** 2019.02.25T14:24:55.116 Reading in data chunk 2019.02.25T14:24:55.899 Read 1896517 rows 2019.02.25T14:24:55.899 Enumerating 2019.02.25T14:24:56.011 Writing 1000000 rows to :hdb/2019.01.01/trade/ 2019.02.25T14:24:56.109 Writing 896517 rows to :hdb/2019.01.02/trade/ 2019.02.25T14:24:56.924 Reading in data chunk 2019.02.25T14:24:57.671 Read 1896523 rows 2019.02.25T14:24:57.671 Enumerating 2019.02.25T14:24:57.759 Writing 103482 rows to :hdb/2019.01.02/trade/ 2019.02.25T14:24:57.855 Writing 1000000 rows to :hdb/2019.01.03/trade/ 2019.02.25T14:24:57.953 Writing 793041 rows to :hdb/2019.01.04/trade/ 2019.02.25T14:24:58.741 Reading in data chunk 2019.02.25T14:24:59.495 Read 1896543 rows 2019.02.25T14:24:59.495 Enumerating 2019.02.25T14:24:59.581 Writing 206958 rows to :hdb/2019.01.04/trade/ 2019.02.25T14:24:59.679 Writing 1000000 rows to :hdb/2019.01.05/trade/ 2019.02.25T14:24:59.770 Writing 689585 rows to :hdb/2019.01.06/trade/ ... 2019.02.25T14:27:50.205 Sorting and setting `p# attribute in partition :hdb/2019.01.01/trade/ 2019.02.25T14:27:50.328 Sorting table 2019.02.25T14:27:52.067 `p# attribute set successfully 2019.02.25T14:27:52.067 Sorting and setting `p# attribute in partition :hdb/2019.01.02/trade/ 2019.02.25T14:27:52.322 Sorting table 2019.02.25T14:27:55.787 `p# attribute set successfully 2019.02.25T14:27:55.787 Sorting and setting `p# attribute in partition :hdb/2019.01.03/trade/ ... 2019.02.25T16:10:26.912 **** Building daily stats table **** 2019.02.25T16:10:26.913 Building dailystats for date 2019.01.01 and path :hdb/2019.01.01/trade/ 2019.02.25T16:10:27.141 Building dailystats for date 2019.01.02 and path :hdb/2019.01.02/trade/ 2019.02.25T16:10:27.553 Building dailystats for date 2019.01.03 and path :hdb/2019.01.03/trade/ 2019.02.25T16:10:27.790 Building dailystats for date 2019.01.04 and path :hdb/2019.01.04/trade/ ... Handling duplicates¶ The example data loader appends data to existing tables. This may cause potential issues with duplicates – partitioned/splayed tables cannot have keys, and any file loaded more than once will cause the data to be inserted multiple times. 
There are a few approaches to preventing duplicates: - Maintain a table of files which have already been loaded, and do a pre-load check to see if the file has already been loaded. If not already loaded, load it and update the table. The duplicate detection can be done on the file name and/or by generating an MD5 hash for the supplied file. This gives a basic level of protection. - For each table, define a key and check for duplicates based on that key. This will probably greatly increase the loading time, and may be prone to error. (It is perfectly valid for some datasets to have duplicate rows.) - Depending on how the data is presented, it may be possible to do basic duplicate detection by counting the rows already in the database based on certain key fields and comparing with those present in the file. An example approach to removing duplicates can be seen in the builddailystats function in loader.q .
Parallel loading¶ The key consideration when doing parallel loading is to ensure separate processes do not touch the same table structures at the same time. The enumeration operation .Q.en enforces a locking mechanism to ensure that two processes do not write to the sym file at the same time. Apart from that, it is up to the programmer to manage. In this example we can load different files in parallel as we know that the files do not overlap in terms of the partitioned tables that they will write to, provided that we set the builddaily flag in the loadallfiles function to false. This will ensure parallel loaders do not write to the daily table concurrently. (The daily table would then have to be built in a separate step.) Loaders which may write data to the same tables (in the same partitions) at the same time cannot be run safely in parallel.
Aborting the load¶ Aborting the load (by commands such as Ctrl-C or kill -9 , or through errors such as wsfull ) is not recommended. It can result in an incomplete write of data to the database. Usually the side effects can be corrected with some manual work, such as re-saving the table without the partially loaded data and running the loader again. However, if the data loader is aborted while it is writing to the database (as opposed to reading from the file) then the effects may be trickier to correct, as the affected table may have some columns written to and some not, leaving the table as an invalid structure. In this instance it may be possible to recover the data by manually truncating the column files individually.
In-memory enumeration¶ With some loader scripts the enumeration step can become a bottleneck. One solution is to enumerate in-memory only, write the data to disk, then update the sym file on disk when done. This function will enumerate in-memory rather than on-disk and can be used instead of .Q.en : enm:{@[x;f where 11h=type each x f:key flip 0!x;`sym?]} This may improve performance, but means loading is no longer parallelizable; and if the loader fails before it completes then all the newly loaded data must be deleted, as the enumerations will have been lost. (A usage sketch follows after the Utilities note below.)
Utilities¶ Utility script KxSystems/kdb/utils/csvguess.q generates CSV loader scripts automatically. This is especially useful for very wide or long CSV files where it is time-consuming to specify the correct types for each column. This also includes an optimized on-disk sorter, and the ability to create a loader to load and enumerate quickly all the symbol columns, requiring parallel loading processes only to read the sym file.
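A usage sketch of the in-memory enumeration approach described above, reusing file.csv and colnames from the earlier examples; the target directory dir is illustrative, and the sym file is written only once the whole load has succeeded:
q)enm:{@[x;f where 11h=type each x f:key flip 0!x;`sym?]}
q)sym:`symbol$()                  / in-memory enumeration domain
q)colnames:`date`open`high`low`close`volume`sym
q)fn:{.[`:dir/trade/;();,;enm flip colnames!("DFFFFIS";",")0:x]}
q).Q.fs[fn;]`:file.csv
387
q)`:dir/sym set sym               / persist the enumeration domain at the end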
Splaying large files¶ Enumerating by hand¶ Recall how to convert a large CSV file into an on-disk database without holding all the data in memory at once: q)colnames: `date`open`high`low`close`volume`sym q).Q.fs[{ .[`:newfile; (); ,; flip colnames!("DFFFFIS";",")0:x]}]`:file.csv 387 q)value `:newfile date open high low close volume sym ------------------------------------------------ 2019.10.03 24.5 24.51 23.79 24.13 19087300 AMD 2019.10.03 27.37 27.48 27.21 27.37 39386200 MSFT 2019.10.04 24.1 25.1 23.95 25.03 17869600 AMD 2019.10.04 27.39 27.96 27.37 27.94 82191200 MSFT 2019.10.05 24.8 25.24 24.6 25.11 17304500 AMD 2019.10.05 27.92 28.11 27.78 27.92 81967200 MSFT 2019.10.06 24.66 24.8 23.96 24.01 17299800 AMD 2019.10.06 27.76 28 27.65 27.87 36452200 MSFT ... To save splayed, we have to enumerate symbol columns; here, the sym column. q)sym: `symbol$() q)colnames: `date`open`high`low`close`volume`sym q)fn: {.[`:dir/trade/; (); ,; update sym:`sym?sym from flip colnames!("DFFFFIS";",")0:x]} q).Q.fs[fn;]`:file.csv 387 But we also have to save the sym list for when the splayed database is opened. q)`:dir/sym set sym `:dir/sym Check this works. > q dir KDB+ 4.0 2020.10.02 Copyright (C) 1993-2020 Kx Systems m64/ 12()core 65536MB sjt mackenzie.local 127.0.0.1 EXPIRE .. q)\v `s#`sym`trade q)sym `AMD`MSFT q)select distinct sym from trade sym ---- AMD MSFT
Enumerating using .Q.en ¶ Recall also how to save a table to disk splayed: q)`:dir/tr/ set .Q.en[`:dir] tr `:dir/tr/ Instead of doing the steps by hand, we can have .Q.en do them. q)colnames: `date`open`high`low`close`volume`sym q)fn: {.[`:dir/trade/;(); ,; .Q.en[`:dir]flip colnames!("DFFFFIS";",")0:x]} q).Q.fs[fn;]`:file.csv 387 And we can verify this works. > q dir KDB+ 4.0 2020.10.02 Copyright (C) 1993-2020 Kx Systems m64/ 12()core 65536MB sjt mackenzie.local 127.0.0.1 EXPIRE ... q)\v `s#`sym`trade q)sym `AMD`MSFT q)select distinct sym from trade sym ---- AMD MSFT
Encrypted data files¶ To load encrypted data files (which for security cannot be stored decrypted on disk) into kdb+ and save tables in encrypted format: - Extract encrypted CSV data to named pipe - Read named pipe into kdb+ - Save to disk encrypted # make pipe mkfifo named_pipe # decrypt to named pipe openssl enc -aes-256-cbc -d -k password -in trades.csv.dat > named_pipe & / read in the data to q .Q.fps[{`trade insert ("STCCFF";",") 0: x}]`:named_pipe / save to disk encrypted AES256CBC (`:2020.03.05/trade/;17;6;6) set .Q.en[`:.;trade] set , .Q.fps (pipe streaming) Named pipes
Bulk Copy Program¶ Microsoft’s Bulk Copy Program (bcp) is supported using the text format. Export: `t.bcp 0:"\t"0:value flip t / remove column headings Import: flip cols!("types";"\t")0:`t.bcp / add headings Inserting data into SQL Server¶ Create the table in SQL Server if it does not already exist. Once the table exists in SQL Server: `t.bcp 0:"\t"0:value flip t \bcp t in t.bcp -c -T
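A concrete sketch of the bcp round trip above, using the trade schema from the earlier examples (the column names and type string are carried over from those examples rather than anything bcp-specific):
q)colnames:`date`open`high`low`close`volume`sym
q)`:t.bcp 0:"\t"0:value flip trade                / export: tab-separated, no headings
q)trade2:flip colnames!("DFFFFIS";"\t")0:`:t.bcp  / import: re-apply column names and types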
Q client for ODBC¶ In Windows and Linux, you can use ODBC to connect to a non-kdb+ database from q. Installation¶ To install, download - KxSystems/kdb/c/odbc.k into the q directory - the appropriate odbc.so or odbc.dll : | q | q/l32 | q/l64 | q/w32 | q/w64 | |---|---|---|---|---| | ≥V3.0 | odbc.so | odbc.so | odbc.dll | odbc.dll | | ≤V2.8 | odbc.so | odbc.so | odbc.dll | odbc.dll | Mixed versions If you mix up the library versions, you’ll likely observe a type error when opening the connection. Start kdb+ and load odbc.k – this populates the .odbc context. Unix systems Ensure you have unixODBC installed, and that LD_LIBRARY_PATH includes the path to the odbc.so, e.g. for 64-bit Linux $ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$QHOME/l64 unixODBC configuration guide
Method¶ First open an ODBC connection to a database. To do so, define a DSN (database source name), and then connect to the DSN using .odbc.open . This returns a connection handle, which is used for subsequent ODBC calls: q)\l odbc.k q)h:.odbc.open "dsn=northwind" / use DSN to connect northwind database q).odbc.tables h / list tables `Categories`Customers`Employees`Order Details`Orders`Products.. q).odbc.eval[h;"select * from Orders"] / run a select statement OrderID CustomerID EmployeeID OrderDate RequiredDate.. -----------------------------------------------------.. 10248 WILMK 5 1996.07.04 1996.08.01 .. 10249 TRADH 6 1996.07.05 1996.08.16 .. 10250 HANAR 4 1996.07.08 1996.08.05 .. .. Alternatively, use .odbc.load to load the entire database into q: q)\l odbc.k q).odbc.load "dsn=northwind" / load northwind database q)Orders OrderID| CustomerID EmployeeID OrderDate RequiredDate .. -------| ----------------------------------------------.. 10248 | WILMK 5 1996.07.04 1996.08.01 .. 10249 | TRADH 6 1996.07.05 1996.08.16 .. 10250 | HANAR 4 1996.07.08 1996.08.05 .. ..
ODBC functions¶ Functions defined in the .odbc context: close ¶ Closes an ODBC connection handle: .odbc.close x Where x is the connection value returned from .odbc.open . eval ¶ Evaluate a SQL expression: .odbc.eval[x;y] Where - x is either - the connection value returned from .odbc.open . - a 2-item list containing the connection value returned from .odbc.open , and a timeout (long). - y is the statement to execute on the data source. q)sel:"select CompanyName,Phone from Customers where City='London'" q)b:.odbc.eval[h;sel] q)b CompanyName Phone ---------------------------------------- "Around the Horn" "(171) 555-7788" "B's Beverages" "(171) 555-1212" "Consolidated Holdings" "(171) 555-2282" "Eastern Connection" "(171) 555-0297" "North/South" "(171) 555-7733" "Seven Seas Imports" "(171) 555-1717" q)select from b where Phone like "*1?1?" CompanyName Phone ------------------------------------- "B's Beverages" "(171) 555-1212" "Seven Seas Imports" "(171) 555-1717" q)b:.odbc.eval[(h;5);sel] / same query with 5 second timeout
load ¶ Loads an entire database into the session: .odbc.load x Where x is the same parameter definition as that passed to .odbc.open . q).odbc.load "dsn=northwind" q)\a `Categories`Customers`Employees`OrderDetails`Orders`Products`Shippers`Supplie.. q)Shippers ShipperID| CompanyName Phone ---------| ----------------------------------- 1 | "Speedy Express" "(503) 555-9831" 2 | "United Package" "(503) 555-3199" 3 | "Federal Shipping" "(503) 555-9931" open ¶ Open a connection to a database. .odbc.open x Where x is a - string representing an ODBC connection string. Can include DSN and various driver/vendor defined values.
For example: q)h:.odbc.open "dsn=kdb" q)h:.odbc.open "driver=Microsoft Access Driver (*.mdb, *.accdb);dbq=C:\\CDCollection.mdb" q)h:.odbc.open "dsn=kdb;uid=my_username;pwd=my_password" - mixed list of connection string and timeout (long). For example: q)h:.odbc.open ("dsn=kdb;";60) - symbol representing a DSN. The symbol value may end with the following supported values for shortcut operations: .dsn is a shortcut for a file DSN. For example: h:.odbc.open `test.dsn / uses C:\Program Files\Common Files\odbc/data source\test.dsn on windows / and /etc/ODBCDataSources/test.dsn on linux .mdb is a shortcut for the Microsoft Access driver. For example: q)h:.odbc.open `$"C:\\CDCollection.mdb" / resolves to "driver=Microsoft Access Driver (*.mdb);dbq=C:\\CDCollection" Note that the driver name above must match the driver installed. If the driver name differs, an alternative is to use a string value rather than this shortcut. .mdf is a shortcut for the SQL Server driver. For example: q)h:.odbc.open `my_db.mdf / resolves to "driver=sql server;server=(local);trusted_connection=yes;database=my_db" Note that the driver name above must match the driver installed. If the driver name differs, an alternative is to use a string value rather than this shortcut. - list of three symbols. The first symbol represents the DSN, the second is the username, and the third is the password. Returns an ODBC connection handle.
tables ¶ List tables in database: .odbc.tables x Where x is the connection value returned from .odbc.open . q).odbc.tables h `Categories`Customers`Employees`Order Details`Orders`Products... views ¶ List views in database: .odbc.views x Where x is the connection value returned from .odbc.open . q).odbc.views h `Alphabetical List of Products`Category Sales for 1997`Current... Tracing¶ ODBC has the capability to trace the ODBC API calls to a log file; sometimes this can be helpful in resolving unusual or erroneous behavior. On Unix, you can activate the tracing by adding [ODBC] Trace = 1 TraceFile =/tmp/odbc.log to the odbcinst.ini file, which can typically be found in /etc or /usr/local/etc . See MSDN for tracing on Windows.
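Putting the calls above together, a minimal end-to-end sketch; the DSN and query are illustrative:
q)\l odbc.k
q)h:.odbc.open("dsn=northwind";30)    / connect with a 30-second timeout
q).odbc.tables h                      / confirm the expected tables are present
`Categories`Customers`Employees`Order Details`Orders`Products..
q)orders:.odbc.eval[(h;5);"select OrderID,OrderDate from Orders"]   / 5-second query timeout
q).odbc.close h                       / release the handle when finished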
Reference architecture for AWS¶ kdb+ is the technology of choice for many of the world’s top financial institutions when implementing a tick-capture system for timeseries analysis. kdb+ is capable of processing large amounts of data in a very short space of time, making it the ideal technology for dealing with the ever-increasing volumes of financial tick data. KX customers can lift and shift their kdb+ plants to the cloud and exploit virtual machines (VM) with storage. This is the classic approach that relies on the existing license. To benefit more from the cloud technology we recommend migrating to kdb Insights. kdb Insights kdb Insights provides a range of tools to build, manage and deploy kdb+ applications in the cloud. It supports interfaces for deployment and common ‘Devops‘ orchestration tools such as Docker, Kubernetes, Helm, etc. It supports integrations with major cloud logging services. It provides a kdb+ native REST client, Kurl, to authenticate and interface with other cloud services. kdb Insights also provides kdb+ native support for reading from cloud storage. By taking advantage of the kdb Insights suite of tools, developers can quickly and easily create new and integrate existing kdb+ applications on Google Cloud. Deployment: - Use Helm and Kubernetes to deploy kdb+ applications to the cloud Service integration: - QLog – Integrations with major cloud logging services - Kurl – Native kdb+ REST client with authentication to cloud services Storage: - kdb+ Object Store – Native support for reading and querying cloud object storage Architectural components¶ The core of a kdb+ tick-capture system is called kdb+tick. The kdb+tick architecture allows the capture, processing, and querying of timeseries data against real-time, streaming and historical data. This reference architecture describes a full solution running kdb+tick within Amazon Web Services (AWS) which consists of these bare-minimum functional components: - Datafeeds - Feedhandlers - Tickerplant - Real-time database - Historical database - KX gateway One architectural pattern for kdb+tick in Amazon Web Services is depicted below. The kdb+ historical database (HDB) can be stored in FSx Lustre and tiered to S3 or, with kdb Insights, the HDB data can be directly accessed from a kdb+ process. A simplified architecture diagram for kdb+tick in Amazon Web Services Worthy of note in this reference architecture is the ability to place kdb+ processing functions either in one Elastic Compute Cloud (EC2) instance or distributed across many EC2 instances. kdb+ processes can communicate with each other through built-in language primitives: this allows for flexibility in final design layouts. Data transportation between kdb+ processes, and overall external communication, is by low-level TCP/IP sockets. If two components are on the same EC2 instance, local Unix sockets can be used to reduce communication overhead. Many customers have tickerplants set up on their premises. The AWS reference architecture allows them to manage a hybrid infrastructure that communicates with tickerplants both on premises and in the cloud. However, the benefits of migrating on-premises solutions to the cloud are vast. These include flexibility, auto-scaling, improved transparency in cost management, access to management and infrastructure tools built by Amazon, quick hardware allocation and many more. Datafeeds¶ These are the sources of the data we aim to ingest into our system. 
For financial use cases, data may be ingested from B-pipe (Bloomberg), or Elektron (Refinitiv) data or any exchange that provides a data API. Often the streaming data is available on a pub-sub component like Kafka, Solace, etc. - all popular sources have an open-source interface to kdb+. The data feeds are in a proprietary format, but always one with which KX is familiar. Usually this means that a feedhandler just needs to be aware of the specific data format. Due to the flexible architecture of KX, most, if not all, the underlying kdb+ processes that constitute the system can be placed in any location of this architecture. For example, for latency, compliance or other reasons, the data feeds may be relayed through your existing on-premises data center. Or the connection from the feed handlers may be made directly from this Virtual Private Cloud (VPC) into the market data venue. The kdb+ infrastructure is often used also to store internally derived data. This can optimize internal data flow and help remove latency bottlenecks. The pricing of liquid products, for example on B2B markets, is often done by a complex distributed system. This system changes often due to new models, new markets or other internal system changes. Data in kdb+ that will be generated by these internal steps will also require processing and handling huge amounts of timeseries data. When all the internal components of these systems send data to kdb+, a comprehensive impact analysis captures any changes. Feedhandler¶ A feedhandler process captures external data and translates it into kdb+ messages. You can use multiple feed handlers to gather data from several sources and feed it to the kdb+ system for storage and analysis. There are a number of open source (Apache 2 licensed) Fusion interfaces between KX and other third-party technologies. Feed handlers are typically written in Java, Python, C++, and q. Tickerplant¶ The tickerplant (TP) is a specialized, single-threaded kdb+ process that operates as a link between your datafeed and a number of subscribers. It implements a pub-sub pattern: specifically, it receives data from the feedhandler, stores it locally in a table, then saves it to a log file. It publishes this data to a realtime database (RDB) and any clients who have subscribed to it. It then purges its local tables of data. Tickerplants can operate in two modes: - Batch mode - Collects updates in its local tables. It batches up for a period of time and then forwards the update to realtime subscribers in a bulk update. - Real-time (zero latency) mode - Forwards the input immediately. This requires smaller local tables but has higher CPU and network costs. Bear in mind that each message has a fixed network overhead. Supported API calls: - Subscribe: adds subscriber to message receipt list and sends subscriber table definitions - Unsubscribe: removes subscriber from message receipt list Events: - End of Day: at midnight, the TP closes its log files, autocreates a new file, and notifies the realtime database (RDB) of the start of the new day Realtime database¶ The realtime database (RDB) holds all the intraday data in memory, to enable fast, powerful queries. For example, at the start of the business day, the RDB sends a message to the tickerplant and receives a reply containing the data schema, the location of the log file, and the number of lines to read from the log file. It then receives subsequent updates from the tickerplant as they are published. 
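For reference, this is how a realtime subscriber such as the RDB typically registers with the tickerplant in the stock kdb+tick scripts; the port number is illustrative and .u.sub is the conventional tickerplant entry point rather than anything specific to this architecture:
q)h:hopen `::5010          / connect to the tickerplant
q)h(".u.sub";`trade;`)     / subscribe to the trade table, all symbols; returns the table schema
q)upd:insert               / incoming updates arrive as (`upd;table;data) calls applied locally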
One of the key design choices for Amazon Web Services is the size of memory for this instance, as ideally we need to contain the entire business day/period of data in memory. Purpose: - Subscribed to the messages from the tickerplant - Stores (in-memory) the messages received - Allows this data to be queried intra-day Actions: - On message receipt: inserts into local, in-memory tables - End of Day receipt: usually writes intraday data down then sends a new End of Day message to the HDB. Optionally RDB sorts certain tables (e.g. by sym and time) to speed up queries. An RDB can operate in single- or multi-input mode. The default mode is single-input, in which user queries are served sequentially and queries are queued till an update from the TP is processed (inserted into the local table). In standard tick scripts, the RDB tables are indexed, typically by the product identifier. An index is a hash table behind the scene. Indexing has a significant impact on the speed of the queries at the cost of slightly slower ingestion. The insert function takes care of the indexing, i.e. during an update it also updates the hash table. Performance of the CPU and memory in the chosen AWS instance will have some impact on the overall sustainable rates of ingest and queryable rate of this realtime kdb+ function. Historical database¶ The historical database (HDB) is a simple kdb+ process with a pointer to the persisted data directory. A kdb+ process can read this data and memory-maps it, allowing for fast queries across a large volume of data. Typically, the RDB is instructed by the tickerplant to save its data to the data directory at EOD from where the HDB can refresh its memory mappings. HDB data is partitioned by date in the standard tickerplant. If multiple disks are attached to the box, then data can be segmented and kdb+ makes use of parallel IO operations. Segmented HDB requires a par.txt file that contains the locations of the individual segments. A HDB query is processed by multiple threads and map-reduce is applied if multiple partitions are involved in the query. Purpose: - Provides a queryable data store of historical data - In instances involving research and development or data analytics, you can create customer reports on order execution times Actions: - End of Day receipt: reloads the database to get the new days’ data from the RDB write-down HDBs are often expected to be mirrored locally. Some users, (e.g. quants) need a subset of the data for heavy analysis and backtesting where the performance is critical. KX gateway¶ In production, a kdb+ system may be accessing multiple timeseries datasets, usually each one representing a different market-data source, or using the same data, refactored for different schemas. All core components of a kdb+tick can handle multiple tables. However, you can introduce multiple TPs, RDBs and HDBs based on your fault-tolerance requirements. This can result in a large number of q components and a high infrastructure segregation. A KX gateway generally acts as a single point of contact for a client. A gateway collects data from the underlying services, combines datasets and may perform further data operations (e.g. aggregation, joins, pivoting, etc.) before it sends the result back to the user. The specific design of a gateway can vary in several ways according to expected use cases. For example, in a hot-hot set up, the gateway can be used to query services across availability zones. The implementation of a gateway is largely determined by the following factors. 
- Number of clients or users - Number of services and sites - Requirement of data aggregation - Support of free-form queries - Level of redundancy and failover The task of the gateway can be broken down into the following steps. - Check user entitlements and data-access permissions - Provide access to stored procedures, utility functions and business logic - Gain access to data in the required services (TP, RDB, HDB) - Provide the best possible service and query performance The KX gateway must be accessible through Amazon GC2 security rules from all clients of the kdb+ service. In addition, the location of the gateway service needs to be visible to the remaining kdb+ processes constituting the full KX service. Storage and filesystem¶ kdb+tick architecture needs storage space for three types of data: - TP log - If the tickerplant (TP) needs to handle many updates, then writing to TP needs to be fast since slow I/O may delay updates and can even cause data loss. Optionally, you can write updates to the TP log in batches (e.g. every second) as opposed to real time. You will suffer data loss if TP or instance is halted unexpectedly or stops/restarts, as the recently received updates are not persisted. Nevertheless, you already suffer data loss if a TP process or the AWS instance goes down or restarts. The extra second of data loss is probably marginal to the whole outage window. - If the RDB process goes down, then it can replay data to recover from the TP log. The faster it can recover, the less data is waiting in the TP output queue to be processed by the restarted RDB. Hence, a fast read operation is critical to resilience. Amazon EBS io2 with block express or a subsection of an existing Amazon FSx for Lustre file system are good storage solutions to use for a TP log. - sym file (and par.txt for segmented databases) - The sym file is written by the realtime database (RDB) after end-of-day, when new data is appended to the historical database (HDB). The HDB processes will then read the sym file to reload new data. Time to read and write the sym file is often marginal compared to other I/O operations. It is beneficial to write the sym file to a shared file system like Amazon FSx for Lustre or Amazon EFS. This provides flexibility in the AWS Virtual Private Cloud (VPC), as any AWS instance can assume this responsibility in a stateless fashion. - HDB data - Performance of the filesystem solution will determine the speed and operational latency for kdb+ to read its historical (at rest) data. - Both EBS (io2 Block Express) and FSx for Lustre can provide good query execution times for important business queries. Each EBS Block express volume supports up to 256K IOPS and 4GBps of throughput and a maximum volume size capacity of 64TiB with sub-millisecond, low-variance I/O latency. Amazon EBS io2 volumes support multi-attached instances, up to 16 Linux instances built on Nitro System in the same Availability Zone can be attached to EBS io2. For larger capacity requirements, FSx for Lustre is a good choice for the HDB. Amazon FSx for Lustre file systems scale to hundreds of GB/s of throughput and millions of IOPS. FSx for Lustre also supports concurrent access to the same file or directory from thousands of compute instances. One advantage of storing your HDB within the AWS ecosystem is the flexibility of storage. This is usually distinct from “on-prem” storage, whereby you may start at one level of storage capacity and grow the solution to allow for dynamic capacity growth. 
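To make the storage layout concrete on the kdb+ side, a segmented HDB's par.txt simply lists the segment roots, which can sit on whichever of the storage services above fits the workload; the mount points below are invented for illustration:
/fsx/hdb/segment0
/fsx/hdb/segment1
/ebs/hdb/segment2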
One huge advantage of most AWS storage solutions (e.g persistent disks) is that disks can grow dynamically without the need to halt instances, this allows you to change resources dynamically. For example, start with small disk capacity and grow capacity over time. Best practice is to replicate data. Data replication processes can use lower cost/lower performance object storage in AWS or data can replicate across availability zones. For example, you might have a service failover from Europe to North America, or vice-versa. kdb+ uses POSIX filesystem semantics to manage HDB structure directly on a POSIX style filesystem stored in persistent storage (e.g. Amazon EBS or FSx for Lustre). Migrating a kdb+ historical database to the Amazon Cloud Simple Storage Service (S3)¶ S3 is an object store that scales to exabytes of data. There are different storage classes (Standard, Standard IA, Intelligent Tiering, One Zone, Glacier, Glacier Deep Archive) for different availability. Infrequently used data can use cheaper but slower storage. The kdb Insights native object store functionality allows users to read HDB data from S3 object storage. The HDB par.txt file can have segment locations that are on AWS S3 object storage. In this pattern, the HDB can reside entirely on S3 storage or spread across EBS, EFS or S3 as required. There is a relatively high latency when using S3 cloud storage compared to storage services EBS Block Express or FSx for Lustre. The performance of kdb+ when working with S3 can be improved by taking advantage of the caching feature of the kdb+ native object store. The results of requests to S3 can be cached on a local high-performance disk thus increasing performance. The cache directory is continuously monitored and a size limit is maintained by deleting files according to a LRU (least recently used) algorithm. Caching coupled with enabling secondary threads can increase the performance of queries against a HDB on S3 storage. The larger the number of secondary threads, irrespective of CPU core count, the better the performance of kdb+ object storage. Conversely the performance of cached data appears to be better if the secondary-thread count matches the CPU core count. We recommend using compression on the HDB data residing on S3. This can reduce the cost of object storage and possible egress costs and also counteract the relatively high-latency and low bandwidth associated with S3 object storage. Furthermore, S3 is useful for archiving, tiering, and backup. The TP log file and the sym can be stored each day and archived for a period of time. The lifecycle management of the object store simplifies clean-up, whereby one can set expiration time on any file. The versioning feature of S3 is particularly useful when a sym file bloat happens due to feed misconfiguration or upstream change. Migrating back to a previous version restores the health of the whole database. S3 provides strong read-after-write consistency. After a successful write or update of an object, any subsequent read request immediately receives the latest version of the object. S3 also provides strong consistency for list operations, so after a write, you can immediately perform a listing of the objects in a bucket with all changes reflected. This is especially useful when there are many kdb+ processes reading from S3 as it ensures consistency. A kdb+ feed can subscribe to a S3 file update where the upstream drops into a bucket and can start its processing immediately. 
Elastic Block Store (EBS)¶

EBS is a good storage service for HDB and tickerplant data, and is fully compliant with kdb+: it supports all of the POSIX semantics required. With the introduction of io2 EBS volumes, users gained increased performance of 500 IOPS per GiB and greater durability, reducing the possibility of a storage-volume failure. With the introduction of io2 Block Express, performance increased further: volumes give you up to 256K IOPS and 4,000 MB/s of throughput and a maximum volume size of 64 TiB, all with sub-millisecond, low-variance I/O latency.

AWS blog:
New EBS Volume Type io2 – 100× Higher Durability and 10× More IOPS/GiB
Now in Preview – Larger Faster io2 EBS Volumes with Higher Throughput

Elastic File System (EFS)¶

EFS is a managed NFS service from AWS. It serves nodes in the same availability zone, can run across zones, or can be exposed externally. EFS can be used to store HDB and tickerplant data, and is fully compliant with kdb+.

FSx for Lustre¶

Amazon FSx for Lustre is POSIX-compliant and is built on Lustre, a popular open-source parallel filesystem that provides scale-out performance increasing linearly with a filesystem’s size. FSx for Lustre file systems scale to hundreds of GB/s of throughput and millions of IOPS. It also supports concurrent access to the same file or directory from thousands of compute instances and provides consistent, sub-millisecond latencies for file operations, which makes it especially suitable for storing and retrieving HDB data.

An FSx for Lustre persistent filesystem provides highly available and durable storage for kdb+ workloads. The fileservers in a persistent filesystem are highly available and data is automatically replicated within the same availability zone.

An FSx for Lustre persistent filesystem offers three deployment options:

- PERSISTENT-50
- PERSISTENT-100
- PERSISTENT-200

These come with 50 MB/s, 100 MB/s, or 200 MB/s baseline disk throughput per TiB of filesystem storage respectively.

Other storage solutions¶

This document covers the storage solutions provided by Amazon. Other vendors offer kdb+-compliant storage options; these are described in more detail under Other File Systems at https://code.kx.com/q/cloud.

Memory¶

The tickerplant (TP) uses very little memory during normal operation in realtime mode, while a full record of intraday data is maintained in the realtime database. Abnormal operation occurs if a realtime subscriber (including the RDB) is unable to process the updates: TP stores these updates in the output queue associated with the subscriber, and a large output queue requires a large amount of memory. TP may even hit memory limits and exit in extreme cases. A TP running in batch mode also needs to buffer data (e.g. for a second), which further increases the memory needed. Consequently, the memory requirement of the TP box depends on the setup of the subscribers and the availability requirements of the tick system.

The main consideration for an instance hosting the RDB is to use a memory-optimized VM instance such as the m5.8xlarge (128 GB memory), m5.16xlarge (256 GB memory), etc. AWS also offers VMs with extremely large memory, such as u-24tb1.metal with 24 TiB of memory, for clients who need to store large amounts of high-frequency data in memory in the RDB, or even to keep more than one day’s partition of data in the RDB.
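When sizing the TP and RDB hosts against a real workload, it helps to measure consumption from inside the processes themselves; a brief sketch using built-in introspection:

q).Q.w[]          / per-process memory stats (used, heap, peak, mmap, ...)
q)sum each .z.W   / bytes queued per IPC handle; large values in the TP flag slow subscribers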
Bear in mind the trade-off between large memory and RDB recovery time: the larger the tables, the longer it takes for the RDB to start by replaying the TP log. To alleviate this problem, clients may split a large RDB into two. The driving rule for separating the tables into two clusters is the join operations between them: if two tables are never joined, they can be placed in separate RDBs.

We recommend generous memory for HDB boxes. Complex user queries may require large temporary space. Query execution times are often dominated by the I/O cost of getting the raw data, and OS-level caching stores frequently used data: the larger the memory, the fewer cache misses occur and the faster the queries run.

CPU¶

The CPU load generated by the tickerplant (TP) depends on the number of publishers and their verbosity (updates per second) and on the number of subscribers. Subscribers may subscribe to partial data, but any filtering applied consumes further CPU cycles.

The CPU requirement of the realtime database (RDB) comes from

- appending updates to local tables
- user queries

Local table updates are very efficient, especially if the TP sends batch updates. Nevertheless, a faster CPU results in faster ingestion and lower latency. User queries are often CPU-intensive: they perform aggregations and joins, and call expensive functions. If the RDB is set up in multithreaded input mode, user queries are executed in parallel. Furthermore, kdb+ 4.0 supports multithreading in most primitives, including sum, avg, dev, etc. If the RDB process is heavily used and hit by many queries, it is recommended to start it with secondary threads. VMs with plenty of cores are recommended for RDB processes serving large numbers of user queries.

If the infrastructure is sensitive to the RDB end-of-day work, then powerful CPUs are recommended: sorting tables before splaying is a CPU-intensive task.

Historical databases (HDB) are used for user queries. In many cases I/O dominates execution times, but if the box has large memory and OS-level caching reduces I/O operations efficiently, then CPU performance will directly impact execution times.

Locality, latency, and resilience¶

The standard on-premises tick setup has the components on the same server: the tickerplant (TP) and realtime database (RDB) are linked via the TP log file, and the RDB and historical database (HDB) are bound together by the RDB’s end-of-day splaying.

Customized tickerplants relax this constraint to improve resilience. One motivation might be to avoid HDB queries impacting data capture in the TP. You can set up an HDB writer on the HDB box: the RDB sends its tables via IPC at midnight and delegates the I/O work together with the sorting and attribute handling.

We recommend placing the feed handlers outside the TP box, on another VM between the TP and the data feed. This way any malfunction of a feed handler has a smaller impact on TP stability.

The kdb+tick architecture can also be set up with placement groups in mind, depending on the use case. A placement group is a configuration option AWS offers which lets you place a group of interdependent instances in a certain way across the underlying hardware on which those instances reside. The instances can be placed close together, spread through different racks, or spread through different availability zones.

- Cluster placement group
- The cluster placement group configuration allows you to place your group of interrelated instances close together in order to achieve the best throughput and low-latency results. This option lets you pack the instances together only inside the same availability zone, either in the same Virtual Private Cloud (VPC) or between peered VPCs.
- Spread placement groups
- With spread placement groups, each instance runs on a separate physical hardware rack. So, if you deploy five instances and put them into this type of placement group, each of those five instances will reside on a different rack with its own network access and power, either within a single availability zone or in a multi-availability-zone architecture.

Disaster recovery¶

A disaster-recovery plan is usually based on requirements from both the Recovery Time Objective (RTO) and Recovery Point Objective (RPO) specifications, which can guide the design of a cost-effective solution. However, every system has its own unique requirements and challenges. Here we suggest best-practice methods for dealing with the various possible failures one needs to plan for.

In all the various combinations of failover operations that can be designed, the end goal is always to maintain availability of the application and minimize any disruption to the business.

In a production environment, some level of redundancy is always required. Requirements may vary depending on the use case, but in nearly all instances requiring high availability, the best option is to have a hot-hot (or ‘active-active’) configuration.

Four main configurations are found in production.

- Hot-hot - An identical mirrored secondary system runs separately from the primary system, capturing and storing data but also serving client queries. In a system with a secondary server available, hot-hot is the typical configuration, as it is sensible to use all available hardware to maximize operational performance. The KX gateway handles client requests across availability zones and collects data from several underlying services, combining data sets and, if necessary, performing an aggregation operation before returning the result to the client.
- Hot-warm - The secondary system captures data but does not serve queries. In the event of a failover, the KX gateway reroutes client queries to the secondary (warm) system.
- Hot-cold - The secondary system has a complete backup or copy of the primary system at some previous point in time (recall that kdb+ databases are just a series of operating-system files and directories) with no live processes running. A failover in this scenario involves restoring from this latest backup, with the understanding that there may be some data loss between the time of failover and the time the latest backup was made.
- Pilot Light (or cold hot-warm) - The secondary is on standby and the entire system can quickly be started to allow recovery in a shorter time period than a hot-cold configuration.

Typically, kdb+ is deployed in a high-value system. Downtime can therefore impact the business, which justifies a hot-hot setup to ensure high availability.

Usually, the secondary runs on completely separate infrastructure, with a separate filesystem, and saves its data to a secondary database directory, separate from the primary. In this way, if the primary system or underlying infrastructure goes offline, the secondary is able to take over completely.

The usual strategy for failover is to have a complete mirror of the production system (feedhandler, tickerplant, and realtime subscriber), and when any critical process goes down, the secondary takes over.
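At the connection level this switchover can be driven from q itself. A minimal sketch (addresses are hypothetical) of a gateway or client falling back to the secondary when the primary is unreachable:

q)primary:`$":primary-rdb.internal:5011"      / hypothetical addresses
q)secondary:`$":secondary-rdb.internal:5011"
q)h:@[hopen;primary;{[e] -1"primary unavailable: ",e;hopen secondary}]

A production gateway would additionally watch .z.pc to detect connections that drop after they have been established and re-route subsequent queries.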
Switching from production to disaster-recovery systems can be implemented seamlessly using kdb+ interprocess communication.

Disaster-recovery planning for kdb+tick systems
Data recovery for kdb+ tick

Network¶

Network bandwidth needs to be considered if the tickerplant components are not located on the same VM. The network bandwidth between AWS VMs depends on the type of the VMs: for example, a VM of type m5.2xlarge has a maximum network bandwidth of 10 Gbps, while a larger m5.16xlarge instance can sustain 10–25 Gbps. The C5n instances, built on the AWS Nitro System, have up to 100 Gbps of network bandwidth. For a given update frequency you can calculate the required bandwidth by employing the -22! internal function, which returns the length of the IPC byte representation of its argument. The tickerplant copes with large amounts of data if batch updates are sent.

You might want to consider Enhanced Networking, which provides high-performance networking capabilities on certain instances. The virtualization technique employed has higher I/O performance and lower CPU utilization compared to traditional virtualized network interfaces. Enhanced Networking provides higher bandwidth, higher packet-per-second (PPS) performance, and consistently lower inter-instance latencies.

An Elastic Fabric Adapter (EFA) is a network device that you can attach to your Amazon EC2 instance to accelerate High Performance Computing (HPC) and machine-learning applications. EFA enables customers to run applications requiring high levels of inter-node communication at scale on AWS. Its custom-built operating-system-bypass hardware interface enhances the performance of inter-instance communication, which is critical to scaling these applications. EFA provides lower and more consistent latency and higher throughput than the TCP transport traditionally used in cloud-based HPC systems.

Make sure the network is not your bottleneck in processing the updates.

A Network Load Balancer is a type of Elastic Load Balancer from Amazon. It is used for ultra-high performance, TLS offloading at scale, centralized certificate deployment, support for UDP, and static IP addresses for your application. Operating at the connection level, Network Load Balancers are capable of handling millions of requests per second securely while maintaining ultra-low latencies.

Load balancers can distribute load among applications that offer the same service. kdb+ is single-threaded by default. You can set multithreaded input mode, in which requests are processed in parallel; this, however, is not recommended for gateways (due to socket-usage limitations) or for q servers that read data from disk, like HDBs.

A better approach is to use a pool of HDB processes. Distributing the queries can be done either by the gateway via async calls or by a load balancer. If the gateways send sync queries to the HDB load balancer, then we recommend a gateway load balancer as well, to avoid query contention in the gateway. Other kdb+tick components can likewise benefit from load balancers to handle simultaneous requests better.

Adding a load balancer on top of a historical database (HDB) pool is quite simple and needs only three steps.

- Create a network load balancer with protocol TCP. Set the name, availability zone, target group and security group. The security group needs to have an inbound rule to the HDB port.
- Create a launch template. A key part here is the User Data window where you can type a startup script. It mounts the volume that contains the HDB data and the q interpreter, sets environment variables (e.g. QHOME) and starts the HDB. The HDB accepts incoming TCP connections from the load balancer, so you need to set up an inbound firewall rule via a security group. You can also use an image (AMI) that you created earlier from an existing EC2 instance.
- Create an Auto Scaling group (a set of virtual machines) to better handle peak loads, and set the recently created instance group as the target group.

All clients access the HDB pool via the load balancer’s DNS name (together with the HDB port) and the load balancer distributes the requests among the HDB servers seamlessly.
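From a q client’s point of view the pool then behaves like a single HDB; a minimal sketch, using a hypothetical load-balancer DNS name and port:

q)h:hopen `$":my-hdb-nlb-0123456789.elb.us-east-1.amazonaws.com:5012"   / hypothetical NLB DNS name
q)h"select cnt:count i by date from trade where date within 2024.01.01 2024.01.05"
q)hclose h

Note that each hopen pins the client to one HDB in the pool for the lifetime of the connection, which is the behaviour discussed next.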
General TCP load balancers in front of an HDB pool offer better performance than a stand-alone HDB. However, utilization of the underlying HDBs may not be optimal. Consider three clients C1, C2, C3 and two servers HDB1 and HDB2. C1 is directed to HDB1 when establishing the TCP connection, C2 to HDB2 and C3 to HDB1 again. If C1 and C3 send heavy queries and C2 sends a few lightweight queries, then HDB1 is overloaded and HDB2 is idle. To improve the load distribution the load balancer would need to look beyond the TCP layer and understand the kdb+ IPC protocol.

Logging¶

AWS provides a fully managed logging service that performs at scale and can ingest application and system log data. AWS CloudWatch allows you to view, search and analyze system logs. It provides an easy-to-use and customizable interface so that, for example, DevOps can quickly troubleshoot applications.

CloudWatch Logs enables you to see all of your logs, regardless of their source, as a single and consistent flow of events ordered by time. Events are organized into log streams and each stream is part of a log group. Related applications typically belong to the same log group.

You don’t need to modify your tick scripts to enjoy the benefits of CloudWatch. A log agent can be installed and configured to forward your application logs to CloudWatch. The EC2 machine running the agent needs the appropriate CloudWatch Logs permissions attached to its IAM role. In the host configuration file you provide the log file to watch and the log stream to which new entries should be sent.

Almost all kdb+tick components can benefit from cloud logging. Feed handlers log new data arrival and data and connection issues. The TP logs new or disappearing publishers and subscribers, and can log when its output queue rises above a threshold. The RDB logs all steps of the EOD process, which includes sorting and splaying all tables. The HDB and gateway can log every user query.

kdb+ users often prefer to save log messages in kdb+ tables. Tables that are unlikely to change are specified by a schema, while entries that require more flexibility use key-value columns. Log tables are ingested by dedicated log tickerplants, and these ops tables are kept separate from the tables required by the business. One benefit of storing log messages in kdb+ is the ability to process them with qSQL and the timeseries join functions, such as as-of and window joins. For example, gateway functions are executed hundreds of times during the day; a gateway query executes RDB and HDB queries, often via a load balancer, and all these components have their own log entries. You can simply employ a window join to find the relevant entries and perform aggregations to gain insight into the performance characteristics of the execution chain. Note that nothing prevents you from logging both to kdb+ and to CloudWatch.
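As an illustration of that kind of analysis, a minimal sketch (table names, columns and values are hypothetical) joining HDB-side log entries onto gateway-side entries with a window join, aggregating over a ten-second window after each gateway call:

q)gw:([]time:2024.01.01D10:00:00 2024.01.01D10:05:00;qryId:1 2;user:`alice`bob)
q)hdblog:([]qryId:1 1 2;time:2024.01.01D10:00:01 2024.01.01D10:00:02 2024.01.01D10:05:03;execMs:120 340 95;rowsRead:50000 120000 8000)
q)hdblog:update `p#qryId from `qryId`time xasc hdblog    / wj expects the source table sorted with a parted key
q)w:(gw`time;gw[`time]+0D00:00:10)                        / window: 10s after each gateway entry
q)wj[w;`qryId`time;gw;(hdblog;(sum;`execMs);(max;`rowsRead))]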
kdb Insights QLog provides kdb+ cloud logging functionality. QLog supports multiple endpoint types through a simple interface and provides the ability to write to them concurrently. The logging endpoints in QLog are encoded as URLs with two main types: file descriptors and REST endpoints. The file descriptor endpoints supported are :fd://stdout :fd://stderr :fd:///path/to/file.log REST endpoints are encoded as standard HTTP/S URLs such as: https://logs.${region}.amazonaws.com . QLog generates structured, formatted log messages tagged with a severity level and component name. Routing rules can also be configured to suppress or route based on these tags. Existing q libraries that implement their own formatting can still use QLog via the base APIs. This enables them to do their own formatting but still take advantage of the QLog-supported endpoints. Integration with cloud logging applications providers can easily be achieved using logging agents. These can be set up alongside running containers/virtual machines to capture their output and forward to logging endpoints, such as CloudWatch. CloudWatch supports monitoring, alarming and creating dashboards. It is simple to create a metric filter based on a pattern and set an alarm (e.g. sending email) if a certain criterion holds. You may also wish to integrate your KX monitoring for kdb+ components into this cloud logging and monitoring framework. The purpose is the same: to get insights into performance, uptime and overall health of the applications and the servers pool. You can visualize trends via dashboards. Interacting with AWS services¶ People interact with AWS services manually via the console web interface. You may also need to interact from a q process. There are three easy ways to do this. For demonstration we will invoke a lambda function called myLambda from a q process. The lambda requires a payload JSON with two name-value pairs as input. JSON serialization and deserialization is supported by q functions .j.j and .j.k , respectively. In our use case, the payload needs to be base64-encoded. This is also supported natively in q by function .Q.btoa . Via AWS CLI¶ A q process can run shell commands using the system keyword. We assume that AWS CLI is installed on the script-runner machine. q) fn: "myLambda" q) payload: .j.j `name1`name2!("value 1"; "value 2") q) command: "aws lambda invoke --function-name ", fn, " --payload ", .Q.btoa[payload], " response.txt" q) .j.k raze system command StatusCode | 200f ExecutedVersion| "$LATEST" Unfortunately, this approach needs string manipulation, so it is not always convenient. Via EmbedPy¶ Amazon provides a Python client library to interact with AWS services. Using embedPy, a q process can load a Python environment and easily transfer data between the two environments. q) system "l p.q" q)p)import json # to create payload easily q)p)import boto3 # to invoke a lambda q)p)client = boto3.client('lambda') q)p)response= client.invoke(FunctionName='myLambda', Payload=json.dumps({'name1': 'value 1', 'name2': 'value 2'})) q)p)result= response['Payload'].read() Natively via Kurl REST API¶ Finally, you can send HTTP requests to the AWS REST API endpoints. kdb Insights provides a native q REST API called Kurl. Kurl provides ease-of-use cloud integration by registering AWS authentication information. When running on a cloud instance, and a role is available, Kurl will discover and register the instance metadata credentials. 
When running outside the cloud, OAuth2, ENV, and file-based credential methods are supported. Kurl takes care of your credentials and properly formats the requests.

In the code below the variables fn and payload are as in the previous example.

q)system "l kurl.q"
q)resp:.kurl.sync (`$"https://lambda.us-east-1.amazonaws.com/2015-03-31/functions/",fn,"/invocations";`POST;enlist[`body]!enlist payload)
q)if[not 200=first resp;'("Error invoking function ",last resp)]

AWS Lambda functions¶

Function as a Service (FaaS) is an interesting cloud technology that lets developers create an application without considering the complexity of building and maintaining the infrastructure that runs it. Cloud providers support only a handful of programming languages natively. AWS’s FaaS solution, Lambda, supports Bash scripts that can start any executable, including a q script.

kdb+ on AWS Lambda is serverless: there are no servers to manage or maintain, and when your lambda service is not used you incur no infrastructure costs. The cost is transparent and you can charge those who actually use your service. The infrastructure also scales well: parallel execution of your lambda is not limited by your hardware, which is typically fixed in an on-premises solution. Furthermore, each lambda executes in its own environment, so you need to worry less about protection against side effects than with a long-running, statically deployed process.

There are many use cases for employing lambdas in kdb+tick. First, batch feed handlers that run when new data is dropped can be run as lambdas. This integrates well with S3: a new CSV file in an S3 bucket can immediately trigger a lambda that runs the feed handler. Developers only need to estimate the total amount of memory used by the feed handler; all the backend infrastructure is managed by AWS. The scalability has real business value compared to on-premises solutions, where typically a set of feed handlers needs to be allocated across a set of machines and the DevOps team has to arrange the placement manually, which is error-prone, especially given the dynamic nature of load.

Another use case is to start a gateway as a lambda to execute a client query. This provides cost transparency, zero cost when the service is not used, and full client-query isolation.

Cloud Map: service discovery¶

Feeds and the RDB need to know the address of the tickerplant. The gateway and the RDB need to know the address of the HDB. In a microservice infrastructure like kdb+tick, these configuration details are best stored in a configuration-management service. This is especially true if the addresses are constantly changing and new services are added dynamically. Service discovery can be managed from within kdb+ or by using a service such as AWS Cloud Map, which keeps track of all your application components, their locations, attributes and health status.

Cloud Map organizes services into namespaces. A service must have an address and can have multiple attributes. You can add a health check to any service: a service is unhealthy if the number of times the health check has failed is above a threshold. Set a higher threshold for HDBs if you allow long-running queries.

kdb+ can easily interact with the AWS Cloud Map REST API using Kurl. Kurl can be extended to create or query namespaces and to discover, register or deregister instances, facilitating service discovery of the kdb+ processes running in your tick environment.
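A sketch only (the namespace and service names are hypothetical, and the exact Kurl option names and Cloud Map request headers should be confirmed against the Kurl and AWS Cloud Map documentation): a gateway might look up healthy HDB instances with the DiscoverInstances data-plane API along these lines.

q)hdrs:("Content-Type";"X-Amz-Target")!("application/x-amz-json-1.1";"Route53AutoNaming_v20170314.DiscoverInstances")
q)body:.j.j `NamespaceName`ServiceName`HealthStatus!("kdbtick";"hdb";"HEALTHY")   / hypothetical namespace/service
q)resp:.kurl.sync (`$"https://data-servicediscovery.us-east-1.amazonaws.com/";`POST;`headers`body!(hdrs;body))
q)instances:.j.k[last resp]`Instances    / parse JSON response body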
In this way a kdb+ gateway can fetch from Cloud Map the addresses of healthy RDBs and HDBs. The AWS console also provides a simple web interface to visualize the status of your kdb+ processes and instances.

Access management¶

We distinguish application-level and infrastructure-level access control. Application-level access management controls who can access kdb+ components and run commands. The tickerplant (TP), realtime database (RDB) and historical database (HDB) are generally restricted to kdb+ infrastructure admins only, and the gateway is the access point for users. One responsibility of the gateway is to check whether the user can access the tables (columns and rows) they are querying. This generally requires checking the user ID (returned by .z.u) against some organizational entitlement database, cached locally in the gateway and refreshed periodically.

AWS Systems Manager Session Manager

Session Manager is a fully managed AWS Systems Manager capability that lets you manage your kdb+ Amazon EC2 instances through an interactive one-click browser-based shell or through the AWS Command Line Interface (CLI). Session Manager provides secure and auditable instance management for your kdb+tick deployment without the need to open inbound ports, maintain bastion hosts, or manage SSH keys.

We would use this for permissioning access to the KX gateway. This is a key task for the administrators of the KX system; both user and API access to the entire system is controlled through the KX gateway process.

Hardware¶

It is worth noting several EC2 instance types that are especially performant for kdb+ workloads. The R5 family of EC2 instance types is memory-optimized. Although R5b and R5 have the same CPU-to-memory ratio and network performance, R5b instances support bandwidth up to 60 Gbps and EBS performance of 260K IOPS, providing 3× higher EBS-optimized performance than R5 instances. The Nitro System is a collection of building blocks that can be assembled in different ways, providing the flexibility to design and rapidly deliver EC2 instance types with a selection of compute, storage, memory, and networking options.

| service | EC2 instance type | storage | CPU, memory, I/O |
|---|---|---|---|
| Tickerplant | Memory Optimized R4, R5, R5b, X1 | io2 EBS, FSx for Lustre | High-Perf, Medium, Medium |
| Realtime database | Memory Optimized R4, R5, R5b, X1 | | High-Perf, High-Capacity, Medium |
| Historical database | Memory Optimized R4, R5, R5b, X1 | io2 EBS, FSx for Lustre, ObjectiveFS, WekaIO | Medium-Perf, Medium, High |
| Complex event processing (CEP) | Memory Optimized R4, R5, R5b, X1 | | Medium-Perf, Medium, High |
| Gateway | Memory Optimized R4, R5, R5b, X1 | | Medium-Perf, Medium, High |

Further reading¶

kdb tick: standard tick.q scripts
Building real-time tick subscribers
Data recovery for kdb+ tick
Disaster-recovery planning for kdb+tick systems
Intraday writedown solutions
Query Router: a kdb+ framework for a scalable load-balanced system
Order Book: a kdb+ intraday storage and access methodology
kdb+tick profiling for throughput optimization
Migrating a kdb historical database to AWS
Serverless kdb+ on AWS Lambda
\c 20 77 \l funq.q \l zoo.q \l iris.q -1"computing the silhouette demonstrates cluster quality"; -1"by generating a statistic that varies between -1 and 1"; -1"where 1 indicates a point is very close to all the items"; -1"within its own cluster and very far from all the items"; -1"in the next-best cluster while -1 indicates the reverse"; -1"a negative value indicates a point is closer to the next-best cluster"; -1""; -1"we now apply silhouette analysis to the zoo data set"; df:`.ml.edist -1"using distance metric: ", string df; t:(2#/:zoo.t),'([]silhouette:.ml.silhouette[df;zoo.X;zoo.y]) -1"sorting by avg silhouette within each cluster"; -1"then by actual data point silhouette value"; -1"provides good intuition on cluster quality"; show select[>([](avg;silhouette) fby typ;silhouette)] from t -1"assert average silhouette"; .ut.assert[.3] .ut.rnd[.1] exec avg silhouette from t -1"we see that mammals platypus, seal, dolphin and porpoise"; -1"as well as all the reptiles are better classified"; -1"as another type"; show 75_select[>([](avg;silhouette) fby typ;silhouette)] from t -1"we can run the same analysis on the iris data set"; t:iris.t,'([]silhouette:.ml.silhouette[df;iris.X;iris.y]) -1"we see that iris-setosa is the best cluster"; -1"and iris-versicolor and iris-virginica are worse"; show select avg silhouette by species from t -1"assert average silhouette"; .ut.assert[.5] .ut.rnd[.1] exec avg silhouette from t ================================================================================ FILE: funq_smsspam.q SIZE: 271 characters ================================================================================ smsspam.f:"smsspamcollection" smsspam.b:"http://archive.ics.uci.edu/ml/machine-learning-databases/" smsspam.b,:"00228/" -1"[down]loading sms-spam data set"; .ut.download[smsspam.b;;".zip";.ut.unzip] smsspam.f; smsspam.t:flip `class`text!("S*";"\t")0: `:SMSSpamCollection ================================================================================ FILE: funq_sparse.q SIZE: 726 characters ================================================================================ \c 20 100 \l funq.q -1 "given a matrix with many missing values,"; show X:100 100#"f"$10000?0b -1 "we can record the non-zero values to create a sparse matrix"; show S:.ml.sparse X -1 "the representation includes the number of rows and columns"; -1 "followed by the x and y coordinates and finally the matrix valus"; .ut.assert[X] .ml.full S / matrix -> sparse -> matrix == matrix / sparse matrix multiplication == mmu -1 "we can perform sparse matrix transposition"; .ut.assert[flip X] .ml.full .ml.smt S -1 "sparse matrix multiplication"; .ut.assert[X$X] .ml.full .ml.smm[S;S] -1 "sparse matrix addition"; .ut.assert[X+X] .ml.full .ml.sma[S;S] -1 "sparse tensors"; .ut.assert[T] .ml.full .ml.sparse T:2 3 4#"f"$24?0b ================================================================================ FILE: funq_stopwords.q SIZE: 476 characters ================================================================================ stopwords.f:"stop-word-list.txt" stopwords.b:"http://xpo6.com/wp-content/uploads/2015/01/" -1"[down]loading xpo6 stop words"; .ut.download[stopwords.b;;"";""] stopwords.f; stopwords.xpo6:asc enlist[""],read0 `$":",stopwords.f stopwords.f:"stop.txt" stopwords.b:"http://snowball.tartarus.org/algorithms/english/" -1"[down]loading snowball stop words"; .ut.download[stopwords.b;;"";""] stopwords.f; stopwords.snowball:asc distinct trim {(x?"|")#x} each read0 `$":",stopwords.f 
================================================================================ FILE: funq_supportvectormachine.q SIZE: 663 characters ================================================================================ \l funq.q \l iris.q stdout:1@ .svm.set_print_string_function`stdout -1"enumerate species so we can use the integer value for svm"; y:`species?iris.y -1"svm parameter x is a sparse matrix: - list of dictionaries"; prob:`x`y!(0 1 2 3i!/:flip iris.X;"f"$"i"$y) -1"define and check svm parameters"; .svm.check_parameter[prob] param:.svm.defparam[prob] .svm.param -1"build model by training svm on full dataset"; m:.svm.train[prob;param] -1"cross validate"; .svm.cross_validation[prob;param;2i]; -1"how well did we learn"; avg prob.y=p:.svm.predict[m] each prob.x -1"lets view the confusion matrix"; show .ut.totals[`TOTAL] .ml.cm[`species!"i"$prob.y] `species!"i"$p ================================================================================ FILE: funq_svm.q SIZE: 907 characters ================================================================================ .svm.dll:`libsvm^.svm.dll^:`; / optional override .svm,:(.svm.dll 2: (`lib;1))` .svm,:`C_SVC`NU_SVC`ONE_CLASS`EPSILON_SVR`NU_SVR!"i"$til 5 .svm,:`LINEAR`POLY`RBF`SIGMOID`PRECOMPUTED!"i"$til 5 \d .svm param:(!) . flip ( (`svm_type;C_SVC); (`kernel_type;RBF); (`degree;3i); (`gamma;-1f); / use defaults (`coef0;0f); (`cache_size;100f); (`eps;.001); (`C;1f); (`weight_label;::); (`weight;::); (`nu;.5); (`p;.1); (`shrinking;1i); (`probability;0i)) defparam:{[prob;param] if[0f>param`gamma;param[`gamma]:1f%max(last key@)each prob`x]; param} sparse:{{("i"$1+i)!x i:where not 0f=x} each flip x} prob:{`x`y!(sparse x;y)} read_problem:{[s] i:s?\:" "; y:i#'s; x:{(!/)"I: "0:x _y}'[1+i;s]; if[3.5>.z.K;x:("i"$key x)!value x]; `x`y!"F"$(x;y)} write_problem:{ s:(("+";"")0>x`y),'string x`y; s:s,'" ",/:{" " sv ":" sv' string flip(key x;value x)} each x`x; s:s,\:" "; s} ================================================================================ FILE: funq_testlinear.q SIZE: 1,156 characters ================================================================================ \l linear.q \l ut.q .linear.set_print_string_function ` .ut.assert[230i] .linear.version .ut.assert[s] .linear.write_problem prob:.linear.read_problem s:read0 `:liblinear/heart_scale .ut.assert[::] .linear.check_parameter[prob] param:.linear.defparam[prob] .linear.param .ut.assert[prob] .linear.prob_inout prob m1:.linear.train[prob;param] m2:.linear.load_model `:liblinear/heart_scale.model do[1000;m:.linear.load_model `:liblinear/heart_scale.model] m3:{.linear.save_model[`model] x;m:.linear.load_model[`model];hdel`:model;m} m mp:1#`solver_type .ut.assert[@[m;`param;{y#x};mp]] @[m;`param;{y#x};mp] do[1000;param ~ b:.linear.param_inout param] .ut.assert[m] .linear.model_inout m do[1000;.linear.model_inout m] .ut.assert[1b].75<avg prob.y=.linear.cross_validation[prob;param;2i] .ut.assert[0 -1 0f] .linear.find_parameters[prob;param;2i;-0f;-0f] .ut.assert[0i] .linear.check_probability_model m .ut.assert[.linear.predict[m;prob.x]] .linear.predict[m] each prob.x .ut.assert[.linear.predict_values[m;prob.x]] flip .linear.predict_values[m] each prob.x .ut.assert[.linear.predict_probability[m;prob.x]] flip .linear.predict_probability[m] each prob.x ================================================================================ FILE: funq_testporter.q SIZE: 375 characters ================================================================================ \l funq.q 
b:"https://tartarus.org/martin/PorterStemmer/" -1"[down]loading porter stemmer vocabulary"; pin:read0 .ut.download[b;;"";""] "voc.txt" -1"[down]loading stemmed vocabulary"; pout:read0 .ut.download[b;;"";""] "output.txt" -1"stemming vocabulary"; out:.porter.stem peach pin -1"incorrectly stemmed "; .ut.assert[0] count flip (pin;pout;out)@\: where not pout ~'out ================================================================================ FILE: funq_testsvm.q SIZE: 1,015 characters ================================================================================ \l svm.q \l ut.q .svm.set_print_string_function ` .ut.assert[323i] .svm.version .ut.assert[s] .svm.write_problem prob:.svm.read_problem s:read0 `:libsvm/heart_scale .ut.assert[::] .svm.check_parameter[prob] param:.svm.defparam[prob] .svm.param .ut.assert[prob] .svm.prob_inout prob m1:.svm.train[prob;param] m2:.svm.load_model `:libsvm/heart_scale.model do[1000;m:.svm.load_model `:libsvm/heart_scale.model] m3:{.svm.save_model[`model] x;m:.svm.load_model[`model];hdel`:model;m} m mp:`svm_type`kernel_type`gamma .ut.assert[@[m;`param;{y#x};mp]] @[m;`param;{y#x};mp] do[1000;param ~ .svm.param_inout param] .ut.assert[m] .svm.model_inout m do[1000;.svm.model_inout m] .ut.assert[1b] .8<avg prob.y=.svm.cross_validation[prob;param;2i] .ut.assert[0i].svm.check_probability_model m .ut.assert[.svm.predict[m;prob.x]] .svm.predict[m] each prob.x .ut.assert[.svm.predict_values[m;prob.x]] flip .svm.predict_values[m] each prob.x .ut.assert[.svm.predict_probability[m;prob.x]] flip .svm.predict_probability[m] each prob.x ================================================================================ FILE: funq_tfidf.q SIZE: 2,253 characters ================================================================================ \l funq.q \l stopwords.q \l bible.q \l moby.q \l emma.q \l pandp.q \l sands.q \l mansfield.q \l northanger.q \l persuasion.q -1 "remove punctuation from text"; c:.ut.sr[.ut.pw] peach moby.s -1 "tokenize and remove stop words"; c:(except[;stopwords.xpo6] " " vs ) peach lower c -1 "user porter stemmer to stem each word"; c:(.porter.stem') peach c -1 "building a term document matrix from corpus and vocabulary"; m:.ml.tdm[c] v:asc distinct raze c -1 "building a vector space model (with different examples of tf-idf)"; -1 "vanilla tf-idf"; vsm:0f^.ml.tfidf[::;.ml.idf] m -1 "log normalized term frequency, inverse document frequency max"; vsm:0f^.ml.tfidf[.ml.lntf;.ml.idfm] m -1 "double normalized term frequency, probabilistic inverse document frequency"; vsm:0f^.ml.tfidf[.ml.dntf[.5];.ml.pidf] m -1 "display values of top words based on tf-idf"; show vsm@'idesc each vsm -1 "display top words based on tf-idf"; show v 5#/:idesc each vsm vsm:0f^.ml.tfidf[::;.ml.idf] m X:.ml.normalize vsm C:.ml.skmeans[X] .ml.forgy[30] X -1"using tfidf and nb to predict which jane austen book a chapter came from"; t:flip `text`class!(emma.s;`EM) / emma t,:flip `text`class!(pandp.s;`PP) / pride and prejudice t,:flip `text`class!(sands.s;`SS) / sense and sensibility t,:flip `text`class!(mansfield.s;`MP) / mansfield park t,:flip `text`class!(northanger.s;`NA) / northanger abbey t,:flip `text`class!(persuasion.s;`PE) / persuasion
// check that ordering parameter contains only symbols and is paired in the format // (direction;column). checkordering:{[dict;parameter] if[11h=type dict parameter;dict[parameter]:enlist dict parameter]; input:dict parameter; if[11h<>type raze input; '`$.schema.errors[`checkorderingpair;`errormessage],.schema.examples[`ordering1;`example]]; if[0<>count except[count each input;2]; '`$.schema.errors[`checkorderingarrangment;`errormessage],.schema.examples[`ordering1;`example]]; if[0<>count except[first each input;`asc`desc]; '`$.schema.errors[`checkorderingdirection;`errormessage],.schema.examples[`ordering1;`example]]; grouping:$[`grouping in key dict;(),dict`grouping;()]; timebar:$[`timebar in key dict;dict`timebar;()]; if[`aggregations in key dict; aggs:dict`aggregations; aggs:flip(key[aggs]where count each get aggs;raze aggs); names:{ if[count[x 1]in 1 2; :`$raze string[x 0],.[string (),x 1;(::;0);upper]] }'[aggs]; if[any raze {1<sum y=x}[last each aggs]'[last each input]; '`$.schema.errors[`orderingvague;`errormessage],.schema.examples[`ordering2;`example]]; if[any not in[last each input;names,grouping,timebar[2],last each aggs]; '`$.schema.errors[`orderingnocolumn;`errormessage]]]; if[in[`columns;key dict]; if[not enlist[`]~columns:(),dict`columns; if[any not l:last'[input]in columns; badorder:","sv string last'[input]where not l; '`$.checkinputs.formatstring[.schema.errors[`badorder;`errormessage];`$badorder]]]]; :dict;}; // check that the instrumentcol parameter is of type symbol checkinstrumentcolumn:{[dict;parameter]:checktype[-11h;dict;parameter];}; checkrenamecolumn:{[dict;parameter] dict:checktype[99 -11 11h;dict;parameter]; input:dict parameter; if[type[input]in -11 11h;:dict]; if[99h~type input; if[not (type key input)~11h; '`$.schema.errors[`renamekey;`errormessage],.schema.examples[`renamecolumn;`example]]; if[not (type raze input)~11h; '`$.schema.errors[`renameinput;`errormessage],.schema.examples[`renamecolumn;`example]]]; :dict;}; checkpostprocessing:{[dict;parameter] dict:checktype[100h;dict;parameter]; if[1<>count (get dict parameter)[1]; '`$.schema.errors[`postback;`errormessage]]; :dict;}; isstring:{[dict;parameter]:checktype[10h;dict;parameter];}; checktype:{[validtypes;dict;parameter] inputtype:type dict parameter; if[not any validtypes~\:inputtype;'`$.checkinputs.formatstring[.schema.errors[`checktype;`errormessage];`parameter`validtypes`inputtype!(parameter;validtypes;inputtype)]]; :dict; }; isboolean:{[dict;parameter]:checktype[-1h;dict;parameter];}; isnumb:{[dict;parameter]:checktype[-7h;dict;parameter]}; checkjoin:{[dict;parameter]:checktype[107h;dict;parameter];}; checkpostback:{[dict;parameter] if[()~dict parameter;:dict]; if[not `sync in key dict;'`$.schema.errors[`asyncpostback;`errormessage]] if[not dict`sync;'`$.schema.errors[`asyncpostback;`errormessage]] :checkpostprocessing[dict;parameter]}; checktimeout:{[dict;parameter] checktype[-16h;dict;parameter]; :dict}; ================================================================================ FILE: TorQ_code_common_compress.q SIZE: 10,803 characters ================================================================================ / Data Intellect ([email protected]) USAGE OF COMPRESSION: NOTE: Please use with caution. 
To SHOW a table of files to be compressed and how before execution, use: -with a specified csv driver file: .cmp.showcomp[`:/path/to/hdb;`:/path/to/csv; maxagefilestocompress] OR -with compressionconfig.csv file located in the config folder (TORQ/src/config/compressionconfig.csv): .cmp.showcomp[`:/path/to/hdb;.cmp.inputcsv; maxagefilestocompress] To then COMPRESS all files: .cmp.compressmaxage[`:/path/to/hdb;`:/path/to/csv; maxagefilestocompress] OR .cmp.compressmaxage[`:/path/to/hdb;.cmp.inputcsv; maxagefilestocompress] If you don't care about the maximum age of the files and just want to COMPRESS up to the oldest files in the db then use: .cmp.docompression[`:/path/to/hdb;`:/path/to/csv] OR .cmp.docompression[`:/path/to/hdb;.cmp.inputcsv] csv should have the following format: table,minage,column,calgo,cblocksize,clevel default,10,default, 2, 17,6 quotes, 10,time, 2, 17, 5 quotes,10,src,2,17,4 depth, 10,default, 1, 17, 8 -tables in the db but not in the config tab are automatically compressed using default params -tabs with cols specified will have other columns compressed with default (if default specified for cols of tab, all cols are comp in that tab) -algo 0 decompresses the file, or if not compressed ignores -config file could just be one row to compress everything older than age with the same params: table,minage,column,calgo,cblocksize,clevel default,10,default,2,17,6 The gzip algo (2) is not necessarily included on windows and unix systems. See: code.kx.com/wiki/Cookbook/FileCompression for more details For WINDOWS users: The minimum block size for compression on windows is 16. \ \d .cmp inputcsv:@[value;`inputcsv;first .proc.getconfigfile["compressionconfig.csv"]]; if[-11h=type inputcsv;inputcsv:string inputcsv]; checkcsv:{[csvtab] // include snappy (3) for version 3.4 or after allowedalgos:0 1 2,$[.z.K>=3.4;3;()]; if[0b~all colscheck:`table`minage`column`calgo`cblocksize`clevel in (cols csvtab); .lg.e[`compression;err:inputcsv," has incorrect column layout at column(s): ", (" " sv string where not colscheck), ". Should be `table`minage`column`calgo`cblocksize`clevel."];'err]; if[count checkalgo:exec i from csvtab where not calgo in allowedalgos; .lg.e[`compression; err:inputcsv, ": incorrect compression algo in row(s): ",(", " sv string -1_allowedalgos)," or ",(string last allowedalgos),"."];'err]; if[count checkblock:exec i from csvtab where calgo in 1 2, not cblocksize in 12 + til 9; .lg.e[`compression; err:inputcsv,": incorrect compression blocksize at row(s): ", (" " sv string checkblock), ". Should be between 12 and 19."];'err]; if[count checklevel: exec i from csvtab where calgo in 2, not clevel in til 10; .lg.e[`compression;err:inputcsv,": incorrect compression level at row(s): ", (" " sv string checklevel), ". Should be between 0 and 9."];'err]; if[.z.o like "w*"; if[count rowwin:where ((csvtab[`cblocksize] < 16) & csvtab[`calgo] > 0); .lg.e[`compression;err:inputcsv," :incorrect compression blocksize for windows at row: ", (" " sv string rowwin), ". 
Must be more than or equal to 16."];'err]]; if[(any nulls: any null (csvtab[`column];csvtab[`table];csvtab[`minage];csvtab[`clevel]))>0; .lg.e[`compression;err:inputcsv," has empty cells in column(s): ", (" " sv string `column`table`minage`clevel where nulls)];'err];} loadcsv:{[inputcsv] compressioncsv::@[{.lg.o[`compression;"Opening ", x];("SISIII"; enlist ",") 0:"S"$x}; (string inputcsv); {.lg.e[`compression;"failed to open ", (x)," : ",y];'y}[string inputcsv]]; checkcsv[compressioncsv];} traverse:{$[(0=count k)or x~k:key x; x; .z.s each ` sv' x,/:k where not any k like/:(".d";"*.q";"*.k";"*#")]} hdbstructure:{ t:([]fullpath:(raze/)traverse x); // orig traverse // calculate the length of the input path base:count "/" vs string x; // split out the full path t:update splitcount:count each split from update split:"/" vs' string fullpath,column:`,table:`,partition:(count t)#enlist"" from t; // partitioned tables t:update partition:split[;base],table:`$split[;base+1],column:`$split[;base+2] from t where splitcount=base+3; // splayed t:update table:`$split[;base],column:`$split[;base+1] from t where splitcount=base+2; // cast the partition type t:update partition:{$[not all null r:"D"$'x;r;not all null r:"M"$'x;r;"I"$'x]}[partition] from t; /- work out the age of each partition $[14h=type t`partition; t:update age:.z.D - partition from t; 13h=type t`partition; t:update age:(`month$.z.D) - partition from t; // otherwise it is ints. If all the values are within 1000 and 3000 // then assume it is years t:update age:{$[all x within 1000 3000; x - `year$.z.D;(count x)#0Ni]}[partition] from t]; delete splitcount,split from t} showcomp:{[hdbpath;csvpath;maxage] /-load csv loadcsv[$[10h = type csvpath;hsym `$csvpath;hsym csvpath]]; .lg.o[`compression;"scanning hdb directory structure"]; /-build paths table and fill age $[count key (` sv hdbpath,`$"par.txt"); pathstab:update 0W^age from (,/) hdbstructure'[hsym each `$(read0 ` sv hdbpath,`$"par.txt")]; pathstab:update 0W^age from hdbstructure[hsym hdbpath]]; /-delete anything which isn't a table pathstab:delete from pathstab where table in `; /-tables that are in the hdb but not specified in the csv - compress with `default params comptab:2!delete minage from update compressage:minage from compressioncsv; /-specified columns and tables a:select from comptab where not table=`default, not column=`default; /-default columns, specified tables b:select from comptab where not table=`default,column=`default; /-defaults c:select from comptab where table = `default, column =`default; /-join on defaults to entire table t: pathstab,'(count pathstab)#value c; /- join on for specified tables t: t lj 1!delete column from b; /- join on table and column specified information t: t lj a; /- in case of no default specified, delete from the table where no data is joined on t: delete from t where calgo=0Nj,cblocksize=0Nj,clevel=0Nj; .lg.o[`compression;"getting current size of each file up to a maximum age of ",string maxage]; update currentsize:hcount each fullpath from select from t where age within (compressage;maxage) } compressfromtable:{[table] statstab::([] file:`$(); algo:`int$(); compressedLength:`long$();uncompressedLength:`long$()); {compress[x `fullpath;x `calgo;x `cblocksize;x `clevel; x `currentsize]} each table;} /- call the compression with a max age paramter implemented compressmaxage:{[hdbpath;csvpath;maxage] compressfromtable[showcomp[hdbpath;csvpath;maxage]]; summarystats[]; } docompression:compressmaxage[;;0W];
// @kind function // @category main // @subcategory set // // @overview // Save parameter information for a model // // @param folderPath {dict|string|null} Registry location, can be: // 1. A dictionary containing the vendor and location as a string, e.g. // ```enlist[`local]!enlist"myReg"``` or // ```enlist[`aws]!enlist"s3://ml-reg-test"``` etc; // 2. A string indicating the local path; // 3. A generic null to use the current .ml.registry.location pulled from CLI/JSON. // @param experimentName {string|null} The name of an experiment from which // to retrieve a model, if no modelName is provided the newest model // within this experiment will be used. If neither modelName or // experimentName are defined the newest model within the // "unnamedExperiments" section is chosen // @param modelName {string|null} The name of the model to be retrieved // in the case this is null, the newest model associated with the // experiment is retrieved // @param version {long[]|null} The specific version of a named model to retrieve // in the case that this is null the newest model is retrieved (major;minor) // @param paramName {string|symbol} The name of the parameter to be saved // @param params {dict|table|string} The parameters to save to file // // @return {null} registry.set.parameters:{[folderPath;experimentName;modelName;version;paramName;params] config:registry.util.check.config[folderPath;()!()]; if[not`local~storage:config`storage;storage:`cloud]; paramName:$[-11h=type paramName; string paramName; 10h=type paramName; paramName; logging.error"ParamName must be of type string or symbol" ]; setParams:(experimentName;modelName;version;paramName;params;config); registry[storage;`set;`parameters]. setParams } // @kind function // @category main // @subcategory set // // @overview // Upsert relevant data from current run to metric table // // @param metricName {string} The name of the metric to be persisted // @param metricValue {float} The value of the metric to be persistd // @param metricPath {string} The path to the metric table // // @return {null} registry.set.modelMetric:{[metricName;metricValue;metricPath] enlistCols:`timestamp`metricName`metricValue; metricDict:enlistCols!(.z.P;metricName;metricValue); metricPath:hsym`$metricPath,"metric"; metricPath upsert metricDict; } ================================================================================ FILE: ml_ml_registry_q_main_update.q SIZE: 16,307 characters ================================================================================ // update.q - Main callable functions for retrospectively adding information // to the model registry // Copyright (c) 2021 Kx Systems Inc // // @overview // Update information within the registry // // @category Model-Registry // @subcategory Functionality // // @end \d .ml // @kind function // @category main // @subcategory update // // @overview // Update the config of a model that's already saved // // @param folderPath {dict|string|null} Registry location, can be: // 1. A dictionary containing the vendor and location as a string, e.g. // ```enlist[`local]!enlist"myReg"``` or // ```enlist[`aws]!enlist"s3://ml-reg-test"``` etc; // 2. A string indicating the local path; // 3. A generic null to use the current .ml.registry.location pulled from CLI/JSON. // @param model {any} `(<|dict|fn|proj)` The model to be saved to the registry. 
// @param modelName {string} The name to be associated with the model // @param modelType {string} The type of model that is being saved, namely // "q"|"sklearn"|"keras"|"python" // @param config {dict} Any additional configuration needed for // setting the model // // @return {null} registry.update.config:{[folderPath;experimentName;modelName;version;config] config:registry.util.update.checkPrep[folderPath;experimentName;modelName;version;config]; modelType:first config`modelType; config:registry.config.model,config; modelPath:registry.util.path.modelFolder[config`registryPath;config;`model]; model:registry.get[`$modelType]modelPath; registry.util.set.requirements config; if[`data in key config; registry.set.monitorConfig[model;modelType;config`data;config] ]; if[`supervise in key config; registry.set.superviseConfig[config] ]; if[`local<>config`storage;registry.cloud.update.publish config]; } // @kind function // @category main // @subcategory update // // @overview // Update the requirement details of a saved model // // @param folderPath {dict|string|null} Registry location, can be: // 1. A dictionary containing the vendor and location as a string, e.g. // ```enlist[`local]!enlist"myReg"``` or // ```enlist[`aws]!enlist"s3://ml-reg-test"``` etc; // 2. A string indicating the local path; // 3. A generic null to use the current .ml.registry.location pulled from CLI/JSON. // @param experimentName {string|null} The name of an experiment from which // to retrieve a model, if no modelName is provided the newest model // within this experiment will be used. If neither modelName or // experimentName are defined the newest model within the // "unnamedExperiments" section is chosen // @param modelName {string|null} The name of the model to be retrieved // in the case this is null, the newest model associated with the // experiment is retrieved // @param version {long[]|null} The specific version of a named model to retrieve // in the case that this is null the newest model is retrieved (major;minor) // @param requirements {string[][];hsym;boolean} The location of a saved // requirements file, list of user specified requirements or a boolean // indicating if the virtual environment of a user is to be 'frozen' // // @return {null} registry.update.requirements:{[folderPath;experimentName;modelName;version;requirements] config:registry.util.update.checkPrep[folderPath;experimentName;modelName;version;()!()]; config[`requirements]:requirements; registry.util.set.requirements config; if[`local<>config`storage;registry.cloud.update.publish config]; } // @kind function // @category main // @subcategory update // // @overview // Update the latency details of a saved model // // @param folderPath {dict|string|null} Registry location, can be: // 1. A dictionary containing the vendor and location as a string, e.g. // ```enlist[`local]!enlist"myReg"``` or // ```enlist[`aws]!enlist"s3://ml-reg-test"``` etc; // 2. A string indicating the local path; // 3. A generic null to use the current .ml.registry.location pulled from CLI/JSON. // @param experimentName {string|null} The name of an experiment from which // to retrieve a model, if no modelName is provided the newest model // within this experiment will be used. 
If neither modelName or // experimentName are defined the newest model within the // "unnamedExperiments" section is chosen // @param modelName {string|null} The name of the model to be retrieved // in the case this is null, the newest model associated with the // experiment is retrieved // @param version {long[]|null} The specific version of a named model to retrieve // in the case that this is null the newest model is retrieved (major;minor) // @param model {fn} The model on which the latency is to be evaluated // @param data {table} Data on which to evaluate the model // // @return {null} registry.update.latency:{[folderPath;experimentName;modelName;version;model;data] config:registry.util.update.checkPrep[folderPath;experimentName;modelName;version;()!()]; fpath:hsym `$config[`versionPath],"/config/modelInfo.json"; mlops.update.latency[fpath;model;data]; if[`local<>config`storage;registry.cloud.update.publish config]; } // @kind function // @category main // @subcategory update // // @overview // Update the null replacement details of a saved model // // @param folderPath {dict|string|null} Registry location, can be: // 1. A dictionary containing the vendor and location as a string, e.g. // ```enlist[`local]!enlist"myReg"``` or // ```enlist[`aws]!enlist"s3://ml-reg-test"``` etc; // 2. A string indicating the local path; // 3. A generic null to use the current .ml.registry.location pulled from CLI/JSON. // @param experimentName {string|null} The name of an experiment from which // to retrieve a model, if no modelName is provided the newest model // within this experiment will be used. If neither modelName or // experimentName are defined the newest model within the // "unnamedExperiments" section is chosen // @param modelName {string|null} The name of the model to be retrieved // in the case this is null, the newest model associated with the // experiment is retrieved // @param version {long[]|null} The specific version of a named model to retrieve // in the case that this is null the newest model is retrieved (major;minor) // @param data {table} Data on which to determine the null replacement // // @return {null} registry.update.nulls:{[folderPath;experimentName;modelName;version;data] config:registry.util.update.checkPrep[folderPath;experimentName;modelName;version;()!()]; fpath:hsym `$config[`versionPath],"/config/modelInfo.json"; mlops.update.nulls[fpath;data]; if[`local<>config`storage;registry.cloud.update.publish config]; } // @kind function // @category main // @subcategory update // // @overview // Update the infinity replacement details of a saved model // // @param folderPath {dict|string|null} Registry location, can be: // 1. A dictionary containing the vendor and location as a string, e.g. // ```enlist[`local]!enlist"myReg"``` or // ```enlist[`aws]!enlist"s3://ml-reg-test"``` etc; // 2. A string indicating the local path; // 3. A generic null to use the current .ml.registry.location pulled from CLI/JSON. // @param experimentName {string|null} The name of an experiment from which // to retrieve a model, if no modelName is provided the newest model // within this experiment will be used. 
If neither modelName or // experimentName are defined the newest model within the // "unnamedExperiments" section is chosen // @param modelName {string|null} The name of the model to be retrieved // in the case this is null, the newest model associated with the // experiment is retrieved // @param version {long[]|null} The specific version of a named model to retrieve // in the case that this is null the newest model is retrieved (major;minor) // @param data {table} Data on which to determine the infinity replacement // // @return {null} registry.update.infinity:{[folderPath;experimentName;modelName;version;data] config:registry.util.update.checkPrep[folderPath;experimentName;modelName;version;()!()]; fpath:hsym `$config[`versionPath],"/config/modelInfo.json"; mlops.update.infinity[fpath;data]; if[`local<>config`storage;registry.cloud.update.publish config]; } // @kind function // @category main // @subcategory update // // @overview // Update the csi details of a saved model // // @param folderPath {dict|string|null} Registry location, can be: // 1. A dictionary containing the vendor and location as a string, e.g. // ```enlist[`local]!enlist"myReg"``` or // ```enlist[`aws]!enlist"s3://ml-reg-test"``` etc; // 2. A string indicating the local path; // 3. A generic null to use the current .ml.registry.location pulled from CLI/JSON. // @param experimentName {string|null} The name of an experiment from which // to retrieve a model, if no modelName is provided the newest model // within this experiment will be used. If neither modelName or // experimentName are defined the newest model within the // "unnamedExperiments" section is chosen // @param modelName {string|null} The name of the model to be retrieved // in the case this is null, the newest model associated with the // experiment is retrieved // @param version {long[]|null} The specific version of a named model to retrieve // in the case that this is null the newest model is retrieved (major;minor) // @param data {table} Data on which to determine historical distribution of the // features // // @return {null} registry.update.csi:{[folderPath;experimentName;modelName;version;data] config:registry.util.update.checkPrep[folderPath;experimentName;modelName;version;()!()]; fpath:hsym `$config[`versionPath],"/config/modelInfo.json"; .ml.mlops.update.csi[fpath;data]; if[`local<>config`storage;registry.cloud.update.publish config]; }
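// Example (illustrative only, not from the source): typical invocations of the
// update functions defined above. This assumes the registry code is loaded under
// the .ml namespace (as suggested by the references to .ml.registry.location),
// that nulls are passed as generic null (::), and that the registry path, model
// name and data below are hypothetical placeholders.
//
//   folderPath:enlist[`local]!enlist"myReg"          / local registry location
//   newData:([]x:100?1f;x1:100?1f)                   / sample table for statistics
//   / freeze the user's virtual environment as the model's requirements
//   .ml.registry.update.requirements[folderPath;::;"myModel";1 0;1b]
//   / refresh null- and infinity-replacement statistics from new data
//   .ml.registry.update.nulls[folderPath;::;"myModel";1 0;newData]
//   .ml.registry.update.infinity[folderPath;::;"myModel";1 0;newData]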
QSQL query templates¶ delete delete rows or columns from a table exec return columns from a table, possibly with new columns select return part of a table, possibly with new columns update add rows or columns to a table The query templates of qSQL share a query syntax that varies from the syntax of q and closely resembles conventional SQL. For many use cases involving ordered data it is significantly more expressive. Template syntax¶ Below, square brackets mark optional elements; a slash begins a trailing comment. select [Lexp] [ps] [by pb] from texp [where pw] exec [distinct] [ps] [by pb] from texp [where pw] update ps [by pb] from texp [where pw] delete from texp [where pw] / rows delete ps from texp / columns A template is evaluated in the following order. From phrase texp Where phrase pw By phrase pb Select phrase ps Limit expression Lexp From phrase¶ The From phrase from texp is required in all query templates. The table expression texp is - a table or dictionary (call-by-value) - the name of a table or dictionary, in memory or on disk, as a symbol atom (call-by-name) Examples: update c:b*2 from ([]a:1 2;b:3 4) / call by value select a,b from t / call by value select a,b from `t / call by name update c:b*2 from `:path/to/db / call by name Limit expressions¶ Limit expressions restrict the results returned by select or exec . (For exec there is only one: distinct ). They are described in the articles for select and exec . Result and side effects¶ In a select query, the result is a table or dictionary. In an exec query the result is a list of column values, or dictionary. In an update or delete query, where the table expression is a call - by value, the query returns the modified table or a dictionary - by name, the table or dictionary is amended in place (in memory or on disk) as a side effect, and its name returned as the result q)t1:t2:([]a:1 2;b:3 4) q)update a:neg a from t1 a b ---- -1 3 -2 4 q)t1~t2 / t1 unchanged 1b q)update a:neg a from `t1 `t1 q)t1~t2 / t1 changed 0b Phrases and subphrases¶ ps, pb, and pw are respectively the Select, By, and Where phrases. Each phrase is a comma-separated list of subphrases. A subphrase is a q expression in which names are resolved with respect to texp and any table/s linked by foreign keys. Subphrases are evaluated in order from the left, but each subphrase expression is evaluated right-to-left in normal q syntax. To use the Join operator within a subphrase, parenthesize the subphrase. q)select (id,'4),val from tbl x val ------- 1 4 100 1 4 200 2 4 300 2 4 400 2 4 500 Names in subphrases¶ A name in a subphrase is resolved (in order) as the name of - column or key name - local name in (or argument of) the encapsulating function - global name in the current working namespace – not necessarily the space in which the function was defined Dot notation allows you to refer to foreign keys. Suppliers and parts database sp.q q)\l sp.q +`p`city!(`p$`p1`p2`p3`p4`p5`p6`p1`p2;`london`london`london`london`london`lon.. (`s#+(,`color)!,`s#`blue`green`red)!+(,`qty)!,900 1000 1200 +`s`p`qty!(`s$`s1`s1`s1`s2`s3`s4;`p$`p1`p4`p6`p2`p2`p4;300 200 100 400 200 300) q)select sname:s.name, qty from sp sname qty --------- smith 300 smith 200 smith 400 smith 200 clark 100 smith 100 jones 300 jones 400 blake 200 clark 200 clark 300 smith 400 You can refer explicitly to namespaces. select (\`. 
\`toplevel) x from t Duplicate names for columns or groups select auto-aliases colliding duplicate column names for either select az,a from t , or select a by c,c from t , but not for select a,a by a from t . Such a collision throws a 'dup names for cols/groups a error during parse, indicating the first column name which collides. (Since V4.0 2020.03.17.) q)parse"select b by b from t" 'dup names for cols/groups b [2] select b by b from t ^ The easiest way to resolve this conflict is to explicitly rename columns. e.g. select a,b by c:a from t . When compiling functions, the implicit args x , y , z are visible to the compiler only when they are not inside the Select, By, and Where phrases. The table expression is not masked. This can be observed by taking the value of the function and observing the second item: the args. q)args:{(value x)1} q)args{} / no explicit args, so x is a default implicit arg of identity (::) ,`x q)/from phrase is not masked, y is detected as an implicit arg here q)args{select from y where a=x,b=z} `x`y q)args{[x;y;z]select from y where a=x,b=z} / x,y,z are now explicit args `x`y`z q)/call with wrong number of args results in rank error q){select from ([]a:0 1;b:2 3) where a=x,b=y}[0;2] 'rank [0] {select from ([]a:0 1;b:2 3) where a=x,b=y}[0;2] ^ q)/works with explicit args q){[x;y]select from ([]a:0 1;b:2 3) where a=x,b=y}[0;2] a b --- 0 2 Computed columns¶ In a subphrase, a q expression computes a new column or key, and a colon names it. q)t:([] c1:`a`b`c; c2:10 20 30; c3:1.1 2.2 3.3) q)select c1, c3*2 from t c1 c3 ------ a 2.2 b 4.4 c 6.6 q)select c1, dbl:c3*2 from t c1 dbl ------ a 2.2 b 4.4 c 6.6 In the context of a query, the colon names a result column or key. It does not assign a variable in the workspace. If a computed column or key is not named, q names it if possible as the leftmost term in the column expression, else as x . If a computed name is already in use, q suffixes it with 1 , 2 , and so on as needed to make it unique. q)select c1, c1, 2*c2, c2+c3, string c3 from t c1 c11 x c2 c3 -------------------- a a 20 11.1 "1.1" b b 40 22.2 "2.2" c c 60 33.3 "3.3" Virtual column i ¶ A virtual column i represents the index of each record, i.e., the row number. Partitioned tables In a partitioned table i is the index (row number) relative to the partition, not the whole table. Because it is implicit in every table, it never appears as a column or key name in the result. q)select i, c1 from t x c1 ---- 0 a 1 b 2 c q)select from t where i in 0 2 c1 c2 c3 --------- a 10 1.1 c 30 3.3 Where phrase¶ The Where phrase with a boolean list selects records. q)select from t where 101b c1 c2 c3 --------- a 10 1.1 c 30 3.3 Subphrases specify successive filters. q)select from t where c2>15,c3<3.0 c1 c2 c3 --------- b 20 2.2 q)select from t where (c2>15) and c3<3.0 c1 c2 c3 --------- b 20 2.2 The examples above return the same result but have different performance characteristics. In the second example, all c2 values are compared to 15, and all c3 values are compared to 3.0. The two result vectors are ANDed together. In the first example, only c3 values corresponding to c2 values greater than 15 are tested. Efficient Where phrases start with their most stringent tests. Querying a partitioned table When querying a partitioned table, the first Where subphrase should select from the value/s used to partition the table. Otherwise, kdb+ will (attempt to) load into memory all partitions for the column/s in the first subphrase. Use fby to filter on groups. 
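For example, fby computes an aggregate for each group and compares every row against its group's value, avoiding a separate grouped query. A minimal sketch on a hypothetical trade table (table and column names here are illustrative, not from this page):

q)trade:([]sym:`a`a`b`b`b;price:1 3 2 5 4f;size:10 20 30 40 50)
q)select from trade where price=(max;price) fby sym   / rows holding each sym's maximum price
sym price size
--------------
a   3     20
b   5     40
q)select from trade where size>(avg;size) fby sym     / rows above their sym's average size
sym price size
--------------
a   3     20
b   4     50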
Aggregates¶ In SQL: SELECT stock, SUM(amount) AS total FROM trade GROUP BY stock In q: q)select total:sum amt by stock from trade stock| total -----| ----- bac | 1000 ibm | 2000 usb | 815 The column stock is a key in the result table. Mathematics for more aggregate functions Sorting¶ Unlike SQL, the query templates make no provision for sorting. Instead use xasc and xdesc to sort the query results. As the sorts are stable, they can be combined for mixed sorts. q)sp s p qty --------- s1 p1 300 s1 p2 200 s1 p3 400 s1 p4 200 s4 p5 100 s1 p6 100 s2 p1 300 s2 p2 400 s3 p2 200 s4 p2 200 s4 p4 300 s1 p5 400 q)`p xasc `qty xdesc select from sp where p in `p2`p4`p5 s p qty --------- s2 p2 400 s1 p2 200 s3 p2 200 s4 p2 200 s4 p4 300 s1 p4 200 s1 p5 400 s4 p5 100 Performance¶ - Select only the columns you will use. - Use the most restrictive constraint first. - Ensure you have a suitable attribute on the first non-virtual constraint (e.g. `p or`g on sym). - Constraints should have the unmodified column name on the left of the constraint operator (e.g. where sym in syms,…) - When aggregating, use the virtual field first in the By phrase. (E.g. select .. by date,sym from … ) Tip …where `g=,`s within … Maybe rare to get much speedup, but if the `g goes to 100,000 and then `s is 1 hour of 24 you might see some overall improvement (with overall table of 30 million). Q for Mortals §14.3.6 Query Execution on Partitioned Tables Multithreading¶ The following pattern will make use of secondary threads via peach select … by sym, … from t where sym in …, … when sym has a `g or `p attribute. (Since V3.2 2014.05.02) It uses peach for both in-memory and on-disk tables. For single-threaded, this is approx 6× faster in memory, 2× faster on disk, and uses less memory than previous releases – but mileage will vary. This is also applicable for partitioned DBs as select … by sym, … from t where date …, sym in …, … Table counts in a partitioned database Special functions¶ The following functions (essentially .Q.a0 in q.k ) receive special treatment within select : avg first prd cor last sum count max var cov med wavg dev min wsum When used explicitly, such that it can recognize the usage, q will perform additional steps, such as enlisting results or aggregating across partitions. However, when wrapped inside another function, q does not know that it needs to perform these additional steps, and it is then left to the programmer to insert them. q)select sum a from ([]a:1 2 3) a - 6 q)select {(),sum x}a from ([]a:1 2 3) a - 6 Cond¶ Cond is not supported inside qSQL expressions. q)u:([]a:raze ("ref/";"kb/"),\:/:"abc"; b:til 6) q)select from u where a like $[1b;"ref/*";"kb/*"] 'rank [0] select from u where a like $[1b;"ref/*";"kb/*"] ^ Enclose in a lambda q)select from u where a like {$[x;"ref/*";"kb/*"]}1b a b --------- "ref/a" 0 "ref/b" 2 "ref/c" 4 or use the Vector Conditional instead. Functional SQL¶ The interpreter translates the query templates into functional SQL for evaluation. The functional forms are more general, and some complex queries require their use. But the query templates are powerful, readable, and there is no performance penalty for using them. Wherever possible, prefer the query templates to functional forms. Stored procedures¶ Any suitable lambda can be used in a query. q)f:{[x] x+42} q)select stock, f amount from trade stock amount ------------ ibm 542 ... Parameterized queries¶ Query template expressions can be evaluated in lambdas. 
q)myquery:{[tbl; amt] select stock, time from tbl where amount > amt}
q)myquery[trade; 100]
stock time
------------------
ibm   09:04:59.000
...
Column names cannot be parameters of a qSQL query. Use functional qSQL in such cases.
fby, insert, upsert, Functional SQL, Views
Q for Mortals §9.0 Queries: q-sql
Q for Mortals §9.9.10 Parameterized Queries
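As noted above, column names cannot be parameters of a qSQL template, but the functional forms accept them as symbols. A minimal sketch of that approach (the table, column and function names here are illustrative):

q)t:([]stock:`ibm`msft;amount:500 50;time:09:04:59.000 09:05:01.000)
q)getCol:{[tbl;col]?[tbl;();0b;(enlist col)!enlist col]}      / column name passed as a symbol
q)getCol[t;`amount]
amount
------
500
50
q)filterCol:{[tbl;col;v]?[tbl;enlist(>;col;v);0b;()]}         / parameterized Where constraint
q)filterCol[t;`amount;100]
stock amount time
-------------------------
ibm   500    09:04:59.000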
/ see e.g. https://unix.stackexchange.com/a/14727 for info about xat r:update {("i"$first x)%10}ver, {("i"$first x)%10}vrr, .finos.unzip.priv.flags!.finos.unzip.priv.parseBits flg, .finos.unzip.priv.parseNum cmp, {"v"$24 60 60 sv 1 1 2*2 sv'0 5 11 cut reverse .finos.unzip.priv.parseBits x}mtm, {.finos.util.ymd . 1980 0 0+2 sv'0 7 11 cut reverse .finos.unzip.priv.parseBits x}mdt, .finos.unzip.priv.parseNum csz, .finos.unzip.priv.parseNum usz, .finos.unzip.priv.parseNum nln, .finos.unzip.priv.parseNum xln, .finos.unzip.priv.parseNum cln, .finos.unzip.priv.parseNum dnu, .finos.unzip.priv.parseBits iat, .finos.unzip.priv.parseBits xat, .finos.unzip.priv.parseNum lof from z; r:update fnm:`$"c"$x y+til nln, xfd:x y+nln+til xln, cmt:"c"$x y+nln+xln+til cln from r; (r;exec y+nln+xln+cln from r)} // Parse ZIP64 end-of-central-directory locator record. // @param x bytes // @return ZIP64 end-of-central-directory locator record .finos.unzip.priv.pecl64:{ r:.finos.unzip.priv.split[.finos.unzip.priv.wecl64;0]x; r:![r;();0b;{y!x y}[{(.finos.unzip.priv.parseNum;x)}'](key r)except`sig`cmt]; r} // Parse ZIP64 end-of-central-directory record. // @param x bytes // @return ZIP64 end-of-central-directory record .finos.unzip.priv.pecd64:{ r:.finos.unzip.priv.split[update xds:(count x)-56 from .finos.unzip.priv.wecd64;0]x; r:![r;();0b;{y!x y}[{(.finos.unzip.priv.parseNum;x)}'](key r)except`sig`xds]; r} // Parse an extra field record. // @param x (bytes;extra) // @param y index // @param z header // @return (record;next index) // @see .finos.unzip.priv.parse .finos.unzip.priv.pxfd:{ / parse fixed-order, variable-length data / @param x ([]n;w;f) / @param y bytes / @return dict p:{{((count y)#x)@'y}[(x`n)!x`f](sums prev{(1+(sums y)?x)#y}[count y]x`w)cut y}; e:x 1; x:x 0; r:update reverse id, .finos.unzip.priv.parseNum sz from z; d:(r`sz)#y _x; c:count d; r,:$[ / ZIP64 extended information extra field 0x0001~r`id; [ k:.finos.util.table[`n`w`f]( `usz;8;.finos.unzip.priv.parseNum; `csz;8;.finos.unzip.priv.parseNum; `lof;8;.finos.unzip.priv.parseNum; `dnu;4;.finos.unzip.priv.parseNum; ); p[k]d]; / Unix 0x000d~r`id; [ / fixed fields followed by a variable field k:.finos.util.table[`n`w`f]( `atime; 4;.finos.unzip.priv.parseUnixTime; `mtime; 4;.finos.unzip.priv.parseUnixTime; `uid ; 2;.finos.unzip.priv.parseNum; `gid ; 2;.finos.unzip.priv.parseNum; `var ;c-12;"c"$; ); p[k]d]; / Xceed unicode extra field ("NU") / parsing notes: / appears to be either two shorts or a long, followed by a short of size, followed by UTF-16 text / but first one/two fields are unknown 0x554e~r`id; [ .finos.log.warning"Xceed unicode extra field: unimplemented extra field; skipping"; ()]; / extended timestamp ("UT") 0x5455~r`id; [ / parse and remove flag byte f:.finos.unzip.priv.flags_xfd_0x5455!.finos.unzip.priv.parseBits first d; d:1_d; / check field size matches flag byte if[c<>1+4*$[`fd=e`context;sum f;`cd=e`context;f`mtime;'`domain]; '`parse; ]; k:.finos.util.table[`n`w`f]( `mtime;4;.finos.unzip.priv.parseUnixTime; `atime;4;.finos.unzip.priv.parseUnixTime; `ctime;4;.finos.unzip.priv.parseUnixTime; ); ((enlist`flg)!enlist f),p[k]d]; / Info-ZIP Unicode Path ("up", UPath) 0x7075~r`id; [ k:.finos.util.table[`n`w`f]( `ver; 1;.finos.unzip.priv.parseNum; `crc; 4;reverse; `unm;c-5;"c"$; ); p[k]d]; / Info-ZIP Unix (previous new) ("Ux") 0x7855~r`id; $[ not c; / central-header version (no data) (); [ k:.finos.util.table[`n`w`f]( `uid;2;.finos.unzip.priv.parseNum; `gid;2;.finos.unzip.priv.parseNum; ); p[k]d]]; / Info-ZIP Unix (new) ("ux") 0x7875~r`id; [ / 
check version if[1<>.finos.unzip.priv.parseNum 1#d; '`nyi; ]; / check field size is consistent with data if[c<>3+last{r:x 1;x:x 0;s:first x;((1+s)_x;r+s)}over(1_d;0); '`parse; ]; / pairs of size and data fields ((enlist`ver)!enlist .finos.unzip.priv.parseNum 1#d),.finos.unzip.priv.parseNum each`uid`gid!last{r:x 1;x:x 0;s:first x;x:1_x;$[s;(s _ x;r,enlist s#x);(x;r)]}over(1_d;())]; [ .finos.log.warning(-3!r`id),": unimplemented extra field id; skipping"; ()]]; (r;exec y+sz from r)} // Apply extra field. // Parse xfd into records and apply to parent record as appropriate. // Currently, extra is used for context information, so that fields that // differ in the central directory and local file header (e.g. 0x5455, // extended timestamp ("UT")) can be parsed properly. // @param x extra // @param y record containing xln and xfd fields // @return record with xfd parsed and other fields modified accordingly .finos.unzip.priv.axfd:{ .finos.log.debug"applying extra field"; r:y; r:$[ r`xln; [ / parse extra field r:update{x[;`id]!x}.finos.unzip.priv.parse[(.finos.unzip.priv.pxfd;.finos.unzip.priv.wxfd;x);xfd;count xfd]from r; / if ZIP64 record, upsert r,:exec{$[not any i:0x0001~/:x[;`id];();1=sum i;2_x first where i;'`parse]}xfd from r; / if UPath record, validate and upsert if[0x7075 in key r`xfd; r:$[ (.finos.util.crc32[0]string r`fnm)~0x00 sv r[`xfd;0x7075]`crc; r,(enlist`fnm)!enlist`$r[`xfd;0x7075]`unm; [ .finos.log.warning"invalid unicode path record; skipping"; r]]; ]; / ignore any other extra fields for now r]; r]; .finos.log.debug"done applying extra field"; r} // Parse a file data record. // @param x (bytes;extra) // @param y index // @param z header // @return (record;next index) // @see .finos.unzip.priv.parse .finos.unzip.priv.pfd:{ e:x 1; x:x 0; r:update {("i"$first x)%10}ver, first os, .finos.unzip.priv.flags!.finos.unzip.priv.parseBits flg, .finos.unzip.priv.parseNum cmp, {"v"$24 60 60 sv 1 1 2*2 sv'0 5 11 cut reverse .finos.unzip.priv.parseBits x}mtm, {.finos.util.ymd . 1980 0 0+2 sv'0 7 11 cut reverse .finos.unzip.priv.parseBits x}mdt, .finos.unzip.priv.parseNum nln, .finos.unzip.priv.parseNum xln from z; if[r[`flg]`data_descriptor; / data descriptor r,:`crc`csz`usz!4 cut -12#x; ]; r:update .finos.unzip.priv.parseNum csz, .finos.unzip.priv.parseNum usz from r; r:update fnm:`$"c"$x y+til nln from r; r:update xfd:x y+nln+til xln from r; if[(not r`xln)&any -1=r`csz`usz; '`parse; ]; r:.finos.unzip.priv.axfd[(enlist`context)!enlist`fd]r; .finos.log.debug"extracting data"," ",string .z.P; r:update fdt:x y+nln+xln+til csz-12*flg`encrypted_file, enc:x y+nln+xln+til 12*flg`encrypted_file, dtd:{$[y;x z+til 4*3+0x504b0708~x z+til 4;0#x]}[x;flg`data_descriptor]y+nln+xln+csz from r; .finos.log.debug"done extracting data"," ",string .z.P; / TODO can this filter be applied any earlier? r:$[ (e~(::))|(r`fnm)in e; [ .finos.log.info"inflating ",string r`fnm; $[ / no compression: copy 0=r`cmp;update fdu:"c"$fdt from r; / deflate: reframe as gzip stream and inflate 8=r`cmp;update fdu:"c"$(.Q.gz 0x1f8b0800000000000003,fdt,crc,4#reverse 0x00 vs usz mod prd 32#2)from r; '`nyi]]; update fdu:""from r]; (r;exec y+nln+xln+csz+count dtd from r)}
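// Illustrative sketch (not part of this library): the mtm/mdt updates above unpack
// MS-DOS time and date fields from their bit groups. The same arithmetic applied to
// a plain 16-bit integer value, with hypothetical helper names, looks roughly like:
//   / bits 15-11 hours, 10-5 minutes, 4-0 two-second units
//   dosTime:{"v"$24 60 60 sv(x div 2048;(x div 32)mod 64;2*x mod 32)}
//   dosTime 41067                    / 20*2048 + 3*32 + 11 -> 20:03:22
//   / bits 15-9 years since 1980, 8-5 month, 4-0 day
//   dosDate:{`year`month`day!(1980+x div 512;(x div 32)mod 16;x mod 32)}
//   dosDate 21263                    / 41*512 + 8*32 + 15 -> 2021 8 15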
if[any[null h`w]|any null r[;1]; .lg.e[`runcheck;"unable to compare as process down or missing handle"]; .dqe.updresultstab[runtype;idnum;0Np;0b;"error:unable to compare as process down or missing handle";`failed;params;params`compresproc]; :()]; ] /- check if any handles exist, if not exit function if[0=count h;.lg.e[`runcheck;"cannot open handle to any given processes"];:()]; .dqe.getresult[runtype;value fn;(),params;idnum]'[h[`procname];h[`w]] } results:([]id:`long$();funct:`$();params:`$();procs:`$();procschk:`$();starttime:`timestamp$();endtime:`timestamp$();result:`boolean$();descp:();chkstatus:`$();chkruntype:`$()); loadtimer:{[DICT] .lg.o[`dqc;("Loading check - ",(string DICT[`action])," from configtable into timer table")]; /- Accounting for potential multiple parameters DICT[`params]: value DICT[`params]; DICT[`proc]: value DICT[`proc]; /- function that will be used in timer functiontorun:(`.dqe.runcheck;`scheduled;DICT`checkid;.Q.dd[`.dqc;DICT`action];DICT`params;DICT`proc); /- Determine whether the check should be repeated $[DICT[`mode]=`repeat; .timer.repeat[DICT`starttime;DICT`endtime;DICT`period;functiontorun;"Running check on ",string DICT`proc]; .timer.once[DICT`starttime;functiontorun;"Running check once on ",string DICT`proc]] } /- rerun a check manually reruncheck:{[chkid] .lg.o[`dqc;"rerunning check ",string chkid]; d:exec action, params, proc from .dqe.configtable where checkid=chkid; .lg.o[`dqc;"re-running check ",(string d`action)," manually"]; d[`params]:value d[`params] 0; d[`proc]:value raze d`proc; /- input man argument is `manual or `scheduled indicating manul run is on or off .dqe.runcheck[`manual;chkid;.Q.dd[`.dqc;d`action];d`params;d`proc]; } \d . .dqe.currentpartition:.dqe.getpartition[]; /- setting up .u.end for dqc .u.end:{[pt] .lg.o[`end; "Starting dqc end of day process."]; /- save down results and config tables {.dqe.endofday[.dqe.dqcdbdir;.dqe.getpartition[];x;`.dqe;.dqe.tosavedown[` sv(`.dqe;x)]]}each`results`configtable; /- get handles for DBs that need to reload hdbs:distinct raze exec w from .servers.SERVERS where proctype=`dqcdb; /- check list of handles to DQCDBs is non-empty, we need at least one to /- notify DQCDB to reload if[0=count hdbs;.lg.e[`.u.end; "No handles open to the DQCDB, cannot notify DQCDB to reload."]]; /- send message for DBs to reload .dqe.notifyhdb[.os.pth .dqe.dqcdbdir]'[hdbs]; /- clear check function timers .timer.removefunc'[exec funcparam from .timer.timer where `.dqe.runcheck in' funcparam]; /- clear writedown timer .timer.removefunc'[exec funcparam from .timer.timer where `.dqe.writedown in' funcparam]; /- clear writedownconfig timer .timer.removefunc'[exec funcparam from .timer.timer where `.dqe.writedownconfig in' funcparam]; /- clear .u.end timer .timer.removefunc'[exec funcparam from .timer.timer where `.u.end in' funcparam]; delete configtable from `.dqe; /- sets currentpartition to fit the partitiontype provided in settings .dqe.currentpartition:(`date^.dqe.partitiontype)$(.z.D,.z.d).dqe.utctime; /- sets .eodtime.nextroll to the next day so .u.end would run at the correct time .eodtime.nextroll:.eodtime.getroll[`timestamp$(.z.D,.z.d).dqe.utctime]; if[.dqe.utctime=1b;.eodtime.nextroll:.eodtime.getroll[`timestamp$.dqe.currentpartition]+(.z.T-.z.t)]; .lg.o[`dqc;"Moving .eodtime.nextroll to match current partition"]; .lg.o[`dqc;".eodtime.nextroll set to ",string .eodtime.nextroll]; .dqe.init[]; .lg.o[`end; "Finished dqc end of day process."] }; if[not .dqe.testing; .lg.o[`dqc;"Initializing dqc for the first time"]; 
.dqe.init[]; ]; ================================================================================ FILE: TorQ_code_processes_dqe.q SIZE: 9,036 characters ================================================================================ / - default parameters \d .dqe dqedbdir:@[value;`dqedbdir;`:dqedb]; // location of dqedb database utctime:@[value;`utctime;1b]; // define whether the process is on UTC time or not partitiontype:@[value;`partitiontype;`date]; // set type of partition (defaults to `date) hdbtypes:@[value;`hdbtypes;enlist`hdb]; // hdb types for use in saving getpartition:@[value;`getpartition; // determines the partition value {{@[value;`.dqe.currentpartition; (`date^partitiontype)$(.z.D,.z.d)utctime]}}]; writedownperiodengine:@[value;`writedownperiodengine;0D01:00:00]; // dqe periodically writes down to dqedb, writedownperiodengine determines the period between writedowns configcsv:@[value;`.dqe.configcsv;first .proc.getconfigfile["dqengineconfig.csv"]]; // loading up the config csv file resultstab:([]procs:`$();funct:`$();table:`$();column:`$();resvalue:`long$()); // schema for the resultstab that shows query results advancedres:([]procs:`$();funct:`$();table:`$();resultkeys:`$();resultdata:()); / - end of default parameters /- called at every EOD by .u.end init:{ .lg.o[`init;"searching for servers"]; /- Open connection to discovery .servers.startupdependent[`dqedb;10]; if[.dqe.utctime=1b;.eodtime.nextroll:.eodtime.getroll[`timestamp$.dqe.currentpartition]+(.z.T-.z.t)]; /- set timer to call EOD .timer.once[.eodtime.nextroll;(`.u.end;.dqe.getpartition[]);"Running EOD on Engine"]; /- store i numbers of rows to be saved down to DB .dqe.tosavedown:()!(); .dqe.configtimer[]; st:.dqe.writedownperiodengine+min .timer.timer[;`periodstart]; et:.eodtime.nextroll-.dqe.writedownperiodengine; if[((.z.Z,.z.z).dqe.utctime)>st;st:((.z.Z,.z.z).dqe.utctime)+.dqe.writedownperiodengine]; .lg.o[`init;"start time of periodic writedown is: ",string st]; .lg.o[`init;"end time of periodic writedown is: ",string et]; .timer.repeat[st;et;.dqe.writedownperiodengine;(`.dqe.writedownengine;`);"Running periodic writedown on resultstab"]; .timer.repeat[st;et;.dqe.writedownperiodengine;(`.dqe.writedownadvanced;`);"Running periodic writedown on advancedres"]; .lg.o[`init;"initialization completed"]; } /- update results table with results updresultstab:{[proc;fn;params;reskeys;resinput] .lg.o[`updresultstab;"Updating results for ",(string fn)," from proc ",string proc]; if[-7h=type resinput; if[not 11h=abs type params`col; params[`col]:`]; `.dqe.resultstab insert (proc;fn1:last` vs fn;reskeys;params`col;resinput); s:exec i from .dqe.resultstab where procs=proc,funct=fn1,table=reskeys,column=params[`col]; .dqe.tosavedown[`.dqe.resultstab],:s;] if[-7h<>type resinput; if[not 11=abs type params`tab;params[`tab]:`]; `.dqe.advancedres insert (proc;fn1:last` vs fn;params`tab;reskeys;resinput); s:exec i from .dqe.advancedres where procs=proc,funct=fn1,table=params[`tab],resultkeys=reskeys; .dqe.tosavedown[`.dqe.advancedres],:s;] } qpostback:{[proc;query;params;querytype;result] .dqe.updresultstab[first proc;query;params]'[$[`table=querytype;key result;`];value result]; .lg.o[`qpostback;"Postback successful for ",string first proc]; } /- sends queries to test processes runquery:{[query;params;querytype;rs] temp:(`,(value value query)[1])!(::), params; .lg.o[`runquery;"Starting query run for ",string query]; if[1<count rs;.lg.e[`runquery"error: can only send query to one remote service, trying to send to ",string count 
rs];:()]; if[not rs in exec procname from .servers.SERVERS;.lg.e[`runquery;"error: remote service must be a proctype";:()]]; h:.dqe.gethandles[(),rs]; .async.postback[h`w;((value query),params);.dqe.qpostback[h`procname;query;temp;querytype]]; .lg.o[`runquery;"query successfully ran for ",string query]; } loadtimer:{[d] .lg.o[`dqe;("Loading query - ",(string d[`query])," from config csv into timer table")]; d[`params]:value d[`params]; d[`proc]:value raze d[`proc]; functiontorun:(`.dqe.runquery;.Q.dd[`.dqe;d`query];d`params;d`querytype;d`proc); .timer.once[d`starttime;functiontorun;("Running check on ",string d[`proc])] } /- adds today's date to the time from config csv, before loading the queries to the timer configtimer:{[] t:.dqe.readdqeconfig[.dqe.configcsv;"S**SN"]; t:update starttime:(`date$(.z.D,.z.d).dqe.utctime)+starttime from t; {.dqe.loadtimer[x]}each t } writedownengine:{ if[0=count .dqe.tosavedown`.dqe.resultstab;:()]; dbprocs:exec distinct procname from raze .servers.getservers[`proctype;;()!();0b;1b]each .dqe.hdbtypes,`dqedb`dqcdb; // Get a list of all databases. restemp1:select from .dqe.resultstab where procs in dbprocs; restemp2:select from .dqe.resultstab where not procs in dbprocs; restemp3:.dqe.resultstab; .dqe.resultstab::restemp1; .dqe.savedata[.dqe.dqedbdir;.dqe.getpartition[]-1;.dqe.tosavedown[`.dqe.resultstab];`.dqe;`resultstab]; .dqe.resultstab::restemp2; .dqe.savedata[.dqe.dqedbdir;.dqe.getpartition[];.dqe.tosavedown[`.dqe.resultstab];`.dqe;`resultstab]; .dqe.resultstab::restemp3; /- get handles for DBs that need to reload hdbs:distinct raze exec w from .servers.SERVERS where proctype=`dqedb; /- send message for DBs to reload .dqe.notifyhdb[.os.pth .dqe.dqedbdir]'[hdbs]; }
extrapolate:{[F;v] v[`z2]:cubicextrapolation . v`f2`f3`d2`d3`z3; v[`z2]:$[$[0>v`z2;1b;0w=v`z2];$[.5>=v`limit;v[`z1]*EXT-1;.5*v[`limit]-v`z1]; / extrapolation beyond max? -> bisect $[-.5<v`limit;v[`limit]<v[`z2]+v`z1;0b];.5*v[`limit]-v`z1; / extrapolation beyond limit? -> set to limit $[-.5>v`limit;(EXT*v`z1)<v[`z2]+v`z1;0b];v[`z1]*EXT-1; v[`z2]<v[`z3]*neg INT;v[`z3]*neg INT; / too close to limit? $[-.5<v`limit;v[`z2]<(v[`limit]-v`z1)*1f-INT;0b];(v[`limit]-v`z1)*1f-INT; v[`z2]]; v[`f3]:v`f2;v[`d3]:v`d2;v[`z3]:neg v`z2; / set pt 3 = pt 2 v[`z1]+:v`z2;v[`X]+:v[`z2]*v`s; / update current estimates v[`f2`df2]:F v`X; v[`d2]:dot . v`df2`s; v} loop:{[n;F;v] v[`i]+:n>0; / count iterations?! v[`X]+:v[`z1]*v`s; / begin line search v[`f2`df2]:F v`X; v[`i]+:n<0; / count epochs?! v[`d2]:dot . v`df2`s; v[`f3]:v`f1;v[`d3]:v`d1;v[`z3]:neg v`z1; / initialize pt 3 = pt 1 v[`M]:$[n>0;MAX;MAX&neg n-v`i]; v[`success]:0b;v[`limit]:-1; / initialize quantities BREAK:0b; while[not BREAK; while[$[0<v`M;wolfepowell . v`d1`d2`f1`f2`z1;0b]; v[`limit]:v`z1; / tighten the bracket v:minimize[F;v]; v[`M]-:1;v[`i]+:n<0; / count epochs?! ]; if[wolfepowell . v`d1`d2`f1`f2`z1;BREAK:1b]; / failure if[v[`d2]>SIG*v`d1;v[`success]:1b;BREAK:1b]; / success if[v[`M]=0;BREAK:1b]; / failure if[not BREAK; v:extrapolate[F;v]; v[`M]-:1;v[`i]+:n<0; / count epochs?! ]; ]; v} onsuccess:{[v] v[`f1]:v`f2; 1"Iteration ",string[v`i]," | cost: ", string[v`f1], "\r"; v:@[v;`s;polackribiere . v`df1`df2]; / Polack-Ribiere direction v[`df2`df1]:v`df1`df2; / swap derivatives v[`d2]:dot . v`df1`s; / new slope must be negative, otherwise use steepest direction if[v[`d2]>0;v[`s]:neg v`df1;v[`d2]:dot[v`s;neg v`s]]; v[`z1]*:RATIO&v[`d1]%v[`d2]-REALMIN; / slope ratio but max RATIO v[`d1]:v`d2; v} fmincg:{[n;F;X] v:`X`i!(X;0); / zero the run length counter ls_failed:0b; / no previous line search has failed fX:(); v[`f1`df1]:F v`X; / get function value and gradient v[`s]:neg v`df1; / search direction is steepest v[`d1]:dot[v`s;neg v`s]; / this is the slope v[`z1]:(n:n,1)[1]%1f-v`d1; / initial step is red/(|s|+1) n@:0; / n is first element v[`i]+:n<0; / count epochs?! 
while[v[`i]<abs n; / while not finished X0:v`X`f1`df1; / make a copy of current values v:loop[n;F;v]; if[v`success;fX,:v`f2;v:onsuccess v]; if[not v`success; v[`X`f1`df1]:X0; / restore point from before failed line search / line search failed twice in a row or we ran out of time, so we give up if[$[ls_failed;1b;v[`i]>abs n];-1"";:(v`X;fX;v`i)]; v[`df2`df1]:v`df1`df2; / swap derivatives v[`z1]:1f%1f-v[`d1]:dot[v[`s]]neg v[`s]:neg v`df1; / try steepest ]; ls_failed:not v`success; / line search failure ]; -1"";(v`X;fX;v`i)} ================================================================================ FILE: funq_funq.q SIZE: 201 characters ================================================================================ \l ut.q \l ml.q \l fmincg.q \l porter.q / attempt to load c libraries (.ut.loadf ` sv hsym[`$getenv`QHOME],) each`qml.q`svm.q`linear.q; if[`qml in key `;system "l qmlmm.q"] / use qml matrix operators ================================================================================ FILE: funq_gemini.q SIZE: 759 characters ================================================================================ gemini.p:string `daily`hourly!`day`1hr gemini.c:string `BTCUSD`ETHUSD`LTCUSD`ETHBTC`ZECUSD`ZECBTC`ZECETH gemini.f:gemini.p {"_" sv ("gemini";y;x,".csv")}/:\: asc gemini.c gemini.y:string (`year$.z.D-1) + reverse neg til 3 gemini.f[`minutely]:raze gemini.y {"_" sv ("gemini";y;x;"1min.csv")}\:/: asc gemini.c gemini.b:"http://www.cryptodatadownload.com/cdd/" -1"[down]loading gemini data set"; .ut.download[gemini.b;;"";""] each raze gemini.f; .gemini.load:{[f] if[not count t:("* SFFFFF";1#",") 0: 1_read0 f;:()]; t:`time`sym`open`high`low`close`qty xcol t; t:update time:"P"$?[12>count each time;time;-3_/:time] from t; t:`sym xcols 0!select by time from t; / remove duplicates t} gemini,:({update `p#sym from x} raze .gemini.load peach::)'[`$gemini.f] ================================================================================ FILE: funq_hac.q SIZE: 2,328 characters ================================================================================ \c 40 100 \l funq.q \l iris.q \l seeds.q \l uef.q / hierarchical agglomerative clustering (HAC) -1"normalize seed data set features"; X:.ml.zscore seeds.X -1"build dissimilarity matrix"; D:.ml.f2nd[.ml.edist X] X -1"generate hierarchical clustering linkage stats"; L:.ml.link[`.ml.lw.ward] D -1"generate cluster indices"; I:.ml.clust[L] 1+til 10 -1"plot elbow curve (k vs ssw)"; show .ut.plt .ml.ssw[X] peach I -1"plot elbow curve (k vs % of variance explained)"; show .ut.plt (.ml.ssb[X] peach I)%.ml.sse[X] -1"link into 3 clusters"; I:.ml.clust[L] 3 -1"confirm accuracy"; g:(.ml.mode each seeds.y I)!I .ut.assert[0.9] .ut.rnd[.01] avg seeds.y=.ut.ugrp g -1"we can also check for maximum silhouette"; -1"plot silhouette curve (k vs silhouette)"; I:.ml.clust[L] 1+til 10 show .ut.plt (avg raze .ml.silhouette[.ml.edist;X]::) peach I -1"normalize iris data set features"; X:.ml.zscore iris.X -1"build dissimilarity matrix"; D:.ml.f2nd[.ml.edist X] X -1"generate hierarchical clustering linkage stats"; L:.ml.link[`.ml.lw.median] D -1"generate cluster indices"; I:.ml.clust[L] 1+til 10 -1"plot elbow curve (k vs ssw)"; show .ut.plt .ml.ssw[X] peach I -1"plot elbow curve (k vs % of variance explained)"; show .ut.plt (.ml.ssb[X] peach I)%.ml.sse[X] -1"link into 3 clusters"; I:.ml.clust[L] 3 -1"confirm accuracy"; g:(.ml.mode each iris.y I)!I .ut.assert[.97] .ut.rnd[.01] avg iris.y=.ut.ugrp g -1"generate clusters indices"; I:.ml.clust[L] 1+til 10 -1"plot silhouette 
curve (k vs silhouette)"; show .ut.plt (avg raze .ml.silhouette[.ml.edist;X]::) peach I -1"let's apply the analysis to one of the uef reference cluster datasets"; X:uef.d32 show .ut.plot[39;20;.ut.c10;sum] X -1"using pedist2 makes calculating the dissimilarity matrix much faster"; D:sqrt .ml.pedist2[X;X] -1"generate hierarchical clustering linkage stats with ward metric"; L:.ml.link[`.ml.lw.ward] D -1"generate cluster indices"; I:.ml.clust[L] ks:1+til 19 -1"plot elbow curve (k vs ssw)"; show .ut.plt .ml.ssw[X] peach I -1"plot elbow curve (k vs % of variance explained)"; show .ut.plt (.ml.ssb[X] peach I)%.ml.sse[X] -1"plot silhouette curve (k vs silhouette)"; show .ut.plt s:(avg raze .ml.silhouette[.ml.edist;X]::) peach I .ut.assert[16] ks i:.ml.imax s -1"plot the clustered data"; show .ut.plot[39;20;.ut.c68;.ml.mode] X[0 1],enlist .ut.ugrp I i ================================================================================ FILE: funq_hiragana.q SIZE: 1,636 characters ================================================================================ \c 40 100 \l funq.q \l etl9b.q / use a neural network to learn 71 hiragana characters / inspired by the presentation given by mark lefevre / http://www.slideshare.net/MarkLefevreCQF/machine-learning-in-qkdb-teaching-kdb-to-read-japanese-67119780 / dataset specification / http://etlcdb.db.aist.go.jp/?page_id=1711 -1"referencing etl9b data from global namespace"; `X`y`h set' etl9b`X`y`h -1"shrinking training set"; X:500#'X;y:500#y;h:500#h; -1"setting the prng seed"; system "S ",string "i"$.z.T -1"view 4 random drawings of the same character"; plt:value .ut.plot[32;16;.ut.c10;avg] .ut.hmap flip 64 cut -1 (,'/) plt each X@\:/: rand[count h]+count[distinct y]*til 4; -1"generate neural network topology with one hidden layer"; n:0N!"j"$.ut.nseq[2;count X;count h] Y:.ml.diag[last[n]#1f]@\:"i"$y rf:.ml.l2[1] / l2 regularization function -1"run mini-batch stochastic gradient descent",$[count rf;" with l2 regularization";""]; hgolf:`h`g`o`l!`.ml.sigmoid`.ml.dsigmoid`.ml.softmax`.ml.celoss -1"initialize theta with random weights"; theta:2 raze/ .ml.glorotu'[1+-1_n;1_n]; cf:first .ml.nncostgrad[rf;n;hgolf;Y;X]:: gf:last .ml.nncostgrad[rf;n;hgolf]:: theta:first .ml.iter[1;.01;cf;.ml.sgd[.4;gf;0N?;50;Y;X]] theta /ncgf:.ml.nncostgrad[rf;n;hgolf;Y;X] /first .fmincg.fmincg[10;cgf;theta] -1"checking accuracy of parameters"; avg y=p:.ml.imax .ml.pnn[hgolf;X] .ml.nncut[n] theta w:where not y=p -1"view a few confused characters"; -1 (,'/) plt each X@\:/: value ([]p;y) rw:rand w; -1 (,'/) plt each X@\:/: value ([]p;y) rw:rand w;
NASA Frontier Development Lab Exoplanets Challenge¶ The NASA Frontier Development Lab (FDL) is an applied artificial intelligence (AI) research accelerator, hosted by the SETI Institute in partnership with NASA Ames Research Centre. The programme brings commercial and private partners together with researchers to solve challenges in the space science community using new AI technologies. NASA FDL 2018 focused on four areas of research – Space Resources, Exoplanets, Space Weather and Astrobiology – each with their own separate challenges. This paper will focus on the Exoplanets challenge, which aimed to improve accuracy in finding new exoplanets. The TESS mission¶ The Transiting Exoplanet Survey Satellite (TESS) was launched in April 2018, with the objective of discovering new exoplanets in orbit around the brightest stars in the solar neighborhood. For the two-year mission, the sky was segmented into 26 sectors, each of which will be the focus of investigation for 27 days. TESS will spend the first year exploring the 13 sectors that cover the Southern Hemisphere, before rotating to explore the Northern Hemisphere during year two. Pictures will be taken, at a given frequency, to create a Satellite Image Time Series (SITS) for each sector. Once collected, SITS are passed through a highly complex data-processing pipeline, developed by NASA. During the first stage, calibration finds the optimal set of pixels representing each target star. Aggregate brightness is then extracted from the sequence and the pixels associated with each star, to create a light curve (one-dimensional time-series) for each target star. The raw light curve is processed to remove noise, trends and other factors introduced by the satellite itself. The final result is the corrected flux from the star, referred to from now on as a light curve. Light curves are the main subject of study when attempting to detect exoplanets. Variations in brightness of the target stars may indicate the presence of a transiting planet. The preprocessing pipeline searches for signals consistent with transiting planets, in order to identify planet candidates or Threshold Crossing Events (TCEs). However, the list of TCEs will likely contain a large number of false positives, caused by eclipsing binary systems, background eclipsing binaries or simple noise. At this stage, machine learning (ML) comes into play. In this paper we propose a Bayesian Neural Network to try and classify the extracted TCEs as real planets or false positives. We will take advantage of the strength of kdb+/q to manipulate and analyze time-series data, and embedPy to import the necessary Python ML libraries. The technical dependencies required for the below work are as follows: - embedPy - pydl 0.6.0 - scipy 0.19.1 - scikit-learn 0.19.1 - Matplotlib 2.1.0 - NumPy 1.14.5 - seaborn 0.8 - tensorflow-probability 0.3.0 - tensorflow 1.12.0 Data¶ TESS had yet to produce data during FDL 2018, so data from four different sectors were simulated by NASA. 16,000 stars were generated for each sector and planets were placed around some stars following well-known planet models. Eclipsing binaries and background eclipsing binaries were also injected into the simulation, and noise was added to the data. The generated data was given to the pipeline to extract the 64,000 light curves and identify TCEs. Strong signals were found in 9,139 light curves, which were passed to the data validation stage for further analysis. 
The light curve for each TCE was reprocessed to look for multiple signals in the same curve and allow identification of multiplanetary systems. The result was a total of 19,577 planet candidates identified over 9,139 stars. The data validation stage also found optimal parameters (epoch, period, duration) relating to each potential transit. These parameters describe how to ‘fold’ the light curves in order to create a local view of the transits to emphasize the signal and cancel any noise. An advantage of dealing with simulated data, is that we have the ground truth regarding exactly how many planets were injected into each star. In order to classify TCEs as either real planets or false positives, we consider the following data: - Ground truth - data about the injected signals (i.e. real planet or false positive) - Validation data - information about the TCEs found in the data validation stage - Light curves - light curves for each star, stored as Flexible Image Transport System (FITS) files Feature engineering¶ The TCEs are spread across four different sectors, each of which is processed separately during the first stages. Once a local view is created for each light curve, these are merged to form a single dataset, which is used by the Bayesian neural network for training and testing. TCE information¶ The following table gathers all required information provided by the validation process about the TCEs found in sector 1 as well as their labels: q)5#tces:("SJIIFFFI";(),csv)0:`:../data/sector1/tces/tceinfo.csv tceid catid n_planets tce_num tce_period tce_time0bk tce_duration planet ---------------------------------------------------------------------------------- 6471862_1 6471862 1 1 7.788748 1315.736 1.993841 1 6528628_1 6528628 2 1 2.031586 1312.364 3.604171 0 6528628_2 6528628 2 2 2.031944 1311.081 3.5 0 61088106_1 61088106 1 1 17.95108 1316.481 12.03493 0 314197710_1 314197710 1 1 19.29932 1318.371 8.749369 0 Each TCE is identified by its id (tceid ), which combines the star where it was found (catid ) and the number that identifies the TCE (tce_num ) in the set of TCEs (n_planets ) detected in the star. Column planet indicates the label of the TCE, which takes value 1 (positive class) if the TCE represents a real planet and 0 (negative class) otherwise. Finally, period, epoch and duration are used to fold the light curve. When dealing with a classification problem, an important aspect of the dataset is the distribution of classes in the data. High sensitivity during the data validation stage led to many more false positives than planets. We can see this imbalance by looking at the label distribution: / Load utility functions q)\l ../utils/utils.q q)dis:update pcnt:round[;0.01]100*num%sum num from select num:count i by planet from tces q)dis planet| num pcnt ------| ---------- 0 | 4109 81.72 1 | 919 18.28 Figure 1: Label distribution. Less than 20% of the TCEs are actually planets (a similar distribution is found in sectors 2, 3 and 4), a fact that should be considered later when preparing the data to train the classifier. Before looking at the light curves, we will split the TCEs into training and test sets. To monitor training and inform hyper-parameter tuning, we will also include a validation set. q)show tcessplit:`trn`val`tst!(0,"j"$.8 .9*numtces)_neg[numtces:count tces]?tces trn| +`tceid`catid`n_planets`tce_num`tce_period`tce_time0bk`tce_duration`plan.. val| +`tceid`catid`n_planets`tce_num`tce_period`tce_time0bk`tce_duration`plan.. 
tst| +`tceid`catid`n_planets`tce_num`tce_period`tce_time0bk`tce_duration`plan.. Local view extraction¶ The local view associated with each TCE can be extracted from the light curve using the period, epoch and duration. q)tsopdir:`:/home/nasafdl/data/tsop301/sector1 q)\l ../utils/extractlocal.q q)trnpro:processtce[tsopdir]each trnlabels:tcessplit`trn q)trndata:reverse fills reverse fills flip(`$string tcessplit[`trn]`catid)!trnpro@\:`local q)valpro:processtce[tsopdir]each vallabels:tcessplit`val q)valdata:reverse fills reverse fills flip(`$string tcessplit[`val]`catid)!valpro@\:`local q)tstpro:processtce[tsopdir]each tstlabels:tcessplit`tst q)tstdata:flip(`$string tcessplit[`tst]`catid)!tstpro@\:`local q)5#trndata 280051467_1 352480413_1 456593119_1 358510596_1 183642345_2 261657455_1 2653.. -----------------------------------------------------------------------------.. 0.2771269 0.2590945 0.08329642 -0.005732315 0.2286466 0.01266482 0.02.. 0.3392213 0.4783 0.08432672 -0.001671889 0.2198233 0.1922276 0.02.. 0.2883021 0.2146166 0.08432672 0.002503958 0.2176699 0.3185288 0.03.. 0.2873107 0.2509516 0.09310736 0.006680667 0.2189751 0.3078591 0.03.. 0.2997477 0.3955775 0.09130886 0.006680667 0.2031908 0.1432165 0.03.. q)5#trnlabels tceid catid n_planets tce_num tce_period tce_time0bk tce_duration planet ---------------------------------------------------------------------------------- 280051467_1 280051467 1 1 15.15262 1312.372 0.2404994 0 352480413_1 352480413 1 1 18.84509 1311.424 0.5792979 1 456593119_1 456593119 1 1 4.565184 1313.697 0.2467387 0 358510596_1 358510596 1 1 3.800774 1312.677 0.09960142 0 183642345_2 183642345 2 2 7.013778 1311.829 0.2995369 0 Since there is overlapping between sectors, we prepend the sector number to the column name to uniquely identify each TCE: q)addsect:{(`$"_" sv/: string y,'cols x)xcol x} q)5#trndata:addsect[trndata;`1] 1_280051467_1 1_352480413_1 1_456593119_1 1_358510596_1 1_183642345_2 1_26165.. -----------------------------------------------------------------------------.. 0.2771269 0.2590945 0.08329642 -0.005732315 0.2286466 0.01266.. 0.3392213 0.4783 0.08432672 -0.001671889 0.2198233 0.19222.. 0.2883021 0.2146166 0.08432672 0.002503958 0.2176699 0.31852.. 0.2873107 0.2509516 0.09310736 0.006680667 0.2189751 0.30785.. 0.2997477 0.3955775 0.09130886 0.006680667 0.2031908 0.14321.. To get a better understanding of what light curves are like, we can plot a random selection using Matplotlib via embedPy: q)sample:16?update tce_duration%24 from tces q)plt:.p.import`matplotlib.pyplot q)subplots:plt[`:subplots][4;4] q)fig:subplots[@;0] q)axarr:subplots[@;1] q)fig[`:set_size_inches;18.5;10.5] q){[i] j:cross[til 4;til 4]i; box:axarr[@;j 0][@;j 1]; box[`:plot]r[i]`local; box[`:axis]`off; box[`:set_title]"ID: ",string sample[i]`catid; }each til 16 q)plt[`:show][] Figure 2: Local view of some random TCEs. Several types of curves with distinct behaviors and dips are shown in the plot. Now, the objective is to find patterns that characterize the drops in brightness caused by real planets. 
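As an aside, the training/validation/test split above relies on a compact idiom: neg[n]?list deals the whole list without replacement (a random permutation), and cutting it at indices 0, 80% and 90% of the count yields an 80/10/10 split. A minimal sketch on toy data (the output shown is one possible draw; it varies with the random seed):

q)n:count lst:til 20
q)`trn`val`tst!(0,"j"$.8 .9*n)_neg[n]?lst
trn| 7 0 12 16 3 18 9 1 5 13 2 19 11 17 10 8
val| 4 15
tst| 6 14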
Data preparation¶ Having performed the previous steps for each sector, we combine the data to create training, validation and test sets containing data from all four sectors: Training data: Training data contains 15662 TCEs: q)update pcnt:round[;.01]100*num%sum num from select num:count i by planet from trnlabels planet| num pcnt ------| ----------- 0 | 12560 80.19 1 | 3102 19.81 Validation data: Validation data contains 1958 TCEs: q)update pcnt:round[;.01]100*num%sum num from select num:count i by planet from vallabels planet| num pcnt ------| ---------- 0 | 1558 79.57 1 | 400 20.43 Test data: Test data contains 1957 TCEs: q)update pcnt:round[;.01]100*num%sum num from select num:count i by planet from tstlabels planet| num pcnt ------| ---------- 0 | 1574 80.43 1 | 383 19.57 We flip the data and drop column names, to create a matrix where each row represents a unique TCE. We also extract the labels as vectors: q)xtrain:value flip trndata q)ytrain:trnlabels`planet q)xval:value flip valdata q)yval:vallabels`planet q)xtest:value flip tstdata q)ytest:tstlabels`planet Training, validation and test sets fairly reproduce the proportion of planets in the whole dataset, where 20% of the TCEs are actually planets. This ratio could be an issue when training a model, since planets would have low importance in the gradient when updating the network weights. To mitigate this problem, we oversample the positive class. We add a random sample of planets to the training set, so that the final proportion of planets vs non-planets will be 50%-50%. Now, we can create the final balanced training set easier: / Initial proportion of actual planets q)show p0:avg ytrain 0.198058 / Final proportion of planets vs non-planets q)p1:0.5 q)sample:(nadd:(-) . sum each ytrain=/:(0 1))?xtrain where ytrain q)xoversampled:xtrain,sample q)yoversampled:ytrain,nadd#1 q)ind:neg[n]?til n:count yoversampled q)finalxtrain:xoversampled ind q)finalytrain:yoversampled ind Size of the final training set is 25120: planets| num pcnt -------| ---------- 0 | 12560 50 1 | 12560 50 Benchmark model¶ Our objective is to train a Bayesian neural network to identify dips in the light curves caused by planets. However, it is important to have a benchmark model that allows us to compare performance and better interpret results obtained by more complex models. Model¶ The model chosen as a benchmark is a linear classifier, which considers a linear combination of the features to make the predictions. The model tries to linearly separate classes and base its decisions on that. SGDClassifier is imported from sklearn using embedPy, and trained on the training dataset in order to find the weights of the linear combination that better separate classes: q)sgd:.p.import[`sklearn.linear_model][`:SGDClassifier][] q)sgd[`:fit][finalxtrain;finalytrain]; We do not optimize the parameters, so both the validation set and the test set are used to test the performance of the model. Predictions¶ Validation set¶ Once the model is trained, predictions can be obtained by calling the predict method: q)show valpreds:sgd[`:predict;xval]` 0 0 0 0 0 0 0 1 0 1 0 0 0 0 0 1 1 0 0 0 0 0 0 0 1 1 0 1 0 0 1 1 0 1 1 0 1 1 1.. Accuracy is usually computed to test the performance of a model. However, this is not always a good measure of model performance, especially when dealing with unbalanced datasets. Precision and recall are often better choices in this case, since they differentiate between models that prioritize false positives over false negatives. 
This allows the best model to be selected, based on the objective of a particular problem. The confusion matrix is also very useful because results can be better visualized: q)accuracy[yval;valpreds] 0.7793667 q)precision[1;yval;valpreds] 0.4766082 q)sensitivity[1;yval;valpreds] 0.815 q)cm:confmat[1;yval;valpreds] q)displayCM[value cm;`planet`noplanet;"Confusion matrix";()] Figure 3: Confusion matrix of the benchamark model with the validation set. The linear classifier is able to detect a high proportion of planets (75%), however, the precision of the model is low. We would like to maximize this. Test set¶ Results are also obtained using the test set, which will also allow us to compare results afterwards: q)show testpreds:sgd[`:predict;xtest]` 1 1 0 0 0 0 1 0 0 1 0 0 1 1 0 0 1 0 0 0 0 0 0 1 0 0 0 1 0 0 0 0 1 0 0 1 0 0 1.. q)accuracy[ytest;testpreds] 0.7726111 q)precision[1;ytest;testpreds] 0.4534535 q)sensitivity[1;ytest;testpreds] 0.7885117 q)cm:confmat[1;ytest;testpreds] q)displayCM[value cm;`planet`noplanet;"BNN confusion matrix";()] Figure 4: Confusion matrix of the benchmark model with the test set. Results are similar to those obtained on the validation set. Bayesian neural network model¶ Model¶ Machine-learning models usually depend on several parameters that need to be optimized to achieve the best performance. Bayesian neural networks are no exception to this. Learning rate, number of epochs, batch size, number of Monte Carlo samples, activation function or architecture highly condition results. Values for these parameters are defined in a dictionary and will be used to build and train the neural network: q)paramdict:`lr`mxstep`layersize`activation!(0.01;10000;128 128;`relu) q)paramdict,:`batchsize`nmontecarlo`trainsize!(512;500;count finalytrain) q)paramdict lr | 0.01 mxstep | 10000 layersize | 128 128 activation | `relu batchsize | 512 nmontecarlo| 500 trainsize | 25120 Before training the model, we need to split the data into batches. In order to do that, we create two functions: buildtraining - splits training dataset into batches of specified size and creates an iterator that allows the neural network to train on all the batches when learning builditerator - creates an iterator of size 1, which captures the whole set q)tf: .p.import`tensorflow q)np: .p.import[`numpy] q)pylist:.p.import[`builtins]`:list q)tuple: .p.import[`builtins]`:tuple q)array:np[`:array] q)buildtraining:{[x;y;size] dataset:tf[`:data.Dataset.from_tensor_slices]tuple(np[`:float32;x]`.;np[`:int32;y]`.); batches:dataset[`:repeat][][`:batch]size; iterator:batches[`:make_one_shot_iterator][]; handle:tf[`:placeholder][tf`:string;`shape pykw()]; feedable:tf[`:data.Iterator.from_string_handle][handle;batches`:output_types; batches`:output_shapes]; data:feedable[`:get_next][][@;]each 0 1; `local`labels`handle`iterator!{x`.}each raze(data;handle;iterator) } q)builditerator:{[x;y;size] dataset:tf[`:data.Dataset.from_tensor_slices]tuple(np[`:float32;x]`.;np[`:int32;y]`.); frozen:dataset[`:take][size][`:repeat][][`:batch]size; frozen[`:make_one_shot_iterator][] } An iterator with batches of size 512 is created using the training set while validation and test sets are converted to iterators of size 1 so they are represented in the same way as the training set and can be passed to the neural network and obtain predictions. 
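A brief note on the embedPy idioms used in buildtraining and in the next step: indexing an embedPy object with ` returns its q conversion, while `. returns the underlying foreign Python object, which is what the {x`.} lambdas extract so the TensorFlow handles and iterators can be kept in ordinary q dictionaries. This is standard embedPy behavior rather than anything specific to this script; a small illustration using NumPy as a stand-in:

q)r:.p.import[`numpy][`:arange]3    / an embedPy object wrapping a Python array
q)r`                                / ` converts to q data
0 1 2
q)type r`.                          / `. unwraps to a foreign object (for further Python calls)
112h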
q)traindict:buildtraining[finalxtrain;finalytrain;paramdict`batchsize] q)iterators:`val`test!{x`.}each builditerator ./: ((xval;yval;count yval);(xtest;ytest;count ytest)) Finally, we can pass this data to the Python process running in q and load the script that contains the model and the code to train it: q){.p.set[x]get x}each`paramdict`traindict`iterators q)\l ../utils/bnn.p Predictions¶ Predictions of the TCEs in the validation and test sets will be based on Monte Carlo samples of size 500 created by the trained Bayesian neural network. The model produces a probability of each TCE being of class 0 (non-planet) or class 1 (planet). Probabilities are distorted since the network was trained on an oversampled dataset and they need to be corrected. After doing this, the predicted class is the one that gives larger average probability. Validation set¶ To obtain the Monte Carlo sample for each TCE in the validation set, the validation iterator created before needs to be passed to the neural network 500 times: q)p)val_handle = sess.run(iterators['val'].string_handle()) q)p)probs=[sess.run((labels_distribution.probs), feed_dict={handle:val_handle}) for _ in range(paramdict['nmontecarlo'])] q)valprobs:`float$.p.get[`probs]` Variable valprobs contains the Monte Carlo sample of the probabilities provided by the Bayesian neural network. We correct these probabilities for the oversampling, creating the right distribution of the data. q)corprobs:{[p0;p1;p] 1%1+(((1%p0)-1)%(1%p1)-1)*(1%p)-1 } q)corvalprobs:{[p0;p1;p;i].[p;(::;::;i);corprobs . $[i=0;(1-p0;1-p1);(p0;p1)]]}[p0;p1]/[valprobs;0 1] Once corrected probabilities have been recovered, we compute the mean of the probabilities of each class and predict the instances as the class associated with the maximum mean probability: q)show valpreds:{x?max x}each avg corvalprobs 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 1 1 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 1 0 0.. The accuracy of the predictions is tested and compared to the results provided by the pipeline using different metrics. q)accuracy[yval;valpreds] 0.9101124 q)precision[1;yval;valpreds] 0.8294118 q)sensitivity[1;yval;valpreds] 0.705 These results can be better visualized with the confusion matrix again: q)cm:confmat[1;yval;valpreds] q)displayCM[value cm;`planet`noplanet;"Confusion matrix";()] Figure 5: Confusion matrix of the Bayesian Neural Network with the validation set. Attending to these metrics, results are very satisfactory, especially compared to the results obtained by the linear classifier. model | acc prec sens ------| ------------------------- bnn | 0.9101124 0.8294118 0.705 linear| 0.7793667 0.4766082 0.815 In addition to a high accuracy (91%), the model gets 83% precision, which is quite good since it filters most of the false positives detected by the pipeline. In fact, the confusion matrix shows that only 58 TCEs that do not represent a planet are classified as real planets. Furthermore, for unbalanced datasets achieving high precision usually means losing sensitivity, however, this score is still good: 70%. In case we are not happy with these results, we can tune the parameters of the model and try to get results that better fit our requirements. Models with different parameters should be tested on the validation set and once the preferred model is chosen, it can be tested on the test dataset. Test set¶ Let’s assume the previously trained model fits our requirements since it is able to characterize dips in the light curves caused by different phenomena. 
Therefore, since we are happy enough with the results, we can consider this our final model without need of changing parameters and its performance can be tested on the test dataset. (The final model should be trained again using training and validation sets together but we will not train it again to keep the demonstration brief). To obtain the prediction of the test set we do exactly the same as we did with the validation set before: q)p)test_handle = sess.run(iterators['test'].string_handle()) q)p)probs=[sess.run((labels_distribution.probs), feed_dict={handle:test_handle}) for _ in range(paramdict['nmontecarlo'])] q)testprobs:`float$.p.get[`probs]` q)cortestprobs:{[p0;p1;p;i].[p;(::;::;i);corprobs . $[i=0;(1-p0;1-p1);(p0;p1)]]}[p0;p1]/[testprobs;0 1] q)show testpreds:{x?max x}each avg cortestprobs 1 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 1.. And results are also tested using the same metrics and confusion matrix: q)accuracy[ytest;testpreds] 0.9095554 q)precision[1;ytest;testpreds] 0.8280255 q)sensitivity[1;ytest;testpreds] 0.6788512 q)cm:confmat[1;ytest;testpreds] q)displayCM[value cm;`planet`noplanet;"BNN confusion matrix";()] Figure 6: Confusion matrix of the Bayesian Neural Network with the test set. Although sensitivity is lower than obtained with the linear classifier, results still indicate that the network is able to deal with the low proportion of real planets and capture a high proportion of them (68%) by using oversampling. Moreover, even though getting both high recall and precision when dealing with unbalanced datasets is usually a complicated task, we can appreciate that the proposed solution achieves 83% precision, which highly improves the precision score obtained with the benchmark model and leads to higher accuracy too (90%). model | acc prec sens ------| ----------------------------- bnn | 0.9095554 0.8280255 0.6788512 linear| 0.7726111 0.4534535 0.7885117 Prediction confidence¶ An advantage of Bayesian neural networks is their capacity to quantify confidence in predictions. They output a distribution of the probabilities for each class, indicating whether a given prediction is nearly random or if the model is confident in it. We can check this confidence by plotting the distribution of the Monte Carlo samples of five random TCEs in the validation set. q)n:5 q)fig:plt[`:figure]`figsize pykw 9,3*n q){[n;xval;yval;pred;p;i] ind:rand count yval; ax:fig[`:add_subplot][n;3;1+3*i]; ax[`:plot]xval ind; ax[`:set_title]"Actual/Prediction: ",string[yval ind],"/",string pred ind; ax:fig[`:add_subplot][n;3;2+3*i]; {[ax;p;ind;j] sns[`:barplot][0 1;p[j;ind];`alpha pykw 0.1;`ax pykw ax]; ax[`:set_ylim]0 1; }[ax;p;ind]each til count p; ax[`:set_title]"Posterior samples"; ax:fig[`:add_subplot][n;3;3+3*i]; sns[`:barplot][0 1;avg p[;ind;];`ax pykw ax]; ax[`:set_ylim]0 1; ax[`:set_title]"Predictive probs"; }[n;xval;yval;valpreds;corvalprobs]each til n q)plt[`:show][] Figure 7: Confidence in the predictions of the Bayesian Neural Network. 
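Beyond the plots, the spread of the Monte Carlo probabilities can also be summarized numerically. A minimal sketch (not part of the original analysis), assuming corvalprobs has the shape built above, (nmontecarlo; count yval; 2):

q)conf:dev each flip corvalprobs[;;1]  / per-TCE standard deviation of the class-1 probability
q)5#iasc conf                          / indexes of the five most confident predictions
q)5#idesc conf                         / indexes of the five least confident predictions

A small spread means the network assigns nearly the same probability on every Monte Carlo pass, i.e. it is confident in that prediction.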
Visualizations¶

Finally, we can show the local views of some TCEs and their predictions to see how the neural network classifies them:

q)subplots:plt[`:subplots][4;4]
q)fig:subplots[@;0]
q)axarr:subplots[@;1]
q)fig[`:set_size_inches;18.5;10.5]
q){[i]
    j:cross[til 4;til 4]i;
    box:axarr[@;j 0][@;j 1];
    box[`:plot]xtest ind:rand count ytest;
    box[`:axis]`off;
    box[`:set_title]"Actual/Prediction:",string[ytest ind],"/",string testpreds ind;
  }each til 16
q)plt[`:show][]

Figure 8: Local view of some random TCEs with their actual classes and predictions.

We observe in these plots how the dip caused by a planet transit differs from dips caused by other phenomena. Curves produced by real planets have dips that are not as sharp as the others, but they stay at the minimum value for a longer period of time. Judging by the previous results, the trained neural network seems able to capture this difference.

Conclusions¶

For a long time, exoplanets were identified by humans looking at the light curves and deciding whether the drops in brightness detected in the curves were caused by the transit of a planet. This process was very slow and consumed considerable resources. In this paper we have demonstrated how the problem can be approached and solved using kdb+ and embedPy.

Data can be loaded and managed using q, which allows us to easily explore it and filter the required data. Once the significant data from the TCEs is gathered, light curves are rapidly folded to emphasize drops in brightness, exploiting the advantages kdb+ offers for dealing with time series. Then, a simple linear classifier is trained to classify the TCEs and used as a benchmark model. Although this model can capture a high proportion of real planets, it is not able to filter out false detections, which is the main goal. Therefore, a more complex model, a Bayesian neural network, is proposed to find a solution that fits our requirements. As the parameters of a network largely determine its performance, a dictionary of parameters is defined, which allows us to tune them easily and to find the set that provides the best performance on the validation dataset. The final model is tested on the test set, and displays significant improvement over the benchmark.

Finally, an advantage of Bayesian neural networks is their ability to determine confidence in predictions, which can be very useful when analysing results.

To sum up, the proposed solution achieves our main goal: detecting a high proportion of real planets in the set of planet candidates. More importantly, it also achieves high precision, which would save a lot of money and time since further analysis of false detections is avoided. In addition, since confidence in predictions is also provided, other criteria based on this confidence can be taken into account when deciding whether a planet candidate is worth further analysis. This possibility, together with extra data preprocessing, could be considered in future work to improve results.

Author¶

Esperanza López Aguilera joined First Derivatives in October 2017 as a Data Scientist in the Capital Markets Training Program.

Acknowledgements¶

I gratefully acknowledge the Exoplanet team at FDL, Chedy Raissi, Jeff Smith, Megan Ansdell, Yani Ioannou, Hugh Osborn and Michele Sasdelli, for their contributions and support.
// @kind function // @category fresh // @desc Extract features using FRESH // @param data {table} Input data // @param idCol {symbol[]} ID column(s) name // @param cols2Extract {symbol[]} Columns on which extracted features will // be calculated (these columns must be numerical) // @param params {table|symbol|symbol[]|null} // table - Should be a modified version of '.ml.fresh.params' table defining // the functions to be applied. // symbol|symbol[] - The functions to be applied when running feature // extraction or one of 'noHyperparameters', 'noPython', 'regression' or // 'classification' defining a subset of features appropriate for various // use-cases // null - Apply all features contained within the table '.ml.fresh.params' // @return {table} Table keyed by ID column and containing the features // extracted from the subset of the data identified by the ID column. fresh.createFeatures:{[data;idCol;cols2Extract;params] params:$[99h~type params;params;fresh.util.featureList[params]]; param0:exec f from params where valid,pnum=0; param1:exec f,pnames,pvals from params where valid,pnum>0; allParams:(cross/)each param1`pvals; calcs:param0,raze param1[`f]cross'param1[`pnames],'/:'allParams; cols2Extract:$[n:"j"$abs system"s"; $[n<m:count cols2Extract;(n;0N);m]#; enlist ]cols2Extract; calcs:cols2Extract cross\:calcs; colMapping:fresh.i.colMap each calcs; colMapping:(`$ssr[;".";"o"]@''"_"sv''string raze@''calcs)!'colMapping; toApply:((cols2Extract,\:idCol:idCol,())#\:data;colMapping); protect:fresh.i.protect[;;idCol]; res:(uj/)raze .[protect]peach flip toApply; idCol xkey fresh.i.expandResults/[0!res;exec c from meta[res]where null t] } // Multi-processing functionality loadfile`:util/mproc.q if[0>system"s";multiProc.init[abs system"s"]enlist".ml.loadfile`:fresh/init.q"]; ================================================================================ FILE: ml_ml_fresh_feat.q SIZE: 20,219 characters ================================================================================ // fresh/feat.q - Features // Copyright (c) 2021 Kx Systems Inc // // Features to be used in FRESH \d .ml // @kind function // @category freshFeat // @desc Calculate the absolute energy of data (sum of squares) // @param data {number[]} Numerical data points // @return {float} Sum of squares fresh.feat.absEnergy:{[data] data wsum data } // @kind function // @category freshFeat // @desc Calculate the absolute sum of the differences between // successive data points // @param data {number[]} Numerical data points // @return {float} Absolute sum of differences fresh.feat.absSumChange:{[data] sum abs 1_deltas data } // @kind function // @category freshFeat // @desc Calculate the aggregation of an auto-correlation over all // possible lags (1 - count[x]) // @param data {number[]} Numerical data points // @return {dictionary} Aggregation (mean, median, variance // and standard deviation) of an auto-correlation fresh.feat.aggAutoCorr:{[data] n:count data; statsACF:$[.ml.stats_break;`adjusted;`unbiased]; autoCorrFunc:$[(abs[var data]<1e-10)|1=n; 0; 1_fresh.i.acf[data;statsACF pykw 1b;`fft pykw n>1250]` ]; `mean`variance`median`dev!(avg;var;med;dev)@\:autoCorrFunc } // @kind function // @category freshFeat // @desc Calculate a linear least-squares regression for aggregated // values // @param data {number[]} Numerical data points // @param chunkLen {long} Size of chunk to apply // @return {dictionary} Slope, intercept and rvalue for the series // over aggregated max, min, variance or average for chunks of size chunklen 
fresh.feat.aggLinTrend:{[data;chunkLen] chunkData:chunkLen cut data; stats:(max;min;var;avg)@/:\:chunkData; trend:fresh.feat.linTrend each stats; statCols:`$"_"sv'string cols[trend]cross`max`min`var`avg; statCols!raze value flip trend } // @kind function // @category freshFeat // @desc Hypothesis test to check for a unit root in series // (Augmented Dickey Fuller tests) // @param data {number[]} Numerical data points // @return {dictionary} Test statistic, p-value and used lag fresh.feat.augFuller:{[data] `teststat`pvalue`usedlag!3#"f"$@[{fresh.i.adFuller[x]`};data;0n] } // @kind function // @category freshFeat // @desc Apply auto-correlation over a user-specified lag // @param data {number[]} Numerical data points // @param lag {long} Lag to apply to data // @return {float} Auto-correlation over specified lag fresh.feat.autoCorr:{[data;lag] mean:avg data; $[lag=0;1f;(avg(data-mean)*xprev[lag;data]-mean)%var data] } // @kind function // @category freshFeat // @desc Calculate entropy for data binned into n equi-distant bins // @param data {number[]} Numerical data points // @params numBins {long} Number of bins to apply to data // @return {float} Entropy of the series binned into numBins equidistant bins fresh.feat.binnedEntropy:{[data;numBins] n:count data; data-:min data; p:(count each group(numBins-1)&floor numBins*data%max data)%n; neg sum p*log p } // @kind function // @category freshFeat // @desc Calculate non-linearity of a time series with lag applied // @param data {number[]} Numerical data points // @param lag {long} Lag to apply to data // @return {float} Measure of the non-linearity of the series lagged by lag // Time series non-linearity: Schreiber, T. and Schmitz, A. (1997). PHYSICAL // REVIEW E, VOLUME 55, NUMBER 5 fresh.feat.c3:{[data;lag] avg data*/xprev\:[-1 -2*lag]data } // @kind function // @category freshFeat // @desc Calculate aggregate value of successive changes within // corridor // @param data {number[]} Numerical data points // @param lowerQuant {float} Lower quartile // @param upperQuant {float} Upper quartile // @param isAbs {boolean} Whether absolute values should be considered // @return {dictionary} Aggregated value of successive changes within corridor // specified by lower/upperQuant fresh.feat.changeQuant:{[data;lowerQuant;upperQuant;isAbs] quants:fresh.feat.quantile[data]lowerQuant,upperQuant; k:($[isAbs;abs;]1_deltas data)where 1_&':[data within quants]; statCols:`max`min`mean`variance`median`stdev; statCols!(max;min;avg;var;med;dev)@\:k } // @kind function // @category freshFeat // @desc Calculated complexity of time series based on peaks and // troughs in the dataset // @param data {number[]} Numerical data points // @param isAbs {boolean} Whether absolute values should be considered // @return {float} Measure of series complexity // Time series complexity: // http://www.cs.ucr.edu/~eamonn/Complexity-Invariant%20Distance%20Measure.pdf fresh.feat.cidCe:{[data;isAbs] comp:$[not isAbs; data; 0=s:dev data; :0.; (data-avg data)%s ]; sqrt k$k:"f"$1_deltas comp } // @kind function // @category freshFeat // @desc Count of values in data // @param data {number[]} Numerical data points // @return {long} Number of values within the series fresh.feat.count:{[data] count data } // @kind function // @category freshFeat // @desc Values greater than the average value // @param data {number[]} Numerical data points // @return {int} Number of values in series with a value greater than the mean fresh.feat.countAboveMean:{[data] sum data>avg data } // @kind function // 
@category freshFeat // @desc Values less than the average value // @param data {number[]} Numerical data points // @return {int} Number of values in series with a value less than the mean fresh.feat.countBelowMean:{[data] sum data<avg data } // @kind function // @category freshFeat // @desc Ratio of absolute energy by chunk // @param data {number[]} Numerical data points // @param numSeg {long} Number of segments to split data into // @return {dictionary} Sum of squares of each region of the series // split into n segments, divided by the absolute energy fresh.feat.eRatioByChunk:{[data;numSeg] k:((numSeg;0N)#data)%fresh.feat.absEnergy data; (`$"_"sv'string`chunk,'til[numSeg],'numSeg)!k$'k } // @kind function // @category freshFeat // @desc Position of first max relative to the series length // @param data {number[]} Numerical data points // @return {float} Position of the first occurrence of the maximum value in the // series relative to the series length fresh.feat.firstMax:{[data] iMax[data]%count data } // @kind function // @category freshFeat // @desc Position of first min relative to the series length // @param data {number[]} Numerical data points // @return {float} Position of the first occurrence of the minimum value in the // series relative to the series length fresh.feat.firstMin:{[data] iMin[data]%count data } // @kind function // @category freshFeat // @desc Calculate the mean, variance, skew and kurtosis of the // absolute Fourier-transform spectrum of data // @param data {number[]} Numerical data points // @return {dictionary} Spectral centroid, variance, skew and kurtosis fresh.feat.fftAggreg:{[data] a:fresh.i.abso[fresh.i.rfft data]`; l:"f"$til count a; mean:1.,(sum each a*/:3(l*)\l)%sum a; m1:mean 1;m2:mean 2;m3:mean 3;m4:mean 4; variance:m2-m1*m1; cond:variance<.5; skew:$[cond;0n;((m3-3*m1*variance)-m1*m1*m1)%variance xexp 1.5]; kurtosis:$[cond;0n;((m4-4*m1*m3-3*m1)+6*m2*m1*m1)%variance*variance]; `centroid`variance`skew`kurtosis!(m1;variance;skew;kurtosis) } // @kind function // @category freshFeat // @desc Calculate the fast-fourier transform coefficient of a series // @param data {number[]} Numerical data points // @param coeff {int} Coefficients to use // @return {dictionary} FFT coefficient given real inputs and extracting real, // imaginary, absolute and angular components fresh.feat.fftCoeff:{[data;coeff] r:(fresh.i.angle[fx;`deg pykw 1b]`; fresh.i.real[fx]`; fresh.i.imag[fx]`; fresh.i.abso[fx:fresh.i.rfft data]` ); fftKeys:`$"_"sv'string raze(`coeff,/:til coeff),\:/:`angle`real`imag`abs; fftVals:raze coeff#'r,\:coeff#0n; fftKeys!fftVals } // @kind function // @category freshFeat // @desc Check if duplicates present // @param data {number[]} Numerical data points // @return {boolean} Series contains any duplicate values fresh.feat.hasDup:{[data] count[data]<>count distinct data } // @kind function // @category freshFeat // @desc Check for duplicate of maximum value within a series // @param data {number[]} Numerical data points // @return {boolean} Does data contain a duplicate of the maximum value fresh.feat.hasDupMax:{[data] 1<sum data=max data } // @kind function // @category freshFeat // @desc Check for duplicate of minimum value within a series // @param data {number[]} Numerical data points // @return {boolean} Does data contain a duplicate of the minimum value fresh.feat.hasDupMin:{[data] 1<sum data=min data } // @kind function // @category freshFeat // @desc Calculate the relative index of a dataset such that the chosen // quantile of the series' mass lies 
to the left // @param data {number[]} Numerical data points // @param quantile {float} Quantile to check // @return {float} Calculate index fresh.feat.indexMassQuantile:{[data;quantile] n:count data; data:abs data; (1+(sums[data]%sum data)binr quantile)%n } // @kind function // @category freshFeat // @desc Calculate the adjusted G2 Fisher-Pearson kurtosis of a series // @param data {number[]} Numerical data points // @return {float} Adjusted G2 Fisher-Pearson kurtosis fresh.feat.kurtosis:{[data] k*:k:data-avg data; s:sum k; n:count data; ((n-1)%(n-2)*n-3)*(3*1-n)+n*(1+n)*sum[k*k]%s*s } // @kind function // @category freshFeat // @desc Check if the standard deviation of a series is larger than // ratio*(max-min) values // @param data {number[]} Numerical data points // @param ratio {float} Ratio to check // @return {boolean} Is standard deviation larger than ratio times max-min fresh.feat.largestDev:{[data;ratio] dev[data]>ratio*max[data]-min data }
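// Example usage of a few of the features above (illustrative only; assumes the
// library has been loaded so the functions resolve under the .ml namespace):
// q)x:1 3 2 5 4f
// q).ml.fresh.feat.absEnergy x      / sum of squares: 1+9+4+25+16
// 55f
// q).ml.fresh.feat.absSumChange x   / absolute differences 2 1 3 1 summed
// 7f
// q).ml.fresh.feat.countAboveMean x / mean is 3; two values exceed it
// 2i
// q).ml.fresh.feat.hasDup x         / all values distinct
// 0b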
sv ¶ “Scalar from vector” - join strings, symbols, or filepath elements - decode a vector to an atom x sv y sv[x;y] Join¶ Strings¶ Where y is a list of stringsx is a char atom, string, or the empty symbol returns as a string the strings in y joined by x . Where x is the empty symbol ` , the strings are separated by the host line separator: \n on Unix, \r\n on Windows. q)"," sv ("one";"two";"three") / comma-separated "one,two,three" q)"\t" sv ("one";"two";"three") / tab-separated "one\ttwo\tthree" q)", " sv ("one";"two";"three") / x may be a string "one, two, three" q)"." sv string 192 168 1 23 / form IP address "192.168.1.23" q)` sv ("one";"two";"three") / use host line separator "one\ntwo\nthree\n" Symbols¶ Where x is the empty symbol` y is a symbol list returns a symbol atom in which the items of y are joined by periods, i.e. q)` sv `quick`brown`fox `quick.brown.fox q)`$"."sv string `quick`brown`fox `quick.brown.fox Bytes¶ Since 4.1t 2024.01.11, y can be a list of byte vectors, which can be joined by byte(s) x . q)0x03 sv 0x02 vs 0x0102010201 0x0103010301 q)0x0203 sv 0x0203 vs "x"$til 6 0x0001020304 q)0x02 sv (enlist 0x01;enlist 0x01;enlist 0x01) 0x0102010201 Filepath components¶ Where x is the empty symbol` y is a symbol list of which the first item is a file handle returns a file handle where the items of the list are joined, separated by slashes. (This is useful when building file paths.) q)` sv `:/home/kdb/q`data`2010.03.22`trade `:/home/kdb/q/data/2010.03.22/trade If the first item is not a file handle, returns a symbol where the items are joined, separated by . (dot). This is useful for building filenames with a given extension: q)` sv `mywork`dat `mywork.dat vs partition Decode¶ Base to integer¶ Where x and y are numeric atoms or lists, y is evaluated to base x . q)10 sv 2 3 5 7 2357 q)100 sv 2010 3 17 20100317 q)0 24 60 60 sv 2 3 5 7 / 2 days, 3 hours, 5 minutes, 7 seconds 183907 When x is a list, the first number is not used. The calculation is done as: q)baseval:{y wsum reverse prds 1,reverse 1_x} q)baseval[0 24 60 60;2 3 5 7] 183907f Bytes to integer¶ Where x is0x0 y is a vector of bytes of length 2, 4 or 8 returns y converted to the corresponding integer. q)0x0 sv "x" $0 255 / short 255h q)0x0 sv "x" $128 255 -32513h q)0x0 sv "x" $0 64 128 255 / int 4227327 q)0x0 sv "x" $til 8 / long 283686952306183 q)256 sv til 8 / same calculation 283686952306183 Converting non-integers Use File Binary – e.g.: q)show a:0x0 vs 3.1415 0x400921cac083126f q)(enlist 8;enlist "f")1: a /float 3.1415 Bits to integer¶ Where x is0b y is a boolean vector of length 8, 16, 32, or 64 returns y converted to the corresponding integer or (in the case of 8 bits) a byte value. q)0b sv 64#1b -1 q)0b sv 32#1b -1i q)0b sv 16#1b -1h q)0b sv 8#1b 0xff Since 4.1t 2021.09.03, y also supports guids. q)0b sv 10001100011010111000101101100100011010000001010101100000100001000000101000111110000101111000010000000001001001010001101101101000b 8c6b8b64-6815-6084-0a3e-178401251b68 vs encode .Q.j10 (encode binhex), .Q.x10 (decode binhex) .Q.j12 (encode base36), .Q.x12 (decode base36) system ¶ Execute a system command system x system[x] Where x is a string representing a kdb+ system command or operating system shell command, and any parameters to it. Executes the command and returns the result as a list of character vectors. kdb+ system commands¶ Refer to the system commands reference for a full list of available commands. The system command does not include a leading \ . 
q)\l sp.q … q)\a / tables in namespace `p`s`sp q)count \a / \ must be the first character '\ q)system "a" / same command called with system `p`s`sp q)count system "a" / this returns a result 3 Changing working directory¶ In the event of an unexpected change to the working directory, Windows users please note https://devblogs.microsoft.com/oldnewthing/?p=24433 Operating system shell commands¶ As with \ , if the argument is not a q command, it is executed in the shell: q)system "pwd" "/home/guest/q" Binary output The result is expected to be text, and is captured into a list of character vectors. As part of this capture, line feeds and associated carriage returns are removed. This transformation makes it impractical to capture binary data from the result of the system call. Redirecting the output to a file or fifo for explicit ingestion may be appropriate in such cases. Directing output to a file¶ When redirecting output to a file, for efficiency purposes, avoiding using >tmpout needlessly; append a semi-colon to the command. q)system"cat x" is essentially the same as the shell command cat x > tmpout as kdb+ tries to capture the output. So if you do system"cat x > y" under the covers that looks like cat x > y > tmpout Not good. So if you add the semicolon system"cat x > y;" the shell interpreter considers it as two statements cat x > y; > tmpout Capture stderr output¶ You cannot capture the stderr output from the system call directly, but a workaround is / force capture to a file, and cat the file q)system"ls egg > file 2>&1;cat file" "ls: egg: No such file or directory" / try and fails to capture the text q)@[system;"ls egg";{0N!"error - ",x;}] ls: egg: No such file or directory "error - os" tables ¶ List of tables in a namespace tables x tables[x] Where x is a reference to a namespace, returns as a symbol vector a sorted list of the tables in x q)\l sp.q q)tables `. / tables in root namespace `p`s`sp q)tables[] / default is root namespace `p`s`sp q).work.tab:sp / assign table in work namespace q)tables `.work / tables in work ,`tab # Take¶ Select leading or trailing items from a list or dictionary, named entries from a dictionary, or named columns from a table x#y #[x;y] Where x is an int atom or vector, or a tabley is an atom, list, dictionary, table, or keyed table returns y as a list, dictionary or table described or selected by x . # is a multithreaded primitive. Atom or list¶ Where x is an int atom, and y is an atom or list, returns a list of length x filled from y , starting at the front if x is positive and the end if negative. q)5#0 1 2 3 4 5 6 7 8 /take the first 5 items 0 1 2 3 4 q)-5#0 1 2 3 4 5 6 7 8 /take the last 5 items 4 5 6 7 8 If x>count y , y is treated as circular. q)5#`Arthur`Steve`Dennis `Arthur`Steve`Dennis`Arthur`Steve q)-5#`Arthur`Steve`Dennis `Steve`Dennis`Arthur`Steve`Dennis q)3#9 9 9 9 q)2#`a `a`a If x is 0, an empty list is returned. q)trade:([]time:();sym:();price:();size:()) /columns can hold anything q)trade +`time`sym`price`size!(();();();()) q)/idiomatic way to initialize columns to appropriate types q)trade:([]time:0#0Nt;sym:0#`;price:0#0n;size:0#0N) q)trade +`time`sym`price`size!(`time$();`symbol$();`float$();`int$()) Where x is a vector, returns a matrix or higher-dimensional array; count x gives the number of dimensions. q)2 5#"!" "!!!!!" "!!!!!" q)2 3#til 6 (0 1 2;3 4 5) A 2×4 matrix taken from the list `Arthur`Steve`Dennis q)2 4#`Arthur`Steve`Dennis Arthur Steve Dennis Arthur Steve Dennis Arthur Steve Higher dimensions are not always easy to see. 
q)2 3 4#"a" "aaaa" "aaaa" "aaaa" "aaaa" "aaaa" "aaaa" q)show five3d:2 3 4#til 5 0 1 2 3 4 0 1 2 3 4 0 1 2 3 4 0 1 2 3 4 0 1 2 3 q)count each five3d 3 3 q)first five3d 0 1 2 3 4 0 1 2 3 4 0 1 A null in x will cause that dimension to be maximal. q)0N 3#til 10 0 1 2 3 4 5 6 7 8 ,9 Changes since V3.3¶ From V3.4, if x is a list of length 1, the result has a single dimension. q)enlist[2]#til 10 0 1 From V3.4, x can have length greater than 2 – but may not contain nulls. q)(2 2 3#til 5)~((0 1 2;3 4 0);(1 2 3;4 0 1)) 1b q)(enlist("";""))~1 2 0#"a" 1b q)all`domain=@[;1 2;{`$x}]each(#)@'(1 0 2;2 3 0N;0N 2 1;-1 2 3) 1b The effect of nulls in x changed in V3.3. Prior to V3.3: q)3 0N # til 10 (0 1 2 3;4 5 6 7;8 9) q)(10 0N)#(),10 10 q)4 0N#til 9 0 1 2 3 4 5 6 7 8 From V3.3: q)3 0N#til 10 0 1 2 3 4 5 6 7 8 9 q)2 0N#0#0 q)(10 0N)#(),10 `long$() `long$() `long$() `long$() `long$() `long$() `long$() `long$() `long$() ,10 q)4 0N#til 9 0 1 2 3 4 5 6 7 8 Dictionary¶ Leading/Trailing¶ Where x is an int atomy is a dictionary returns x entries from y . q)d:`a`b`c!1 2 3 q)2#d a| 1 b| 2 q)-2#d b| 2 c| 3 Keys¶ Where x is a symbol vectory is a dictionary returns from y entries for x . q)d:`a`b`c!1 2 3 q)`a`b#d a| 1 b| 2 q)enlist[`a]#d a| 1 Table¶ Rows¶ Where x is an int atomy is a table returns x rows from y . q)t:([] name:`Dent`Beeblebrox`Prefect; iq:98 42 126; age:20 22 25) q)2#t name iq age ----------------- Dent 98 20 Beeblebrox 42 22 q)-2#t name iq age ------------------ Beeblebrox 42 22 Prefect 126 25 Not currently supported for partitioned tables. .Q.ind can be used as an alternative to access indices. Columns¶ Where x is a symbol vectory is a table returns column/s x from y . t:([] name:`Dent`Beeblebrox`Prefect; iq:98 42 126; age:20 22 25) q)`name`age#t name age -------------- Dent 20 Beeblebrox 22 Prefect 25 Not currently supported for partitioned tables. Keyed table¶ Where x is a tabley is a keyed table- columns of x are keys ofy returns matching rows, together with the respective keys. This is similar to retrieving multiple records through the square brackets syntax, except Take also returns the keys. q)([]s:`s1`s2)#s s | name status city --| ------------------- s1| smith 20 london s2| jones 10 paris Q for Mortals §8.4.5 Retrieving Multiple Records tan , atan ¶ Tangent and arctangent tan x tan[x] atan x atan[x] Where x is a numeric, returns tan - the tangent of x , taken to be in radians. Integer arguments are promoted to floating point. Null is returned if the argument is null or infinity. - The function is equivalent to {(sin x)%cos x} . atan - the arctangent of x ; that is, the value whose tangent isx . - The result is in radians and lies between \(-\frac{\pi}{2}\) and \(\frac{\pi}{2}\). The range is approximate due to rounding errors. q)tan 0 0.5 1 1.5707963 2 0w / tangent 0 0.5463025 1.557408 3.732054e+07 -2.18504 0n q)atan 0.5 / arctangent 0.4636476 q)atan 42 1.546991 tan and atan are multithreaded primitives. Implicit iteration¶ tan and atan are atomic functions. q)tan (.2;.3 .4) 0.20271 0.3093362 0.4227932 q)atan (.2;.3 .4) 0.1973956 0.2914568 0.3805064 q)tan `x`y`z!3 4#til[12]%10 x| 0 0.1003347 0.20271 0.3093362 y| 0.4227932 0.5463025 0.6841368 0.8422884 z| 1.029639 1.260158 1.557408 1.96476 Domain and range¶ domain: b g x h i j e f c s p m d z n u v t range: f . f f f f f f f . f f f z f f f f til ¶ First x natural numbers til x til[x] Where x is a non-negative integer atom, returns a vector of the first x integers. 
q)til 0 `long$() q)til 1b ,0 q)til 5 0 1 2 3 4 q)til 5f 'type [0] til 5f ^ til and key are synonyms, but the above usage is conventionally reserved to til . til is a multithreaded primitive.
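For example, applied to a non-negative integer, key gives the same result as til :

q)til 5
0 1 2 3 4
q)key 5
0 1 2 3 4
q)(til 5)~key 5
1b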
Using log files: logging, recovery and replication¶ Overview¶ Software or hardware problems can cause a kdb+ server process to fail, possibly resulting in loss of data if not saved to disk at the time of the failure. A kdb+ server can use logging updates to avoid data loss when failures occur. This should not be confused with a file that logs human readable warnings, errors, etc. It refers to a log of instructions to regain state. Automatic handling¶ Overview¶ The automatic log file creation requires little developer work, but without the advantages of the finer level of control that manual log creation provides. Automatic logging captures a message only if it changes the state of the process’ data. Creating a log file¶ Applies only to globals in the default namespace This is not triggered for function-local variables, nor globals that are not in the default namespace, e.g. those prefixed with a dot such as .a.b . This is the same restriction that applies to .z.vs . Logging is enabled by using the -l or -L command-line arguments. This example requires a file trade.q containing instructions to create a trade table: trade:([]time:`time$();sym:`symbol$();price:`float$();size:`int$()) Start kdb+, loading trade.q while enabling recording to trade.log (note: this also uses -p 5001 to allow client connections to port 5001): $ q trade -l -p 5001 Now update messages from clients are logged. For instance: q)/ this is a client q)h:hopen `:localhost:5001 q)h "insert[`trade](10:30:01.000; `intel; 88.5; 1625)" In the server instance, run count trade to check the trade table is now populated with one row. Assume that the kdb+ server process dies. If we now restart it with logging on, the updates logged to disk are not lost: q)count trade 1 Updates done locally in the server process are logged to disk only if they are sent as messages to self The syntax for this uses 0 as the handle: // in server 0 ("insert";`trade; (10:30:01.000; `intel; 88.5; 1625)) Check-pointing / rolling¶ A logging server uses a .log file and a .qdb data file. The command \l checkpoints the .qdb file and empties the log file. However, the checkpoint is path-dependent. Consider the following: /tmp/qtest$ q qtest -l q) A listing of the current directory gives: q)\ls "qtest.log" The system command \l can be used to roll the log file. The current log file is renamed with the qdb extension and a new log file is created. q)\l q)\ls "qtest.log" "qtest.qdb" However, if there is a change of directory within the q session then the *.qdb checkpoint file is placed in the latter directory. For instance: /tmp/qtest$ q qtest -l q)\cd ../newqdir q)\l results in /tmp/qtest$ ls qtest.log /tmp/qtest$ cd ../newqdir /tmp/newqdir$ ls qtest.qdb The simplest solution is to provide a full path to the log file at invocation. /tmp/qtest$ q /tmp/testlog -l q).z.f /tmp/testlog q)\cd ../newqdir q)\l results in /tmp/qtest$ ls . testlog.log testlog.qdb /tmp/qtest$ ls ../newqdir . File read order¶ When you type q logTest -l this reads the data file (.qdb ), log file, and the q script file logTest.q , if present. If any of the three files exists (.q , .qdb , and .log ), they should all be in the same directory. Logging options¶ The -l option is recommended if you trust (or duplicate) the machine where the server is running. The -L option involves an actual disk write (assuming hardware write-cache is disabled). Another option is to use no logging. This is used with test, read-only, read-mostly, trusted, duplicated or cache databases. 
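To restate the distinction above as code, a minimal sketch assuming the trade table and the log-enabled server from the earlier example:

/ in a server started with: q trade -l -p 5001
insert[`trade](10:30:02.000;`intel;88.6;2000i)        / applied locally only: not written to trade.log
0 ("insert";`trade;(10:30:03.000;`intel;88.7;3000i))  / sent as a message to self: written to trade.log

On restart with -l, only the second update is recovered from the log.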
Errors and rollbacks¶

If either message handler (.z.pg or .z.ps) throws an error and the state was changed during that message processing, a rollback is initiated.

Replication¶

Given a logging q process listening on port 5000, e.g. started with

q test -l -p 5000

an additional kdb+ process can replicate that logging process via the -r command-line parameter

q -r :localhost:5000:username:password

If starting these processes from different directories, be sure to specify the absolute path for the logging process, e.g.

q /mylogs/test -l -p 5000

The replicating process will receive this information when it connects. On start-up, the replicating process connects to the logging process, gets the log filename and record count, opens the log file, plays back that count of records from the log file, and continues to receive updates via TCP/IP. Each record is executed via value .

If the replicating process loses its connection to the logging process, you can detect that with .z.pc . To resubscribe to the logging process, restart the replicating process.

Currently, only a single replicating process can subscribe to the primary process. If another kdb+ process attempts to replicate from the primary, the previous replicating process will no longer receive updates. If you need multiple replicating processes, you might like to consider kdb+tick.

Manual handling¶

Overview¶

Function calls and the contents of their parameters can be recorded to a log file, which can then be replayed by a process. This is often used for data recovery. This technique allows more control over actions such as log-file naming conventions, what to log, log-file locations, and the ability to add logic around the log-file lifecycle.

Create a log file¶

To create a log file, do the following.

- Initialize a log file by using set .

q)logfile:hsym `$"qlog";
q)logfile set ();
q)logfilehandle:hopen logfile;

Note that logfile set (); is equivalent to .[logfile;();:;()]; .

An alternative method is to check for pre-existing log files by using key . If the log file does not exist, it is initialized; otherwise it is opened for appending.

q)logfile:hsym `$"qlog";
q)if[not type key logfile;logfile set()]
q)logfilehandle:hopen logfile;

- Close the log file when you are finished logging any messages.

q)hclose logfilehandle

Log writing¶

To record events and messages to a log, you must append a list consisting of a function name, followed by any parameters used. A tickerplant uses this concept to record all messages sent to its clients so they can use the log to recover. It records calls to a function upd , passing as parameters a table name and the table content to append.

For example, calling a function called upd with two parameters, x and y, can be recorded to a file as follows:

q)logfilehandle enlist (`upd;x;y)

When a kdb+ process plays back this log, a function called upd is called with the value of the two parameters. Multiple function calls can also be logged:

q)logfilehandle ((`func1;param1);(`func2;param1;param2))

As log replay calls value to execute, you can also log q code as a string, for example

logfilehandle enlist "upd[22;33]"

Typically only the function calls that update data are written, without logging the function definition. This has the disadvantage of requiring that the recovery process defines the functions (for example, by loading a q script) prior to replaying the log.
The advantages can outweigh the disadvantages, however, by allowing for bug fixes within the function, or by temporarily assigning the function a different definition prior to playback, to provide bespoke logic for data sourced from a log file.

Log rolling¶

A kdb+ process can run 24/7, but a log file may only be relevant for a specific timeframe or event. For example, the default tickerplant creates a new log for each day. You should ensure the current log is closed and a new log created on each such event. A decision must be taken on whether to retain the old files or delete them, taking into account disk usage and which other processes may require them. A naming convention should be used to aid distinction between current log files and any old log files required for retention.

The z namespace provides various functions for system information such as the current date, time, etc. The example below demonstrates naming a log file after the current date:

q)logfile:hsym `$"qlog_",string .z.D;

Replaying log files¶

Streaming-execute over a file is used (for example in kdb+tick) to replay a log file in a memory-efficient manner.

A log file is essentially a list of lists, and each list is read in turn and evaluated by .z.ps (which defaults to value ).

Here, for demonstration purposes, we manually create a log file and play it back through -11! . This is functionally equivalent to doing value each get `:logfile but uses far less memory.

q)`:logfile.2013.12.03 set () / create a new, empty log file
`:logfile.2013.12.03
q)h:hopen `:logfile.2013.12.03 / open it
q)h enlist(`f;`a;10) / append a record
3i
q)h enlist(`f;`b;20) / append a record
3i
q)hclose h / close the file
q)/Define the function that is referenced in those records
q)f:{0N!(x;y)}
q)-11!`:logfile.2013.12.03 / playback the logfile
(`a;10)
(`b;20)
2
q)/ DO NOT DO THIS ON LARGE LOGFILES!!!!
q)/This is the whole purpose of -11!x.
q)value each get `:logfile.2013.12.03
(`a;10)
(`b;20)
`a 10
`b 20

If successful, the number of chunks executed is returned. If the end of the file is corrupt, a badtail error is signalled, which may be partially recovered. In the event that the log file references an undefined function, the function name is signalled as an error. This can be confusing if the missing function name is upd , as it does not reflect the same situation as the license-expiry upd error. For example:

/ Continuing the above example
q)delete f from `.
`.
q)/function f no longer defined, so it signals an error
q)-11!`:logfile.2013.12.03
'f

Replay part of a file¶

Streaming-execute the first n chunks of logfile x , returning the number of chunks if successful: -11!(n;x) .

It is possible to use the above to play back n records from record M onwards.

First create a sample log file, which contains 1000 records as ((`f;0);(`f;1);(`f;2);..;(`f;999)) .

q)`:log set();h:hopen`:log;i:0;do[1000;h enlist(`f;i);i+:1];hclose h;

Then define function f to just print its argument, and skip the first M records. If .z.ps is defined, -11! calls it for each record.

q)m:0;M:750;f:0N!;.z.ps:{m+:1;if[m>M;value x;];};-11!(M+5-1;`:log)
750
751
752
753
754

Replay from corrupt logs¶

Given a valid logfile x , -11!(-2;x) returns the number of chunks. Given an invalid logfile, it returns the number of valid chunks and the length of the valid part.
q)logfile:`:good.log / a non-corrupted logfile
q)-11!(-2;logfile)
26
q)logfile:`:broken.log / a manually corrupted logfile
q)/define a dummy upd function as components are of the form (`upd;data)
q)upd:{[x;y]}
q)-11!logfile
'badtail
q)-11!(-1;logfile)
'badtail
q)hcount logfile
39623
q)-11!(-2;logfile)
26 35634
q)/ 26 valid chunks until position 35634 (out of 39623)
q)-11!(26;logfile)
26

Replacing a corrupt log¶

It can be more efficient to replay from a corrupt file (due to disk usage) than to directly take the good chunks from a bad log to create a new log. The knowledge of how to create a log file and how to replay part of a log file can be combined to convert a file that was previously giving the badtail error. Note that this does not fix the corrupted section; it only removes the corrupted section from the file.

The following example shows converting a bad.log into a good.log by temporarily overriding .z.ps , which is called for each valid chunk (as defined by -11! ). It resets .z.ps to the system default after processing, using \x .

goodfile:hsym `:good.log;
goodfile set ();
goodfilehandle:hopen goodfile;
chunks:first -11!(-2;`:bad.log);
.z.ps:{goodfilehandle enlist x};
-11!(chunks;`:bad.log);
system"x .z.ps";
hclose goodfilehandle;

Alternatively, generic system tools can be used, such as the Unix head command. For example, given that -11!(-2;`:bad.log) reports 2879 valid bytes:

head -c 2879 bad.log > good.log

github.com/simongarland/tickrecover/rescuelog.q contains some helper functions for recovering data from logs.

Q for Mortals §13.2.6 Logging

-l and -L

prodrive11/log4q A concise implementation of logger for q applications
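Putting the manual-handling pieces together, a minimal end-to-end sketch of the log lifecycle described above (the upd definition and the qlog_ name prefix are illustrative):

upd:{[t;x] 0N!(t;x)}                    / stand-in update function; a real one would insert x into table t
logfile:hsym`$"qlog_",string .z.D       / dated log name, as in the log-rolling section
if[not type key logfile;logfile set()]  / initialize only if the file does not already exist
h:hopen logfile
h enlist(`upd;`trade;(10:30:01.000;`intel;88.5;1625))  / record the call...
upd[`trade;(10:30:01.000;`intel;88.5;1625)]            / ...and apply it locally
hclose h
/ recovery: (re)define upd, then stream-execute the log
-11!logfile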
reciprocal ¶ Reciprocal of a number reciprocal x reciprocal[x] Returns the reciprocal of numeric x as a float. q)reciprocal 0 0w 0n 3 10 0w 0 0n 0.3333333 0.1 q)reciprocal 1b 1f reciprocal is a multithreaded primitive. Implicit iteration¶ reciprocal is an atomic function. q)reciprocal (12;13 14) 0.08333333 0.07692308 0.07142857 q)k:`k xkey update k:`abc`def`ghi from t:flip d:`a`b!(10 21 3;4 5 6) q)reciprocal d a| 0.1 0.04761905 0.3333333 b| 0.25 0.2 0.1666667 q)reciprocal t a b -------------------- 0.1 0.25 0.04761905 0.2 0.3333333 0.1666667 q)reciprocal k k | a b ---| -------------------- abc| 0.1 0.25 def| 0.04761905 0.2 ghi| 0.3333333 0.1666667 Domain and range¶ domain b g x h i j e f c s p m d z n u v t range f . f f f f f f f . p f f z f f f f Range: fpz reverse ¶ Reverse the order of items of a list or dictionary reverse x reverse[x] Returns the items of x in reverse order. q)reverse 1 2 3 4 4 3 2 1 On atoms, returns the atom; on dictionaries, reverses the keys; and on tables, reverses the columns. q)d:`a`b!(1 2 3;"xyz") q)reverse d b| x y z a| 1 2 3 q)reverse each d a| 3 2 1 b| z y x q)reverse flip d a b --- 3 z 2 y 1 x rotate ¶ Shift the items of a list to the left or right x rotate y rotate[x;y] Where x is an integer atomy is a list returns y rotated by x items. Rotation is to the ‘left’ for positive x , to the ‘right’ for negative x . q)2 rotate 2 3 5 7 11 / rotate a list 5 7 11 2 3 q)-2 rotate 2 3 5 7 11 7 11 2 3 5 q)t:([]a:1 2 3;b:"xyz") q)1 rotate t / rotate a table a b --- 2 y 3 z 1 x q)0 1 -1 rotate' 3 4#til 12 0 1 2 3 5 6 7 4 11 8 9 10 rotate is a uniform function. save , rsave ¶ Write global data to file or splayed to a directory save ¶ Write a global variable to file and optionally format data save x save[x] Where x is a symbol atom or vector of the form [path/to/]v[.ext] in which v is the name of a global variablepath/to/ is a file path (optional). If a file- exists, it is overwritten - does not exist, it is created, with any required parent directories .ext is a file extension (optional) which effects the file content format. Options are:(none) for binary formatcsv for comma-separated valuestxt for plain text)xls for Excel spreadsheet formatxml for Extensible Markup Language (XML))json for JavaScript Object Notation (JSON) Since v3.2 2014.07.31. writes global variable/s v etc. to file and returns the filename/s. .h (data serialization tools) Examples¶ q)t:([]x:2 3 5; y:`ibm`amd`intel; z:"npn") q)save `t / binary `:t q)read0 `:t "\377\001b\000c\013\000\003\000\000\000x\000y\000z\000\000\.. "\000\003\000\000\000npn" q)save `t.csv / CSV `:t.csv q)read0 `:t.csv "x,y,z" "2,ibm,n" "3,amd,p" "5,intel,n" q)save `t.txt / text `:t.txt q)read0 `:t.txt / columns are tab separated "x\ty\tz" "2\tibm\tn" "3\tamd\tp" "5\tintel\tn" q)save `t.xls / Excel `:t.xls q)read0 `:t.xls "<?xml version=\"1.0\"?><?mso-application progid=\"Excel.Sheet\"?>" "<Workbook xmlns=\"urn:schemas-microsoft-com:office:spreadsheet\" x... q)save `t.xml / XML `:t.xml q)read0 `:t.xml / tab separated "<R>" "<r><x>2</x><y>ibm</y><z>n</z></r>" "<r><x>3</x><y>amd</y><z>p</z></r>" "<r><x>5</x><y>intel</y><z>n</z></r>" "</R>" q)save `$"/tmp/t" / file path `:/tmp/t q)a:til 6 q)b:.Q.a q)save `a`b / multiple files `:a`:b Use set instead to save - a variable to a file of a different name - local data rsave ¶ Write a table splayed to a directory rsave x rsave[x] Where x is a table name as a symbol atom, saves the table, in binary format, splayed to a directory of the same name. 
The table must be fully enumerated and not keyed. If the file - exists, it is overwritten - does not exist, it is created, with any required parent directories Limits¶ The usual and more general way of doing this is to use set , which allows the target directory to be specified. The following example uses the table sp created using the script sp.q q)\l sp.q q)rsave `sp / save splayed table `:sp/ q)\ls sp ,"p" "qty" ,"s" q)`:sp/ set sp / equivalent to rsave `sp `:sp/ set , .h.tx , .Q.dpft (save table), .Q.Xf (create file) File system Q for Mortals §11.2 Save and Load on Tables Q for Mortals §11.3 Splayed Tables select ¶ Select all or part of a table, possibly with new columns select is a qSQL query template and varies from regular q syntax. For the Select operator ? , see Functional SQL Syntax¶ Below, square brackets mark optional elements. select [Lexp] [ps] [by pb] from texp [where pw] where Lexp Limit expression ps Select phrase pb By phrase texp Table expression pw Where phrase The select query returns a table for both call-by-name and call-by-value. Since 4.1t 2021.03.30, select from partitioned tables maps relevant columns within each partition in parallel when running with secondary threads. Minimal form¶ The minimal form of the query returns the evaluated table expression. q)tbl:([] id:1 1 2 2 2;val:100 200 300 400 500) q)select from tbl id val ------ 1 100 1 200 2 300 2 400 2 500 Select phrase¶ The Select phrase specifies the columns of the result table, one per subphrase. Absent a Select phrase, all the columns of the table expression are returned. (Unlike SQL, no * wildcard is required.) q)t:([] c1:`a`b`c; c2:10 20 30; c3:1.1 2.2 3.3) q)select c3, c1 from t c3 c1 ------ 1.1 a 2.2 b 3.3 c q)select from t c1 c2 c3 --------- a 10 1.1 b 20 2.2 c 30 3.3 A computed column in the Select phrase cannot be referred to in another subphrase. Limit expression¶ To limit the returned results you can include a limit expression Lexp select[n] select[m n] select[order] select[n;order] select distinct where n limits the result to the firstn rows of the selection if positive, or the lastn rows if negativem is the number of the first row to be returned: useful for stepping through query results one block ofn at a timeorder is a column (or table) and sort order: use< for ascending,> for descending select[3;>price] from bids where sym=s,size>0 This would return the three best prices for symbol s with a size greater than 0. This construct works on in-memory tables but not on memory-mapped tables loaded from splayed or partitioned files. Performance select[n] applies the Where phrase on all rows of the table, and takes the first n rows, before applying the Select phrase. So if you are paging it is better to store the result of the query somewhere and select[n,m] from there, rather than run the filter again. select distinct returns only unique records in the result. By phrase¶ A select query that includes a By phrase returns a keyed table. The key columns are those in the By phrase; values from other columns are grouped, i.e. nested. q)k:`a`b`a`b`c q)v:10 20 30 40 50 q)select c2 by c1 from ([]c1:k;c2:v) c1| c2 --| ----- a | 10 30 b | 20 40 c | ,50 q)v group k / compare the group keyword a| 10 30 b| 20 40 c| ,50 Unlike in SQL, columns in the By phrase - are included in the result and need not be specified in the Select phrase - can include computed columns The ungroup keyword reverses the grouping, though the original order is lost. 
q)ungroup select c2 by c1 from ([]c1:k;c2:v) c1 c2 ----- a 10 a 30 b 20 b 40 c 50 q)t:([] name:`tom`dick`harry`jack`jill;sex:`m`m`m`m`f;eye:`blue`green`blue`blue`gray) q)t name sex eye --------------- tom m blue dick m green harry m blue jack m blue jill f gray q)select name,eye by sex from t sex| name eye ---| ------------------------------------------ f | ,`jill ,`gray m | `tom`dick`harry`jack `blue`green`blue`blue q)select name by sex,eye from t sex eye | name ---------| --------------- f gray | ,`jill m blue | `tom`harry`jack m green| ,`dick A By phrase with no Select phrase returns the last row in each group. q)select by sex from t sex| name eye ---| --------- f | jill gray m | jack blue Where there is a By phrase, and no sort order is specified, the result is sorted ascending by its key. Cond¶ Cond is not supported inside query templates: see qSQL. delete , exec , update qSQL, Functional SQL Q for Mortals §9.3 The select Template # Set Attribute¶ x#y #[x;y] Where y is a list or dictionary and atom x is - an item from the list `s`u`p`g , returnsy with the corresponding attribute set - the null symbol ` , returnsy with all attributes removed Attributes: `s#2 2 3 sorted items in ascending order list, dict, table `u#2 4 5 unique each item unique list `p#2 2 1 parted common values adjacent simple list `g#2 1 2 grouped make a hash table list Setting or unsetting an attribute other than sorted causes a copy of the object to be made. s , u and g are preserved on append in memory, if possible. Only s is preserved on append to disk. q)t:([1 2 4]y:7 8 9);`s#t;attr each (t;key t) ``s Applying p attribute is faster and uses less memory since 4.1t 2023.01.20. Attribute types¶ Sorted¶ The sorted attribute can be set on a simple or mixed list, a dictionary, table, or keyed table. q)`s#1 2 3 `s#1 2 3 q)`#`s#1 2 3 1 2 3 Setting the sorted attribute on an unsorted list signals an error. q)`s#3 2 1 's-fail [0] `s#3 2 1 ^ Setting/unsetting the sorted attribute on a list which is already sorted will not cause a copy to be made, and hence will affect the original list in-place. Setting the sorted attribute on a table sets the parted attribute on the first column. q)meta `s#([] ti:00:00:00 00:00:01 00:00:03; v:98 98 100.) c | t f a --| ----- ti| v p v | f Setting the sorted attribute on a dictionary or table, where the key is already in sorted order, in order to obtain a step-function, sets the sorted attribute for the key but copies the outer object. Unique¶ The unique attribute can be set on simple and mixed lists where all items are distinct. Grouped and parted¶ Attributes parted and grouped are useful for simple lists (where the datatype has an integral underlying value) in memory with a lot of repetition. The parted attribute asserts all common values in the list are adjacent. The grouped attribute causes kdb+ to create and maintain an index (hash table). If the data can be sorted such that p can be set, it effects better speedups than grouped, both on disk and in memory. The grouped attribute implies an entry’s data may be dispersed – and possibly slow to retrieve from disk. The parted attribute is removed by any operation on the list. q)`p#2 2 2 1 1 4 4 4 4 3 3 `p#2 2 2 1 1 4 4 4 4 3 3 q)2,`p#2 2 2 1 1 4 4 4 4 3 3 2 2 2 2 1 1 4 4 4 4 3 3 The grouped attribute is presently unsuitable for cycling through a small window of a domain, due to the retention of keys backing the attribute. 
q)v:`g#1#0 q)do[1000000;v[0]+:1] q)0N!.Q.w[]`used; v:`g#`#v; .Q.w[]`used 74275344 332368 Errors¶ s-fail not sorted ascending type tried to set u, p or g on wrong type u-fail not unique or not parted Performance¶ Some q functions use attributes to work faster: - Where-clauses in select andexec templates run faster withwhere = ,where in andwhere within - Searching: bin ,distinct , Find andin (if the right argument has an attribute) - Sorting: iasc andidesc - Dictionaries: group Setting attributes consumes resources and is likely to improve performance only on lists with more than a million items. Test! Applying an attribute to compressed data on disk decompresses it. attr Metadata Q for Mortals §8.8 Attributes show ¶ Format and display at the console. show x show[x] Formats x and writes it to the console; returns the identity function (::) . q)a:show til 5 0 1 2 3 4 q)a~(::) 1b Display intermediate values q)f:{a:x<5;sum a} q)f 2 3 5 7 3 3 q)f:{show a:x<5;sum a} / same function, showing value of a q)f 2 3 5 7 3 11001b 3
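Returning to the Performance note in the Set Attribute section above, a minimal test sketch (timings are omitted here, as they are machine-dependent; the data is illustrative):

q)r:10000000?1000000
q)v:`#asc r            / sorted values, attribute removed
q)sv:asc r             / same values; asc returns them with `s# set
q)\t:100 123456 in v   / linear search
q)\t:100 123456 in sv  / can exploit the sorted attribute (binary search)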
key ¶ key x key[x] Key of a dictionary¶ Where x is a dictionary (or the name of one), returns its key. q)D:`q`w`e!(1 2;3 4;5 6) q)key D `q`w`e q)key `D `q`w`e A namespace is a dictionary. q)key `. `D`daily`depth`mas`sym`date`nbbo... q)key `.q ``neg`not`null`string`reciprocal`floor`ceiling`signum`mod`xbar`xlog`and`or`ea.. So is the default namespace. q)key ` / namespaces in the default namespace `q`Q`h`o`util`rx q)key `. / objects in the default namespace `a`s`b`t`deltas0`x`c Keys of a keyed table¶ Where x is a keyed table (or the name of one), returns its key column/s. q)K:([s:`q`w`e]g:1 2 3;h:4 5 6) q)key K s - q w e Files in a folder¶ Where x is a directory handle returns a list of objects in the directory, sorted ascending. q)key`:c:/q `c`profile.q`sp.q`trade.q`w32 To select particular files, use like q)f:key`:c:/q q)f where f like "*.q" `profile.q`sp.q`trade.q Whether a folder exists¶ An empty folder returns an empty symbol vector; a non-existent folder returns an empty general list. Whether a file exists¶ Where x is a file handle, returns the descriptor if the file exists, otherwise an empty list. q)key`:c:/q/sp.q `:c:/q/sp.q q)key`:c:/q/notfound.q () Note that - an empty directory returns an empty symbol vector - a non-existent directory returns an empty general list q)\ls foo ls: cannot access foo: No such file or directory 'os q)()~key`:foo 1b q)\mkdir foo q)key`:foo `symbol$() Whether a name is defined¶ Where x is a symbol atom that is not a file or directory descriptor, nor the name of a dictionary or keyed table, returns the original symbol if a variable of that name exists, otherwise an empty list. The name is interpreted relative to the current context if not fully qualified. q)()~key`a /now you don't see it 1b q)a:1 q)key`a /now you see it `a q)\d .foo q.foo)key`a /now you don't q.foo)a:1 2!3 4 q.foo)key`a /this one has keys 1 2 q.foo)key`.foo.a /fully qualified name 1 2 q.foo)key`..a /fully qualified name `..a q.foo)\d . q)key`a `a q)key`.foo.a 1 2 q)key`..a `..a Target of a foreign key¶ Where x is a foreign-key column returns the name of the foreign-key table. q)f:([f:1 2 3]v:`a`b`c) q)x:`f$3 2 q)key x `f Type of a vector¶ Where x is a vector returns the name of its type as a symbol. q)key each ("abc";101b;1 2 3h;1 2 3;1 2 3;1 2 3f) `char`boolean`short`int`long`float q)key 0#5 `long Enumerator of a list¶ Where x is an enumerated list returns the name of the enumerating list. q)ids:`a`b`c q)x:`ids$`a`c q)key x `ids til ¶ Where x is a non-negative integer returns the same result as til . q)key 10 0 1 2 3 4 5 6 7 8 9 keys , xkey ¶ Get or set key column/s of a table keys ¶ Key column/s of a table keys x keys[x] Where x is a table (by value or reference), returns as a symbol vector the primary key column/s of x – empty if none. q)\l trade.q / no keys q)keys trade `symbol$() q)keys`trade `symbol$() q)`sym xkey`trade / define a key q)keys`trade ,`sym xkey ¶ Set specified columns as primary keys of a table x xkey y xkey[x;y] Where symbol atom or vector x lists columns in table y , which is passed by - value, returns - reference, updates y with x set as the primary keys. 
q)\l trade.q q)keys trade `symbol$() / no primary key q)`sym xkey trade / return table with primary key sym sym| time price size ---| ----------------------- a | 09:30:00.000 10.75 100 q)keys trade / trade has not changed `symbol$() q)`sym xkey `trade / pass trade by reference updates the table in place `trade q)keys trade / sym is now primary key of trade ,`sym Enkey, Unkey .Q.ff (append columns) Dictionaries, Tables, Metadata < Less Than <= Up To¶ x<y <[x;y] x<=y <=[x;y] Returns 1b where the underlying value of x is less than (or up to) that of y . q)(3;"a")<(2 3 4;"abc") 001b 000b q)(3;"a")<=(2 3 4;"abc") 011b 111b With booleans: q)0 1 </:\: 0 1 01b 00b q)0 1 <=/:\: 0 1 11b 01b Implicit iteration¶ Less Than and Up To are atomic functions. q)(10;20 30)<(50 -20;5) 10b 00b They apply to dictionaries and tables. q)k:`k xkey update k:`abc`def`ghi from t:flip d:`a`b!(10 -21 3;4 5 -6) q)d<=5 a| 011b b| 111b q)t<5 a b --- 0 1 1 0 1 1 q)k<5 k | a b ---| --- abc| 0 1 def| 1 0 ghi| 1 1 Range and domain¶ b g x h i j e f c s p m d z n u v t ---------------------------------------- b | b . b b b b b b b . b b b b b b b b g | . b . . . . . . . . . . . . . . . . x | b . b b b b b b b . b b b b b b b b h | b . b b b b b b b . b b b b b b b b i | b . b b b b b b b . b b b b b b b b j | b . b b b b b b b . b b b b b b b b e | b . b b b b b b b . b b b b b b b b f | b . b b b b b b b . b b b b b b b b c | b . b b b b b b b . b b b b b b b b s | . . . . . . . . . b . . . . . . . . p | b . b b b b b b b . b b b b b b b b m | b . b b b b b b b . b b b . . . . . d | b . b b b b b b b . b b b b . . . . z | b . b b b b b b b . b . b b b b b b n | b . b b b b b b b . b . . b b b b b u | b . b b b b b b b . b . . b b b b b v | b . b b b b b b b . b . . b b b b b t | b . b b b b b b b . b . . b b b b b Range: b & Lesser, and ¶ Lesser of two values; logical AND x & y &[x;y] x and y and[x;y] Returns the lesser of the underlying values of x and y . q)2&3 2 q)1010b and 1100b /logical AND with booleans 1000b q)"sat"&"cow" "cat" & is a multithreaded primitive. Flags¶ Where x and y are both flags, Lesser is logical AND. Use and for flags While Lesser and and are synonyms, it helps readers to apply and only and wherever flag arguments are expected. There is no performance implication. Dictionaries and keyed tables¶ Where x and y are a pair of dictionaries or keyed tables their minimum is equivalent to upserting y into x where the values of y are less than those in x . q)show a:([sym:`ibm`msoft`appl`goog]t:2017.05 2017.09 2015.03 2017.11m) sym | t -----| ------- ibm | 2017.05 msoft| 2017.09 appl | 2015.03 goog | 2017.11 q)show b:([sym:`msoft`goog`ibm]t:2017.08 2017.12 2016.12m) sym | t -----| ------- msoft| 2017.08 goog | 2017.12 ibm | 2016.12 q)a&b sym | t -----| ------- ibm | 2016.12 msoft| 2017.08 appl | 2015.03 goog | 2017.11 Mixed types¶ Where x and y are of different types the lesser of their underlying values is returned as the higher of the two types. q)98&"c" "b" Implicit iteration¶ Lesser and and are atomic functions. q)(10;20 30)&(2;3 4) 2 3 4 They apply to dictionaries and tables. q)k:`k xkey update k:`abc`def`ghi from t:flip d:`a`b!(10 -21 3;4 5 -6) q)d&5 a| 5 -21 3 b| 4 5 -6 q)d&`b`c!(10 20 30;1000*1 2 3) / upsert semantics a| 10 -21 3 b| 4 5 -6 c| 1000 2000 3000 q)t&5 a b ------ 5 4 -21 5 3 -6 q)k&5 k | a b ---| ------ abc| 5 4 def| -21 5 ghi| 3 -6 Domain and range¶ b g x h i j e f c s p m d z n u v t ---------------------------------------- b | b . x h i j e f c . p m d z n u v t g | . . . . . . . . . . . . 
. . . . . . x | x . x h i j e f c . p m d z n u v t h | h . h h i j e f c . p m d z n u v t i | i . i i i j e f c . p m d z n u v t j | j . j j j j e f c . p m d z n u v t e | e . e e e e e f c . p m d z n u v t f | f . f f f f f f c . p m d z n u v t c | c . c c c c c c c . p m d z n u v t s | . . . . . . . . . . . . . . . . . . p | p . p p p p p p p . p p p p n u v t m | m . m m m m m m m . p m d . . . . . d | d . d d d d d d d . p d d z . . . . z | z . z z z z z z z . p . z z n u v t n | n . n n n n n n n . n . . n n n n n u | u . u u u u u u u . u . . u n u v t v | v . v v v v v v v . v . . v n v v t t | t . t t t t t t t . t . . t n t t t Range: bcdefhijmnptuvxz or , | , Greater, max , min Comparison, Logic Q for Mortals §4.5 Greater and Lesser like ¶ Whether text matches a pattern x like y like[x;y] Where x is a symbol or stringy is a pattern as a string returns a boolean: whether x matches the pattern of y . q)`quick like "qu?ck" 1b q)`brown like "br[ao]wn" 1b q)`quickly like "quick*" 1b Absent pattern characters in y , like is equivalent to {y~string x} . q)`quick like "quick" 1b q)`quick like "quickish" 0b Implicit iteration¶ like applies to lists of strings or symbols; and to dictionaries with them as values. q)`brawn`brown like "br[^o]wn" 10b q)(`a`b`c!`quick`brown`fox)like "brown" a| 0 b| 1 c| 0 ss , ssr , Regular expressions in q, Strings Using regular expressions
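As a small illustration (the table and pattern here are hypothetical), like combines naturally with a where clause to filter rows by pattern:

q)t:([]sym:`appl`msoft`goog`ibm;px:101.2 45.3 98.7 33.1)
q)select from t where sym like "*oo*"
sym  px
---------
goog 98.7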
Application, projection, and indexing¶ Values¶ Everything in q is a value, and almost all values can be applied. - A list can be applied to its indexes to get its items. - A list with an elided item or items can be applied to a fill item or list of items - A dictionary can be applied to its keys to get its values. - A matrix can be applied its row indexes to get its rows; or to its row and column indexes to get its items. - A table can be applied to its row indexes to get its tuples; to its column names to get its columns; or to its row indexes and column names to get its items. - A function (operator, keyword, or lambda) can be applied to its argument/s to get a result. - A file or process handle can be applied to a string or parse tree The domain of a function is all valid values of its argument/s; its range is all its possible results. For example, the domain of Add is numeric and temporal values, as is its range. By extension, - the domain of a list is its indexes; its range, its items - the domains of a matrix are its row and column indexes - the domain of a dictionary is its keys; its range is its values - the domains of a table are its row indexes and column names Atoms need not apply The only values that cannot be applied are atoms that are not file or process handles, nor the name of a variable or lambda. In what follows, value means applicable value. Application and indexing Most programming languages treat the indexing of arrays and the application of functions as separate. Q conflates them. This is deliberate, and fundamental to the design of the language. It also provides useful alternatives to control structures. See Application and indexing below. Q for Mortals §6.5 Everything Is a Map Application¶ To apply a value means - to evaluate a function on its arguments - to select items from a list or dictionary - to write to a file or process handle The syntax provides several ways to apply a value. Bracket application¶ All values can be applied with bracket notation. q)"abcdef"[1 4 3] "bed" q)count[1 4 3] 3 q){x*x}[4] 16 q)+[2;3] 5 q)d:`cat`cow`dog`sheep!`chat`vache`chien`mouton q)d[`cow`sheep] `vache`mouton q)ssr["Hello word!";"rd";"rld"] "Hello world!" q)m:("abc";"def";"ghi";"jkl") / a matrix q)m[3 1] / m is a list (unary) "jkl" "def" q)m[0;2 0 1] / and also a matrix (binary) "cab" q)main[] / nullary lambda Infix application¶ Operators, and some binary keywords and derived functions can also be applied infix. q)2+3 / operator 5 q)2 3 4 5 mod 2 / keyword 0 1 0 1 q)1000+\2 3 4 / derived function 1002 1005 1009 Apply operator¶ Any applicable value can be applied by the Apply operator to a list of its arguments: one item per argument. q)(+) . 2 3 / apply + to a list of its 2 arguments 5 q).[+;2 3] / apply + to a list of its 2 arguments 5 q)ssr . ("Hello word!";"rd";"rld") / apply ssr to a list of its 3 arguments "Hello world!" q)count . enlist 1 4 3 / apply count to a list of its 1 argument 3 Apply At operator¶ Lists, dictionaries and unary functions can be applied more conveniently with the Apply At operator. q)"abcdef"@1 4 3 "bed" q)@[count;1 4 3] 3 q)d @ `cow`sheep / dictionary to its keys `vache`mouton q)@[d;`cow`sheep] / dictionary to its keys `vache`mouton Apply At is syntactic sugar: x@y is equivalent to x . enlist y . Prefix application¶ Lists, dictionaries and unary keywords and lambdas can also be applied prefix. As this is equivalent to simply omitting the Apply At operator, the @ is mostly redundant. 
q)"abcdef" 1 4 3 "bed" q)count 1 4 3 3 q){x*x}4 16 q)d`cow`sheep `vache`mouton Postfix application¶ Iterators are unary operators that can be (and almost always are) applied postfix. They derive functions from their value arguments. Some derived functions are variadic: they can be applied either unary or binary. q)+\[2 3 4] / derived fn applied unary 2 5 9 q)+\[1000;2 3 4] / derived fn applied binary 1002 1005 1009 q)count'[("the";"quick";"brown";"fox")] / derived fn applied unary 3 5 5 3 Postfix yields infix. Functions derived by applying an iterator postfix have infix syntax – no matter how many arguments they take. Derived functions +\ and count' have infix syntax. They can be applied unary by parenthesizing them. q)(+\)2 3 4 100 1005 1009 q)(count')("the";"quick";"brown";"fox") 3 5 5 3 Application syntax¶ rank bracket other of f notation Apply Apply At syntax note ................................................................................ 0 f[] f . enlist(::) f@(::) 1 f[x] f . enlist x f@x f x, x f prefix, postfix 2 f[x;y] f . (x;y) x f y infix 3-8 f[x;y;z;…] f . (x;y;z;…) Long right scope¶ Values applied prefix or infix have long right scope. In other words: When a unary value is applied prefix, its argument is everything to its right. q)sqrt count "It's about time!" 4 When a binary value is applied infix, its right argument is everything to its right. q)7 * 2 + 4 42 Republic of values There is no precedence among values. In 7*2+4 the right argument of * is the result of evaluating the expression on its right. This rule applies without exception. Iterators¶ The iterators are almost invariably applied postfix. q)+/[17 13 12] 42 In the above, the Over iterator / is applied postfix to its single argument + to derive the function +/ (sum). An iterator applied postfix has short left scope. That is, its argument is the value immediately to its left. For the Case iterator that value is an int vector. An iterator’s argument may itself be a derived function. q)txt:(("Now";"is";"the";"time");("for";"all";"good";"folk")) q)txt "Now" "is" "the" "time" "for" "all" "good" "folk" q)count[txt] 2 q)count'[txt] 4 4 q)count''[txt] 3 2 3 4 3 3 4 4 In the last example, the derived function count' is the argument of the second ' (Each). Only iterators can be applied postfix. Apply/Index and Apply/Index At for how to apply functions and index lists Rank and syntax¶ The rank of a value is the number of - arguments it evaluates, if it is a function - indexes required to select an atom, if it is a list or dictionary A value is variadic if it can be used with more than one rank. All matrixes and some derived functions are variadic. q)+/[til 5] / unary 10 q)+/[1000000;til 5] / binary 1000010 Rank is a semantic property, and is independent of syntax. This is a ripe source of confusion. Postfix yields infix¶ The syntax of a derived function is determined by the application that produced it. The derived function +/ is variadic but has infix syntax. Applying it infix is straightforward. q)1000000+/til 5 1000010 How then to apply it as a unary? Bracket notation ‘overrides’ infix syntax. q)+/[til 5] / unary 10 q)+/[1000000;til 5] / binary 1000010 Or isolate it with parentheses. It remains variadic. q)(+/)til 5 / unary 10 q)(+/)[1000000;til 5] / binary 1000010 The potential for confusion is even greater when the argument of a unary operator is a unary function. Here the derived function is unary – but it is still an infix! Parentheses or brackets can save us. q)count'[txt] 4 4 q)(count')txt 4 4 Or a keyword. 
q)count each txt 4 4 Conversely, if the unary operator is applied not postfix but with bracket notation, the derived function is not an infix. But it can still be variadic. q)'[count]txt / unary derived function, applied prefix 4 4 q)/[+]til 5 / oops, a comment q);/[+]til 5 / unary derived function, applied prefix 10 q);\[+][til 5] / variadic derived function: applied unary 0 1 3 6 10 q);\[+][1000;til 5] / variadic derived function: applied binary 1000 1001 1003 1006 1010 q)1000/[+]til 5 / but not infix 'type [0] 1000/[+]til 5 ^ Applying a unary operator with bracket notation is unusual and discouraged. Projection¶ When a value of rank \(n\) is applied to \(m\) arguments and \(m<n\), the result is a projection of the value onto the supplied arguments (indexes), now known as the projected arguments or indexes. In the projection, the values of projected arguments (or indexes) are fixed. The rank of the projection is \(n-m\). q)double:2* q)double 5 / unary 10 q)halve:%[;2] q)halve[10] / unary 5 q)f:{x+y*z} / ternary q)f[2;3;4] 14 q)g:f[2;;4] q)g 3 / unary 14 q)(f . 2 3) 4 14 q)l:("Buddy can you spare";;"?") q)l "a dime" / unary "Buddy can you spare" "a dime" "?" q)m:("The";;;"fox") q)m["quick";"brown"] / binary "The" "quick" "brown" "fox" The function definition in a projection is set at the time of projection. If the function is subsequently redefined, the projection is unaffected. q)f:{x*y} q)g:f[3;] / triple q)g 5 15 q)f:{x%y} q)g 5 / still triple 15 Make projections explicit When projecting a function onto an argument list, make the argument list full-length. This is not always necessary but it is good style, because it makes it clear the value is being projected, not applied. q)foo:{x+y+z} q)goo:foo[2] / discouraged q)goo:foo[2;;] / recommended You could reasonably make an exception for operators and keywords, where the rank is well known. q)f:?["brown"] q)f "fox" 5 2 5 q)g:like["brown"] q)g "\*ow\*" 1b When projecting a variadic function the argument list must always be full-length. Since 4.1t 2021.12.07 projection creation from a lambda/foreign results in a rank error if too many parameters are defined, e.g. q){x}[;1] 'rank Q for Mortals §6.4 Projection Currying Applying a list with elided items¶ A list with elided items can be applied as if it were a function of the same rank as the number of elided items. q)("the";"quick";;"fox")"brown" "the" "quick" "brown" "fox" q)("the";"quick";;"fox") @ "brown" "the" "quick" "brown" "fox" q)("the";;;"fox") . ("quick";"brown") "the" "quick" "brown" "fox" This is subject to the same limitation as function notation. If there are more than eight elided items, a rank error is signalled. Indexing¶ Indexing a list employs the same syntax as applying a function to arguments and works similarly. q)show m:4 3#.Q.a "abc" "def" "ghi" "jkl" q)m[3][1] "k" q)m[3;1] "k" q)m[3 1;1] "ke" q)m[3 1;] / eliding an index means all positions "jkl" "def" q)m[3 1] / trailing indexes can be elided "jkl" "def" q)m 3 1 / brackets can be elided for a single index "jkl" "def" q)m @ 3 1 / Index At (top level) "jkl" "def" q)m . 3 1 / Index (at depth) "k" q)m . (3 1;1) / Index (at depth) "ke" Indexing out of bounds¶ Indexing a list at a non-existent position returns a null of the type of the first item/s. 
q)(til 5) 99 0N q)(`a`b`c!1.414214 2.718282 3.141593) `x 0n q)t name dob sex ------------------- dick 1980.05.24 m jane 1990.09.03 f q)t 2 name| ` dob | 0Nd sex | ` q)kt name city | eye sex ----------| --------- Tom NYC | green m Jo LA | blue f Tom Lagos| brown m q)kt `Jack`London eye| sex| The thing and the name of the thing¶ What’s in a name? That which we call a rose By any other name would smell as sweet; —Romeo and Juliet In all of the above you can use the name of a value (as a symbol) as an alternative. q)f:{x+y*3} q)f[5;3] / the rose 14 q)`f[5;3] / the name of the rose 14 q)`f . 5 3 14 q)g:`f[5;] q)`g 3 14 This applies to values you define in the default or other namespaces. It does not apply to system names, nor to names local to lambdas. Application and indexing¶ The conflation of application and indexing is deliberate and useful. q)(sum;dev;var)[1;til 5] 1.414214 Above, the list of three keywords is applied to (indexed by) the first argument, selecting dev , which is then applied to the second argument, til 5 . Q for Mortals §6.8 General Application
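As a further sketch of how this conflation can stand in for a control structure (the function and data here are hypothetical), a computed value can be used to index a list of results instead of branching:

q)classify:{`negative`zero`positive 1+signum x}  / index a symbol list rather than branch with $[...]
q)classify -5 0 42
`negative`zero`positive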
Multithreaded primitives¶

To complement existing explicit parallel computation facilities (peach), kdb+ 4.0 introduces implicit, within-primitive parallelism. It is able to exploit internal parallelism of the hardware – in-memory, with modern multi-channel memory architectures, and on-disk, e.g. making use of SSD internal parallelism.

/ count words, in-cpu cache
q)a:read1`:big.txt;st:{value"\\s ",string x;value y}
q)f:{sum 0b>':max 0x0a0d0920=\:x}
q)(s;r[0]%r;r:st[;"\\t:100 f a"]each s:1 4 16 32)
1    4   16  32   / threads
1    4.1 8.3 11   / speedup
1082 262 131 95   / time, ms

Supported primitives¶

The following primitives now use multiple threads where appropriate:

atomics:   abs acos and asin atan ceiling cos div exp floor log mod neg not null or
           reciprocal signum sin sqrt tan within xbar xexp xlog
           + - * % & | < > = >= <= <>
aggregate: all any avg cor cov dev max min scov sdev sum svar var wavg
lookups*:  ?(Find) aj asof bin binr ij in lj uj
index:     @(Apply At) select .. where delete
misc:      $(Cast) #(Take) _(Drop/Cut) ,(Join) deltas differ distinct next prev
           sublist til where xprev
           select ... by**

* For lookups, only the probe phase (i.e. dealing with the right-hand side) is parallelized.
** Internally, but aggregate functions other than count, sum, min, max, and avg execute single-threaded.

Practicalities¶

Multithreaded primitives execute in the same secondary threads as peach, and similar limitations apply. System command \s controls the maximum number of threads. Launch q with the -s command-line option to allow primitives to multithread.

For example, here we invoke max from outside peach, and from within peach:

q)v:100000000?10000;system each("t max v";"t {max x}peach(0#0;v)")
54 153

To keep overhead in check, the number of execution threads is limited by the minimum amount of data processed per thread – at the moment it is in the order of 10⁵ vector items, depending on the primitive.

q)a:100 1000000#0;b:2000 50000#0;
q)system"s 2";system each("t a+a";"t b+b")
85 169
q)system"s 0";system each("t a+a";"t b+b")
170 173

Performance¶

Many q primitives issue lots of reads and writes to memory for relatively little compute, e.g. for sufficiently large a, b, and c in a+b*c both + and * would read and write from/to slow main memory, effectively making the entire computation memory bandwidth-bound.

Depending on system architecture, bandwidth available to multiple cores can be much higher, but this is not always the case. Total aggregate bandwidth of a single CPU is proportional to the number of memory channels available and memory speed. For example, one socket of a Cascade Lake-based machine has 6 memory channels of 2666 MT/s RAM, which translates to a practically attainable 110 GB/s, almost 6 times the typical single-core bandwidth of <20 GB/s. On a typical laptop with dual-channel memory, all-core bandwidth is at most 1.5× that of a single core, and common kdb+ operations are not expected to benefit from implicit parallelism. It is therefore important to make sure your memory setup is optimal. A tool like Intel MLC can help with comparing different RAM configurations.

In a multiple-socket system, under NUMA, non-local memory access is much slower. kdb+ 4.0 is not NUMA-aware, and decisions of memory placement and scheduling across sockets are left to the operating system. That prevents scaling out to multiple sockets, and performance can fluctuate unpredictably. We recommend restricting the working set to a single socket, if possible, by running q under numactl --preferred= or even --membind= .
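As a rough check of whether implicit parallelism pays off on a particular memory setup, one can time a bandwidth-bound primitive at several thread counts. A minimal sketch, assuming q was launched with a large enough -s (the vector size and thread counts are arbitrary):

q)v:200000000?100f
q){system"s ",string x; system"t sum v"} each 0 2 4 8   / elapsed ms at 0, 2, 4 and 8 threads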
Peach vs implicit parallelism¶

In kdb+ parallelism remains single-level, and for a given computation one has to choose a single axis to apply it over, whether implicitly with multithreaded primitives, or explicitly with peach. Within-primitive parallelism has several advantages:

- No overhead of splitting and joining large vectors. For simple functions, direct execution can be much faster than .Q.fc:

  q)system"s 24";a:100000000?100;
  q)\t a*a
  28
  q)\t .Q.fc[{x*x};a]
  130

- Operating on one vector at a time can avoid inefficient scheduling of large, uneven chunks of work:

  q)system"s 3";n:100000000;t:([]n?0f;n?0x00;n?0x00);
  q)\t sum t              / within-column parallelism
  30
  q)\t sum peach flip t   / column-by-column parallelism ..
  65
  q)\s 0
  q)/ .. takes just as much time as the largest unit of work,
  q)\t sum t`x            / .. i.e. widest column
  64

However, one needs vectors large enough to take advantage. Nested structures and matrices still need hand-crafted peach. Well-optimized code already making use of peach is unlikely to benefit.

Named pipes¶

Overview¶

Since V3.4 it has been possible to read FIFOs/named pipes on Unix.

q)h:hopen`:fifo://file / Opens file as read-only. Note the fifo prefix
q)read1 h              / Performs a single blocking read into a 64k byte buffer.
q)/ Returns empty byte vector on eof
q)read1 (h;n)          / Alternatively, specify the buffer size n.
q)/ At most, n bytes will be read, perhaps fewer
q)hclose h             / Close the file to clean up

A `:fifo:// handle is also useful for reading certain non-seekable or zero-length (therefore, unsuitable for the regular read1) system files or devices, e.g.

q)a:hopen`:fifo:///dev/urandom
q)read1 (a;8)
0x8f172b7ea00b85e6
q)hclose a

Streaming¶

.Q.fps and .Q.fpn provide the ability to stream data from a FIFO/named pipe. This can be useful for various applications, such as streaming data in from a compressed file without having to decompress the contents to disk.

For example, take a CSV file (t.csv) with the contents

MSFT,12:01:10.000,A,O,300,55.60
APPL,12:01:20.000,B,O,500,67.70
IBM,12:01:20.100,A,O,100,61.11
MSFT,12:01:10.100,A,O,300,55.60
APPL,12:01:20.100,B,O,500,67.70
IBM,12:01:20.200,A,O,100,61.11
MSFT,12:01:10.200,A,O,300,55.60
APPL,12:01:20.200,B,O,500,67.70
IBM,12:01:20.200,A,O,100,61.11
MSFT,12:01:10.300,A,O,300,55.60
APPL,12:01:20.400,B,O,500,67.70
IBM,12:01:20.500,A,O,100,61.11
MSFT,12:01:10.500,A,O,300,55.60
APPL,12:01:20.600,B,O,500,67.70
IBM,12:01:20.600,A,O,100,61.11
MSFT,12:01:10.700,A,O,300,55.60
APPL,12:01:20.700,B,O,500,67.70
IBM,12:01:20.800,A,O,100,61.11
MSFT,12:01:10.900,A,O,300,55.60
APPL,12:01:20.900,B,O,500,67.70
IBM,12:01:20.990,A,O,100,61.11

If the file is compressed into a ZIP archive (t.zip), the system command unzip has the option to uncompress to stdout, which can be combined with a FIFO. The following loads the CSV file through a FIFO without the intermediary step of creating the unzipped file:

q)system"rm -f fifo && mkfifo fifo"
q)trade:flip `sym`time`ex`cond`size`price!"STCCFF"$\:()
q)system"unzip -p t.zip > fifo &"
q).Q.fps[{`trade insert ("STCCFF";",")0:x}]`:fifo
q)trade

Alternatively, if the file was compressed using gzip (t.gz), the system command gunzip can be used:

q)system"rm -f fifo && mkfifo fifo"
q)trade:flip `sym`time`ex`cond`size`price!"STCCFF"$\:()
q)system"gunzip -cf t.gz > fifo &"
q).Q.fps[{`trade insert ("STCCFF";",")0:x}]`:fifo
q)trade
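Where finer control of the read size is useful, .Q.fpn takes a chunk size in bytes as a third argument. A minimal sketch along the same lines as above (the file names and chunk size are illustrative):

q)system"rm -f fifo && mkfifo fifo"
q)trade:flip `sym`time`ex`cond`size`price!"STCCFF"$\:()
q)system"gunzip -cf t.gz > fifo &"
q).Q.fpn[{`trade insert ("STCCFF";",")0:x};`:fifo;131072]   / process roughly 128kB of records per callback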
Errors¶ Runtime errors¶ - {directory}/q.k. OS reports: No such file or directory - Using the environment variable QHOME (or<HOME DIRECTORY>/q if not set),q.k was not found in the directory specified. Check that theQHOME environment variable is correctly set to the directory containingq.k , which is provided in the kdb+ installation files. - access - Tried to read files above directory, run system commands or failed usr/pwd - accp - Tried to accept an incoming TCP/IP connection but failed to do so - adict - E.g. d[::]:x - Blocked assignment ( 'nyi ) - arch - E.g. `:test set til 100 -17!`:test Tried to load file of wrong endian format - assign - E.g. cos:12 Tried to redefine a reserved word - bad lambda - E.g. h{select x by x from x} lambda from an older version of kdb+ over IPC that no longer parses - badmsg - Failure in IPC validator - bad meta data in file - The compressed file contains corrupt meta data. This can happen if the file was incomplete at the time of reading. - badtail - Incomplete transaction at end of file, get good (count;length) with -11!(-2;`:file) - binary mismatch - Wrong process for code profiler - can't - Only commercially licensed kdb+ instances can encrypt code in a script - cast - E.g. s:`a`b; c:`s$`a`e Value not in enumeration - close - (1) content-length header missing from HTTP response (2) handle: n – handle was closed by the remote while a msg was expected - con - qcon client is not supported when kdb+ is in multithreaded input mode - cond - Even number of arguments to $ (until V3.6 2018.12.06) - conn - Too many connections. Max connections was 1022 prior to 4.1t 2023.09.15, otherwise the limit imposed by the operating system (operating system configurable for system/protocol). - Could not initialize ssl - (-26!)[] found SSL/TLS not enabled - d8 - The log had a partial transaction at the end but q couldn’t truncate the file - decompression error at block [b] in - Error signalled by underlying decompression routine - domain - E.g. til -1 Out of domain - dup - E.g. `a`b xasc flip`a`b`a!() Duplicate column in table (since V3.6 2019.02.19) - dup names for cols/groups - E.g. select a,a by a from t Name collision (since V4.0 2020.03.17) - elim - E.g. ((-58?`3) set\:(),`a)$`a Too many enumerations (max: 57) - empty - The paths listed in par.txt do not contain any partitions or are inaccessible. - enable secondary threads via cmd line -s only - E.g. \s 4 Command line enabled processes for parallel processing - encryption lib unavailable - E.g. -36!(`:kf;"pwd") Failed to load OpenSSL libraries - expected response - One-shot request did not receive response - failed to load TLS certificates - Started kdb+ with -E 1 or-E 2 but without SSL/TLS enabled - from - E.g. select price trade Badly formed select query - hop - Request to hopen a handle fails; includes message from OS - hwr - Handle write error, can’t write inside a peach - IJS - E.g. "D=\001"0:"0=hello\0011=world" Key type is not I ,J , orS . - insert - E.g. t:([k:0 1]a:2 3);`t insert(0;3) Tried to insert a record with an existing key into a keyed table - invalid - E.g. q -e 3 Invalid command-line option value - invalid password - E.g. -36!(`:kf;"pwd") Invalid keyfile password - \l - Not a data file - length - E.g. ()+til 1 Arguments do not conform - limit - E.g. 0W#2 Tried to generate a list longer than 240-1, or serialized object is > 1TB, or 'type if trying to serialize a nested object which has > 2 billion elements, or Parse errors - load - Not a data file - loop - E.g. 
a::b::a - Dependency loop - main thread only - E.g. -36!(`:kf;"pwd") - Not executed from main thread - mismatch - E.g. ([]a:til 4),([]b:til 3) - Columns that can’t be aligned for R,R orK,K - mlim - Too many nested columns in splayed tables. (Prior to V3.0, limited to 999; from V3.0, 251; from V3.3, 65530) - mq - Multi-threading not allowed - name too long - Filepath ≥100 chars (until V3.6 2018.09.26) - need zlib to compress - zlib not available - noamend - E.g. t:([]a:1 2 3) n:`a`b`c update b:{`n?`d;:`n?`d}[] from `t - Cannot change global state from within an amend - no append to zipped enums - E.g. `:sym?`c - Cannot append to zipped enum (from V3.0) - no `g# - E.g. {`g#x}peach 2#enlist 0 1 - A thread other than the main q thread has attempted to add a group attribute to a vector. Seen with peach +secondary threads or multithreaded input queue - noupdate - E.g. {a::x}peach 0 1 - Updates blocked by the -b cmd line arg, orreval code or a thread other than the main thread has attempted to update a global variable when inpeach +secondary threads or multithreaded input queue. Update not allowed when using negative port number. - nosocket - Can only open or use sockets in main thread. - nyi - E.g. "a"like"**" - Not yet implemented: it probably makes sense, but it’s not defined nor implemented, and needs more thinking about as the language evolves - os - E.g. \foo bar - Operating-system error or license error - par - Unsupported operation on a partitioned table or component thereof - parse - Invalid syntax; bad IPC header; or bad binary data in file - part - Something wrong with the partitions in the HDB; or med applied over partitions or segments - path too long - E.g. (`$":",1000#"a") set 1 2 3 - File path ≥255 chars (100 before V3.6 2018.09.26) - PKCS5_PBKDF2_HMAC - E.g. -36!(`:kf;"pwd") - Library invocation failed - pread - Issue reading a compressed file. This can happen if file corrupt or modified during read. - pwuid - OS is missing libraries for getpwuid . (Most likely 32-bit app on 64-bit OS. Try to install ia32-libs.) - or - UID (user id) not found in system database of users (e.g. running on container with randomized UID). To prevent this issue (since 4.1t 2023.05.26,4.0 2023.11.03) system environment variable HOME or USER can be set to home directory for the user. - Q7 - nyi op on file nested array - rank - E.g. +[2;3;4] - Invalid rank - rb - Encountered a problem while doing a blocking read - restricted - E.g. 0"2+3" in a kdb+ process which was started with-b cmd line. - Also for a kdb+ process using the username:password authentication file, or the -b cmd line option,\x cannot be used to reset handlers to their default. e.g.\x .z.pg - s-fail - E.g. `s#3 2 - Invalid attempt to set sorted attribute. Also encountered with `s#enums when loading a database (\l db ) and enum target is not already loaded. - splay - nyi op on splayed table - stack - E.g. {.z.s[]}[] - Ran out of stack space. Consider using Converge \ / instead of recursion. - step - E.g. d:`s#`a`b!1 2;`d upsert `c`d!3 4 Tried to upsert a step dictionary in place - stop - User interrupt (Ctrl-c) or time limit ( -T ) - stype - E.g. '42 - sys - E.g. {system "ls"}peach 0 1 - Using system call from thread other than main thread - threadview - Trying to calc a view in a thread other than main thread. A view can be calculated in the main thread only. The cached result can be used from other threads. 
- timeout - Request to hopen a handle fails on a timeout; includes message from OS - TLS not enabled - Received a TLS connection request, but kdb+ not started with -E 1 or-E 2 - too many syms - kdb+ currently allows for about 1.4B interned symbols in the pool and will exit with this error when this threshold is reached - trunc - The log had a partial transaction at the end but q couldn’t truncate the file - type - E.g. til 2.2 - Wrong type. Also see limit - type/attr error amending file - Direct update on disk for this type or attribute is not allowed - u-fail - E.g. `u#2 2 - Invalid attempt to set unique or parted attribute - unmappable - E.g. t:([]sym:`a`b;a:(();())) .Q.dpft[`:thdb;.z.d;`sym;`t] - When saving partitioned data each column must be mappable. () and("";"";"") are OK - unrecognized key format - E.g. -36!(`:kf;"pwd") - Master keyfile format not recognized - upd - Function upd is undefined (sometimes encountered during-11!`:logfile ) or license error - utf8 - The websocket requires that text is UTF-8 encoded - value - No value - vd1 - Attempted multithread update - view - Tried to re-assign a view to something else - -w abort - -w init via cmd line - Trying to allocate memory with \w without-w on command line - wsfull - E.g. 999999999#0 - malloc failed, or ran out of swap (or addressability on 32-bit). The params also reported are intended to help KX diagnose when assisting clients, and are subject to change. - wsm - E.g. 010b wsum 010b - Alias for nyi forwsum prior to V3.2 - XXX - E.g. delete x from system "d";x - Value error ( XXX undefined) System errors¶ From file ops and IPC | error | explanation | |---|---| | Bad CPU Type | Tried to run 32-bit interpreter in macOS 10.15+ | XXX:YYY | XXX is from kdb+, YYY from the OS | XXX from addr, close, conn, p(from -p ), snd, rcv or (invalid) filename, e.g. read0`:invalidname.txt Parse errors¶ On execute or load | error | example / explanation | |---|---| [({])}" | "hello Open ([{ or " | | branch | a:"1;",65024#"0;" value "{if[",a,"]}" A branch ( if ;do ;while ;$[.;.;.] ) more than 65025 byte codes away(255 before V3.6 2017.09.26) | | char | value "\000" Invalid character (watch out for non-breaking spaces in copied expressions) | | globals | a:"::a"sv string til 111; value"{a",a,"::0}" Too many global variables | | limit | a:";"sv string 2+til 241; value"{",a,"}" Too many constants, or limit error | | locals | a:":a"sv string til 111; value"{a",a,":0}" Too many local variables | | params | f:{[a;b;c;d;e;f;g;h;e]} Too many parameters (8 max) | License errors¶ On launch | error | explanation | |---|---| | {timestamp} couldn't connect to license daemon | Could not connect to KX license server (kdb+ On Demand) | | cores | The license is for fewer cores than available | | cpu | The license is for fewer CPUs than available | | exp | License expiry date is prior to system date. The license has expired. Commercial license holders should have their Designated Contacts reach out to [email protected] or contact [email protected] to begin a new commercial agreement. | | host | The hostname reported by the OS does not match the hostname or hostname-pattern in the license. If you see 255.255.255.255 in the kdb+ banner, the machine likely cannot resolve its hostname to an IP address, which will cause a host error.Since 4.1t 2022.07.01,4.0 2022.07.01 the detected hostname is printed. It can be used to compare with the hostname used within the license. | | k4.lic | k4.lic file not found. 
If the environment variable QLIC is set, check it is set to the directory containing the license file. Note that it should not be set to the location of the license file itself, but to the directory that contains the license file. If QLIC is not set, check that the directory specified by the environment variables QHOME contains the license file. | | os | Wrong OS or operating-system error (if runtime error) | | srv | Client-only license in server mode | | upd | Version of kdb+ more recent than update date, or the function upd is undefined (sometimes encountered during -11!`:logfile ) | | user | Unlicensed user | | wha | System date is prior to kdb+ version date. Check that the system date shows the correct date. | | wrong q.k version | q and q.k versions do not match. Check that the q.k file found in the directory specified by the QHOME environment variable is the same version as that supplied with the q binary. | License-related errors are reported with the prefix licence error: since V4.0 2019.10.22. Handling errors¶ Use system command \ (abort) to clear one level off the execution stack. Keyword exit terminates the kdb+ process. Use hook .z.exit to set a callback on process exit. Use Signal to signal errors. Use Trap and Trap At to trap errors.
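For example (a minimal sketch), Trap (.) and Trap At (@) take an error handler as a third argument; the handler receives the error string:

q).[{x+y};(1;`a);{"caught: ",x}]   / Trap: apply a binary to an argument list, catch the type error
"caught: type"
q)@[{x+1};`a;{"caught: ",x}]       / Trap At: the unary form
"caught: type"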
Data recovery for kdb+tick¶

KX freely offers a complete tick-capture product which allows for the processing, analysis and historical storage of huge volumes of tick data in real time. This product, known as kdb+tick, is extremely powerful, lightweight and forms the core of most kdb+ architectures. The tickerplant lies at the heart of this structure. It is responsible for receiving data from external feedhandlers and publishing to downstream subscribers. Perhaps the most important aspect of the tickerplant is that it logs every single message it receives to a binary log file. In the event of a subscriber process failing, this log file can be used to restore any missing data.

kdb+tick¶

This paper will primarily consider the relationship between the TP (tick.q) and RDB (r.q) in a kdb+tick architecture, in particular the use of tickerplant logs when recovering lost data in an RDB. A log file created by a tickerplant is often referred to as a TP log.

The following diagram shows the steps taken by an RDB to recover from a TP log on start-up:

Writing a TP log¶

A log file can be created by any kdb+ process to record instructions/data in binary format, which can later be replayed to recover state. A tickerplant (tick.q) has the option to create such a log and record the messages sent to its subscribing clients, so that those clients can recover should they fail. The tickerplant creates and records logs using the methods described here.

Should the TP fail, or be shut down for any period of time, no downstream subscriber will receive any published data for the period of its downtime. This data typically will not be recoverable. Thus it is imperative that the TP remain always running and available.

The tickerplant maintains some key variables which can be requested by subscribers in order to read the current TP log.

# start tickerplant
$ q tick.q sym . -p 5010
q).u.L
`:./sym2014.05.03
q).u.l
376i
q).u.i
0

The tickerplant calls the upd function on any of its subscribing processes. Therefore the tickerplant logs the upd function call and any data passed, so that any subscriber can replay the log to regain state.

//from u.q
upd:{[t;x]
  ...
  //if the handle .u.l exists, write (`upd;t;x) to the TP log;
  //increment .u.j by one
  if[l; l enlist(`upd;t;x); j+:1]
  ...
  }

kdb+ messages and upd function¶

A tickerplant message takes the form of a list.

(functionname;tablename;tabledata)

Here, functionname and tablename are symbols, and tabledata is a row of data to be inserted into tablename. Updates using the trade schema

trade:([]time:`timespan$();sym:`$();side:`char$();size:`long$();price:`float$());

would appear in the TP log as

`upd `trade (0D14:56:01.113310000;`AUDUSD;"S";1000;96.96)
`upd `trade (0D14:56:01.115310000;`SGDUSD;"S";5000;95.45)
`upd `trade (0D14:56:01.119310000;`AUDUSD;"B";1000;95.08)
`upd `trade (0D14:56:01.121310000;`AUDUSD;"B";1000;95.65)
`upd `trade (0D14:56:01.122310000;`SGDUSD;"B";5000;98.14)

Replaying a TP log¶

Replay of a log file, and dealing with a corrupt log file, is described here. Clients of a tickerplant that wish to recover state may be RDBs or custom-developed RTEs. It is important to note that the tickerplant does not play back the log file: an individual client of the tickerplant (e.g. an RDB) replays the log file when required.

An example of an RDB that uses the TP log to recover state on a restart is r.q. On startup, r.q subscribes to a TP and receives the following information:

- message count (.u.i)
- location of the TP log (.u.L)
It then replays this TP log to recover all the data that has passed through the TP up to that point in the day. This is done within .u.rep, which is executed when the RDB connects to the TP.

//from r.q
.u.rep:{...;-11!y;...};

kdb+ messages were described above in kdb+ messages and upd function. In a typical RDB, upd performs an insert. Therefore, executing a single line in the logfile is equivalent to insert[`tablename;tabledata].

Filtering TP log¶

A TP log will contain all messages published. A client of the tickerplant may originally have been subscribed to only a subset of that data, e.g. one of many tables published, or may wish to perform alternative logic on the data recovered from the log file. To filter the data from the TP log, set the function(s) originally logged, e.g. upd, to a different definition prior to playback and reinstate the original afterwards. This playback-specific function can filter the replayed data or apply alternative logic to it.

Logging via an RTE/RDB¶

Example¶

Consider a Real-Time Engine designed to keep track of trading-account position limits. The limits for each account can be used against realtime data from the tickerplant, and could be updated in the RTE by account managers. The account positions will use this schema:

accounts:([] time:`timespan$(); sym:`$(); curr:`$(); action:`$(); limit:`long$());

This could take the form of a keyed table, where sym is an account name.

q)`sym xkey `accounts
`accounts
q)accounts
sym        | time                          curr   action limit
-----------|----------------------------------------------------
fgAccount  | 2014.05.04D10:27:00.288697000 AUDJPY insert 5000000
pbAcc      | 2014.05.04D10:27:00.291699000 GBPUSD insert 1000000
ACCOUNT0023| 2014.05.04D10:27:01.558332000 SGDUSD insert 1000000

If we wanted this keyed table to be recoverable within this process, we would publish any changes to the table via the tickerplant and have a customized upd function defined locally in the RTE to take specific action on changes to the accounts table:

upd:{[t;x]
  $[t~`accounts;
      $[`insert~a:first x`action; [t insert x];
        `update~a; @[`.;t;,;x];
        `delete~a; @[`.;t;:;delete from value[t] where sym=first x`sym];
        '`unknownaction];
    t insert x];
  }

Here we have three operations we can perform on the accounts table: insert, update and delete. We wish to record the running of these operations in the TP log, capture them in the RDB (in an unkeyed version of the table), and perform customized actions in the RTE.

We create a function to publish the data to the TP. The TP will publish to both the RDB and RTE, and the upd function will then be called locally on each. Assuming the TP is running on the same machine on port 5010, with no user access credentials required, we can define a function named pub on the RTE which will publish data from the RTE to the TP, where it can be logged and subsequently re-published to the RDB and RTE.
.tp.h:hopen`:localhost:5010
pub:{[t;x]
  neg[.tp.h](`upd;t;x);   / publish asynchronously to the TP
  .tp.h""                 / sync chaser: block until the TP has processed the message
  }

Example usage on the RTE:

// - insert new account
q)k:`sym`time`curr`action`limit
q)pub[`accounts; enlist k!(`ACCOUNT0024;.z.p;`SGDUSD;`insert;1000000)]
q)accounts
sym        | time                          curr   action limit
-----------|----------------------------------------------------
fgAccount  | 2014.05.04D10:51:49.288168000 AUDJPY insert 5000000
pbAcc      | 2014.05.04D10:51:49.291168000 GBPUSD insert 1000000
ACCOUNT0023| 2014.05.04D10:51:50.950002000 SGDUSD insert 1000000
ACCOUNT0024| 2014.05.04D10:54:41.796915000 SGDUSD insert 1000000

// - update the limit on account ACCOUNT0024
q)pub[`accounts; enlist k!(`ACCOUNT0024;.z.p;`SGDUSD;`update;7000000)]
q)accounts
sym        | time                          curr   action limit
-----------|----------------------------------------------------
fgAccount  | 2014.05.04D10:51:49.288168000 AUDJPY insert 5000000
pbAcc      | 2014.05.04D10:51:49.291168000 GBPUSD insert 1000000
ACCOUNT0023| 2014.05.04D10:51:50.950002000 SGDUSD insert 1000000
ACCOUNT0024| 2014.05.04D11:05:30.557228000 SGDUSD update 7000000

// - delete account ACCOUNT0024 from table
q)pub[`accounts; enlist k!(`ACCOUNT0024;.z.p;`SGDUSD;`delete;1000000)]
q)accounts
sym        | time                          curr   action limit
-----------|----------------------------------------------------
fgAccount  | 2014.05.04D10:27:00.288697000 AUDJPY insert 5000000
pbAcc      | 2014.05.04D10:27:00.291699000 GBPUSD insert 1000000
ACCOUNT0023| 2014.05.04D10:27:01.558332000 SGDUSD insert 1000000

Each action will be recorded in an unkeyed table in the RDB, resulting in the following.

q)accounts
sym         time                          curr   action limit
----------------------------------------------------------------
fgAccount   2014.05.04D10:27:00.288697000 AUDJPY insert 5000000
pbAcc       2014.05.04D10:27:00.291699000 GBPUSD insert 1000000
ACCOUNT0023 2014.05.04D10:27:01.558332000 SGDUSD insert 1000000
ACCOUNT0024 2014.05.04D10:54:41.796915000 SGDUSD insert 1000000
ACCOUNT0024 2014.05.04D11:05:30.557228000 SGDUSD update 7000000
ACCOUNT0024 2014.05.04D11:05:30.557228000 SGDUSD delete 1000000

The order in which logfile messages are replayed is hugely important in this case. All operations on this table should be made via the tickerplant so that everything is logged and order is maintained. After the three operations above, the TP log will have the following lines appended.

q)get`:TP_2014.05.04
(`upd;`accounts;+`sym`time`curr`action`limit!(`ACCOUNT0024;2014.05.04D10:54:41.796915000;`SGDUSD;`insert;1000000))
(`upd;`accounts;+`sym`time`curr`action`limit!(`ACCOUNT0024;2014.05.04D11:05:30.557228000;`SGDUSD;`update;7000000))
(`upd;`accounts;+`sym`time`curr`action`limit!(`ACCOUNT0024;2014.05.04D11:05:30.557228000;`SGDUSD;`delete;1000000))

Replaying this logfile will recover this table. If we wanted every operation to this table to be recoverable from a logfile, we would need to publish each operation. However, manual user actions that are not recorded in the TP log can cause errors when replaying. Consider the following table on the RTE.
q)accounts
sym        | time                          curr   action limit
-----------|----------------------------------------------------

// - insert new account
q)k:`sym`time`curr`action`limit
q)pub[`accounts; enlist k!(`ACCOUNT0024;.z.p;`SGDUSD;`insert;1000000)]
q)accounts
sym        | time                          curr   action limit
-----------|----------------------------------------------------
ACCOUNT0024| 2014.05.04D10:54:41.796915000 SGDUSD insert 1000000

// - delete this entry from the in-memory table without publishing to the TP
q)delete from `accounts where sym=`ACCOUNT0024
q)accounts
sym        | time                          curr   action limit
-----------|----------------------------------------------------

// - insert new account again
q)pub[`accounts; enlist k!(`ACCOUNT0024;.z.p;`SGDUSD;`insert;1000000)]
q)accounts
sym        | time                          curr   action limit
-----------|----------------------------------------------------
ACCOUNT0024| 2014.05.04D10:54:41.796915000 SGDUSD insert 1000000

Only two of these three actions were sent to the TP, so only these two messages were recorded in the TP log:

q)get`:TP_2014.05.04
(`upd;`accounts;+`sym`time`curr`action`limit!(`ACCOUNT0024;2014.05.04D10:54:41.796915000;`SGDUSD;`insert;1000000))
(`upd;`accounts;+`sym`time`curr`action`limit!(`ACCOUNT0024;2014.05.04D10:54:41.796915000;`SGDUSD;`insert;1000000))

Replaying the TP log, the kdb+ process tries to perform the second insert into the keyed table, but the same key cannot be inserted twice. The delete that was performed manually between these two steps was never logged, so it is omitted from the replay.

q)-11!`:TP_2014.05.04
'insert

Recovering from q errors during replay¶

As seen in the previous section, replaying a TP log can result in an error even if the log file is uncorrupted. We can use error trapping to isolate any rows in the TP log which cause an error, while transferring the other, error-free rows into a new log file. The problematic TP log lines are stored in a variable where they can be analyzed to determine the next course of action.

old:`:2014.05.03
new:`:TP_2014.05.03_new
new set ()               / write empty list to the new logfile
h:hopen new              / open handle to new log file
updOld:upd               / save original upd fn
baddata:()               / variable to hold msgs that throw errors

upd:{[t;x]               / redefine upd with error trapping
  .[{[t;x]
      updOld[t;x];           / try the original upd fn on the log line
      h enlist(`upd;t;x)     / if successful, write to new logfile
      };
    (t;x);                   / input params
    {[t;x;errmsg]            / else log error and append msg to variable
      0N!"error on upd: ",errmsg;
      baddata,:enlist(t;x)
      }[t;x]
   ]
  };

Reference: Trap

We now replay the original TP log.

q)-11!old
2
q)accounts
sym        | time                          curr   action limit
-----------|----------------------------------------------------
ACCOUNT0024| 2014.05.04D10:54:41.796915000 SGDUSD insert 1000000

Here, both lines of the TP log have been replayed, but only one has actually been executed. The second insert has been appended to the baddata variable.

q)baddata
(`accounts;+`sym`time`curr`action`limit!(`ACCOUNT0024;2014.05.04D10:54:41.796915000;`SGDUSD;`insert;1000000))

We may wish to write the bad data to its own logfile.

(`:tp_2014.05.03_new) set enlist each `upd,/: baddata;
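Once the bad rows have been isolated, a sketch of the remaining housekeeping (using the variable names above) might be:

q)hclose h      / flush and close the new logfile
q)upd:updOld    / reinstate the original upd definition

A recovering process can then replay the repaired log with -11!new, which now contains only the messages that executed cleanly.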
Reference architecture for Google Cloud¶ kdb+ is the technology of choice for many of the world’s top financial institutions when implementing a tick-capture system. kdb+ is capable of processing large amounts of data in a very short space of time, making it the ideal technology for dealing with the ever-increasing volumes of financial tick data. KX clients can lift and shift their kdb+ plants to the cloud and make use of virtual machines (VM) and storage solutions. This is the classic approach that relies on the existing license. To benefit more from the cloud technology it is recommended to migrate to kdb Insights. kdb Insights kdb Insights provides a range of tools to build, manage and deploy kdb+ applications in the cloud. It supports interfaces for deployment and common ‘Devops‘ orchestration tools such as Docker, Kubernetes, Helm, etc. It supports integrations with major cloud logging services. It provides a kdb+ native REST client, Kurl, to authenticate and interface with other cloud services. kdb Insights also provides kdb+ native support for reading from cloud storage. By taking advantage of kdb Insights suite of tools, developers can quickly and easily create new and integrate existing kdb+ applications on Google Cloud. Deployment: - Use Helm and Kubernetes to deploy kdb+ applications to the cloud Service integration: - QLog – Integrations with major cloud logging services - Kurl – Native kdb+ REST client with authentication to cloud services Storage: - kdb+ Object Store – Native support for reading and querying cloud object storage Architectural components¶ The core of a kdb+ tick-capture system is called kdb+tick. kdb+tick is an architecture which allows the capture, processing and querying of timeseries data against realtime, streaming and historical data. This reference architecture describes a full solution running kdb+tick within Google Cloud which consists of these bare-minimum functional components: - datafeeds - feed handlers - tickerplant - realtime database - historical database - KX gateway A simplified architecture diagram for kdb+/tick in Google Cloud Worthy of note in this reference architecture is the ability to place kdb+ processing functions either in one Google Cloud instance or distributed around many Google Cloud instances. The ability for kdb+ processes to communicate with each other through kdb+’s built-in language primitives, allows for this flexibility in final design layouts. The transport method between kdb+ processes and all overall external communication is done through low-level TCP/IP sockets. If two components are on the same Google Cloud instance, then local Unix sockets can be used to reduce communication overhead. Many customers have kdb+tick set up on their premises. The Google Cloud reference architecture allows customers to manage a hybrid infrastructure that communicates with both kdb+tick systems on-premises and in the cloud. However, benefits from migrating their infrastructure to the cloud include - flexibility - auto-scaling - more transparent cost management - access to management/infrastructure tools built by Google - quick hardware allocation Data feeds¶ This is the source data we aim to ingest into our system. For financial use cases, data may be ingested from B-pipe (Bloomberg), or Elektron (Refinitiv) data or any exchange that provides a data API. Often the streaming data is available on a pub-sub component like Kafka or Solace, with an open-source interface to kdb+. 
The data feeds are in a proprietary format, but always one KX has familiarity with. Usually this means that a feed handler just needs to be aware of the specific data format. The flexible architecture of KX means most if not all the underlying kdb+ processes that constitute the system can be placed anywhere in it. For example, for latency, compliance or other reasons, the data feeds may be relayed through an existing customer on-premises data center. Or the connection from the feed handlers may be made directly from this Virtual Private Cloud (VPC) into the market data venue. The kdb+ infrastructure is often used to also store internally-derived data. This can optimize internal data flow and help remove latency bottlenecks. The pricing of liquid products (for example, B2B markets) is often done by a complex distributed system. This system often changes with new models, new markets or other internal system changes. Data in kdb+ that will be generated by these internal steps will also require processing and handling huge amounts of timeseries data. When all the internal components of these systems send data to kdb+, a comprehensive impact analysis captures any changes. Feed handler¶ A feed handler is a process that captures external data and translates it into kdb+ messages. Multiple feed handlers can be used to gather data from several different sources and feed it to the kdb+ system for storage and analysis. There are a number of open-source (Apache 2 licensed) Fusion interfaces between KX and third-party technologies. Feed handlers are typically written in Java, Python, C++ and q. Tickerplant¶ The tickerplant (TP) is a specialized, single threaded kdb+ process that operates as a link between the client’s data feed and a number of subscribers. It implements a pub-sub pattern: specifically, it receives data from the feed handler, stores it locally in a table then saves it to a log file. It publishes this data to a realtime database (RDB) and any clients that have subscribed to it. It then purges its local tables of data. Tickerplants can operate in two modes: - Batch - Collects updates in its local tables. It batches up for a period of time and then forwards the update to realtime subscribers in a bulk update. - Realtime - Forwards the input immediately. This requires smaller local tables but has higher CPU and network costs, bear in mind that each message has a fixed network overhead. Supported API calls: | call | action | |---|---| | Subscribe | Adds subscriber to message receipt list and sends subscriber table definitions. | | Unsubscribe | Removes subscriber from message receipt list. | Events: - End of Day - At midnight, the TP closes its log files, auto creates a new file, and notifies the realtime database (RDB) about the start of the new day. Realtime database¶ The realtime database (RDB) holds all the intraday data in memory, to allow for fast powerful queries. For example, at the start of business day, the RDB sends a message to the tickerplant and receives a reply containing the data schema, the location of the log file, and the number of lines to read from the log file. It then receives subsequent updates from the tickerplant as they are published. One of the key design choices for Google Cloud will be the size of memory for this instance, as ideally we need to contain the entire business day/period of data in-memory. 
Purpose: - Subscribed to the messages from the tickerplant - Stores (in memory) the messages received - Allows this data to be queried intraday Actions: - On message receipt inserts into local, in-memory tables - At End of Day (EOD), usually writes intraday data down then sends a new End-of-Day message to the HDB; may sort certain tables (e.g. by sym and time) to speed up queries An RDB can operate in single- or multi-input mode. The default mode is single input, in which user queries are served sequentially and queries are queued until an update from the TP is processed (inserted into the local table). In standard tick scripts, the RDB tables are indexed, typically by the product identifier. An index is a hash table behind the scene. Indexing has a significant impact on the speed of the queries at the cost of slightly slower ingestion. The insert function takes care of the indexing, i.e. during an update it also updates the hash table. Performance of the CPU and memory in the chosen Google Cloud instance will have some impact on the overall sustainable rates of ingest and queryable rate of this realtime kdb+ function. Historical database¶ The historical database (HDB) is a simple kdb+ process with a pointer to the persisted data directory. A kdb+ process can read this data and memory maps it, allowing for fast queries across a large volume of data. Typically, the RDB is instructed to save its data to the data directory at EOD from where the HDB can refresh its memory mappings. HDB data is partitioned by date in the standard kdb+tick. If multiple disks are attached to the box, then data can be segmented, and kdb+ makes use of parallel I/O operations. Segmented HDB requires a par.txt file that identifies the locations of the individual segments. An HDB query is processed by multiple threads and map-reduce is applied if multiple partitions are involved in the query. Purpose: - Provides a queryable data store of historical data - In instances involving research and development or data analytics, customers can create customer reports on order execution times Actions: - End of Day receipt: reloads the database to get the new days’ worth of data from the RDB write-down HDBs are often expected to be mirrored locally. Some users (e.g. quants) need a subset of the data for heavy analysis and backtesting where the performance is critical. KX gateway¶ In production, a kdb+ system may be accessing multiple timeseries data sets, usually each one representing a different market data source, or using the same data, refactored for different schemas. Process-wise, this can be seen as multiple TP, RDB and HDB processes. A KX gateway generally acts as a single point of contact for a client. A gateway collects data from the underlying services, combines data sets and may perform further data operations (e.g. aggregation, joins, pivoting, etc.) before it sends the result back to the user. The gateway hides the data segregation, provides utility functions and implements business logic. The specific design of a gateway can vary in several ways according to expected use cases. For example, in a hot-hot set up, the gateway can be used to query services across availability zones. The implementation of a gateway is largely determined by the following factors. - Number of clients or users - Number of services and sites - Requirement of data aggregation - Support of free-form queries - Level of redundancy and failover The task of the gateway can be broken down into the following steps. 
- Check user entitlements and data-access permissions - Provide access to stored procedures - Gain access to data in the required services (TP, RDB, HDB) - Provide the best possible service and query performance Google BigQuery is a fully managed, serverless data warehouse that enables scalable analysis over petabytes of data. The kdb Insights BigQuery API lets you easily interact with the REST API that Google exposes for BigQuery. This is particularly useful for the gateway. Data may reside in BigQuery that can be fetched by the gateway and users can enjoy the expressiveness of the q language to further analyze the data or join it with other data sources. Storage and filesystem¶ kdb+tick architecture needs storage space for three types of data: - Tickerplant log - If the tickerplant (TP) needs to handle many updates, then writing to TP needs to be fast since slow I/O may delay updates and can even cause data loss. Optionally, you can write updates to TP log batched (e.g. in every second) as opposed to realtime. You will suffer data loss if TP or instance is halted unexpectedly or stops/restarts, as the recently received updates are not persisted. Nevertheless, you already suffer data loss if a TP process or the Google Cloud instance goes down or restarts. The extra second of data loss is probably marginal to the whole outage window. - If the RDB process goes down, then it can replay data to recover from the TP log. The faster it can recover the less data is waiting in the TP output queue to be processed by the restarted RDB. Hence fast read operation is critical for resilience reasons. - Sym file (and par.txt for segmented databases) - The sym file is written by the realtime database (RDB) after end-of-day, when new data is appended to the historical database (HDB). The HDB processes will then read the sym file to reload new data. Time to read and write the sym file is often marginal compared to other I/O operations. Usually it is beneficial here to be able to write down to a shared filesystem, thereby adding huge flexibility in the Google Virtual Private Cloud (VPC). (For example, any other Google Cloud instance can assume this responsibility in a stateless fashion). - Historical data - Performance of the file system solution will determine the speed and operational latency for kdb+ to read its historical (at rest) data. The solution needs to be designed to cater for good query execution times for the most important business queries. These may splay across many partitions or segments of data or may deeply query on few/single partitions of data. The time to write a new partition impacts RDB EOD work. For systems that are queried around the clock the RDB write time needs to be very short. kdb+ supports tiering via par.txt . The file may contain multiple lines; each represents a location of the data. Hot, warm, and cold data may reside in storage solutions of different characteristics. Hot data probably requires low latency and high throughput, while the cost may be the primary goal for cold data. One real great value of storing your HDB within the Google Cloud ecosystem is the flexibility of storage. This is usually distinct from ‘on-prem’ storage, whereby you may start at one level of storage capacity and grow the solution to allow for dynamic capacity growth. One huge advantage of most Google Cloud storage solutions (e.g. Persistent Disks) is that disks can grow dynamically without the need to halt instances, this allows you to dynamically change resources. 
The reference architecture recommends replicating data. Either this can be tiered out to lower-cost/lower-performance object storage in Google Cloud, or the data can be replicated across availability zones. The latter may be useful if there is client-side disconnection from other time zones. You may consider failover of service from Europe to North America, or vice versa.

kdb+ uses POSIX filesystem semantics to manage the HDB structure directly on a POSIX-style filesystem stored in persistent storage (Google Cloud's Persistent Disk et al.). There are many solutions that offer full operational functionality for the POSIX interface.

Persistent disk¶

Google Cloud's Persistent Disk is high-performance block storage for virtual machine instances. Persistent Disk presents a POSIX interface, so it can be used to store historical database (HDB) data. One can use disks with different latency and throughput characteristics. Storage volumes can be transparently resized without downtime: you no longer need to delete data that might be needed in the future, just add capacity on the fly. Although Persistent Disk capacity can be shrunk, this is not supported by all filesystem types.

Persistent Disks in Google Cloud allow simultaneous readers, so they can be attached to multiple VMs running their own HDB processes. Frequently used data that is sensitive to latency should use SSD disks, which offer consistently high performance. Persistent Disk's automatic encryption helps protect sensitive data at the lowest level of the infrastructure.

The limitation of Persistent Disks is that they can be mounted in read-write mode only to a single VM. When EOD splaying happens, the Persistent Disk needs to be unmounted from the read-only VMs (i.e. all extra HDBs need to be shut down).

Local SSDs can be attached to a single VM only. They have higher throughput and lower latency (especially with the NVMe interface) at the expense of functionality, including redundancy and snapshots. Local SSD with write-cache flushing disabled can be a great choice for TP logs. Mirrored HDBs for target groups like quants also require maximal speed; redundancy and snapshots are less important there.

When selecting the right Persistent Disk, one needs to be aware of the relation between maximal IOPS and the number of vCPUs.

Filestore¶

Filestore is a set of services from Google Cloud allowing you to load your HDB store into a fully managed service. All Filestore tiers use network-attached storage (NAS) for Google Compute Engine (GCE) instances to access the HDB data. Depending on which tier you choose, it can scale to a few hundred TB for high-performance workloads. Along with predictable performance, it is simple to provision and easy to mount on GCE VM instances. NFSv3 is fully supported.

Filestore includes some other storage features such as deduplication, compression, snapshots, cross-region replication, and quotas. kdb+ is qualified with any tier of Filestore. In using Filestore, you can take advantage of these built-in features when using it for all or some of your HDB segments and partitions. As well as performance, it allows for consolidation of RDB write-down and HDB reads, due to its simultaneous read and write support within a single filesystem namespace. This makes it more convenient than Google Cloud Persistent Disks.
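Once the share is mounted, an HDB process simply loads the directory; a minimal sketch (the mount path and the trade table are illustrative):

q)\l /mnt/filestore/hdb
q)d:last date                                   / most recent partition
q)select count i by sym from trade where date=d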
You can simply add HDB capacity by setting up a new VM, mounting the Filestore share as an NFS client and, if needed, registering the HDB with the HDB load balancer. The RDB, or any other data-writer process, can write to the HDB at any time; it just needs to notify the HDB processes to remap the HDB files from the backing store. Note that the VMs of your RDB and HDB instances need to be in the same zone as your Filestore.

Each Filestore service tier provides a different level of performance. The Basic tiers offer consistent performance beyond a 10 TB instance capacity. For High Scale tier instances, performance grows or shrinks linearly as the capacity scales up or down.

Filestore is not highly available. It is backed by VMs of a single zone. If the complete zone suffers an outage, users can expect downtime. Filestore backups are regional resources, however. In the rare case of inaccessibility of a given zone, users can restore the data using the regional backup and continue working in any available zone.

Prior to choosing this technology, check via your Google Cloud console which regions are currently supported, as e.g. High Scale SSD is gradually being deployed globally.

Google Cloud Storage¶

Google Cloud Storage (GCS) is an object store that scales to exabytes of data. There are different storage classes (standard, nearline, coldline, archive) for different availability and cost requirements. Infrequently used data can use cheaper but slower storage.

The cloud storage interface supports PUT, GET, LIST and HEAD operations only, so it cannot be used to store the historical database (HDB) directly; it also offers only eventual consistency and a RESTful interface. There is an open-source adapter (e.g. Cloud Storage FUSE) which allows mounting a Cloud Storage bucket as a file system. The kdb Insights native object-store functionality outperforms open-source solutions and allows users to read HDB data from GCS. All you need to do is add the URI of the bucket that stores the HDB data to par.txt.

Cloud object storage has a relatively high latency compared to local storage such as local SSD. However, the performance of kdb+ when working with GCS can be improved by caching GCS data. The results of requests to cloud storage can be cached on a local high-performance disk, thus increasing performance. The cache directory is continuously monitored and a size limit is maintained by deleting files according to an LRU (least recently used) algorithm.

Caching, coupled with enabling secondary threads, can increase the performance of queries against an HDB on cloud storage. The larger the number of secondary threads, irrespective of CPU core count, the better the performance of kdb+ object storage. Conversely, the performance on cached data appears to be better if the secondary-thread count matches the CPU core count. Each query to GCS has a financial cost, and caching the resulting data can help to reduce it.

It is recommended to use compression on the HDB data residing on cloud storage. This can reduce the cost of object storage and possible egress costs, and also counteract the relatively high latency and low bandwidth associated with cloud object storage.

Object storage is great for archiving, tiering, and backup. The TP log file and the sym file should be stored each day and archived for a period of time. The lifecycle management of the object store simplifies clean-up, whereby one can set an expiration time on any file.
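As a sketch, compression for all subsequent writes can be enabled with .z.zd (the block size, algorithm and level shown here are illustrative choices), and an existing file can be compressed individually with -19! (paths hypothetical):

q).z.zd:17 2 6    / 2^17-byte logical blocks, gzip (algorithm 2), compression level 6
q)/ e.g. -19!(`:hdb/2023.06.30/trade/price;`:hdbz/2023.06.30/trade/price;17;2;6)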
The versioning feature of GCS is particularly useful when sym-file bloat happens due to feed misconfiguration or an upstream change: migrating back to a previous version restores the health of the whole database.

A kdb+ feed can be notified of a GCS file update (a file that the upstream drops into a bucket) and can start processing it immediately. The data is available earlier than with a solution in which the feed is started periodically, e.g. every hour.

Memory¶

The tickerplant (TP) uses very little memory during normal operation in realtime mode, whilst a full record of intraday data is maintained in the realtime database. Abnormal operation occurs if a realtime subscriber (including the RDB) is unable to process the updates. The TP stores these updates in the output queue associated with the subscriber. A large output queue needs a large amount of memory, and the TP may even hit memory limits and exit in extreme cases. Also, a TP in batch mode needs to store data (e.g. for a second), which also increases the memory requirement. Consequently, the memory requirement of the TP box depends on the set-up of the subscribers and the availability requirements of the tick system.

The main consideration for an instance hosting the RDB is to use a memory-optimized VM instance such as n1-highmem-16 (104 GB memory), n1-highmem-32 (208 GB memory), etc. Google Cloud also offers VMs with extremely large memory, such as m1-ultramem-160 with 3.75 TiB of memory, for clients who need to store large amounts of high-frequency data in memory in the RDB, or even to keep more than one partition of data in RDB form.

Bear in mind that there is a tradeoff between having a large memory and a quick RDB recovery time. The larger the tables, the longer it takes for the RDB to start from the TP log. To alleviate this problem, clients may split a large RDB into two. The driving rule for separating the tables into two clusters is the join operation between them: if two tables are never joined, then they can be placed into separate RDBs.

HDB boxes are recommended to have large memories. User queries may require large temporary space for complex queries. Query execution times are often dominated by the I/O cost of getting the raw data. OS-level caching stores frequently used data; the larger the memory, the fewer cache misses happen and the faster the queries run.

CPU¶

The CPU load generated by the tickerplant (TP) depends on the number of publishers and their verbosity (number of updates per second), and on the number of subscribers. Subscribers may subscribe to partial data, but any filtering applied will consume further CPU cycles.

The CPU requirement of the realtime database (RDB) comes from:
- appending updates to local tables
- user queries

Local table updates are very efficient, especially if the TP sends batch updates. User queries are often CPU-intensive: they perform aggregations, joins, and call expensive functions. If the RDB is set up in multi-input mode (started with a negative port) then user queries are executed in parallel. Furthermore, kdb+ 4.0 supports multithreading in most primitives, including sum, avg, dev, etc. If the RDB process is heavily used and hit by many queries, then it is recommended to start it with secondary threads (the -s command-line option). VMs with a lot of cores are recommended for RDB processes with large numbers of user queries.

If the infrastructure is sensitive to the RDB EOD work, then powerful CPUs are recommended. Sorting tables before splaying is a CPU-intensive task.
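For example, the EOD write for a single table can be expressed with .Q.dpft, which enumerates the table, sorts it by the chosen column and applies the parted attribute before splaying it to the date partition (the path and table name are illustrative):

q).Q.dpft[`:/mnt/hdb;.z.D;`sym;`trade]   / write today's trade partition, parted by sym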
Historical databases (HDB) are used for user queries. In most cases the I/O dominates execution times. If the box has a large memory and OS-level caching reduces I/O operations efficiently, then CPU performance will directly impact execution times.

VM Maintenance, live migration¶

Virtual machines run on real physical machines. Occasionally physical machines suffer hardware failures. Google developed a suite of monitoring tools to detect hardware failure as early as possible. If the physical server is considered unreliable, then the VM is moved to a healthy server. In most cases the migration goes unnoticed in Google Cloud, in contrast to an on-premises solution, where DevOps are involved and it takes time to replace the server. Improving business continuity is a huge value for all domains.

Even Google Cloud cannot break the laws of physics. The migration step takes some time: data must be transferred over the network. The more memory you have, the longer it takes to migrate the VM. VMs that run the RDB are likely to have the largest memory. During migration, client queries are not ignored but delayed a bit. The connections are not dropped; the queries go into a buffer temporarily and are executed after the migration.

VM migration is not triggered solely by hardware failure. Google needs to perform maintenance that is integral to keeping the infrastructure protected and reliable. The maintenance includes host OS and BIOS upgrades, security or compliance requirements, etc. Maintenance events are logged in Stackdriver and you can receive advance notice by monitoring the metadata value /computeMetadata/v1/instance/maintenance-event. Furthermore, Google provides the gcloud command compute instances simulate-maintenance-event to simulate a maintenance event. You can use this to measure the impact of live migration and provide an SLA for your kdb+tick system.

You can also instruct Google Cloud to avoid live migration during maintenance. The alternative is stopping the instance before maintenance and starting it up once the maintenance has finished. For kdb+tick this is probably not the policy you need, since you need to provide continuous service.

Locality, latency and resilience¶

The standard tick setup on premises requires the components to be placed on the same server. The tickerplant (TP) and realtime database (RDB) are linked via the TP log file, and the RDB and historical database (HDB) are bound due to the RDB EOD splaying. Customized kdb+tick setups release this constraint in order to improve resilience. One motivation could be to avoid HDB queries impacting data capture in the TP. You can set up an HDB writer on the HDB box, and the RDB can send its tables via IPC at midnight, delegating the I/O work together with the sorting and attribute handling.

We recommend placing the feedhandlers outside the TP box, on another VM between the TP and the data feed. This way any feedhandler malfunction has a smaller impact on TP stability.

Sole-tenant nodes¶

Physical servers may run multiple VMs that may belong to different organizations. Sole-tenancy lets you have exclusive access to a physical server that is dedicated to hosting only your project's VMs. Having this level of isolation is useful in performance-sensitive, business-critical applications, or to meet security or compliance requirements.

Another advantage of sole-tenant nodes is that you can define a maintenance window. This is particularly useful in business domains (e.g. exchanges that close for the weekend) where the data flow is not continuous.
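Returning to the locality point above, a minimal sketch of the RDB delegating its EOD write to a writer process co-located with the HDB; the handle, the remote .writer.eod function and the table set are hypothetical:

writer:hopen`:hdb-host:5015
.u.end:{[d]{[d;t]neg[writer](`.writer.eod;d;t;value t)}[d]each tables[]}  / ship each table asynchronously at EOD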
Recovery¶

A disaster-recovery plan is usually based on requirements from both the Recovery Time Objective (RTO) and Recovery Point Objective (RPO) specifications, which can guide the design of a cost-effective solution. However, every system has its own unique requirements and challenges. Here we suggest best-practice methods for dealing with the various possible failures one needs to be aware of and plan for when building a kdb+tick system.

In all the various combinations of failover operations that can be designed, the end goal is always to maintain availability of the application and minimize any disruption to the business.

In a production environment, some level of redundancy is always required. Depending on the use case, requirements may vary, but in nearly all instances requiring high availability the best option is a hot-hot (or 'active-active') configuration. The following are the four main configurations found in production: hot-hot, hot-warm, hot-cold, and pilot light (or cold hot-warm).

- Hot-hot
    - Hot-hot is the term for an identical mirrored secondary system running separately from the primary system, capturing and storing data but also serving client queries.
    - In a system with a secondary server available, hot-hot is the typical configuration, as it is sensible to use all available hardware to maximize operational performance. The KX gateway handles client requests across availability zones and collects data from several underlying services, combining data sets and, if necessary, performing an aggregation operation before returning the result to the client.
- Hot-warm
    - The secondary system captures data but does not serve queries. In the event of a failover, the KX gateway will reroute client queries to the secondary (warm) system.
- Hot-cold
    - The secondary system has a complete backup or copy of the primary system at some previous point in time (recall that kdb+ databases are just a series of operating-system files and directories) with no live processes running.
    - A failover in this scenario involves restoring from this latest backup, with the understanding that there may be some data loss between the time of failover and the time the latest backup was made.
- Pilot light (cold hot-warm)
    - The secondary is on standby and the entire system can quickly be started to allow recovery in a shorter time period than a hot-cold configuration.

Typically, kdb+ is deployed in a high-value system. Hence downtime can impact business, which justifies the hot-hot setup to ensure high availability. Usually, the secondary will run on separate infrastructure, with a separate filesystem, and save the data to a secondary database directory, separate from the primary. In this way, if the primary system or underlying infrastructure goes offline, the secondary is able to take over completely.

The usual strategy for failover is to have a complete mirror of the production system (feed handler, tickerplant, and realtime subscriber), and when any critical process goes down, the secondary is able to take over. Switching from production to disaster-recovery systems can be implemented seamlessly using kdb+ interprocess communication.

Disaster-recovery planning for kdb+tick systems
Data recovery for kdb+tick

Network¶

The network bandwidth needs to be considered if the kdb+tick components are not located on the same VM. The network bandwidth between Google Cloud VMs depends on the type of the VMs. For example, a VM of type n1-standard-8 has a maximum egress rate of 2 GBps. For a given update frequency you can calculate the required bandwidth by employing the -22! internal function, which returns the length of the IPC byte representation of its argument.
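As a sketch, the size of a typical update message can be measured with -22! and scaled by the expected update rate (the schema and the rate are illustrative):

q)upd:(`upd;`trade;([]sym:enlist`ABC;time:enlist .z.p;price:enlist 123.45;size:enlist 100i))
q)-22!upd          / bytes per serialized message
q)20000*-22!upd    / approximate bytes per second at 20,000 messages per second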
The tickerplant copes with large amounts of data if batch updates are sent. Make sure that the network is not your bottleneck in processing the updates.

You might want to use the Premium network service tier for higher throughput and lower latencies. Premium Tier delivers GCP traffic over Google's well-provisioned, low-latency, highly reliable global network.

Network load balancer¶

Cloud Load Balancing is used for ultra-high performance, TLS offloading at scale, centralized certificate deployment, support for UDP, and static IP addresses for your application. Operating at the connection level, network load balancers are capable of handling millions of requests per second securely while maintaining ultra-low latencies. The Standard network tier offers regional load balancing; global load balancing is available as a Premium Tier feature.

Load balancers can distribute load among applications that offer the same service. kdb+ is single-threaded by default. With a negative -p command-line option you can set multithreaded input mode, in which requests are processed in parallel. This, however, is not recommended for gateways (due to a socket-usage limitation) or for kdb+ servers that process data from disk, like HDBs.

A better approach is to use a pool of HDB processes. Distributing the queries can be done either by the gateway via async calls or by a load balancer. If the gateways are sending sync queries to the HDB load balancer, then a gateway load balancer is recommended to avoid query contention in the gateway. Furthermore, there are other kdb+tick components that enjoy the benefit of load balancers to better handle simultaneous requests.

Adding a load balancer on top of a historical database (HDB) pool is quite simple. You create an instance template. Its startup script automatically mounts the HDB data, sets environment variables (e.g. QHOME) and starts the HDB. The HDB accepts incoming TCP connections, so you need to set up an ingress firewall rule via network tags. In the next step, you create a managed, stateless instance group (a set of virtual machines) with autoscaling to better handle peak loads. The final step is creating a TCP network load balancer in front of your VMs. You can set the recently created instance group as a backend service and request a static internal address. All clients will access the HDB pool via this static address, and the load balancer will distribute the requests among the HDB servers seamlessly.

General TCP load balancers with an HDB pool offer better performance than a stand-alone HDB; however, utilization of the underlying HDBs is not optimal. Consider three clients C1, C2, C3 and two servers HDB1 and HDB2. C1 is directed to HDB1 when establishing the TCP connection, C2 to HDB2 and C3 to HDB1 again. If C1 and C3 send heavy queries and C2 sends a few lightweight queries, then HDB1 is overloaded and HDB2 is idle. To improve the load distribution, the load balancer needs to look beyond the TCP layer and understand the kdb+ protocol.

Logging¶

Google Cloud provides a fully managed logging service that performs at scale and can ingest application and system log data. Cloud Logging allows you to search and analyze the system log. It provides an easy-to-use and customizable interface so that DevOps can quickly troubleshoot applications.
Log messages can be transferred to BigQuery with a single click, where complex queries allow a more advanced log-data analysis. In this section we also illustrate how to easily interact with the Google Cloud API from a q process.

You can make use of Cloud Logging without any change in the code by setting up a fluentd-based logging agent. After installation, you simply add a config file to /etc/google-fluentd/config.d and restart the google-fluentd service. At a minimum, you need to set the path of the log files and specify a tag to derive the logName part of a log message.

The simplest way to send a log message directly from a kdb+ process is to use the system keyword and the gcloud logging command-line tool.

system "gcloud logging write kdb-log \"My first log message as text.\" --severity INFO&"

The ampersand is needed to prevent the logging from blocking the main thread.

Google Cloud allows sending structured log messages in JSON format. If you would like to send some key-value pairs stored in a q dictionary, then you can use the function .j.j to serialize the map into JSON.

m: `message`val!("a structured message"; 42)
system "gcloud logging write --severity=INFO --payload-type=json kdb-log '", .j.j[m], "'"

Using system commands for logging is not convenient. A better approach is to use client libraries. There is no client library for the q programming language, but you can use embedPy and the Python API as a workaround.

\l p.q
p)from google.cloud import logging
p)logging_client = logging.Client()
p)log_name = 'kdb-log-embedpy'
p)logger = logging_client.logger(log_name)
qlogger:.p.get `logger
qlogger[`:log_text; "My third kdb+ log message as text"; `severity pykw `ERROR]
m: `message`val!("another structured message"; 42)
qlogger[`:log_struct; m; `severity pykw `ERROR]

Another way to interact with the Cloud Logging API is through the REST API. kdb+ supports HTTP GET and POST requests via the utilities .Q.hg and .Q.hp. The advantage of this approach is that you don't need to install embedPy; instead you have a portable, pure-q solution. There is, however, a long journey from .Q.hp to a fully featured cloud-logging library. The QLog library of kdb Insights spares you the trip. Call the unary msg function to log a message. The argument is a string or a dictionary, depending on the type (structured or unstructured) of the message.

.log.msg "unstructured message via QLog"
.log.msg `severity`labels`message!("ERROR"; `class`facility!`rdb`EOD; "Something went wrong")

QLog supports multiple endpoint types through a simple interface and lets you write to them concurrently. The logging endpoints in QLog are encoded as URLs with two main types: file descriptors and REST endpoints. The file-descriptor endpoints supported are:

:fd://stdout
:fd://stderr
:fd:///path/to/file.log

REST endpoints are encoded as standard HTTP/S URLs such as: https://logging.googleapis.com. QLog generates structured, formatted log messages tagged with a severity level and component name. Routing rules can also be configured to suppress or route based on these tags.

Existing q libraries that implement their own formatting can still use QLog via the base APIs. This enables them to do their own formatting but still take advantage of the QLog-supported endpoints. Integration with cloud-logging application providers can easily be achieved using logging agents. These can be set up alongside running containers or virtual machines to capture their output and forward it to logging endpoints, such as the Cloud Logging API.
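For instance, a minimal structured logger (the function name is arbitrary) can write JSON lines to stdout with .j.j, which an agent such as the fluentd-based one above can pick up and forward:

jlog:{[sev;msg]-1 .j.j `time`severity`message!(.z.p;sev;msg);}
jlog["INFO";"EOD write-down complete"]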
Once the log messages are ingested, you can search, sort and display them with:
- the gcloud command-line tool
- APIs Explorer
- Legacy Logs Viewer

Logs Viewer is probably the best place to start the log analysis, as it provides a slick web interface with a search bar and filters based on the most popular choices. A clear advantage of structured log messages over text-based ones is that you can make better use of the advanced search facility: you can restrict by any key-value pair in the boolean expression of filtering constraints. Log messages can also be filtered and copied to BigQuery, which allows a more advanced analysis thanks to BigQuery's Standard SQL, which provides a superset of ANSI SQL functionality (e.g. by allowing array and JSON columns).

Key benefits of Cloud Logging:

- Almost all kdb+tick components can benefit from Cloud Logging. Feed handlers log new data arrival, data and connection issues. The TP logs new or disappearing publishers and subscribers. It can log if the output queue is above a threshold. The RDB logs all steps of the EOD process, which includes sorting and splaying of all tables. The HDB and gateway can log every single user query.
- kdb+ users often prefer to save log messages in kdb+ tables. Tables that are unlikely to change are specified by a schema, while entries that require more flexibility use key-value columns. Log tables are ingested by log tickerplants, and these Ops tables are separated from the tables required for the business.
- One benefit of storing log messages is the ability to process them in qSQL. Timeseries join functions include as-of and window joins. Consider, for example, investigating gateway functions that are executed hundreds of times during the day. The gateway query executes RDB and HDB queries via load balancers, and all these components have their own log entries. You can simply employ a timeseries join to find the relevant entries and perform aggregations to get insight into the performance characteristics of the execution chain (see the sketch after this list). Nothing prevents you from logging both to kdb+ and to Cloud Logging.
- Cloud Logging integrates with Cloud Monitoring, which supports monitoring, alerting and creating dashboards. It is simple to create a Metric Filter based on a pattern and set an alarm (e.g. sending an email) if a certain criterion holds. You may also wish to integrate your KX Monitoring for kdb+ components into this Cloud Logging and Cloud Monitoring framework. The purpose is the same: to get insight into the performance, uptime and overall health of the applications and the server pool. You can visualize trends via dashboards and set rules to trigger alarms.
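As promised above, a minimal sketch assuming hypothetical in-memory log tables: an as-of join ties each RDB log entry back to the most recent preceding gateway request for the same user.

q)gwLog:([]time:2023.06.30D10:00:00 2023.06.30D10:00:05;user:`alice`bob;query:("trades";"quotes"))
q)rdbLog:([]time:2023.06.30D10:00:01 2023.06.30D10:00:06;user:`alice`bob;ms:12 3)
q)aj[`user`time;rdbLog;gwLog]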
Google Cloud Functions¶

Cloud Functions allow you to run code without worrying about infrastructure management. You deploy code that is triggered by some event; the backend system is managed by Google Cloud. You pay only for the resources used during function execution, which may result in a better cost allocation than maintaining a complete backend server, by not paying for idle periods.

The Cloud Functions platform supports only the Node.js, Java, Go, and Python programming languages. Python has a kdb+ API via PyQ, but this requires starting up the pyq binary, which is not supported. Java and Go have kdb+ client APIs; the former is maintained by KX.

One use case for Cloud Functions is implementing feed handlers. An upstream system can drop, for instance, a CSV file into a Cloud Storage bucket. This event can trigger a Java or Go cloud function that reads the file, applies some filtering or other data transformation, then sends the data to a tickerplant (TP). The real benefit of not caring about the backend infrastructure becomes obvious when the number of kdb+ tables, and hence the number of feed handlers, increases, and distributing the feed handlers across the available servers needs constant human supervision.

A similar service, called Cloud Run, can be leveraged to run kdb+ in a serverless architecture. The kdb+ binary and code can be containerized and deployed to Cloud Run.

Service discovery¶

Feeds and the RDB need to know the address of the tickerplant. The gateway and the RDB need to know the address of the HDB. In practice, there are multiple RDBs and HDBs connecting to a gateway. In a microservice infrastructure like kdb+tick, these configuration details are best stored in a configuration-management service. This is especially true if the addresses are constantly changing and new services are added dynamically.

Google offers Service Directory, a managed service, to reduce the complexity of management and operations by providing a single place to publish, discover, and connect services. Service Directory organizes services into namespaces. A service can have multiple attributes, called annotations, as key-value pairs. You can add several endpoints to a service; an IP address and a port are mandatory for each endpoint. Unfortunately, Service Directory neither validates addresses nor performs health checks.

kdb+ can easily interact with Service Directory using Kurl. Kurl can be extended to create or query namespaces, and to discover, add or remove endpoints, facilitating service discovery for the kdb+ processes running in your tick environment. For example, a kdb+ gateway can fetch the addresses of RDBs and HDBs from Service Directory. The cloud console also comes with a simple web interface to, for example, list the endpoints and addresses of any service.

Access management¶

We distinguish between application-level and infrastructure-level access control. Application-level access management controls who can access kdb+ components and run commands. The tickerplant (TP), realtime database (RDB) and historical database (HDB) are generally restricted to kdb+ infra admins only, and the gateway is the access point for the users. One responsibility of the gateway is to check whether the user can access the tables (columns and rows) they are querying. This generally requires checking the user ID (returned by .z.u) against some organizational entitlement database, cached locally in the gateway and refreshed periodically.

Google provides enterprise-grade identity and access management, referred to as Cloud IAM. It offers a unified way to administer fine-grained actions on any Google Cloud resource, including storage, VMs and logs.
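A minimal sketch of the gateway-side check; the entitlement cache and the stored procedure are hypothetical, and in practice the cache would be refreshed periodically from an entitlements service:

entitlements:`alice`bob!(`trade`quote;enlist`trade)              / user -> permitted tables
canQuery:{[t]t in entitlements .z.u}                             / check the calling user (.z.u)
tradeQuery:{[s]$[canQuery`trade;select from trade where sym=s;'"not entitled"]}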
Hardware¶

| service | VM instance type | storage | CPU, memory, I/O |
|---|---|---|---|
| Tickerplant | High CPU: n1-highcpu-[16-96], n2-highcpu-[8-80] | PD, local PD | High-Perf, Medium, Medium |
| Realtime Database | High Memory: n1-highmem-[16-96], n2-highmem-[8-80] | | High-Perf, High-Capacity, Medium |
| Historical Database | High Memory: n1-highmem-[16-96], n2-highmem-[8-80] | PD, ElastiFile | Medium-Perf, Medium, High |
| Complex Event Processing (CEP) | Standard: n1-standard-[16-96], n2-standard-[8-80] | | Medium-Perf, Medium, High |
| Gateway | High CPU: n1-highcpu-[16-96], n2-highcpu-[8-80] | | Medium-Perf, Medium, High |

Resources¶

- KxSystems/kdb-tick: standard tick.q scripts
- Building Realtime Tick Subscribers
- Data Recovery for kdb+ tick
- Disaster-recovery planning for kdb+ tick systems
- Intraday writedown solutions
- Query Routing: a kdb+ framework for a scalable load-balanced system
- Order Book: a kdb+ intraday storage and access methodology
- kdb+tick profiling for throughput optimization
psz:{[s;x] if[0=n:count x;:0];t*:11h<>t:.Q.tx each value .Q.V x;if[0h<(&/)t;:n*(+/)tw t]; / Exit if table empty or materialized types all fixed-width j:1+(c:(+\)k:(|).Q.pn s)binr i:cs n; / Find number of trailing partitions required to obtain statistical sample p:(neg[j]#.Q.pv)(j-1)-m:where 0<k:j#k; / From these, get contributing partition numbers k@:m;j:k&i--1_0,c m; / Corresponding counts to fetch "j"$(n%i)*(+/){[s;p;n;i] sz?[s;enl[(=;.Q.pf;p)],$[n>i;enl(in;`i;neg[i]?n);()];0b;()]}[s]'[p;k;j] / Fetch and scale result } ct:{[p] if[0=count i:where(1<count each p)&(k:p[;0])in" /\t\n";3 0#()]; / Extract comments if[0=count j:where(c[;1]="#")&(c:(c?'"/")_'c:p i)[;2]in CTCH;3 0#()]; / From those, extract code tags (might have: /#+ a b /#@ c / Comment) s:{x:(0,1+x ss" /")_x:@[x;where x="\t";:;" "];(((x[;1]="#")&x[;2]in CTCH)?0b)#x}each c j; / Split into individual CTs, ignoring possible trailing comments (a[;2];(1+where[k="\n"]bin i j)where count each s;3_'a:(,/)s) / Code tag types, line numbers, and values } ctref:{[d;c] if[0=count c;:2 0#()]; / No code tags r:{[d;t;x] x@:where not null x;$[t="+";x;t="@";(,/)ctiref each qn[d]x;0#`]}[d]'[first c;`$" "vs'last c]; / Resolve direct and indirect references (c[1]where count each r;(,/)r) / Affix line numbers } lref:{ t:type each f:value y;g:x _$[100h=type y;f 3;100h=first t;1#value[first f]3;1#`]; / Get direct refs (fn) and/or namespace (fn, proj) g,/(lref[1]each f where t in 100 104h),cref each f where t in 0 99h / Append references } tr:{[s;nm] if[nm in s;:1 1#"+*"nm=-1#s]; / Check for recursion if[0=count g:calls nm;:1 1#"|"]; / And for degenerate calls t:(,/)tr[s,nm]each g; / Compute descendant call trees i:" "<>j:t[;1];if[1<n:count i:0,where((j=tl)&-1_0b,i)|i&-1_0b,j=bl;t:-1_(,/)(i _t),'n#1 1 0#""]; / The vertically aesthetic side of arboriculture n:count t:rt[g;t]; / Augment with subtree roots i:first k:where" "<>t[;0];j:last k; / Scope of descendant tree c:(c>1)+/(k:til c)>/:0,0|-2+c:1+j-i; / Descendant row class: 0 = solo, 1 = first, 2 = middle, 3 = last (@[n#" ";(_)0.5*j+i;:;ht],'@[n#" ";i+k;:;(ht,tl,vl,bl)c]),'t / Mark root position and bracket descendant group } rt:{[g;t] j@:k:where" "<>j:t[;0]; / Locate root positions g:(ht," "),/:((i:1+(|/)count each g)$g:string[g],'(j in"+*")#'j),'(" ",ht)j=ht; / Pad names to align subtrees, and add recursion hints @[count[t]#enl(i+3)#" ";k;:;g],'1_'t / Prepend roots } xr:{[nm] fn:last v:value value nm; / Get function defn fn[where fn="\t"]:" "; / Map tabs to blanks cr:ctref[first v 3]ct -4!(2*"k)"~2#fn)_fn; / Extract code tag information j:cmm[fn;0,where ln:fn="\n";q:qtm fn]; / Mark (with 0's) unescaped quote chars (may be in comments) and comments fn@:j:where j&q<=(=\)q>=j;ln@:j; / Mark (with 0's) quoted strings and comments and remove them j:where{x@:where j:i|-1_0b,i:x<>" ";i|expand[j;((("-"=1_x),0b)&-1_0b,x in")]}")|(1_k,0b)&-1_0b,k:x in CH]}fn;fn@:j;ln@:j; / Remove redundant white space b:b<>-1_0b,b:fn in CH; / Locate extremums of possible identifiers d:(fn[i]in -10_CH)&"`"<>fn -1+i:where b;d:(<>\)b:expand[b]d|-1_0b,d; / Inclusive mask of real identifiers (non-numeric, non-symbol) ia:(b&d)|p:trueAt[count fn;ra[fn;1i]]; / Compute start of each identifier and return stmt if[not count[first cr]|1b in ia;:enl"No references"]; / Quit if no references b:(-1_0b,p)|b>d; / Determine end of each reference ln-:til count ln:where ia where ia|ln; / Associated line numbers w:(|/)c:(1_c,count i)-c:where i:ia where d|:p; / Compute identifier lengths nl:`$(-1_0,(+\)c)_fn where d; / Extract identifier names ch:fn 
rz:where b; / Get character following each reference iz:count[ch]#0N; / Position of char following each indexed reference (in case no indexing) if[1b in i:ch="[";j:"]"=fn where c:fn in"[]";j:where[c]iasc j+(+\)1 -1 j;iz:@[iz;i;:;1+j 1+j?rz i:where i]]; / Find position after matching right bracket p:(+\)1 -1i"{}"?fn; / Cumulative function nesting level if[0=count lm:value each value each where[(i>-1_0b,i)j]_fn j:where i|-1_0b,i:2<=p;lm:1 4 0#`]; / Get lambda properties rt:rty[fn;iz;rz;ch;ia];ga:last rt;rt:first rt; / Compute reference type and global assign mask if[c:count first cr;nl,:last cr;ln,:first cr;rt,:c#3;ga,:c#0b;rz,:c#-2;v[3],:last cr]; / Add code tag references i:iasc$[c;flip(nl;ln);nl]; / Sort names (and line numbers if code tag refs) nl@:where m:differ nl@:i;ln@:i;rt@:i;ga@:i;rz@:i; / Keep unique names and reorder properties it:ity[nm;fn;nl;m;lm;v;rz]; / Compute identifier type sr:sref[nl;m;v;rt;ga;it]; / Calculate suspicious references xfmt[nl;m;it;ln;rt;sr;w] / Format table } rty:{[fn;iz;rz;ch;ia] rt:(":"=fn iz)+":[["?ch; / Compute basic reference type (a:1, a[b], a[b]:1, a) i:":"=fn rz+1; / Candidates for global assign (a::1) and modified assign (a+:1) rt[k]:6+j k:where b:(rt=3)&i&count[AOP]>j:AOP?ch; / Modified assignment (a+:1) j:(rt=1)&(fn[iz]in AOP)&":"=fn iz+1; / Modified indexed assignment (a[b]+:1, a[b]+::1) ga:(i&rt=0)|((":"=fn iz+1)&rt=2)|(j&":"=fn iz+2)|b&":"=fn rz+2; / Global assignment (a::1, a[b]::1, a[b]+::1, a+::1) rt+:j; / Flip index ref to assign for modified IA i:where ia;j:(rt=3)&ch=";"; / Candidates for functional amend (@[a; ...], .[a; ...]) rt[where j&i in 2+fn ss"@[[]"]:4; / At amend rt[where j&i in 2+fn ss".[[]"]:5; / Dot amend (rt;ga) / Reference type and global assign mask } ity:{[nm;fn;nl;m;lm;v;rz] g:g@/:(where not i|null t;where i:99h<t:gty qn[d:first v 3;g:(,/)1_'enl[v 3],lm[;3]]); / Compute global type, and group vars and fns it:(0,(+\)count each i)bin((,/)i:(v[1],/lm[;1];v[2],/lm[;2]),g)?nl; / Compute identifier type it[nl?distinct(inter/)2#i]+:8; / Mark if in multiple groups it[where(it=4)&nl in`by`from,.Q.res,key`.q]:7; / Q/k name it[where(it=1)&porr[m;"{"=fn rz+1]]:6; / Lambda it[where nl in nm,`.z.s]:5; / Recursive function it[i]:4 2 3[-20 99h binr gty qn[d;nl i:where it=4]]; / Global type (if still unresolved) it / Identifier type } sref:{[nl;m;v;rt;ga;it] i:rt in 1 3 4 5; / Mark references (vs. assigns) sr:porr[m;not rt in 1 3]&it=7; / Assignment to keywords sr|:nl in where 1<count each group v 1; / Duplicate parameters sr|:porr[m;i]<it<2; / Local identifiers with no call references sr|:(porr[m;ga]&it=1)|porr[m;not i|ga]&it=2; / Local/global assign inconsistency sr / Suspicious reference mask } xfmt:{[nl;m;it;ln;rt;sr;w] i:4+(_)10 xlog 1|/ln; / Width of each reference a:1_(-':)where m,1b; / Number of references per identifier b:(_)0.1+((WTH,(_)(WTH-6)%2)-10+w|:15)%i; / Max references per line of output for 1- or 2-col (min 6 cols between 2 cols; it+sr = 8+2 = 10) b:1|b flg:last[b]>=4&(|/)a; / Choose display format and refs/line (min 4 for 2-col) accordingly l:(+\)k:ceiling a%b;j:trueAt[last l;-1_0,l]; / Cumulative lines, and mask with 1's for each line of output corresponding to a new identifier c:@[count[j]#b;where 1_j,1b;:;1+(a-1)mod b]; / Number of references per line of output r:(,/')((+\)0,-1_c)_((2-i)$string ln),'RTY rt; / Format references r:(enl[count[first i]#" "],i:(w$string nl),'ITY[it],'(2 2#" ? 
")sr)[j*1+til[count k]where k],'r; / Prepend name, type, and suspicious ref indicator if[flg;k:l|i:count[j]-l@:i?(&/)i:abs l-ceiling count[j]%2;r:(((_)WTH%2)$(l#r),(k-l)#enl""),'(l _r),(k-i)#enl""]; / Compose 2-col display r / Formatted table } shw:{[nm;s;f0;f1] fn:first a:fnd nm;dpy[nm;fn;where last a;(,/)f1[;fn;f0 a]each$[10h=type s;enl s;s]] } se:{[fl] j:cmm[fn;where last fl;q:qtm fn:first fl]; / Mark (with 0's) unescaped quote chars (may be in comments) and comments q:j&q<=(=\)q>=j; / Mark (with 0's) quoted strings excluding quote chars and comments i:q&fn in CH;se:(<>\)i:i<>-1_0b,i; / Mark unquoted identifiers and constants b:fn in -11#CH; / Possible numeric constants j:q&i<fn="-";se|:j&1_b,0b; / Turn on "-" if no ID to left and token to right is numeric j:i&(fn="_")|u:fn=" ";se|:u<expand[j;not t:b[(0,k)where j k:where i]]; / Turn on "_" if token to left is non-numeric ID se|:u&expand[j;t]&1_b,0b; / Turn on " " if tokens to left and right are numeric IDs se&-1_0b,se:se>=q / One-element spanning set for syntactic matches } dpy:{[nm;fn;ln;p] if[not n:count p;:()]; i:((+\)count each fn:ln _fn)binr p:asc p; / Lines on which hits occur j:where i<>-1_-1,i; / Starts of line groups h:string[nm]," (",string[n]," occurrence",(n=1)_"s)\n"; / Header -1 h,/{[fn;p] fn,"\n",@[#[1+last p]" ";p;:;"^"]}'[fn i j;j _p-1+ln i],"\n"; } setc "w"=first string .z.o / Use 0 for ASCII box corners and sides, 1 for graphic \ Usage: .ws.fns`. / Lists names of functions in root namespace .ws.fns`name / Lists names of functions in specified namespace .ws.fns`name1`name2 / Lists names of functions in specified namespaces .ws.fns` / Lists names of functions in all namespaces .ws.vars | .ws.tbls / As above, but for variables or tables
// @kind function // @category featureCreate // @desc Apply word2vec on string data for NLP problems // @param features {table} Feature data as a table // @param config {dictionary} Information related to the current run of AutoML // @return {table} Features created in accordance with the NLP feature creation // procedure featureCreation.nlp.create:{[features;config] featExtractStart:.z.T; // Preprocess the character data charPrep:featureCreation.nlp.proc[features;config]; // Table returned with NLP feature creation, any constant columns are dropped featNLP:charPrep`features; featNLP:.ml.dropConstant featNLP; // Run normal feature creation on numeric datasets and add to NLP features // if relevant cols2use:cols[features]except charPrep`stringCols; if[0<count cols2use; nonTextFeat:charPrep[`stringCols]_features; featNLP:featNLP,'featureCreation.normal.create[nonTextFeat;config]`features ]; featureExtractEnd:.z.T-featExtractStart; `creationTime`features`featModel!(featureExtractEnd;featNLP;charPrep`model) } ================================================================================ FILE: ml_automl_code_nodes_featureCreation_nlp_funcs.q SIZE: 8,369 characters ================================================================================ // code/nodes/featureCreation/nlp/funcs.q - Nlp feature creation // Copyright (c) 2021 Kx Systems Inc // // The functionality below pertains to the application of NLP methods to // kdb+ data \d .automl // @kind function // @category featureCreation // @desc Utility function used both in the application of NLP on the // initial run and on new data. It covers sentiment analysis, named entity // recognition, word2vec and stop word analysis. // @param features {table} Feature data as a table // @param config {dictionary} Information related to the current run of AutoML // @return {dictionary} Updated table with NLP created features included, along // with the string columns and word2vec model featureCreation.nlp.proc:{[features;config] stringCols:.ml.i.findCols[features;"C"]; spacyLoad:.p.import[`spacy;`:load]`en_core_web_sm; args:(spacyLoad pydstr @;features stringCols); sentences:$[1<count stringCols; {x@''flip y}; {x each y 0} ]. 
args; regexTab:featureCreation.nlp.regexTab[features;stringCols; featureCreation.nlp.i.regexList]; namedEntityTab:featureCreation.nlp.getNamedEntity[sentences;stringCols]; sentimentTab:featureCreation.nlp.sentimentCreate[features;stringCols; `compound`pos`neg`neu]; corpus:featureCreation.nlp.corpus[features;stringCols; `isStop`tokens`uniPOS`likeNumber]; colsCheck:featureCreation.nlp.i.colCheck[cols corpus;]; uniposTab:featureCreation.nlp.uniposTagging[corpus;stringCols] colsCheck"uniPOS*"; stopTab:featureCreation.nlp.boolTab[corpus]colsCheck"isStop*"; numTab:featureCreation.nlp.boolTab[corpus]colsCheck"likeNumber*"; countTokens:flip enlist[`countTokens]!enlist count each corpus`tokens; tokens:string(,'/)corpus colsCheck"tokens*"; w2vTab:featureCreation.nlp.word2vec[tokens;config]; nlpTabList:(uniposTab;sentimentTab;w2vTab 0;namedEntityTab;regexTab; stopTab;numTab;countTokens); nlpTab:(,'/)nlpTabList; nlpKeys:`features`stringCols`model; nlpValues:(nlpTab;stringCols;w2vTab 1); nlpKeys!nlpValues } // @kind function // @category featureCreation // @desc Calculate percentage of positive booleans in a column // @param features {table} Feature data as a table // @param col {string} Column containing list of booleans // @return {table} Updated features indicating percentage of true values // within a column featureCreation.nlp.boolTab:{[features;col] flip col!{sum[x]%count x}@''features col } // @kind function // @category featureCreation // @desc Utility function used both in the application of NLP on the // initial run and on new data // @param features {table} Feature data as a table // @param stringCols {string} String columns within the table // @param fields {string[]} Items to retrieve from newParser - also used in the // naming of columns // @return {table} Parsed character data in appropriate corpus for // word2vec/stop word/unipos analysis featureCreation.nlp.corpus:{[features;stringCols;fields] parseCols:featureCreation.nlp.i.colNaming[fields;stringCols]; newParser:.nlp.newParser[`en_core_web_sm;fields]; // apply new parser to table data $[1<count stringCols; featureCreation.nlp.i.nameRaze[parseCols]newParser@'features stringCols; newParser@features[stringCols]0 ] } // @kind function // @category featureCreation // @desc Calculate percentage of each uniPOS tagging element present // @param features {table} Feature data as a table // @param stringCols {string} String columns within the table // @param fields {string[]} uniPOS elements created from parser // @return {table} Part of speech components as a percentage of the total parts // of speech featureCreation.nlp.uniposTagging:{[features;stringCols;fields] // retrieve all relevant part of speech types pyDir:.p.import[`builtins;`:dir]; uniposTypes:cstring pyDir[.p.import[`spacy]`:parts_of_speech]`; uniposTypes:`$uniposTypes where not 0 in/:uniposTypes ss\:"__"; table:features fields; // Encode the percentage of each sentence which is of a specific POS percentageFunc:featureCreation.nlp.i.percentDict[;uniposTypes]; $[1<count stringCols; [colNames:featureCreation.nlp.i.colNaming[uniposTypes;fields]; percentageTable:percentageFunc@''group@''table; featureCreation.nlp.i.nameRaze[colNames;percentageTable] ]; percentageFunc each group each table 0 ] } // @kind function // @category featureCreation // @desc Apply named entity recognition to retrieve information about // the content of a sentence/paragraph, allowing for context to be provided // for a sentence // @param sentences {string} Sentences on which named entity recognition is to // be 
applied // @param stringCols {string} String columns within the table // @return {table} Percentage of each sentence belonging to particular named // entity featureCreation.nlp.getNamedEntity:{[sentences;stringCols] // Named entities being searched over namedEntity:`PERSON`NORP`FAC`ORG`GPE`LOC`PRODUCT`EVENT`WORK_OF_ART`LAW, `LANGUAGE`DATE`TIME`PERCENT`MONEY`QUANTITY`ORDINAL`CARDINAL; percentageFunc:featureCreation.nlp.i.percentDict[;namedEntity]; data:$[countCols:1<count stringCols;flip;::]sentences; labelFunc:{csym {(.p.wrap x)[`:label_]`}each x[`:ents]`}; nerData:$[countCols; {x@''count@'''group@''z@''y}[;;labelFunc]; {x@'count@''group@'z@'y}[;;labelFunc] ].(percentageFunc;data); $[countCols; [colNames:featureCreation.nlp.i.colNaming[namedEntity;stringCols]; featureCreation.nlp.i.nameRaze colNames ]; ]nerData } // @kind function // @category featureCreation // @desc Apply sentiment analysis to an input table // @param features {table} Feature data as a table // @param stringCols {string} String columns within the table // @param fields {string[]} Sentiments to extract // @return {table} Information about the pos/neg/compound sentiment of columns featureCreation.nlp.sentimentCreate:{[features;stringCols;fields] sentimentCols:featureCreation.nlp.i.colNaming[fields;stringCols]; $[1<count stringCols; featureCreation.nlp.i.nameRaze[sentimentCols].nlp.sentiment@''features stringCols; .nlp.sentiment each features[stringCols]0 ] } // @kind function // @category featureCreation // @desc Find Regualar expressions within the text // @param features {table} Feature data as a table // @param stringCols {string} String columns within the table // @param fields {string[]} Expressions to search for within the text // @return {table} Count of each expression found featureCreation.nlp.regexTab:{[features;stringCols;fields] regexCols:featureCreation.nlp.i.colNaming[fields;stringCols]; // get regex values $[1<count stringCols; [regexCount:featureCreation.nlp.i.regexCheck@''features stringCols; featureCreation.nlp.i.nameRaze[regexCols;regexCount] ]; featureCreation.nlp.i.regexCheck each features[stringCols]0 ] } // @kind function // @category featureCreation // @desc Create/load a word2vec model for the corpus and apply this // analysis to the sentences to encode the sentence information into a // numerical representation which can provide context to the meaning of a // sentence. 
// @param tokens {table} Feature data as a table // @param config {dictionary} Information related to the current run of AutoML // @return {table} word2vec applied to the string column featureCreation.nlp.word2vec:{[tokens;config] size:300&count raze distinct tokens; tokenCount:avg count each tokens; tokens:csym tokens; window:$[30<tokenCount;10;10<tokenCount;5;2]; gensimWord2Vec:.p.import[`gensim.models][`:Word2Vec]; args:`vector_size`window`sg`seed`workers!(size;window;config`w2v;config`seed;1); model:$[config`savedWord2Vec; gensimWord2Vec[`:load] pydstr utils.ssrWindows config[`modelsSavePath],"/w2v.model"; @[gensimWord2Vec .;(tokens;pykwargs args);{ '"\nGensim returned the following error\n",x, "\nPlease review your input NLP data\n"}] ]; if[config`savedWord2Vec;size:model[`:vector_size]`]; w2vIndex:where each tokens in csym model[`:wv.index_to_key]`; sentenceVector:featureCreation.nlp.i.w2vTokens[tokens]'[til count w2vIndex; w2vIndex]; avgVector:avg each featureCreation.nlp.i.w2vItem[model]each sentenceVector; w2vTable:flip(`$"col",/:string til size)!flip avgVector; (w2vTable;model) } ================================================================================ FILE: ml_automl_code_nodes_featureCreation_nlp_init.q SIZE: 308 characters ================================================================================ // code/nodes/featureCreation/nlp/init.q - Load nlp code // Copyright (c) 2021 Kx Systems Inc // // Load code for nlp featureCreation node \d .automl loadfile`:code/nodes/featureCreation/nlp/featureCreate.q loadfile`:code/nodes/featureCreation/nlp/funcs.q loadfile`:code/nodes/featureCreation/nlp/utils.q ================================================================================ FILE: ml_automl_code_nodes_featureCreation_nlp_utils.q SIZE: 3,143 characters ================================================================================ // code/nodes/featureCreation/nlp/utils.q - Utilities for nlp feature creation // Copyright (c) 2021 Kx Systems Inc // // Utility functions specific the the featureCreation node implementation \d .automl // @kind function // @category featureCreationUtility // @desc Retrieves the word2vec items for sentences based on the model // @param model {<} Model to be applied // @param sentence {symbol} Sentence to retrieve information from // @return {float[]} word2vec transformation for sentence featureCreation.nlp.i.w2vItem:{[model;sentence] $[()~sentence;0;model[`:wv.__getitem__][sentence]`] } // @kind function // @category featureCreationUtility // @desc Transform tokens into correct word2vec format // @param tokens {symbol[]} Tokens within input text // @param index1 {int} 1st index of tokens // @param index2 {int} 2nd index of tokens // @return {string[]} Tokens present in w2v featureCreation.nlp.i.w2vTokens:{[tokens;index1;index2] tokens[index1;index2] } // @kind function // @category featureCreationUtility // @desc Count each expression within a single text // @param text {string} Textual data // @return {dictionary} Count of each expression found featureCreation.nlp.i.regexCheck:{[text] count each .nlp.findRegex[text;featureCreation.nlp.i.regexList] } // @kind function // @category featureCreationUtility // @desc Retrieves the word2vec items for sentences based on the model // @param attrCheck {symbol[]} Attributes to check // @param attrAll {symbol[]} All possible attributes // @return {dictionary} Percentage of each attribute present in NLP featureCreation.nlp.i.percentDict:{[attrCheck;attrAll] countAttr:count each attrCheck; 
attrDictAll:attrAll!count[attrAll]#0f; percentValue:`float$(countAttr)%sum countAttr; attrDictAll,percentValue }
/- Take the average symfile count from the past n days, then check that todays /- sym file count hasn't grown more than pct%. Third argument (weekends) is a /- boolean flagging if you should consider weekends (1b) or not (0b). symfilegrowth:{[ndays;pct;weekends] .lg.o[`symfilegrowth;"Checking sym file has not grown more than ",string[pct],"%."]; /- Create list of last n days. lastndays:$[weekends;.z.D+-1*1+til ndays;{x#a where((a:.z.D-1+til 7*1+x div 5)mod 7)in 2 3 4 5 6}ndays]; /- Get handle to the DQEDB. h:(exec first w from .servers.getservers[`proctype;`dqedb;()!();0b;1b]); /- Make sure we have all previous n (business) days in the dqedb. if[ndays>c:count lastndays inter@[h;".Q.pv";`date$()];:(0b;"ERROR: number of",$[weekends;" ";" business "],"days (",string[ndays],") exceeds number of available dates (",string[c],") on disk")]; /- Get todays sym file count. tc:first exec resvalue from .dqe.resultstab where funct=`symcount; /- Get average sym file count from previous days. ac:exec avg resvalue from h"select from resultstab where date in ",(" "sv string lastndays),",funct=`symcount"; /- Test whether the symfile growth is less than a pct, and return test status. msg:"Sym file ",$[b:pct>100*(tc-ac)%ac;"has not";"has"]," grown more than ",string[pct],"% above the previous ",string[ndays],"-day average."; .lg.o[`symfilegrowth;msg]; (b;msg) } ================================================================================ FILE: TorQ_code_dqc_tablecomp.q SIZE: 134 characters ================================================================================ /- only meant to be used for comparison \d .dqc tablecomp:{[tab] (1b;("table count of ",(string tab)," is ",c);c:count get tab) } ================================================================================ FILE: TorQ_code_dqc_tablecount.q SIZE: 618 characters ================================================================================ \d .dqc /- compare the count of a table to a chosen value tablecount:{[tab;operator;chkvalue] .lg.o["checking count of ",string[tab]," is ",string[operator]," ",string[chkvalue]]; d:(>;=;<)!("greater than";"equal to";"less than"); statement:d[operator]," ",(string chkvalue),". Its count is ",string count value tab; c:operator .(count value tab;chkvalue); (c;"The count of ",(string tab)," is ",$[c;"";"not "],statement) } /- check if the count of the table is greater than zero tablehasrecords:.dqc.tablecount[;>;0]; /- count the number of rows in a table tablecountcomp:{[tab] count value tab } ================================================================================ FILE: TorQ_code_dqc_tableticking.q SIZE: 407 characters ================================================================================ \d .dqc /- check that a table has obtained records within a specified period of time tableticking:{[tab;timeperiod;timetype] .lg.o[`dqc;"checking table recieved data in the last ",string[timeperiod]," ",string[timetype],"s"]; $[0<a:count select from tab where time within (.z.p-timetype$"J"$string timeperiod;.z.p); (1b;"there are ",(string a)," records"); (0b;"the table is not ticking")] } ================================================================================ FILE: TorQ_code_dqc_timediff.q SIZE: 571 characters ================================================================================ \d .dqc / Takes a table name tn and two column names ca and cb, as well as a percentage / pt and a tolerance tl as a timespan (eg 0D00:00:00.001). 
timediff:{[tn;ca;cb;pt;tl] .lg.o[`timediff;"Checking the time differences in columns ",(", "sv string(ca;cb))," of table ",(string tn)]; ot:$[pt>re:(sum a)%count a:tl<(tn ca)-tn cb; (1b;"No major problem with data flow"); (0b;"ERROR: ",(string re*100),"% of differences between columns ",(string ca),", ",m:(string cb)," are greater than the timespan ",(string tl),".") ]; .lg.o[`timediff;ot 1]; ot } ================================================================================ FILE: TorQ_code_dqc_xmarketalert.q SIZE: 408 characters ================================================================================ \d .dqc /- alerts user when bid has exceeded the ask in market data xmarketalert:{[tab] .lg.o[`dqc;"checking whether bid has exceeded ask price in market data"]; data:select from tab where bid>ask; $[0=count data; (1b;"bid has not exceeded the ask in market data"); (0b;"bid has exceeded the ask ",string[count data]," times and they have occured at: ","," sv string exec time from data)] } ================================================================================ FILE: TorQ_code_dqcommon_connection.q SIZE: 240 characters ================================================================================ \d .dqe gethandles:{exec procname,proctype,w from .servers.SERVERS where (procname in x) | (proctype in x)} /- fill procname for results table fillprocname:{[rs;h] val:rs where not rs in raze a:h`proctype`procname; (flip a),val,'` } ================================================================================ FILE: TorQ_code_dqcommon_loadcsv.q SIZE: 266 characters ================================================================================ \d .dqe readdqeconfig:{[file;types] /- notify user about reading in config csv .lg.o["reading dqengine config from ",string file:hsym file]; /- read in csv, trap error c:.[0:;((types;enlist",");file);{.lg.e["failed to load dqe configuration file: ",x]}] } ================================================================================ FILE: TorQ_code_dqcommon_savedata.q SIZE: 1,165 characters ================================================================================ \d .dqe savedata:{[dir;pt;savetemp;ns;tabname] .lg.o[`dqe;"Saving ",(string tabname)," data to ",.os.pth dir]; pth:` sv .Q.par[dir;pt;tabname],`; err:{[e].lg.e[`savedata;"Failed to save dqe data to disk : ",e];'e}; tab:.Q.dd[ns;tabname]; .[upsert;(pth;.Q.en[dir;r:0!.save.manipulate[tabname;select from tab where i in savetemp]]);err]; .lg.o[`savedata;"number of rows that will be saved down: ", string count savetemp]; tbl:` sv ns,tabname; .dqe.tosavedown[tbl]:.dqe.tosavedown[tbl] except savetemp; }; cleartables:{[ns;tabname] /- empty the table from memory .lg.o[`cleartables;"deleting ",(string tabname)," data from in-memory table"]; @[ns;tabname;0#]; }; endofday:{[dir;pt;tabs;ns;savetemp] .lg.o[`eod;"end of day message received - ",string pt]; savedata[dir;pt;savetemp;ns]each tabs; cleartables[ns]each tabs; .lg.o[`eod;"end of day is now complete"]; }; /- function to reload an hdb notifyhdb:{[dir;h] .lg.o[`dqc;"notifying the hdb to reload"]; /- if you can connect to the hdb - call the reload function @[h;"system \"l ",dir,"\"";{.lg.e[`notifyhdb;"failed to send reload message to hdb on handle: ",x]}]; }; ================================================================================ FILE: TorQ_code_dqe_backfill.q SIZE: 467 characters ================================================================================ \d .dqe / - function to backfill dqedb data with older data 
backfill:{[dqetab;funct;vars;proc;dateof;dir]
 / - empty the table from memory first
 .dqe.cleartables[`.dqe;dqetab];
 / - funct is the dqe query to perform on the older data from an old date (dateof)
 .dqe.runquery[funct;(vars;dateof);`table;proc];
 .dqe.savedata[dir;dateof;.dqe.tosavedown[.Q.dd[`.dqe;dqetab]];`.dqe;dqetab];
 .dqe.cleartables[`.dqe;dqetab];
 }

================================================================================
FILE: TorQ_code_dqe_bucketcount.q
SIZE: 1,023 characters
================================================================================

\d .dqe

/- Given a table name as a symbol (tn) and an inbuilt aggregate function in kdb (agg), apply agg to the hourly counts of messages received throughout the day and return the result
bucketcount:{[agg;tn]
 .lg.o[.Q.dd[`$string agg;`bucketcount];"Getting ",(string agg)," hourly count of rows in ",string tn];
 (enlist tn)!"j"$value agg select rowcount:count i by 60 xbar time.minute from ?[tn;enlist(=;.Q.pf;last .Q.PV);1b;()]
 }

/- Given a table name as a symbol (tn), return the average number of messages received throughout the day based on hourly counts
/- Works on partitioned tables in an hdb
avgbucketcount:bucketcount[avg;]

/- Given a table name as a symbol (tn), return the maximum number of messages received throughout the day based on hourly counts
/- Works on partitioned tables in an hdb
maxbucketcount:bucketcount[max;]

/- Given a table name as a symbol (tn), return the minimum number of messages received throughout the day based on hourly counts
/- Works on partitioned tables in an hdb
minbucketcount:bucketcount[min;]

================================================================================
FILE: TorQ_code_dqe_bycount.q
SIZE: 268 characters
================================================================================

\d .dqe

bycount:{[tab;bycols]
 .lg.o[`bycount;"Counting the number of messages received with a by clause applied to column(s) bycols"];
 (enlist$[-11h=type bycols;;` sv]bycols)!enlist?[tab;enlist(=;.Q.pf;last .Q.PV);{x!x}(),bycols;(enlist`bycount)!enlist(count;`i)]
 }

================================================================================
FILE: TorQ_code_dqe_groupcount.q
SIZE: 246 characters
================================================================================

\d .dqe

groupcount:{[tab;cola;vara]
 .lg.o[`groupcount;"Counting the number of messages received where column cola equals the value vara"];
 (enlist vara)!enlist ?[tab;((=;.Q.pf;last .Q.PV);(=;cola;enlist vara));1b;()]
 }

================================================================================
FILE: TorQ_code_dqe_infinitycount.q
SIZE: 449 characters
================================================================================

\d .dqe

/- Given a table name as a symbol (tab) and a column name as a symbol (col), returns the number of infinities in col of tab.
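/- Pass col as ` (the null symbol) to count infinities across every column,
/- e.g. hypothetical calls: .dqe.infinitycount[`trade;`price] for one column,
/- or .dqe.infinitycount[`trade;`] for the whole table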
/- Works on partitioned tables in an hdb
infinitycount:{[tab;col]
 .lg.o[`infinitycount;"Getting count of infinities in",$[col~`;" ";" column: ",(string col)," of "],string tab];
 (enlist tab)!enlist "j"$sum value{sum x in(0w;-0w;0W;-0W)}each flip?[tab;enlist(=;.Q.pf;last .Q.PV);1b;$[col~`;();{x!x}enlist col]]
 }

================================================================================
FILE: TorQ_code_dqe_nullcount.q
SIZE: 515 characters
================================================================================

\d .dqe

/- Given a table name as a symbol (tab) and a column name as a symbol (col), returns the number of nulls in col of tab.
/- Works on partitioned tables in an hdb
/- col can be set to ` for the function to work on the whole table
nullcount:{[tab;col]
 .lg.o[`nullcount;"Getting count of nulls in",$[col~`;" ";" column: ",(string col)," of "],string tab];
 (enlist tab)!enlist "j"$sum value{sum$[0h=type x;0=count each x;null x]}each flip?[tab;enlist(=;.Q.pf;last .Q.PV);1b;$[col~`;();{x!x}enlist col]]
 }

================================================================================
FILE: TorQ_code_dqe_symcount.q
SIZE: 322 characters
================================================================================

\d .dqe

/- count of distinct symbols each day in column col of table tab. Works on
/- partitioned tables in an hdb
symcount:{[tab;col]
 .lg.o[`symcount;"Counting distinct symbols each day in column ",(string col)," of table ",string tab];
 (enlist tab)!enlist count ?[tab; enlist(=;.Q.pf;last .Q.PV); 1b; {x!x}enlist col]
 }

================================================================================
FILE: TorQ_code_dqe_symfilecheck.q
SIZE: 265 characters
================================================================================

\d .dqe

/- should be run on the hdb process - returns a dictionary of the count of syms
/- in the sym file
symfilecheck:{[filename]
 .lg.o[`symfilecheck;"Counting number of symbols in the symbol file each day"];
 (enlist `symfilecount)!enlist count get filename
 }

================================================================================
FILE: TorQ_code_dqe_tablecount.q
SIZE: 213 characters
================================================================================

\d .dqe

tablecount:{[par]
 .lg.o[`tablecount;"Getting table count dictionary"];
 /- create dictionary of partition tables with their counts
 .Q.pt!{[par;x]count ?[x;enlist(=;.Q.pf;par);0b;()]}[par]'[.Q.pt]
 }

================================================================================
FILE: TorQ_code_filealerter_pcapdecoder.q
SIZE: 4,180 characters
================================================================================

// Decoder is designed for pcap version 2.4
// Info on pcap structure => https://www.kroosec.com/2012/10/a-look-at-pcap-file-format.html

\d .pcap

// size of headers in bytes and dict of protocol code conversions
globheader: 24;
packetheader: 16;

// mapping from protocol id numbers to protocol name
// add desired protocols to the dictionary for the decoder to be able to translate their IDs
allcodes:(enlist 6)!(enlist `TCP);

// returns table of packet data
buildtable:{[file]
 // check that the version number of the input file is 2.4; if incorrect, the function exits
 pcapversioncheck: all 2 0 4 0 = read1(file;4;4);
 if[not pcapversioncheck;.lg.o[`alerter;"pcap version number of ",(1_string file), " is incorrect, so could not be decoded"];:()];
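 // note: bytes 4-7 of the classic pcap global header hold the format version
 // as two little-endian shorts (major 2, minor 4), i.e. 0x02 0x00 0x04 0x00,
 // which is why the check above compares against 2 0 4 0 (this assumes a
 // little-endian capture; a big-endian file would fail the check)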