.ml.mm:.qml.mm
.ml.mmt:.qml.mmx[`rflip]
.ml.mtm:.qml.mmx[`lflip]
.ml.minv:.qml.minv
.ml.mlsq:{.qml.mlsqx[`flip;y;x]}
.ml.dot:.qml.dot
.ml.mdet:.qml.mdet
.ml.mchol:.qml.mchol
.fmincg.dot:.qml.dot
================================================================================
FILE: funq_randomforest.q
SIZE: 1,085 characters
================================================================================
\c 20 100
\l funq.q
\l wdbc.q
\l winequality.q
-1"applying random forest to the wdbc data set";
k:20
d:.ut.part[`train`test!3 1;0N?] wdbc.t
-1"bagging grows B decision trees with random sampling (with replacement)";
m:.ml.bag[k;.ml.q45[();::]] d`train
avg d.test.diagnosis=.ml.pbag[k;m] d`test
-1"a random forest grows B decision trees with random sampling (with replacement)";
-1"and a sub-selection of sqrt (for classification) of the features at each split";
m:.ml.bag[k;.ml.q45[(1#`maxff)!1#sqrt;::]] d`train
avg d.test.diagnosis=.ml.pbag[k;m] d`test
-1"applying random forest to the winequality data set";
d:.ut.part[`train`test!1 1;0N?] winequality.red.t
-1"bagging grows B decision trees with random sampling (with replacement)";
m:.ml.bag[k;.ml.rt[();::]] d`train
.ml.rms d.test.quality-.ml.pbag[k;m] d`test
-1"a random forest grows B decision trees with random sampling (with replacement)";
-1"and a sub-selection of one third (for regression) of the features at each split";
m:.ml.bag[k;.ml.rt[(1#`maxff)!1#%[;3];::]] d`train
.ml.rms d.test.quality-.ml.pbag[k;m] d`test
================================================================================
FILE: funq_recommend.q
SIZE: 8,769 characters
================================================================================
\c 22 100
\l funq.q
\l mlens.q
-1"reference mlens data from global namespace";
`rating`movie set' mlens`rating`movie

/ personal ratings
-1"we now build a dataset to hold our own ratings/preferences";
r:1!select `mlens.movie$movieId,rating:0n from movie
r,:([]movieId:173 208 260 435 1197 2005 1968i;rating:.5 .5 4 .5 4 4 4f)
r,:([]movieId:2918 4006 53996 69526 87520 112370i;rating:5 5 4 4 5 5f)
show select movieId,rating,movieId.title from r where not null rating

/ http://files.grouplens.org/papers/FnT%20CF%20Recsys%20Survey.pdf
/ content based filtering
-1"content based filtering does not use ratings from other people.";
-1"it uses our own preferences mixed with each movie's genre";
Y:value[r]1#`rating
-1"we build the X matrix based on each movie's genres";
show X:"f"$flip genre in/: value[movie]`genres
-1"we then initialize the THETA matrix";
theta:raze 0N!THETA:(1;1+count X)#0f
-1"since we don't use other users' preferences, this is a quick optimization";
rf:.ml.l2[.1] / l2 regularization
theta:first .fmincg.fmincg[20;.ml.lincostgrad[rf;Y;X];theta] / learn
-1"confirm lincostgrad handled the null Y values";
.ut.assert[2.4 0.2 0.4 -0.2 0.4] .ut.rnd[.1] 5#theta
-1"view our deduced genre preferences";
show {(5#x),-5#x}desc genre!1_theta
-1"how closely do the computed scores match our preferences";
THETA:(count[Y];0N)#theta
r:update score:first .ml.plin[X;THETA] from r
show select[>score] rating,score,movieId.title from r where not null rating
-1"and finally, show the recommendations";
show select[10;>score] movieId,score,movieId.title from r
-1"'Mars Needs Moms' was my top recommendation because it had so many genres";
select genres from movie where movieId = 85261

/ ratings data summary
/ http://webdam.inria.fr/Jorge/html/wdmch19.html
-1"we begin by reporting summary statistics about the ratings dataset";
-1"support";
-1"reporting the number of users, movies and ratings";
(count distinct@) each exec nu:userId, nm:movieId, nr:i from rating
-1"distribution:";
-1"we can see that only users with >20 ratings are included";
t:select nr:count rating by userId from rating
show select nu:count userId by 10 xbar nr from t
-1"we can also see that a large majority of movies have less than 10 ratings";
t:select nr:count rating by movieId from rating
show select nm:count movieId by 10 xbar nr from t
-1"quality:";
-1"we can see that there is a positive bias to the ratings";
show `min`med`avg`mode`max!(min;med;avg;.ml.mode;max)@\:rating`rating
/rating:select from rating where 19<(count;i) fby userId,9<(count;i) fby movieId
-1"the average rating per user (and movie) is distributed around 3.5";
t:select avg rating by movieId from rating
t:select nm:count i by .5 xbar rating from t
s:select avg rating by userId from rating
show t lj select nu:count i by .5 xbar rating from s
-1"movies with a small number of ratings can distort the rankings";
-1"the top rankings are dominated by movies with a single rating";
show select[10;>rating] avg rating, n:count i by movieId.title from rating
-1"while the most rated movies have averages centered around 4";
show select[10;>n] avg rating, n:count i by movieId.title from rating
-1"we will therefore demean the ratings before performing our analysis";
-1"";
-1"by using a syntax that is similar to pivoting,";
-1"we can generate the user/movie matrix";
/ https://grouplens.org/blog/similarity-functions-for-user-user-collaborative-filtering/
-1"to ensure the ratings matrix only contains movies with relevant ratings,";
-1"we generate a list of unique movie ids that meet our threshold.";
n:20
show m:exec distinct movieId from rating where n<(count;i) fby movieId
show R:value exec (movieId!rating) m by userId from rating where movieId in m
-1"then add our own ratings";
R,:r[([]movieId:m);`rating]
-1"demean each user";
U:R-au:avg each R
k:30

/ user-user collaborative filtering
-1"user-user collaborative filtering fills missing ratings";
-1"with averaged values from users whose ratings are most similar to ours";
-1"average top ",string[k]," users based on correlation";
p:last[au]+.ml.fknn[1f-;.ml.cordist\:;k;U;0f^U] 0f^last U
show `score xdesc update score:p,movieId.title from ([]movieId:m)#r
-1"average top ",string[k]," users based on spearman correlation";
p:last[au]+.ml.fknn[1f-;.ml.scordist\:;k;U;0f^U] 0f^last U
show `score xdesc update score:p,movieId.title from ([]movieId:m)#r
-1"weighted average top ",string[k]," users based on cosine similarity";
-1"results in the same recommendations as .ml.cordist because the data";
-1"has been centered and filled with 0";
p:last[au]+.ml.fknn[1f-;.ml.cosdist\:;k;U;0f^U] 0f^last U
show `score xdesc update score:p,movieId.title from ([]movieId:m)#r

/ item-item collaborative filtering
-1"item-item collaborative filtering fills missing ratings";
-1"with averaged values from movies most similar to movies we've rated";
I-:ai:avg each I:flip R
-1"pre-build item-item distance matrix because item similarities are stable";
D:((0^I) .ml.cosdist\:) peach 0^I
-1"average top ",string[k]," items based on correlation";
p:ai+.ml.knn[1f-;k;last each I] D
show `score xdesc update score:p,movieId.title from ([]movieId:m)#r
nf:10;
if[2<count key `.qml;
 -1 .ut.box["**"] (
  "singular value decomposition (svd) allows us to compute latent factors (off-line)";
  "and perform simple matrix multiplication to make predictions (on-line)");
 -1"compute score based on top n svd factors";
 / singular value decomposition
 usv:.qml.msvd 0f^U;
 -1"predict missing ratings using low rank approximations";
 P:ai+{x$z$/:y} . .ml.nsvd[nf] usv;
 show t:`score xdesc update score:last P,movieId.title from ([]movieId:m)#r;
 -1"compare against existing ratings";
 show select from t where not null rating;
 -1"we can use svd to foldin a new user";
 .ml.foldin[.ml.nsvd[500] usv;0b] 0f^U[2];
 -1"or even a new movie";
 .ml.foldin[.ml.nsvd[500] usv;1b;0f^U[;2]];
 -1"what does the first factor look like?";
 show each {(5#x;-5#x)}([]movieId:m idesc usv[2][;0])#movie;
 -1"how much variance does each factor explain?";
 show .ut.plot[40;19;.ut.c10;avg] {x%sum x*:x}.qml.mdiag usv 1;
 ];

/ regularized gradient descent
-1 .ut.box["**"] (
 "regularized gradient descent collaborative filtering";
 "doesn't need to be filled with default values";
 "and can use regularization");
n:(ni:count U 0;nu:count U) / (n items; n users)
-1"randomly initialize X and THETA";
xtheta:2 raze/ XTHETA:(X:-1+ni?/:nf#2f;THETA:-1+nu?/:nf#2f)
-1"learn latent factors that best predict existing ratings matrix";
xtheta:first .fmincg.fmincg[100;.ml.cfcostgrad[rf;n;U];xtheta] / learn
-1"predict missing ratings";
P:au+.ml.pcf . XTHETA:.ml.cfcut[n] xtheta / predictions
show t:`score xdesc update score:last P,movieId.title from ([]movieId:m)#r
-1"compare against existing ratings";
show select from t where not null rating
-1"check collaborative filtering gradient calculations";
.ut.assert . .ut.rnd[1e-6] .ml.checkcfgrad[1e-4;rf;20 5]

/ stochastic regularized gradient descent
-1"by solving for each rating, one at a time";
-1"we can perform stochastic gradient descent";
-1"randomly initialize X and THETA";
xtheta:2 raze/ XTHETA:(X:-1+ni?/:nf#2f;THETA:-1+nu?/:nf#2f)
-1"define cost function";
cf:.ml.cfcost[rf;U] .
-1"define minimization function";
mf:.ml.sgdmf[.05;.2;0N?;U;;::]
-1"keep running mf until improvement is lower than pct limit";
XTHETA:first .ml.iter[-1;.0001;cf;mf] XTHETA
-1"predict missing ratings";
P:au+.ml.pcf . XTHETA / predictions
show t:`score xdesc update score:last P,movieId.title from ([]movieId:m)#r
-1"compare against existing ratings";
show select from t where not null rating

/ alternating least squares with weighted regularization
/ Large-scale Parallel Collaborative Filtering for the Netflix Prize
/ http://dl.acm.org/citation.cfm?id=1424269
-1"Alternating Least Squares is used to factor the rating matrix";
-1"into a user matrix (X) and movie matrix (THETA)";
-1"by alternating between keeping THETA constant and solving for X";
-1"and vice versa. this changes a non-convex problem";
-1"into a quadratic problem solvable with parallel least squares.";
-1"this implementation uses a weighting scheme where";
-1"the weights are equal to the number of ratings per user/movie";
-1"reset X and THETA";
XTHETA:(X:-1+ni?/:nf#1f;THETA:-1+nu?/:nf#2f)
-1"keep running mf until improvement is lower than pct limit";
XTHETA:first .ml.iter[1;.0001;.ml.cfcost[();U] .;.ml.alswr[.01;U]] XTHETA
-1"predict missing ratings";
P:au+.ml.pcf . XTHETA / predictions
show t:`score xdesc update score:last P,movieId.title from ([]movieId:m)#r
-1"compare against existing ratings";
show s:select from t where not null rating
.ut.assert[0f] .ut.rnd[.01] avg exec .ml.mseloss[rating;score] from s
================================================================================
FILE: funq_sands.q
SIZE: 315 characters
================================================================================
/ sense and sensibility
sands.f:"161.txt"
sands.b:"https://www.gutenberg.org/files/161/old/"
-1"[down]loading sense and sensibility text";
.ut.download[sands.b;;"";""] sands.f;
sands.txt:read0 `$sands.f
sands.chapters:1_"CHAPTER" vs "\n" sv 43_-373_sands.txt
sands.s:{(3+first x ss"\n\n\n")_x} each sands.chapters
================================================================================
FILE: funq_seeds.q
SIZE: 429 characters
================================================================================
seeds.f:"seeds_dataset.txt"
seeds.b:"http://archive.ics.uci.edu/ml/machine-learning-databases/"
seeds.b,:"00236/"
-1"[down]loading seeds data set";
.ut.download[seeds.b;;"";""] seeds.f;
seeds.XY:("FFFFFFFH";"\t") 0: ssr[;"\t\t";"\t"] each read0 `$seeds.f
seeds.X:-1_seeds.XY
seeds.y:first seeds.Y:-1#seeds.XY
seeds.c:`area`perimeter`compactness`length`width`asymmetry`groove`variety
seeds.t:`variety xcols flip seeds.c!seeds.XY
================================================================================
FILE: funq_silhouette.q
SIZE: 1,437 characters
================================================================================
Implementing trend indicators in kdb+¶

The compactness of kdb+ and the terseness of q focus code on a small number of high-performing native built-in functions rather than extensive libraries. kdb+ users often develop libraries of their own domain-specific algorithms and functions, for convenience and to support reuse. In this paper, we show examples of functions commonly used in finance, built on native q functions. Cryptocurrency data for Bitcoin and Ethereum from multiple exchanges is used in the examples. Charts are displayed using the KX Analyst IDE.

The code used in this paper can be found at kxcontrib/trend-indicators. It was developed on kdb+ version 3.6 2019.03.07.

Data extraction¶

Data was captured in a process similar to that used in Eduard Silantyev’s blog “Combining high-frequency cryptocurrency venue data using kdb+”. Trade and quote tick data for Ethereum (ETH) and Bitcoin (BTC), denominated in the US dollar (USD), was collected from four exchanges:

- Bitfinex
- HitBtc
- Kraken
- Coinbase

spanning May, June and July 2019: just over two months of data. A Python script connected to the exchange feeds and extracted the relevant data, which was then published to a kdb+ tickerplant. The tickerplant processed the messages and sent them to a real-time database (RDB). At the end of the day the data was written to a historical database (HDB), where it could be accessed for analysis. Such details will not be elaborated on, as the focus of this paper is implementing trend indicators using kdb+. For help with tick capture, see:

- kdb+tick profiling for throughput optimization
- Disaster-recovery planning for kdb+ tick systems
- Query Routing: A kdb+ framework for a scalable, load-balanced system

To make it easy to follow this paper and execute the functions/indicators created, there is a sample of close data located in the GitHub repository: a small binary flat file which can be loaded into kdb+/q.
The close data contains the daily high/low/open/close and volume of trades for Bitcoin trading on Kraken; the table is called bitcoinKraken. This table will be used throughout the paper to show how you can apply the functions/indicators to an in-memory kdb+ table.

q)bitcoinKraken:get `:bitcoinKraken
q)\l cryptoFuncs.q
"loading in cryptoFuncs"
q)10#bitcoinKraken
date       sym     exch   high   low    open   close  vol
--------------------------------------------------------------
2019.05.09 BTC_USD KRAKEN 6174   6037.9 6042   6151.4 1808.803
2019.05.10 BTC_USD KRAKEN 6430   6110.1 6151.4 6337.9 9872.36
2019.05.11 BTC_USD KRAKEN 7450   6338   6339.5 7209.9 18569.93
2019.05.12 BTC_USD KRAKEN 7588   6724.1 7207.9 6973.9 18620.15
2019.05.13 BTC_USD KRAKEN 8169.3 6870   6970.1 7816.3 19668.6
2019.05.14 BTC_USD KRAKEN 8339.9 7620   7817.1 7993.7 18118.61
2019.05.15 BTC_USD KRAKEN 8296.9 5414.5 7988.9 8203   11599.71
2019.05.16 BTC_USD KRAKEN 8370   7650   8201.5 7880.7 13419.86
2019.05.17 BTC_USD KRAKEN 7946.2 6636   7883.6 7350   21017.35
2019.05.18 BTC_USD KRAKEN 7494.2 7205   7353.9 7266.8 6258.585

Technical analysis¶

Technical analysis is the process of identifying trading opportunities based on past price movements using different stock charts. Trend/technical traders use a combination of patterns and indicators from price charts to help them make financial decisions. Investors analyze price charts to develop theories about what direction the market is likely to move. Commonly used technical-analysis tools are the Candlestick chart, Moving Average Convergence Divergence and Relative Strength Index. These tools are created using q/kdb+’s built-in functions such as mavg, ema, min, max, and avg. The tools discussed do not predict future prices but provide the investor with information to determine their next move. The indicators create buy and sell signals using moving averages, prices, volume, and days since the previous high or low. The investor can then make financial decisions based on the signals created.
Pattern recognition¶

The candlestick chart is used for describing price movements in a particular security. The chart illustrates the open/high/low/close of a security and is used by traders to identify patterns based on past movements.

candlestick:{
 fillscale:.gg.scale.colour.cat 01b!(.gg.colour.Red; .gg.colour.Green);
 .qp.theme[enlist[`legend_use]!enlist 0b]
 .qp.stack (
  // open/close
  .qp.interval[x; `date; `open; `close]
   .qp.s.aes[`fill; `gain]
   ,.qp.s.scale[`fill; fillscale]
   ,.qp.s.labels[`x`y!("Date";"Price")]
   ,.qp.s.geom[`gap`colour!(0; .gg.colour.White)];
  // low/high
  .qp.segment[x; `date; `high; `date; `low]
   .qp.s.aes[`fill; `gain]
   ,.qp.s.scale[`fill; fillscale]
   ,.qp.s.labels[`x`y!("Date";"Price")]
   ,.qp.s.geom[enlist[`size]!enlist 1])}

.qp.go[700;300]
 .qp.theme[.gg.theme.clean]
 .qp.title["Candlestick chart BTC"]
 candlestick[update gain:close>open from select from wpData where sym=`BTC_USD,exch=`KRAKEN]

Figure 1: Bitcoin Candlestick Chart using Kraken data

Each candle shows the high/open/close/low and whether the security closed higher than it opened. This can be useful in predicting short-term price movements.

Simple Moving Averages¶

The price of a security can be extremely volatile, and large price movements can make it hard to pinpoint the general trend. Moving averages ‘smooth’ price data by creating a single flowing line. The line represents the average price over a period of time. Which moving average the trader decides to use is determined by the time frame in which he or she trades. There are two commonly used moving averages: the Simple Moving Average (SMA) and the Exponential Moving Average (EMA). The EMA gives a larger weighting to more recent prices when calculating the average. In Figure 2 you can see the 10-day and 20-day moving averages along with the close price. Traders analyze where the current trade price lies in relation to the moving averages.
If the current trade price is above the moving-average (MA) line this would indicate over-bought (a decline in price is expected); a trade price below the MA would indicate over-sold (an increase in price may be seen). It should be noted that a signal/trend indicator would not determine a trading strategy on its own but would be analyzed in conjunction with other factors.

Now, using the previously defined bitcoinKraken table, we can start to apply our own simple moving averages. In the example below the 2- and 5-day moving averages are calculated on the close price. This can be updated to get the moving average of any of the numeric columns, such as the high price, or you could alter the number of periods used. In Figure 2 the 10- and 20-day moving averages are used. This can be adjusted depending on your needs: short-term traders would be interested in relatively short time periods, whereas long-term investors who want an overall picture of a security would compare large periods like 100 and 200 days.

q)10#update sma2:mavg[2;close],sma5:mavg[5;close] from bitcoinKraken
date       sym     exch   high   low    open   close  vol      sma2    sma5
-------------------------------------------------------------------------------
2019.05.09 BTC_USD KRAKEN 6174   6037.9 6042   6151.4 1808.803 6151.4  6151.4
2019.05.10 BTC_USD KRAKEN 6430   6110.1 6151.4 6337.9 9872.36  6244.65 6244.65
2019.05.11 BTC_USD KRAKEN 7450   6338   6339.5 7209.9 18569.93 6773.9  6566.4
2019.05.12 BTC_USD KRAKEN 7588   6724.1 7207.9 6973.9 18620.15 7091.9  6668.275
2019.05.13 BTC_USD KRAKEN 8169.3 6870   6970.1 7816.3 19668.6  7395.1  6897.88
2019.05.14 BTC_USD KRAKEN 8339.9 7620   7817.1 7993.7 18118.61 7905    7266.34
2019.05.15 BTC_USD KRAKEN 8296.9 5414.5 7988.9 8203   11599.71 8098.35 7639.36
2019.05.16 BTC_USD KRAKEN 8370   7650   8201.5 7880.7 13419.86 8041.85 7773.52
2019.05.17 BTC_USD KRAKEN 7946.2 6636   7883.6 7350   21017.35 7615.35 7848.74
2019.05.18 BTC_USD KRAKEN 7494.2 7205   7353.9 7266.8 6258.585 7308.4  7738.84

The graph in Figure 2 was created using KX Analyst.
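The sma2 and sma5 columns above can be reproduced outside q. The following is a minimal Python sketch (not part of the paper's q library): the function name mavg and the expanding window for the first n-1 points are assumptions chosen to mirror q's mavg, and the sample closes are the first five rows of bitcoinKraken.

```python
# Python sketch of q's mavg: a simple moving average whose window
# expands until n points are available, so early rows are partial averages.
def mavg(n, xs):
    out = []
    for i in range(len(xs)):
        window = xs[max(0, i - n + 1):i + 1]
        out.append(sum(window) / len(window))
    return out

# first five close prices from the bitcoinKraken table above
close = [6151.4, 6337.9, 7209.9, 6973.9, 7816.3]
sma2 = mavg(2, close)  # matches the sma2 column: 6151.4 6244.65 6773.9 7091.9 7395.1
sma5 = mavg(5, close)  # matches the sma5 column: 6151.4 6244.65 6566.4 6668.275 6897.88
```

The partial averages at the start explain why the first rows of sma2 and sma5 in the table equal the close price itself.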
A sample of this code can be seen below. All Grammar of Graphics code can be found in the repository for this project. The following is an example.

sma:{[x]
 .qp.go[700;300]
 .qp.title["SMA BTC Kraken"]
 .qp.theme[.gg.theme.clean]
 .qp.stack(
  .qp.line[x; `date; `sma10]
   .qp.s.geom[enlist[`fill]!enlist .gg.colour.Blue]
   ,.qp.s.scale[`y; .gg.scale.limits[6000 0N] .gg.scale.linear]
   ,.qp.s.legend[""; `sma10`sma20`close!(.gg.colour.Blue;.gg.colour.Red;.gg.colour.Green)]
   ,.qp.s.labels[`x`y!("Date";"Price")];
  .qp.line[x; `date; `sma20]
   .qp.s.geom[enlist[`fill]!enlist .gg.colour.Red]
   ,.qp.s.scale[`y; .gg.scale.limits[6000 0N] .gg.scale.linear]
   ,.qp.s.labels[`x`y!("Date";"Price")];
  .qp.line[x; `date; `close]
   .qp.s.geom[enlist[`fill]!enlist .gg.colour.Green]
   ,.qp.s.scale[`y; .gg.scale.limits[6000 0N] .gg.scale.linear]
   ,.qp.s.labels[`x`y!("Date";"Price")])}

q)sma[update sma10:mavg[10;close],sma20:mavg[20;close] from select from wpData where sym=`BTC_USD,exch=`KRAKEN]

Figure 2: 10- and 20-day Simple Moving Averages for Bitcoin

Moving Average Convergence Divergence¶

Moving Average Convergence Divergence (MACD) is an important and popular analysis tool. It is a trend indicator that shows the relationship between two moving averages of a security’s price. MACD is calculated by subtracting the long-term EMA (26 periods) from the short-term EMA (12 periods). A period is generally defined as a day, but shorter or longer timespans can be used; throughout this paper we will consider a period to be one day. EMAs place greater weight and significance on the more recent data points and react more significantly to price movements than the SMA. The 9-day moving average of the MACD is also calculated and plotted. This line is known as the signal line and can be used to identify buy and sell signals. The code for calculating the MACD is very simple and exploits kdb+/q’s built-in function ema. An example of how the code is executed, along with a subset of the output, is displayed.
/tab-table input
/id-ID you want `ETH_USD/BTC_USD
/ex-exchange you want
/output is a table with the close,ema12,ema26,macd,signal line calculated
macd:{[tab;id;ex]
 macd:{[x] ema[2%13;x]-ema[2%27;x]}; /macd line
 signal:{ema[2%10;x]};               /signal line
 res:select sym, date, exch, close, ema12:ema[2%13;close], ema26:ema[2%27;close], macd:macd[close]
  from tab where sym=id, exch=ex;
 update signal:signal[macd] from res
 }

q)10#macd[bitcoinKraken;`BTC_USD;`KRAKEN]
sym     date       exch   close  ema12    ema26    macd     signal
--------------------------------------------------------------------
BTC_USD 2019.05.09 KRAKEN 6151.4 6151.4   6151.4   0        0
BTC_USD 2019.05.10 KRAKEN 6337.9 6180.092 6165.215 14.87749 2.975499
BTC_USD 2019.05.11 KRAKEN 7209.9 6338.524 6242.599 95.92536 21.56547
BTC_USD 2019.05.12 KRAKEN 6973.9 6436.274 6296.769 139.505  45.15338
BTC_USD 2019.05.13 KRAKEN 7816.3 6648.586 6409.327 239.2588 83.97447
BTC_USD 2019.05.14 KRAKEN 7993.7 6855.527 6526.688 328.8385 132.9473
BTC_USD 2019.05.15 KRAKEN 8203   7062.83  6650.859 411.9708 188.752
BTC_USD 2019.05.16 KRAKEN 7880.7 7188.656 6741.959 446.6977 240.3411
BTC_USD 2019.05.17 KRAKEN 7350   7213.478 6786.999 426.4797 277.5688
BTC_USD 2019.05.18 KRAKEN 7266.8 7221.682 6822.54  399.1421 301.8835

Figure 3 graphs the MACD for ETH_USD using data from HITBTC.

Figure 3: Moving Average Convergence Divergence for Ethereum using HITBTC data

From the above graph, you can see how the close price interacts with the short and long EMA and how this then impacts the MACD and signal-line relationship. There is a buy signal when the MACD line crosses over the signal line and a short signal when the MACD line crosses below the signal line.

Relative Strength Index¶

Figure 4: Relative Strength Index for Ethereum using HITBTC data

Relative Strength Index (RSI) is a momentum oscillator that measures the speed and change of price movements. It oscillates between 0 and 100. It is said that a security is overbought when above 70 and oversold when below 30.
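The ema-based arithmetic in macd can be checked in any language. Below is a hedged Python sketch of the same recursion (ema and macd here are local names, not the paper's q functions; the smoothing constants 2%(1+n) mirror the q code), fed the first closes from bitcoinKraken.

```python
# Python sketch of q's ema: y[0] = x[0]; y[i] = y[i-1] + alpha*(x[i] - y[i-1]).
def ema(alpha, xs):
    out = [xs[0]]
    for x in xs[1:]:
        out.append(out[-1] + alpha * (x - out[-1]))
    return out

def macd(close):
    ema12 = ema(2 / 13, close)   # 12-period EMA, alpha = 2%(12+1)
    ema26 = ema(2 / 27, close)   # 26-period EMA, alpha = 2%(26+1)
    line = [a - b for a, b in zip(ema12, ema26)]
    signal = ema(2 / 10, line)   # 9-period EMA of the MACD line
    return line, signal

line, signal = macd([6151.4, 6337.9, 7209.9, 6973.9])
# second row matches the table above: macd 14.87749, signal 2.975499
```

The second row reproduces the q output because both implementations seed the EMA with the first close.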
It is a general trend and momentum indicator. The default period is 14 days. This can be reduced or increased – the shorter the period, the more sensitive it is to price changes. Short-term traders sometimes look at 2-day RSIs for overbought readings above 80 and oversold readings below 20.

The first calculations of the average gain/loss are simple 14-day averages:

First Average Gain = sum of Gains over the past 14 days / 14
First Average Loss = sum of Losses over the past 14 days / 14

The subsequent calculations are based on the prior averages and the current gain/loss.

//Relative strength index - RSI
/close-close price
/n-number of periods
relativeStrength:{[num;y]
 begin:num#0Nf;
 start:avg((num+1)#y);
 begin,start,{(y+x*(z-1))%z}\[start;(num+1)_y;num]
 }

rsiMain:{[close;n]
 diff:-[close;prev close];
 rs:relativeStrength[n;diff*diff>0]%relativeStrength[n;abs diff*diff<0];
 rsi:100*rs%(1+rs);
 rsi
 }

q)update rsi:rsiMain[close;14] by sym,exch from wpData

It is shrewd to use both RSI and MACD together, as both measure momentum in a market, but, because they measure different factors, they sometimes give contrary indications. Using both together can provide a clearer picture of the market. RSI could be showing a reading of greater than 70, indicating that the security is overbought, while the MACD signals that the market is continuing in the upward direction.

Money Flow Index¶

Figure 5: Money Flow Index for Ethereum where n=14

Money Flow Index (MFI) is a technical oscillator similar to RSI, but which instead uses price and volume for identifying overbought and oversold conditions. This indicator weighs in on volume and not just price to give a relative score. A low volume with a large price movement will have less impact on the relative score compared to a high-volume move with a lower price move. You see new highs/lows and large price swings, but also whether there is any volume behind a price swing or whether it is just a small trade.
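To see the Wilder-smoothing recursion in relativeStrength concretely, here is a hedged Python transliteration (a sketch, not the paper's code): NaN stands in for q's null, the seed is sum of the first n+1 values divided by n as in the stated First Average Gain/Loss formulas, and the result is checked against the 6-period rsi values shown later for bitcoinKraken.

```python
import math

# Python transliteration of relativeStrength: n leading NaNs, a seed of
# sum(first n+1 values)/n (NaNs skipped), then Wilder smoothing
# avg = (x + prev*(n-1))/n, mirroring {(y+x*(z-1))%z} in the q code.
def relative_strength(n, xs):
    out = [math.nan] * n
    acc = sum(x for x in xs[:n + 1] if not math.isnan(x)) / n
    out.append(acc)
    for x in xs[n + 1:]:
        acc = (x + acc * (n - 1)) / n
        out.append(acc)
    return out

def rsi(close, n):
    diff = [math.nan] + [b - a for a, b in zip(close, close[1:])]
    gains = [math.nan if math.isnan(d) else max(d, 0.0) for d in diff]
    losses = [math.nan if math.isnan(d) else max(-d, 0.0) for d in diff]
    avg_gain = relative_strength(n, gains)
    avg_loss = relative_strength(n, losses)
    out = []
    for g, l in zip(avg_gain, avg_loss):
        if math.isnan(g) or math.isnan(l) or l == 0:
            out.append(math.nan)
        else:
            rs = g / l
            out.append(100 * rs / (1 + rs))
    return out

# first ten closes of bitcoinKraken; with n=6 the 7th value is ~90.65,
# matching the rsi column in the 6-period example table
close = [6151.4, 6337.9, 7209.9, 6973.9, 7816.3,
         7993.7, 8203, 7880.7, 7350, 7266.8]
r = rsi(close, 6)
```

Note the l == 0 guard: a window with no losses would otherwise divide by zero, a case the sample data never exercises.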
The market will generally correct itself. It can be used to spot divergences that warn traders of a change in trend. MFI is known as the volume-weighted RSI. We use the relativeStrength function as in the RSI calculation above.

mfiMain:{[h;l;c;n;v]
 TP:avg(h;l;c);      / typical price
 rmf:TP*v;           / real money flow
 diff:deltas[0n;TP]; / diffs
 /money-flow leveraging func for RSI
 mf:relativeStrength[n;rmf*diff*diff>0]%relativeStrength[n;abs rmf*diff*diff<0];
 mfi:100*mf%(1+mf);  /money flow as a percentage
 mfi
 }

q)update mfi:mfiMain[high;low;close;14;vol] by sym,exch from wpData

Figure 6: MFI versus RSI

Analysts use both RSI and MFI together to see whether a price move has volume behind it. Here is another good example showing the output columns after applying the indicators to the in-memory table defined above as bitcoinKraken. The table below shows bitcoinKraken updated with the output columns attached at the end. This shows how easy it is to compare statistical outputs. In Figure 6 the 14-day period RSI and MFI are compared, but below a 6-day period is chosen.
q)10#update rsi:rsiMain[close;6],mfi:mfiMain[high;low;close;6;vol] from bitcoinKraken
date       sym     exch   high   low    open   close  vol      rsi      mfi
--------------------------------------------------------------------------------
2019.05.09 BTC_USD KRAKEN 6174   6037.9 6042   6151.4 1808.803
2019.05.10 BTC_USD KRAKEN 6430   6110.1 6151.4 6337.9 9872.36
2019.05.11 BTC_USD KRAKEN 7450   6338   6339.5 7209.9 18569.93
2019.05.12 BTC_USD KRAKEN 7588   6724.1 7207.9 6973.9 18620.15
2019.05.13 BTC_USD KRAKEN 8169.3 6870   6970.1 7816.3 19668.6
2019.05.14 BTC_USD KRAKEN 8339.9 7620   7817.1 7993.7 18118.61
2019.05.15 BTC_USD KRAKEN 8296.9 5414.5 7988.9 8203   11599.71 90.64828 81.06234
2019.05.16 BTC_USD KRAKEN 8370   7650   8201.5 7880.7 13419.86 78.60196 85.19688
2019.05.17 BTC_USD KRAKEN 7946.2 6636   7883.6 7350   21017.35 62.25494 62.04519
2019.05.18 BTC_USD KRAKEN 7494.2 7205   7353.9 7266.8 6258.585 59.91089 62.10847

Commodity Channel Index¶

The Commodity Channel Index (CCI) is another tool used by technical analysts. Its primary use is for spotting new trends. It measures the current price level relative to an average price level over time. The CCI can be used for any market, not just for commodities, and can help identify whether a security is approaching overbought or oversold levels. This can help traders decide whether to add to a position, exit the position, or take no part. When the CCI is positive it indicates the price is above the historical average, and when it is negative it indicates the price is below the historical average. Moving from negative readings to high positive readings can be used as a signal for a possible uptrend; similarly, the reverse will signal downtrends. The CCI has no upper or lower bound, so typical overbought and oversold levels should be determined for each asset individually by looking at its historical CCI levels.

To calculate the Mean Deviation, a helper function called maDev (moving-average deviation) is used.
maDev:{[tp;ma;n]
 ((n-1)#0Nf),{[x;y;z;num] reciprocal[num]*sum abs z _y#x}'[(n-1)_tp-/:ma;n+l;l:til count[tp]-n-1;n]
 }

This is calculated by subtracting the Moving Average from the Typical Price for the last n periods, summing the absolute values of these figures, and then dividing by n periods.

CCI:{[high;low;close;ndays]
 TP:avg(high;low;close);
 sma:mavg[ndays;TP];
 mad:maDev[TP;sma;ndays];
 reciprocal[0.015*mad]*TP-sma
 }

q)update cci:CCI[high;low;close;14] by sym,exch from wpData

Figure 7: Commodity Channel Index and close price for Bitcoin using Kraken data

Bollinger Bands¶

Figure 8: Bollinger Bands for Bitcoin using Kraken data and n=20

Bollinger Bands are used in technical analysis for pattern recognition. They are formed by plotting two lines that are two standard deviations from the simple moving-average price, one in the negative direction and one positive. Standard deviation is a measure of volatility in an asset, so when the market becomes more volatile the bands widen; similarly, less volatility leads to the bands contracting. If the price moves towards the upper band the security is seen to be overbought, and as the price gets close to the lower band the security is considered oversold. This provides traders with information regarding price volatility. 90% of price action occurs between the bands; a breakout from this would be seen as a major event. The breakout is not considered a trading signal, as breakouts provide no clue as to the direction and extent of future price movements.

/tab-input table
/n-number of days
/ex-exchange
/id-id to run for
bollB:{[tab;n;ex;id]
 tab:select from tab where sym=id,exch=ex;
 tab:update sma:mavg[n;TP],sd:mdev[n;TP] from update TP:avg(high;low;close) from tab;
 select date,sd,TP,sma,up:sma+2*sd,down:sma-2*sd from tab}

q)bollB[wpData;20;`KRAKEN;`BTC_USD]

Force Index¶

The Force Index is a technical indicator that measures the amount of power behind a price move.
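The band construction in bollB (a moving average of the typical price, bracketed at plus/minus two moving standard deviations) can be sketched in Python. This is a hedged sketch: the expanding early window mirrors q's mavg, and statistics.pstdev is an assumption standing in for q's mdev (population-style deviation).

```python
import statistics

# Python sketch of bollB: n-point moving average of the typical price,
# with bands at +/- 2 moving standard deviations. The window expands
# until n points exist; statistics.pstdev plays the role of q's mdev.
def bollinger(high, low, close, n):
    tp = [(h + l + c) / 3 for h, l, c in zip(high, low, close)]
    sma, up, down = [], [], []
    for i in range(len(tp)):
        w = tp[max(0, i - n + 1):i + 1]
        m = sum(w) / len(w)
        sd = statistics.pstdev(w)
        sma.append(m)
        up.append(m + 2 * sd)
        down.append(m - 2 * sd)
    return sma, up, down

# first five rows of bitcoinKraken, with a short 3-point window for illustration
high = [6174, 6430, 7450, 7588, 8169.3]
low = [6037.9, 6110.1, 6338, 6724.1, 6870]
close = [6151.4, 6337.9, 7209.9, 6973.9, 7816.3]
sma, up, down = bollinger(high, low, close, 3)
```

As the prose notes, the band width 4*sd tracks volatility directly: the wider the recent typical-price swings, the further apart up and down sit.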
It uses price and volume to assess the force behind a move or a possible turning point. The indicator is an unbounded oscillator that oscillates between negative and positive values. There are three essential elements to stock price movement: direction, extent and volume. The Force Index combines all three in this oscillator.

Figure 9: Force Index and Close Price for Bitcoin using Kraken data

The above graph is the 13-day EMA of the Force Index. It can be seen that the Force Index crosses the centre line as the price begins to increase. This would indicate that bullish trading is exerting a greater force. However, this changes towards the end of July, where there is a significant swing from a high positive Force Index to a negative one and the price drops dramatically. It suggests the emergence of a bear market.

The Force Index calculation subtracts today’s close from the prior day’s close and multiplies the result by the daily volume. The next step is to calculate the 13-day EMA of this value.

//Force Index Indicator
/c-close
/v-volume
/n-num of periods
//forceIndex1 is the force index for one period
forceIndex:{[c;v;n]
 forceIndex1:1_deltas[0nf;c]*v;
 (n#0nf),(n-1)_ema[2%1+n;forceIndex1]
 }

q)update ForceIndex:forceIndex[close;vol;13] by sym,exch from wpData

Ease of Movement Value¶

Ease of Movement Value (EMV) is another technical indicator that combines momentum and volume information into one value. The idea is to use this value to decide whether prices are able to rise or fall with little resistance in directional movement.

Distance Moved = avg(high;low) - prior avg(high;low)
Box Ratio = (volume % scale) % (high - low)
1-period EMV = Distance Moved % Box Ratio
14-period EMV = 14-day simple average of the 1-period EMV

The scale factor is chosen to produce a normal number. This is generally relative to the volume of shares traded.
//Ease of movement value - EMV /h-high /l-low /v-volume /s-scale /n-num of periods emv:{[h;l;v;s;n] boxRatio:reciprocal[-[h;l]]*v%s; distMoved:deltas[0n;avg(h;l)]; (n#0nf),n _mavg[n;distMoved%boxRatio] } q)update EMV:emv[high;low;vol;1000000;14] by sym,exch from wpData Figure 10: Ease of Movement, Close and Volume for Ethereum using Kraken Data Rate of Change¶ The Rate of Change (ROC) indicator measures the percentage change in the close price over a specific period of time. //Price Rate of Change Indicator (ROC) /c-close /n-number of days prior to compare roc:{[c;n] curP:_[n;c]; prevP:_[neg n;c]; (n#0nf),100*reciprocal[prevP]*curP-prevP } q)update ROC:roc[close;10] from bitcoinKraken A positive move in the ROC indicates that there was a sharp price advance. This can be seen on the graph in Figure 11 between the 8th and 22nd of June. A downward drop indicates a steep decline in the price. This oscillator is prone to whipsaw around the zero line, as can be seen in the graph. For the graph below n is set to 9, a value commonly used by short-term traders. Figure 11: Rate of change for Bitcoin using Kraken data Stochastic Oscillator¶ Figure 12: Stochastic Oscillator with smoothing %K=1,%D=3 for Bitcoin using Kraken data The Stochastic Oscillator is a momentum indicator comparing a particular closing price of a security to a range of its prices over a certain period of time. You can adjust the sensitivity of the indicator by adjusting the time period and by taking the moving average of the result. The indicator has a 0-100 range that can be used to indicate overbought and oversold signals. A security is considered overbought when greater than 80 and oversold when less than 20. For this case, n will be 14 days. 
%K = 100×(C−L(n))÷(H(n)−L(n)) where C: Current Close L(n): Low across last n days H(n): High over the last n days %K: the fast stochastic indicator %D: the slow stochastic indicator, the d-day moving average of %K (generally d=3) //null out the first 13 days when using a 14-day moving average //Stochastic Oscillator /h-high /l-low /n-num of periods /c-close price stoOscCalc:{[c;h;l;n] lows:mmin[n;l]; highs:mmax[n;h]; (a#0n),(a:n-1)_100*reciprocal[highs-lows]*c-lows } /k-smoothing for %K /for the fast stochastic oscillator smoothing is set to k=1, for the slow k=3 /d-smoothing for %D - generally set to 3 /general set-up: n=14, k=1 (fast) or k=3 (slow), d=3 stoOscK:{[c;h;l;n;k] (a#0nf),(a:n+k-2)_mavg[k;stoOscCalc[c;h;l;n]] } stoOscD:{[c;h;l;n;k;d] (a#0n),(a:n+k+d-3)_mavg[d;stoOscK[c;h;l;n;k]] } q)update sC:stoOscCalc[close;high;low;5], sK:stoOscK[close;high;low;5;2], sD:stoOscD[close;high;low;5;2;3] from bitcoinKraken The Commodity Channel Index (CCI) and the Stochastic Oscillator Both these technical indicators are oscillators, but calculated quite differently. One of the main differences is that the Stochastic Oscillator is bound between zero and 100, while the CCI is unbounded. Due to the calculation differences, they will provide different signals at different times, such as overbought and oversold readings. Aroon Oscillator¶ The Aroon Indicator is a technical indicator used to identify trend changes in the price of a security and the strength of that trend, which is used in the Aroon Oscillator. An Aroon Indicator has two parts: \(aroonUp\) and \(aroonDown\), which measure the number of periods since the most recent high and low respectively over a window of \(n\) periods, generally 25 days. The objective of the indicator is that strong uptrends will regularly see new highs and strong downtrends will regularly see new lows. The range of the indicator is between 0-100. Figure 13: Aroon Oscillator and Aroon Indicator //Aroon Indicator aroonFunc:{[c;n;f] m:reverse each a _'(n+1+a:til count[c]-n)#\:c; #[n;0ni],{x?
y x}'[m;f] } aroon:{[c;n;f] 100*reciprocal[n]*n-aroonFunc[c;n;f]} /- aroon[tab`high;25;max] -- aroon up /- aroon[tab`low;25;min] -- aroon down aroonOsc:{[h;l;n] aroon[h;n;max] - aroon[l;n;min]} q)update aroonUp:aroon[high;25;max], aroonDown:aroon[low;25;min], aroonOsc:aroonOsc[high;low;25] from bitcoinKraken The Aroon Oscillator subtracts \(aroonDown\) from \(aroonUp\), making the range of the oscillator -100 to 100. The oscillator moves above the zero line when \(aroonUp\) moves above \(aroonDown\), and drops below the zero line when \(aroonDown\) moves above \(aroonUp\). Conclusion¶ This paper shows how kdb+/q can be applied to produce common trade analytics which are not available out of the box but which can be efficiently implemented using primitive functions. The functions shown range from moving averages to more complex functions such as the Relative Strength Index and Moving Average Convergence Divergence, as used by quants and traders building out more powerful analytics solutions. The common trend indicators discussed trigger buy/sell signals and offer a clearer image of the current market. This only scratches the surface of what can be done in analytics and emphasizes the power of kdb+ in a data-analytics solution. Libraries of custom-built analytic functions can be created with ease, and in a short space of time applied to realtime and historical data. This paper also demonstrates KX Analyst, an IDE for creating analytical functions and visualizing their output. The combination of this library of functions and KX Analyst provides the user with faster development and processing times to gain meaningful insights from the data. Author¶ James Galligan is a kdb+ consultant who has designed and developed data-capture and data-analytics platforms for trading and analytics across multiple asset classes in multiple leading financial institutions.
Working with MATLAB¶ Installation¶ Versions As MATLAB and its Datafeed Toolbox evolve, the features and instructions below are subject to revision. Please refer to the toolbox documentation for the latest version. Users have reported that this works with more recent versions (e.g. R2015b on RHEL 6.8; 2016b and 2017a on macOS). See also the community-supported native connector dmarienko/kdbml Download and unzip kx_kdbplus.zip. Add the resulting directory to your MATLAB path, for example in MATLAB >> addpath('/Users/Developer/matlabkx') Support for kdb+ has been part of the Datafeed Toolbox for MATLAB since the R2007a edition. The MATLAB integration depends on the two Java files c.jar and jdbc.jar. KxSystems/kdb/c/c.jar KxSystems/kdb/c/jdbc.jar Add the JAR files to the classpath used by MATLAB. They can be added permanently by editing classpath.txt (type edit classpath.txt at the MATLAB prompt) or for the duration of a particular session using the javaaddpath function, for example >> javaaddpath /home/myusername/jdbc.jar >> javaaddpath /home/myusername/c.jar Installation directory In these examples change /home/myusername to the directory where jdbc.jar and c.jar are installed. Alternatively, this can be achieved in a MATLAB source file (i.e. a *.m file) by adding the following two function calls before calling kx functions. javaaddpath('/home/myusername/jdbc.jar') javaaddpath('/home/myusername/c.jar') Confirm they have been added successfully using the javaclasspath function. >> javaclasspath STATIC JAVA PATH ... /opt/matlab/2015b/java/jar/toolbox/stats.jar /opt/matlab/2015b/java/jar/toolbox/symbol.jar DYNAMIC JAVA PATH /home/myusername/jdbc.jar /home/myusername/c.jar >> Connecting to a q process¶ First, we start up a kdb+ process that we wish to communicate with from MATLAB and load some sample data into it. 
Save the following as a file tradedata.q / List of securities seclist:([name:`ACME`ABC`DEF`XYZ] market:`US`UK`JP`US) / Distinct list of securities secs: distinct exec name from seclist n:5000 / Data table trade:([]sec:`seclist$n?secs;price:n?100.0;volume:100*10+n?20;exchange:5+n?2.0;date:2004.01.01+n?499) / Intra-day tick data table intraday:([]sec:`seclist$n?secs;price:n?100.0;volume:100*10+n?20;exchange:5+n?2.0;time:08:00:00.0+n?43200000) / Function with one input parameter / Return total trading volume for given security totalvolume:{[stock] select volume from trade where sec = stock} / Function with two input parameters / Return total trading volume for given security with volume greater than / given value totalvolume2:{[stock;minvolume] select sum(volume) from trade where sec = stock, volume > minvolume} Then run q tradedata.q -p 5001 q)show trade sec price volume exchange date ---------------------------------------- ACME 89.5897 1300 6.58303 2005.04.26 ABC 4.346879 2000 5.957694 2004.03.08 DEF 2.486644 1000 5.304114 2004.03.18 ACME 42.26209 1600 5.31383 2004.03.14 DEF 67.79352 2500 5.954478 2004.04.21 DEF 85.56155 1300 6.462338 2004.03.15 ACME 52.65432 1800 5.240313 2005.02.05 ABC 22.43142 2700 5.088007 2005.03.13 ABC 58.26731 2100 5.220929 2004.09.10 XYZ 74.14568 2900 5.075229 2004.08.24 DEF 35.67741 1500 6.064387 2004.03.12 DEF 30.37496 1300 5.025874 2004.03.24 ABC 20.30781 1000 6.642873 2005.02.02 DEF 2.984627 1200 6.346634 2004.12.15 ACME 28.80098 2100 5.591732 2004.09.19 DEF 45.20084 2800 5.481197 2004.08.01 DEF 29.25037 1000 6.065474 2005.02.05 XYZ 50.68805 1700 6.901464 2004.11.02 DEF 41.79832 2300 6.016484 2005.05.04 XYZ 13.64856 2900 6.435824 2005.04.03 .. q) We then start a new MATLAB session. From here on, >> represents the MATLAB prompt. 
We’re now ready to open a connection to the q process: >> q = kx('localhost',5001) q = handle: [1x1 c] ipaddress: 'localhost' port: 5001 Credentials We can also pass a username:password string as the third parameter to the kx function if it is required to log in to the q process. The q value is a normal MATLAB object and we can inspect the listed properties. We’ll use this value in all our communications with the q process. We close a connection using the close function: >> close(q) Installation errors If there is a problem with either the installation of the q integration, or the jar file is not found, we’ll get an error along the lines of: ??? Undefined function or method 'c' for input arguments of type 'char'. Error in ==> kx.kx at 51 w.handle = c(ip,p); Or if the socket is not currently connected then any future communications will result in an error like: ??? Java exception occurred: java.net.SocketException: Socket closed at java.net.SocketOutputStream.socketWrite(Unknown Source) at java.net.SocketOutputStream.write(Unknown Source) at c.w(c.java:99) at c.k(c.java:107) at c.k(c.java:108) Error in ==> kx.fetch at 65 t = c.handle.k(varargin{1}); Using the kdb+ process¶ It is typical to perform basic interactions with a database using the fetch function via a connected handle. For example in a legacy database we might perform this: x = fetch(q,'select * from tablename') We can use this function to perform basic interaction with kdb+, where we expect a value to be returned. This need not be a query but in fact can be general chunks of code. Using q as a calculator, we can compute the average of 0 to 999. >> fetch(q,'avg til 1000') ans = 499.5000 Fetching data from kdb+¶ The fetch function can be used to get q data such as lists, as well as tables. Given the list c : q)c:((til 100);(til 100)) q)c 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 .. 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 .. 
Then we can fetch it: >> hundreds = fetch(q, 'c') hundreds = java.lang.Object[]: [100×1 int64] [100×1 int64] We can use the cell function to strip the Java array wrapper away: >> hundreds_as_cell = cell(hundreds) hundreds_as_cell = 2×1 cell array {100×1 int64} {100×1 int64} Tables are returned as an object with an array property for each column. Taking the first 10 rows of the trade table as an example: q)10#trade sec price volume exchange date ---------------------------------------- ACME 89.5897 1300 6.58303 2005.04.26 ABC 4.346879 2000 5.957694 2004.03.08 DEF 2.486644 1000 5.304114 2004.03.18 ACME 42.26209 1600 5.31383 2004.03.14 DEF 67.79352 2500 5.954478 2004.04.21 DEF 85.56155 1300 6.462338 2004.03.15 ACME 52.65432 1800 5.240313 2005.02.05 ABC 22.43142 2700 5.088007 2005.03.13 ABC 58.26731 2100 5.220929 2004.09.10 XYZ 74.14568 2900 5.075229 2004.08.24 this will be returned in MATLAB as: >> ten = fetch(q, '10#trade') ten = sec: {10×1 cell} price: [10×1 double] volume: [10×1 int64] exchange: [10×1 double] date: [10×1 double] With suitable computation in q, we can return data suitable for immediate plotting. Here we compute a 10-item moving average over the `ACME prices: q)mavg[10;exec price from trade where sec=`ACME] 89.5897 65.9259 61.50204 53.32677 54.74408 57.39743 57.15958 62.33525 56.8732.. >> acme = fetch(q,'mavg[10;exec price from trade where sec=`ACME]') Metadata¶ The q integration in MATLAB provides the tables function for metadata. >> tables(q) ans = 'intraday' 'seclist' 'trade' The experienced q user can use the \v command to see all values in the directory: >> fetch(q,'\v') ans = 'a' 'b' 'c' 'intraday' 'n' 'seclist' 'secs' 'trade' Sending data to q¶ We can use the fetch function to cause side effects in the kdb+ process, such as inserting data into a table. 
Given a table b : q)b:([] a:1 2; b:1 2) q)b a b --- 1 1 2 2 Then we can add a row like this: >> fetch(q,'b,:(3;3)') ans = [] and, sure enough, on the q side we see the new data: q)show b a b --- 1 1 2 2 3 3 The q integration also provides an insert function: this takes an array of items in the row and may be more convenient for certain purposes. >> insert(q,'b',{4,4}) shows on the q side as: q)show b a b --- 1 1 2 2 3 3 4 4 A more complicated row shows the potential advantage to better effect: >> insert(q,'trade',{'`ACME',100.45,400,.0453,'2005.04.28'}) Be warned, though, that errors will not be detected very well. For example the following expression silently fails! >> insert(q,'b',{1,2,3}) whereas the equivalent fetch call provokes an error: >> fetch(q,'b,:(1;2;3)') Error using fetch (line 64) Java exception occurred: kx.c$KException: length at kx.c.k(c.java:110) at kx.c.k(c.java:111) at kx.c.k(c.java:112) Async commands to q¶ The exec function is used for sending asynchronous commands to q: ones we do not expect a response to, and which may be performed in the background while we continue interacting with the MATLAB process. Here we establish a large-ish data structure in the kdb+ process: >> exec(q,'big_data:10000000?100') Then we take the average of the data, delete it from the namespace and close the connection: >> fetch(q,'avg big_data') ans = 49.4976 >> exec(q,'delete big_data from `.') >> close(q) Handling null¶ kdb+ has the ability to set values to null. MATLAB doesn’t have a corresponding null type, so if your data contains nulls you may wish to filter or detect them. MATLAB has the ability to call static methods within Java. The NULL method can provide the null values for the different data types. For example NullInt=kx.c.NULL('i') NullLong=kx.c.NULL('j') NullDouble=kx.c.NULL('f') NullDate=kx.c.NULL('d') With this, you can test values for null. 
The following shows that the comparison will return true when requesting null values from a kdb+ connection named conn: fetch(conn,'0Ni')== NullInt fetch(conn,'0N')== NullLong fetch(conn,'0Nd')== NullDate isequaln(fetch(conn,'0Ni'),NullInt) isequaln(fetch(conn,'0N'), NullLong) isequaln(fetch(conn,'0Nd'), NullDate) isequaln(fetch(conn,'0Nf'), NullDouble) An alternative is to have your query include a filter for nulls (if they are populated), so they aren’t retrieved by MATLAB. Getting more help¶ Start with help kx in your MATLAB session and also see help kx.fetch and so on for further details of the integration. MathWorks provides a functions overview, usage instructions and some examples on the toolbox webpage. Python client for kdb+¶ The PyKX interface exposes q as a domain-specific language (DSL) embedded within Python, and also permits IPC connectivity to kdb+ from Python applications. PyKX supports three principal use cases: - It allows users to store, query, manipulate and use q objects within a Python process. - It allows users to query external q processes via an IPC interface. - It allows users to embed Python functionality within a native q session using its ‘under q’ functionality. It is documented and available to download from https://code.kx.com/pykx. Q client for Bloomberg¶ Marshall Wace has kindly contributed a Linux-based Bloomberg Feed Handler, written by Sufian Al-Qasem & Attila Vrabecz, using the Bloomberg Open API. Design notes¶ Bloomberg uses an event-driven model whereby they push EVENT objects to consumers – SUMMARY, TRADE and QUOTE. 
The C code in bloomberg.c handles the connectivity to the Bloomberg appliance (hosted on the client’s site) and also does the conversion from an EVENT object to a dictionary (Bloomberg mnemonic <> value pair), which is then processed on the q main thread via the following: Update:{@[value;x;-1"Update: '",string[x 0]," ",]} The Bloomberg API calls back on a separate thread and copies a pointer to that object onto a lock-free queue; eventfd is then used to create a K struct (a dictionary representation of the EVENT) on the q main thread, where it is processed. A function is defined for every EVENT type (Authorize/SessionStarted/MarketDataEvent/etc.) which carries out the desired behavior in q. Tested with Bloomberg Open API 3.6.2.0 and 3.7.5.1. Uses http://www.liblfds.org
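The dictionary-dispatch mechanism above can be sketched in a q session. This is a minimal illustration only: the handler names and payloads here are hypothetical, not the actual bloomberg.c interface.

```q
/ minimal sketch of the per-event dispatch (illustrative only)
/ each message arrives as (eventName;payload); `value` applies the q
/ function of that name, and the trap reports events with no handler
Trade:{[d] -1"trade: ",-3!d;}
Update:{@[value;x;{[ev;err] -1"Update: '",string[ev]," ",err;}[x 0]]}
Update(`Trade;`sym`price!(`ABC;10.5))  / dispatches to the Trade handler
Update(`Quote;()!())                   / no handler defined: prints a diagnostic
```

Supporting a new event type is then simply a matter of defining a q function of the same name.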
Frequently-asked questions from the k4 listbox¶ If you notice a question that is asked more than once on the k4 list, please feel free to add it here. Where can I find archives of the k4 list?¶ Archives are available to subscribers at the Topicbox. When you follow that link, you will be asked for your e-mail address and the mailing-list name. Use k4 for the list name, and the e-mail address that you used to subscribe to the k4 list. How to post test data on the k4 list?¶ Always post your test data in executable form. For example, q)foo:([]a:5?10;b:5?10;c:5?10) You can generate an executable form of your data using 0N! . q)0N!foo; +`a`b`c!(4 3 7 1 1;6 1 7 9 8;4 7 5 0 9) Note the use of ; to suppress the default display. If you use the latter form, prefix it with k) in your post, so that others can easily cut and paste it in their q session. q)k)+`a`b`c!(4 3 7 1 1;6 1 7 9 8;4 7 5 0 9) a b c ----- 4 6 4 3 1 7 7 7 5 1 9 0 1 8 9 What are the limits on the number of variables in q functions?¶ Reference: Lambdas What does 'error mean?¶ Basics: Errors Why does sg work with :: but not : ? Also why does {x.time} not work?¶ Locals and globals are different: locals don’t have symbols associated with them, so for example .Q.dpft (you would have to pass in the name of the table) or x.time does not work with them. As a workaround for the second issue one can always use `time$x though. How do I query a column of strings for multiple values?¶ If you wish to query a column of strings for a single value, either like or ~ (Match) with an iterator can be used q)e:([]c:("ams";"lon";"amS";"bar")) q)select from e where c ~\:"ams" c ----- "ams" q)select from e where c like "ams" c ----- "ams" q)select from e where c like "am*" c ----- "ams" "amS" To query for multiple strings you need to use another iterator, then aggregate the results into a single boolean value using sum or any . Generally the like form is easier to understand and more efficient. 
q)select from e where any c like/:("lon";"am*") c ----- "ams" "lon" "amS" How to kill a long/invalid query on a server?¶ You can achieve that by sending SIGINT to the server process. In a *nix shell, try $ kill -INT <pid> You can find the server process ID by examining .z.i . How do I recall and edit keyboard input?¶ Start q under rlwrap to get readline support, e.g. $ rlwrap l64/q -p 5001 This is available in most Linux repositories. An alternative to rlwrap is tecla's enhance . This is good for vi-mode users who would like more of vi’s key functionality – e.g. dfx will delete everything up to the next x and you can paste it back, too.
Geospatial indexing¶ This demo shows the basics of geospatial indexing with q. A 1-million-point random data set is queried from the HTML map client. Click on the map to see nearby points. Download KxSystems/kdb/e/geo.zip and run: $ make -C s2 $ q q/geo.q $ open html/geo.html This should then open a browser, connect to the kdb+ process and retrieve geo.html , displayed similar to: There are five text fields in the top row: - Last-click coordinates - Number of returned results - Min date* - Max date* - Lookup rectangle size* (degrees) Those marked with * are editable filters. When the mouse is clicked on the map, the underlying lat-lon coordinates are sent to the kdb+ process along with the filters over a websocket connection, and the points in the response are then plotted on the map. In addition to coordinates, kdb+ returns a trk column, which the client interprets as point colour. This uses the Google S2 library as a kdb+ shared object. To create the index, the function ids[lats;lons] maps (lat-lon) coordinates on a sphere to one-dimensional cell IDs. These are stored as 32-bit integers with the `p attribute applied. q)geo time trk lat lon cid ---------------------------------------------------------------- 2016.09.26D00:40:05.783973634 3233 51.79961 0.1946887 1205375107 2016.09.26D01:12:53.469740152 3233 51.80003 0.1923668 1205375107 2016.09.26D01:40:23.427598178 3233 51.79994 0.192314 1205375107 2016.09.26D04:11:52.743414938 3233 51.79958 0.1950875 1205375107 2016.09.26D08:39:32.459766268 3233 51.80044 0.1923126 1205375107 .. q)meta geo c | t f a ----| ----- time| p trk | j lat | f lon | f cid | i p lu , defined in geo.q as {[x;y]select from pl rect . x where all(lat;lon;time)within'(x,enlist y)} retrieves points contained in the given spherical rectangle. lu takes the rectangle coordinates with a time filter, and calculates the coverage (ranges of cells covering the rectangle) with rect[(lat0;lat1);(lon0;lon1)] . 
The cell ID ranges are looked up with pl , defined in geo.q as {raze{select[x]lat,lon,trk,time from geo}each flip deltas geo.cid binr/:x} The result is then filtered to remove points outside the rectangle (since the covering might exceed the rectangle dimensions) and to constrain by time. The simple HTML interface is implemented with openstreetmap and leaflet. HTTP¶ HTTP server¶ kdb+ has an in-built webserver capable of handling HTTP/HTTPS requests. Listening port¶ When kdb+ is configured to listen on a port, it uses the same port as that serving kdb+ IPC and websocket connections. SSL/TLS¶ HTTPS can be handled once kdb+ has been configured to use SSL/TLS. Authentication / Authorization¶ Client requests can be authenticated/authorized using .z.ac. This allows kdb+ to be customized with a variety of mechanisms for securing HTTP requests e.g. LDAP, OAuth2, OpenID Connect, etc. Request handling¶ HTTP request handling is customized using callbacks such as .z.ph (GET) and .z.pp (POST). Default .z.ph handling¶ The default implementation of .z.ph displays all variables and views. For example, starting kdb+ listening on a port (q -p 8080 ) and visiting http://localhost:8080 from a web browser on the same machine displays all created variables/views. Providing q code as a GET parameter causes it to be evaluated, e.g. http://localhost:8080?1+1 returns 2 . .h.HOME can be set to the webserver root to serve files contained in the directory, e.g. creating an HTML file index.html in directory /webserver/ and setting .h.HOME:"/webserver" allows the file to be viewed via http://localhost:8080/index.html. An example of customizing the default webserver can be found in simongarland/doth Keep-alive¶ Persistent connections to supported clients can be enabled via .h.ka Compression¶ The HTTP server supports gzip compression via Content-Encoding: gzip for responses to form?… -style requests. The response payload must be 2,000+ chars and the client must indicate support via Accept-Encoding: gzip in the HTTP header. 
(Since V4.0 2020.03.17.) HTTP client¶ Creating HTTP requests¶ kdb+ has helper methods that provide functionality as described in the linked reference material: - .Q.hg for performing an HTTP GET, where a query string can be sent in the URL - .Q.hp for performing an HTTP POST, where the data transmitted is sent in the request body e.g. q)/ perform HTTP POST q).Q.hp["http://httpbin.org/post";.h.ty`txt]"my data" "{\n \"args\": {}, \n \"data\": \"my data\", \n \"files\": {}, \n \"form\": {}, \n \"headers\": {\n \"Accept-Encoding\": \"gzip\", \n \"Content-Length\": \"7\", \n \"Content-Type\": \"text/plain\", \n \"Host\": \"httpbin.org\", \n \"X-Amzn-Trace-Id\": \"Root=1-665711e1-19e62fef6b6e4d192a9a7096\"\n }, \n \"json\": null, \n \"origin\": \"78.147.173.108\", \n \"url\": \"http://httpbin.org/post\"\n}\n" q)/ request gzipped data, which is unzipped & returned in JSON and formatted appropriately q).j.k .Q.hg "http://httpbin.org/gzip" gzipped| 1b headers| `Accept-Encoding`Host`X-Amzn-Trace-Id!("gzip";"httpbin.org";"Root=1-665710aa-50bd49d724b532913348a62a") method | "GET" origin | "78.147.173.108" In addition, kdb+ provides a low-level HTTP request mechanism: `:http://host:port "string to send as HTTP request" which returns the HTTP response as a string. An HTTP request generally consists of: - a request line (URL, method, protocol version), terminated by a carriage return and line feed - zero or more header fields (field name, colon, field value), terminated by a carriage return and line feed - an empty line (consisting of a carriage return and a line feed) - an optional message body e.g. q)/ perform HTTP DELETE q)`:http://httpbin.org "DELETE /anything HTTP/1.1\r\nConnection: close\r\nHost: httpbin.org\r\n\r\n" "HTTP/1.1 200 OK\r\ndate: Wed, 29 May 2024 12:23:54 GMT\r\ncontent-type: application/json\r\ncontent-length: 290\r\nconnection: close\r\nserver: gunicorn/19.9.0\r\naccess-control-allow-origin: *\r\naccess-control-allow-credentials: true\r\n\r\n{\n \"args\": {},... 
q)postdata:"hello" q)/ perform HTTP POST (inc Content-length to denote the payload size) q)`:http://httpbin.org "POST /anything HTTP/1.1\r\nConnection: close\r\nHost: httpbin.org\r\nContent-length: ",(string count postdata),"\r\n\r\n",postdata "HTTP/1.1 200 OK\r\ndate: Wed, 29 May 2024 13:08:41 GMT\r\ncontent-type: application/json\r\ncontent-length: 321\r\nconnection: close\r\nserver: gunicorn/19.9.0\r\naccess-control-allow-origin: *\r\naccess-control-allow-credentials: true\r\n\r\n{\n \"args\": {}, \n \"data\": \"hello\"... An HTTP response typically consists of: - a status line (protocol version, status code, reason), terminated by a carriage return and line feed - zero or more header fields (field name, colon, field value), terminated by a carriage return and line feed - an empty line (consisting of a carriage return and a line feed) - an optional message body e.g. q)/ x will be complete HTTP response q)x:`:http://httpbin.org "DELETE /delete HTTP/1.1\r\nConnection: close\r\nHost: httpbin.org\r\n\r\n" q)/ separate body from headers, get body q)@["\r\n\r\n" vs x;1] "{\n \"args\": {}, \n \"data\": \"\", \n \"files\": {}, \n \"form\": {}, \n \"headers\": {\n \"Host\": \"httpbin.org\", \n \"X-Amzn-Trace-Id\": \"Root=1-66572924-7396cee34f268fcd406e94d5\"\n }, \n \"json\": null, \n \"origin\": \"78.147.173.108\", \n \"url\": \"http://httpbin.org/delete\"\n}\n" If a server uses chunked transfer encoding, the response is constructed from the chunks prior to returning (since V3.3 2014.07.31). SSL/TLS¶ To use SSL/TLS, kdb+ should first be configured to use SSL/TLS. For any request requiring SSL/TLS, replace http with https . HTTP/HTML markup¶ The .h namespace provides a range of markup and HTTP protocol formatting tools. Q for Mortals §11.7.1 HTTP Connections inetd, xinetd¶ On *nix-like operating systems, inetd (or its successor xinetd ) maintains a list of passive sockets for various services configured to run on that particular machine. 
When a client attempts to connect to one of the services, inetd will start a program to handle the connection based on the configuration files. This way, inetd runs the server programs as they are needed, by spawning multiple processes to service multiple network connections. A kdb+ server can work under inetd to provide a private server for each connection established on a designated port. For Windows you might be able to run kdb+ under inetd using Cygwin. Configuration¶ To configure a kdb+ server to work under inetd or xinetd you have to decide on the name of the service and the port on which this server should run, and declare it in the /etc/services configuration file. Note This operation can be performed only by an administrative user (root). /etc/services : … # Local services kdbtaq 2015/tcp # kdb server for the taq database … If you have multiple databases which should be served over inetd , add multiple entries in the /etc/services file and make sure you are using different ports for each service name. Also, as a safety measure, create one applicative group (e.g. kdb ) and two applicative users on your system, one (e.g. kdb ) owning the q programs and the databases and another one (e.g. kdbuser ) having the rights to execute and read data from the database directories. This can be achieved by assigning the two users to the applicative group mentioned above and setting the permissions on the programs to be readable and executable by the group, and the database directories readable and executable (searchable) by the group: rwxr-x--- . Once this is configured, you’ll need to configure inetd /xinetd to make it aware of the new service. If you are running inetd , you’ll need to add the service configuration into /etc/inetd.conf (see the inetd.conf man page for more details). 
/etc/inetd.conf : … kdbtaq stream tcp nowait kdbuser /home/kdb/q/l64/q q /home/kdb/taq -s 4 … For xinetd , you’ll need to create a configuration file (kdbtaq for example) for the new service in the /etc/xinetd.d directory (see the xinetd.conf man page for more details). /etc/xinetd.d/kdbtaq : # default: on service kdbtaq { flags = REUSE socket_type = stream wait = no user = kdbuser env = QHOME=/home/kdb/q QLIC=/home/kdb/q server = /home/kdb/q/l64/q server_args = /home/kdb/taq -s 4 -q -g 1 # use taskset to conform to license # server = /bin/taskset # server_args = -c 0,1 /home/kdb/q/l64/q -q -g 1 # only_from = 127.0.0.1 localhost # bind = 127.0.0.1 # instances = 5 # per_source = 2 } After the configuration is finished, you will have to find the process ID of your inetd /xinetd server and send it the SIGHUP signal to re-read the configuration: $ ps -e|grep inetd 3848 ? 00:00:00 xinetd $ kill -HUP 3848 The command-line options \1 and \2 can be used to redirect stdout and stderr to files.
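Once inetd/xinetd has re-read its configuration, the service can be sanity-checked from any q session. The port below assumes the kdbtaq service declared above on port 2015.

```q
q)h:hopen `::2015    / inetd accepts the connection and spawns a private q server
q)h"2+2"             / each client connection gets its own process
4
q)hclose h           / closing the handle ends that private server
```

A second q session opening the same port gets a separate, independent server process.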
Reference architecture for Azure¶ Lift and shift your kdb+ plants to the cloud and leverage virtual machines (VM) with storage kdb Insights provides a range of tools to build, manage and deploy kdb+ applications in the cloud. kdb Insights supports: - interfaces for deployment and common ‘DevOps’ orchestration tools such as Docker, Kubernetes, Helm, and others. - integrations with major cloud logging services. kdb Insights provides: - a kdb+ native REST client, Kurl, to authenticate and interface with other cloud services. - kdb+ native support for reading from Azure Blob Storage, and a packaging utility, QPacker, to build and deploy kdb+ applications to the cloud. By taking advantage of the kdb Insights suite of tools, you can quickly and easily create new and integrate existing kdb+ applications on Microsoft Azure. Deployment - QPacker – A packaging utility that supports q, Python and C libraries - Detailed guide to deploy kdb+ applications to the cloud Service integration - QLog – Integrations with major cloud logging services - Kurl – Native kdb+ REST client with authentication to cloud services Storage - kdb+ Object Store – Native support for reading and querying Azure Blob Storage kdb+ architecture patterns in Microsoft Azure¶ kdb+tick is an architecture that allows the capture, processing and querying of timeseries data against realtime, streaming and historical data. This reference architecture describes a full solution running kdb+tick within Microsoft Azure which consists of these functional components: - datafeeds - feedhandlers - tickerplant - realtime database - historical database - KX gateway An architectural pattern for kdb+tick in Microsoft Azure: Azure integration allows the ability to place kdb+ processing functions either in one Azure Virtual Machine (VM) instance or distributed across many Azure VM instances. 
The ability of kdb+ processes to communicate with each other through kdb+’s built-in language primitives allows for this flexibility in final design layouts. The transport method between kdb+ processes, and overall external communication, is achieved through low-level TCP/IP sockets. If two components are on the same VM instance, local Unix domain sockets can be used to reduce communication overhead.

Many customers have tickerplants set up on their premises. The Microsoft Azure reference architecture allows you to manage a hybrid infrastructure that communicates with tickerplants both on-premises and in the cloud. The benefits of migrating tickerplants to a cloud infrastructure are vast, and include flexibility, auto-scaling, improved transparency in cost management, access to management and infrastructure tools built by Microsoft, quick hardware allocation and many more.

This page focuses on kdb+tick deployment to virtual machines in Azure; however, kdb Insights provides another kdb+ architectural pattern for deploying to Microsoft Azure Kubernetes Service (AKS). Refer to managed app for more details.

Datafeeds¶

These are the source data ingested into the system. For financial use cases, data may be ingested from B-pipe (Bloomberg), Elektron (Refinitiv), or any exchange that provides a data API. Often the streaming data is available on a pub-sub component such as Kafka or Solace, both popular for having an open-source interface to kdb+. The datafeeds are in a proprietary format, but always one with which KX has familiarity. Usually this means a feedhandler just needs to be aware of the specific data format. Due to the flexible architecture of KX, most underlying kdb+ processes that constitute the system can be placed anywhere in this architecture. For example, for latency, compliance or other reasons, the datafeeds might be relayed through your on-premises data center.
Alternatively, the connection from the feedhandlers might be made directly from the Azure Virtual Network (VNet) into the market-data source.

The kdb+ infrastructure is often used to store internally derived data. This can optimize internal data flow and help remove latency bottlenecks. The pricing of liquid products, for example on B2B markets, is often calculated by a complex distributed system. This system often changes due to new models, new markets or other internal system changes. Data in kdb+ that is generated by these internal steps also requires processing and the handling of huge amounts of timeseries data. When all the internal components of these systems send data to kdb+, a comprehensive impact analysis captures any changes.

Feedhandler¶

A feedhandler is a process that captures external data and translates it into kdb+ messages. Multiple feedhandlers can be used to gather data from several different sources and feed it to the kdb+ system for storage and analysis. There are a number of open-source (Apache 2 licensed) Fusion interfaces between KX and other third-party technologies. Feedhandlers are typically written in Java, Python, C++ and q.

Tickerplant¶

A tickerplant (TP) is a specialized, single-threaded kdb+ process that operates as a link between the client’s data feed and a number of subscribers. It implements a pub-sub pattern: specifically, it receives data from the feedhandler, stores it locally in a table, then saves it to a log file. It publishes this data to a realtime database (RDB) and to any clients who have subscribed to it, and then purges its local tables of data.

Tickerplants can operate in two modes:

| mode | operation |
|---|---|
| batch | Collects updates in its local tables, batches them up for a period of time, then forwards the update to realtime subscribers in a bulk update. |
| realtime (zero latency) | Forwards the input immediately. This requires smaller local tables but has higher CPU and network costs; each message has a fixed network overhead. |

API calls:

| call | operation |
|---|---|
| subscribe | Add subscriber to message receipt list and send subscriber table definitions. |
| unsubscribe | Remove subscriber from message receipt list. |

End of Day event: at midnight, the TP closes its log files, automatically creates new ones, and notifies the realtime database about the start of the new day.

Realtime database¶

The realtime database (RDB) holds all the intraday data in memory to allow fast, powerful queries. At the start of the business day, the RDB sends a message to the tickerplant and receives a reply containing the data schema, the location of the log file, and the number of lines to read from the log file. It then receives subsequent updates from the tickerplant as they are published. One of the key design choices for Microsoft Azure is the size of memory for this instance, as ideally the entire business day/period of data should fit in memory.

Purpose:

- Subscribed to the messages from the tickerplant
- Stores (in-memory) the messages received
- Allows this data to be queried intraday

Actions:

- On message receipt: insert into local, in-memory tables.
- End of Day receipt: usually writes intraday data down, then sends a new End-of-Day message to the HDB. Optionally the RDB sorts certain tables (for example, by sym and time) to speed up queries.

An RDB can operate in single- or multi-input mode. The default mode is single input, in which user queries are served sequentially and queries are queued until an update from the TP is processed (inserted into the local table).

In standard tick scripts, the RDB tables are indexed (using hash tables), typically by the product identifier. Indexing significantly speeds up queries, at the cost of slower data ingestion. The insert function takes care of the indexing; during an update it also updates the hash table.
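The pub-sub mechanism described above can be sketched in q. This is a minimal illustration only, modeled loosely on the standard kdb+tick scripts (the production `u.q` defines the real `.u.sub`/`.u.pub` with per-table schemas and log handling); the bookkeeping here is deliberately simplified.

```q
/ minimal pub-sub sketch, modeled on kdb+tick's u.q (not the production code)
.u.w:enlist[`trade]!enlist ()              / table -> list of (handle;syms) subscriptions

.u.sub:{[t;s]                              / subscribe caller's handle .z.w to table t, syms s
  .u.w[t],:enlist(.z.w;s);
  (t;0#value t)}                           / return table name and empty schema to subscriber

.u.pub:{[t;x]                              / publish rows x of table t to all subscribers
  {[t;x;w]
    if[count x:$[`~w 1;x;select from x where sym in w 1];
      neg[w 0](`upd;t;x)]                  / async send of (`upd;table;data)
    }[t;x] each .u.w t}

.u.del:{[h] .u.w:{[h;s]s where not h=s[;0]}[h]each .u.w}  / drop a closed handle
.z.pc:{.u.del x}                           / called by kdb+ when a connection closes
```

A realtime subscriber would open a handle to the TP, call `.u.sub`, and define an `upd` function to receive the published rows.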
The performance of the CPU and memory in the chosen Azure VM instance impacts the rate at which data is ingested and the time taken to execute data queries.

Historical database¶

The historical database (HDB) is a simple kdb+ process with a pointer to the persisted data directory. A kdb+ process can read this data and memory-map it, allowing fast queries across a large volume of data. Typically, the RDB is instructed by the TP to save its data to the data directory at EOD; the HDB can then refresh its memory mappings to pick up the new data. HDB data is partitioned by date in the standard TP setup. If multiple disks are attached to the box, data can be segmented and kdb+ makes use of parallel I/O operations. A segmented HDB requires a par.txt file that contains the locations of the individual segments. An HDB query is processed by multiple threads, and map-reduce is applied if multiple partitions are involved in the query.

Purpose:

- Provides a queryable data store of historical data
- In instances involving research and development or data analytics, customers can create reports on order execution times

Actions:

- End of Day receipt: reload the database to get the new day’s worth of data from the RDB write-down.

HDBs are often expected to be mirrored locally. If performance is critical, some users (for example, quants) need a subset of the data for heavy analysis and backtesting.

KX Gateway¶

In production, a kdb+ system may access multiple timeseries datasets, usually each one representing a different market-data source, or the same data refactored for different schemas. All core components of a kdb+tick system can handle multiple tables. However, you can introduce multiple TPs, RDBs and HDBs based on your fault-tolerance requirements. This can result in a large number of kdb+ components and a highly segregated infrastructure. A KX gateway generally acts as a single point of contact for a client.
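Loading and querying an HDB is a one-liner in q. The sketch below assumes a hypothetical HDB root `/kdb/hdb` partitioned by date; map-reduce across partitions happens automatically for aggregations spanning multiple dates.

```q
/ load a date-partitioned HDB from its root directory (path is hypothetical)
\l /kdb/hdb

/ a typical query; kdb+ applies map-reduce across the date partitions touched
select vwap:size wavg price by sym from trade where date within 2023.01.02 2023.01.06
```

For a segmented HDB, the root instead contains a `par.txt` whose lines are the segment locations, for example:

```
/disk1/hdbseg
/disk2/hdbseg
```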
A gateway collects data from the underlying services, combines datasets and may perform further data operations (for example, aggregation, joins, pivoting, and so on) before it sends the result back to the user. The specific design of a gateway can vary in several ways according to expected use cases. For example, in a hot-hot setup, the gateway can be used to query services across availability zones.

The implementation of a gateway is largely determined by the following factors:

- Number of clients or users
- Number of services and sites
- Requirement for data aggregation
- Support of free-form queries
- Level of redundancy and failover

The task of the gateway is to:

- Check user entitlements and data-access permissions
- Provide access to stored procedures, utility functions and business logic
- Gain access to data in the required services (TP, RDB, HDB)
- Provide the best possible service and query performance

The KX Gateway must be accessible through Azure security rules from all clients of the kdb+ service. In addition, the gateway service needs visibility of the remaining kdb+ processes constituting the full KX service.

Storage and filesystem¶

kdb+tick architecture needs storage space for three types of data:

- TP log
- If the TP needs to handle many updates, writing to the log must be fast, since slow I/O may delay updates and can even cause data loss. Optionally, you can write updates to the TP log in batches, for example every second, as opposed to in real time. Batching trades safety for throughput: if the TP process or the Azure VM instance halts unexpectedly or restarts, the most recently received updates are lost because they were never persisted. The extra second of data loss is probably marginal relative to the whole outage window. If the RDB process goes down, it can replay data from the TP log to recover.
The faster the RDB can recover, the less data is waiting in the TP output queue to be processed by the restarted RDB. Hence, a fast read operation is critical for resilience. Using Azure Premium SSD Managed Disks or Ultra Disks, or a subsection of an existing Lustre filesystem on Azure, is a recommended solution. Managed disks are more resilient and still contain the data after any Azure VM restart or loss.

- Sym file (and par.txt for segmented databases)
- The sym file is written by the realtime database after end-of-day, when new data is appended to the historical database. The HDB processes read the sym file to reload new data. The time to read and write the sym file is often marginal compared to other I/O operations. It is beneficial to write it to a shared filesystem, adding huge flexibility within the Azure Virtual Network (VNet): any other Azure VM instance can assume this responsibility in a stateless fashion.

- HDB data
- The performance of the filesystem solution determines the speed and operational latency with which kdb+ reads its historical (at rest) data. The solution needs to be designed for good query execution times for the most important business queries. These may span many partitions or segments of data, or may query deeply into a few or single partitions. The time to write a new partition impacts RDB EOD work; for systems that are queried around the clock, the RDB write time needs to be very short.

One advantage of storing your HDB within the Azure ecosystem is the flexibility of storage. This is usually distinct from “on-prem” storage, where you may start at one level of storage capacity and grow the solution to allow for dynamic capacity growth. One huge advantage of most Azure storage solutions is that permanent disks can grow dynamically without the need to halt instances, allowing you to change resources dynamically: for example, start with a small disk capacity and grow capacity over time.
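Recovery from the TP log uses kdb+’s built-in log replay, `-11!`. A minimal sketch (the log path and the `upd` definition are hypothetical; in practice the TP tells the restarted RDB which log file to replay and how many messages to read):

```q
/ define upd so replayed messages are inserted into the local tables
upd:{[t;x] t insert x}

/ replay every message in a TP log file; -11! streams each stored
/ message through the process as if it had just been received
-11!`:/kdb/tplog/sym2023.01.02

/ or replay only the first n messages, e.g. the count reported by the TP:
/ -11!(n;`:/kdb/tplog/sym2023.01.02)
```

Because `-11!` streams the file rather than loading it whole, even very large logs can be replayed with modest memory.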
The reference architecture recommends replicating data. This can either be tiered out to lower-cost or lower-performance object storage in Azure, or the data can be replicated across availability zones. The latter may be useful if there is client-side disconnection from other time zones; you may consider failover of service from Europe to North America, or vice versa. kdb+ uses POSIX filesystem semantics to manage the HDB structure directly on a POSIX-style filesystem stored in persistent storage, for example Azure Disk Storage. There are many solutions that offer full operational functionality for the POSIX interface.

Azure Blob Storage¶

Azure Blob Storage is an object store that scales to exabytes of data. There are different storage classes (Premium, Hot, Cool, Archive) for different availability and cost; infrequently used data can use cheaper but slower storage. The kdb Insights native object-store functionality allows users to read HDB data from Azure Blob object storage. The HDB par.txt file can have segment locations that are on Azure Blob object storage. In this pattern, the HDB can reside entirely on Azure Blob Storage or be spread across Azure Disks, Azure Files or Azure Blob Storage as required. There is a relatively high latency when using Azure Blob cloud storage compared to local storage such as Azure Disks. The performance of kdb+ when working with Azure Blob Storage can be improved by taking advantage of the caching feature of the kdb+ native object store: the results of requests to Azure Blob Storage can be cached on a local high-performance disk, increasing performance. The cache directory is continuously monitored and a size limit is maintained by deleting files according to an LRU (least recently used) algorithm. Caching, coupled with enabling secondary threads, can increase the performance of queries against an HDB on Azure Blob Storage.
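As a sketch of the par.txt pattern described above, a segmented HDB can mix local-disk segments with object-storage segments. The `ms://` URI scheme shown is the one used by the kdb Insights object store for Azure Blob Storage; the container and path names here are hypothetical:

```
/disk1/hdbseg
ms://mycontainer/hdb/2022
```

With such a par.txt, recent partitions can live on fast local disk while older years are tiered out to cheaper Blob Storage, transparently to queries.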
The larger the number of secondary threads, irrespective of CPU core count, the better the performance of kdb+ object storage. Conversely, the performance on cached data appears to be better if the secondary-thread count matches the CPU core count.

It is recommended to use compression on HDB data residing on Azure Blob Storage. This reduces the cost of object storage and possible egress costs, and also counteracts the relatively high latency and low bandwidth associated with Azure Blob Storage.

Furthermore, Azure Blob Storage is useful for archiving, tiering, and backup purposes. The TP log file and the sym file can be stored each day and archived for a period of time. The lifecycle management of the object store simplifies clean-up, whereby you can set an expiration time on any file. The versioning feature of Azure Blob Storage is particularly useful when sym-file bloat occurs due to feed misconfiguration or an upstream change; reverting to a previous version can restore the health of the whole database.

Azure Blob Storage provides strong read-after-write consistency: after a successful write or update of an object, any subsequent read request immediately receives the latest version of the object. Azure Blob Storage also provides strong consistency for list operations, so after a write you can immediately list the objects in a container with all changes reflected. This is especially useful when many kdb+ processes read from Azure Blob Storage, as it ensures consistency. A kdb+ feed can subscribe to an Azure Blob Storage file update that an upstream process drops into a container and can start processing immediately; the data is available earlier than with a solution where the feed is started periodically, for example every hour.

Azure Disk Storage¶

Azure Disk Storage can be used to store HDB and tickerplant data, and is fully compliant with kdb+. It supports all the POSIX semantics required.
Azure Ultra Disk volumes offer increased performance of 300 IOPS/GiB, up to a maximum of 160 K IOPS per disk, and greater durability, reducing the possibility of a storage-volume failure.

Azure Files¶

Azure Files over NFS offers an NFS service for nodes in the same availability zone, can run across zones, or can be exposed externally. Azure Files can be used to store HDB and tickerplant data and is fully compliant with kdb+. Microsoft plans to release this to general availability shortly.

Lustre FS¶

Lustre FS is POSIX-compliant and built on Lustre, a popular open-source parallel filesystem that provides scale-out performance that increases linearly with a filesystem’s size. Lustre filesystems scale to hundreds of GB/s of throughput and millions of IOPS. Lustre also supports concurrent access to the same file or directory from thousands of compute instances and provides consistent, sub-millisecond latencies for file operations, making it especially suitable for storing and retrieving HDB data. A Lustre FS persistent filesystem provides highly available and durable storage for kdb+ workloads. The file servers in a persistent filesystem are highly available, and data is automatically replicated within the same availability zone.

Memory¶

The TP uses very little memory during normal operation in realtime mode, while a full record of intraday data is maintained in the realtime database. Abnormal operation occurs if a realtime subscriber (including the RDB) is unable to process the updates: the TP stores these updates in the output queue associated with the subscriber. A large output queue requires a large amount of memory, and in extreme cases the TP may exceed memory limits and exit. A TP running in batch mode must also buffer data, which further increases its memory requirement. Consequently, the memory requirement of the TP box depends on the setup of the subscribers and the availability requirements of the tick system.
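A simple way to detect the slow-subscriber condition described above is to monitor the bytes queued per connection handle via `.z.W` inside the TP. A sketch (the 100 MB threshold and timer interval are illustrative choices, not recommendations):

```q
/ .z.W maps each open IPC handle to its queued, unsent output;
/ summing gives bytes buffered per handle
queued:{sum each .z.W}

/ run on a timer; warn about any handle holding more than ~100 MB
.z.ts:{q:queued[]; bad:where q>100000000;
  if[count bad; -1"slow subscriber on handle(s) ",.Q.s1 bad]}
\t 5000    / fire .z.ts every 5 seconds
```

Feeding these numbers to your monitoring stack gives early warning before the TP’s memory footprint becomes dangerous.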
The main consideration for an instance hosting the RDB is to use a memory-optimized VM instance such as the Standard_E16s_v5 (with 128 GB memory) or Standard_E32s_v5 (256 GB memory). Azure also offers VMs with extremely large memory, such as the S896oom (BareMetal) with 36 TiB of memory, for clients who need to store large amounts of high-frequency data in memory in the RDB, or to keep more than one partition of data in RDB form.

There is, however, a tradeoff between large memory and RDB recovery time: the larger the tables, the longer it takes for the RDB to start from the TP log. To alleviate this problem, you can split a large RDB into two. A natural rule for separating the tables into the two clusters is the join operations between them: if two tables are never joined, they can be placed into separate RDBs.

It is recommended that HDB boxes have large memory. User queries may require large temporary space for complex queries. Query execution times are often dominated by the I/O cost of fetching the raw data. OS-level caching stores frequently used data: the larger the memory, the fewer the cache misses and the faster the queries run.

CPU¶

The CPU load generated by the TP depends on the number of publishers and their verbosity (number of updates per second), and on the number of subscribers. Subscribers may subscribe to partial data, but any filtering applied consumes further CPU cycles.

The CPU requirement of the realtime database comes from:

- appending updates to local tables
- user queries

Local table updates are very efficient, especially if the TP sends batch updates. A faster CPU results in faster ingestion and lower latency. User queries are often CPU-intensive: they perform aggregations and joins, and call expensive functions. If the RDB is set up in multi-input mode (started with a negative port) then user queries are executed in parallel. Furthermore, kdb+ 4.0 supports multithreading in most primitives, including sum, avg, dev, etc.
(If the RDB process is heavily used and hit by many queries, it is recommended to start it in multi-process mode via the -s command-line option.) VMs with many cores are recommended for RDB processes serving large numbers of user queries.

If the infrastructure is sensitive to the RDB EOD work, then powerful CPUs are recommended: sorting tables before splaying is a CPU-intensive task.

Historical databases are used for user queries. In many cases the I/O dominates execution times. If the box has large memory and OS-level caching reduces I/O operations efficiently, then CPU performance directly impacts execution times. Azure VM instances optimized for HPC applications, such as the HBv4-series (Standard_HB120rs_v4 with 120 AMD EPYC vCPUs), are recommended for CPU-bound services as described in the use cases above.

Locality, latency and resiliency¶

The standard tick setup on premises requires the components to be placed on the same server: the tickerplant and realtime database are linked via the TP log file, and the RDB and historical database are bound by the RDB EOD splaying. Customized tickerplants relax this constraint in order to improve resilience. One motivation could be to avoid HDB queries impacting data capture in the TP. You can set up an HDB writer on the HDB box: the RDB sends its tables through IPC at midnight and delegates the I/O work, together with the sorting and attribute handling.

It is recommended that the feedhandlers are placed outside the TP box, on another VM between the TP and the data feed. This minimizes the impact on TP stability if a feedhandler malfunctions.

Placement groups¶

The kdb+tick architecture can be set up with placement groups, depending on the use case. A Proximity Placement Group is a configuration option that Azure offers which lets you place a group of interdependent instances in a certain way across the underlying hardware on which those instances reside.
The instances could be placed close together, spread through different racks, or spread through different Availability Zones.

Cluster placement group¶

The cluster placement-group configuration allows you to place your group of interrelated instances close together to achieve the best possible throughput and lowest latency. This option only lets you pack the instances together inside the same Availability Zone, either in the same Virtual Network (VNet) or between peered VNets.

Spread placement groups¶

With spread placement groups, each single instance runs on a separate physical hardware rack. So, if you deploy five instances and put them into this type of placement group, each one of those five instances resides on a different rack with its own network access and power, either within a single AZ or in a multi-AZ architecture.

Recovery-time and recovery-point objectives¶

A disaster-recovery plan is usually based on requirements from both the Recovery Time Objective (RTO) and Recovery Point Objective (RPO) specifications, which can guide the design of a cost-effective solution. However, every system has its own unique requirements and challenges. Here, we suggest best-practice methods for dealing with the various possible failures one needs to be aware of and plan for when building a kdb+tick-based system.

In the various combinations of failover operations that can be designed, the end goal is always to maintain availability of the application and minimize any disruption to the business.

In a production environment, some level of redundancy is always required. Depending on the use case, requirements may vary, but in nearly all instances requiring high availability the best option is a hot-hot (or active-active) configuration. There are four main configurations found in production: hot-hot, hot-warm, hot-cold, and pilot light (or cold hot-warm).
| Term | Description |
|---|---|
| Hot-hot | Describes an identical mirrored secondary system running, separate to the primary system, capturing and storing data but also serving client queries. In a system with a secondary server available, hot-hot is the typical configuration as it is sensible to use all available hardware to maximize operational performance. The KX gateway handles client requests across availability zones and collects data from several underlying services, combining data sets and, if necessary, performing an aggregation operation before returning the result to the client. |
| Hot-warm | The secondary system captures data but does not serve queries. In the event of a failover, the KX gateway reroutes client queries to the secondary (warm) system. |
| Hot-cold | The secondary system has a complete backup or copy of the primary system at some previous point in time (recall that kdb+ databases are a series of operating-system files and directories) with no live processes running. A failover in this scenario involves restoring from this latest backup, with the understanding that there may be some data loss between the time of failover and the time the latest backup was made. |
| Pilot light (or cold hot-warm) | The secondary is on standby and the entire system can quickly be started to allow recovery in a shorter time period than a hot-cold configuration. |

Typically, kdb+ is deployed in a high-value system. Hence, downtime can impact business, which justifies the hot-hot setup to ensure high availability. Usually, the secondary system runs on completely separate infrastructure, with a separate filesystem, and saves the data to a secondary database directory, separate from the primary system. In this way, if the primary system or underlying infrastructure goes offline, the secondary system is able to take over completely.
The usual strategy for failover is to have a complete mirror of the production system (feedhandler, tickerplant, and realtime subscriber), and when any critical process goes down, the secondary is able to take over. Switching from production to disaster-recovery systems can be implemented seamlessly using kdb+ interprocess communication.

Disaster-recovery planning for kdb+ tick systems
Data recovery for kdb+ tick

Network¶

Network bandwidth needs to be considered if the TP components are not located on the same VM. The network bandwidth between Azure VMs depends on the VM type: for example, a VM of type Standard_D8as_v4 has an expected network bandwidth of 3.125 Gbps, while a larger instance Standard_D32as_v4 can sustain 12.5 Gbps. For a given update frequency you can calculate the required bandwidth by employing the -22! internal function, which returns the length of the IPC byte representation of its argument. The TP copes with large amounts of data if batch updates are sent. Make sure that the network is not your bottleneck in processing the updates.

Azure Load Balancer¶

An Azure Load Balancer is a load-balancing service provided by Azure. It is used for ultra-high performance, TLS offloading at scale, centralized certificate deployment, support for UDP, and static IP addresses for your application. Operating at the connection level, network load balancers are capable of securely handling millions of requests per second while maintaining ultra-low latencies.

Load balancers can distribute load among applications that offer the same service. kdb+ is single-threaded by default; the recommended approach is to use a pool of HDB processes. Distributing the queries can be done either by the gateway using async calls or by a load balancer. If the gateways are sending sync queries to the HDB load balancer, then a gateway load balancer is recommended to avoid query contention in the gateway.
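The `-22!` bandwidth estimate described above can be sketched in q. The trade row and the 10,000 updates/second rate are illustrative assumptions:

```q
/ a sample update message as the TP would send it: (`upd;table;data)
row:(enlist 09:30:00.123;enlist`AAPL;enlist 150.25;enlist 200i)
msg:(`upd;`trade;flip`time`sym`price`size!row)

/ -22! returns the length in bytes of the IPC serialization of its argument
bytes:-22!msg

/ at an assumed 10,000 updates/second, approximate bandwidth in Mbit/s
mbps:(bytes*10000*8)%1e6
```

Comparing `mbps` against the expected bandwidth of the chosen VM type shows whether batching updates is required.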
Furthermore, other TP components can also benefit from load balancers to better handle simultaneous requests. Adding a load balancer on top of a historical-database (HDB) pool requires only three steps:

- Create a Network Load Balancer with protocol TCP. Set the name, Availability Zone, Target Group name and Security Group. The Security Group needs an inbound rule to the HDB port.
- Create a launch template. A key part is the User Data window, where you can type a startup script: it mounts the volume that contains the HDB data and the q interpreter, sets environment variables (e.g. QHOME) and starts the HDB. The HDB accepts incoming TCP connections from the Load Balancer, so you must set up an inbound firewall rule using a Security Group. You can also leverage a custom image that you have already created from an existing Azure VM.
- Create an Azure Virtual Machine Scale Set (a set of virtual machines) with autoscale rules to better handle peak loads. You can set the recently created instance group as a Target Group.

All clients access the HDB pool using the Load Balancer’s DNS name (together with the HDB port), and the load balancer distributes the requests among the HDB servers seamlessly.

General TCP load balancers with an HDB pool offer better performance than a stand-alone HDB; however, utilization of the underlying HDBs may not be optimal. Consider three clients C1, C2, C3 and two servers HDB1 and HDB2. C1 is directed to HDB1 when establishing the TCP connection, C2 to HDB2 and C3 to HDB1 again. If C1 and C3 send heavy queries while C2 sends only a few lightweight queries, then HDB1 is overloaded and HDB2 is idle. To improve the load distribution, the load balancer would need to operate below the TCP layer and understand the kdb+ protocol.
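The gateway-side alternative mentioned above, distributing queries over an HDB pool with async calls, can be sketched as follows. The host names and the naive round-robin policy are hypothetical; a production gateway would track which HDBs are busy rather than rotate blindly:

```q
/ open handles to an HDB pool (hosts/ports are hypothetical)
hdbs:hopen each `:hdb1:5012`:hdb2:5012

i:-1
dispatch:{[q]
  h:hdbs (i+:1) mod count hdbs;   / naive round-robin choice of HDB
  neg[h] q;                       / async send: the gateway is not blocked
  h[]}                            / deferred-sync: block only for this reply
```

The `neg[h] q; h[]` pair is the standard deferred-synchronous idiom: the query is sent asynchronously, and `h[]` then waits for the response on that handle.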
It provides an easy-to-use and customizable interface so that, for example, DevOps teams can quickly troubleshoot applications. Azure Monitor Logs enables you to see all of your logs, regardless of their source, as a single and consistent flow of events ordered by time. Events are organized into log streams, and each stream is part of a log group. Related applications typically belong to the same log group.

You don’t need to modify your tick scripts to enjoy the benefits of Azure Monitor. The Azure Monitor agent can be installed and configured to forward your application logs to Azure Monitor. You can use policies and policy initiatives to automatically install the agent and associate it with a data-collection rule every time you create a virtual machine. In the host configuration file you need to provide the log file to watch and the log stream to which new entries should be sent.

Almost all kdb+tick components can benefit from cloud logging. Feedhandlers log new data arrival, and data and connection issues. The TP logs new or disappearing publishers and subscribers, and can log when its output queue exceeds a threshold. The RDB logs all steps of the EOD process, which includes sorting and splaying of all tables. The HDB and gateway can log every user query.

kdb+ users often prefer to save log messages in kdb+ tables. Tables that are unlikely to change are specified by a schema, while entries that require more flexibility use key-value columns. Log tables are ingested by logging tickerplants, and these ops tables are kept separate from the tables required for the business.

One benefit of storing log messages in kdb+ is the ability to process them in qSQL. Timeseries join functions include as-of and window joins. For example, gateway functions are executed hundreds of times during the day. The gateway query executes RDB and HDB queries, often using a load balancer. All these components have their own log entries.
You can simply employ a window join (wj) to find the relevant entries and perform aggregations to gain insight into the performance characteristics of the execution chain. Note that you can log both to kdb+ and to Azure Monitor.

kdb Insights QLog provides kdb+ cloud-logging functionality. QLog supports multiple endpoint types through a simple interface and provides the ability to write to them concurrently. The logging endpoints in QLog are encoded as URLs with two main types: file descriptors and REST endpoints. The file-descriptor endpoints supported are:

:fd://stdout
:fd://stderr
:fd:///path/to/file.log

REST endpoints are encoded as standard HTTP/S URLs, such as:

https://<CustomerId>.ods.opinsights.azure.com/api/logs?api-version=2016-04-01

QLog generates structured, formatted log messages tagged with a severity level and component name. Routing rules can also be configured to suppress or route messages based on these tags. Existing q libraries that implement their own formatting can still use QLog via the base APIs: this lets them do their own formatting while still taking advantage of the QLog-supported endpoints. Integration with cloud-logging application providers can easily be achieved using logging agents. These can be set up alongside running containers or virtual machines to capture their output and forward it to logging endpoints such as Azure Monitor.

Azure Monitor supports monitoring, alarming and creating dashboards. It is simple to create a metric filter based on a pattern and set an alarm (for example, sending an email) if a certain criterion holds. You may also wish to integrate your KX monitoring for kdb+ components into this cloud logging and monitoring framework. This gives you insight into the performance, uptime and overall health of the applications and the server pool. You can visualize trends using dashboards.

Interacting with Azure services¶

You can use Azure services through the console web interface. You may also need to interact from a q process.
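The window-join analysis described above can be sketched in q. The gateway and HDB log tables, their columns, and the one-second window are hypothetical:

```q
/ hypothetical gateway-call log and HDB query log
gw: ([] sym:`gw1`gw1; time:09:00:00.000 09:00:05.000; qid:1 2)
hdb:([] sym:`gw1`gw1`gw1; time:09:00:00.100 09:00:00.300 09:00:05.200; ms:12 45 7)
hdb:update `g#sym from `sym`time xasc hdb   / wj needs sorted input, attribute on sym

/ for each gateway call, total HDB milliseconds spent within 1s of the call
w:(gw`time;00:00:01.000+gw`time)
wj[w;`sym`time;gw;(hdb;(sum;`ms))]
```

The result has one row per gateway call with the aggregated HDB work attributed to it, exactly the kind of execution-chain breakdown described in the text.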
The following demonstration shows how to get a list of virtual machines for a specific Azure tenant from a q process using either:

- a system call to the Azure CLI
- embedPy using the Azure SDK for Python
- the Kurl REST API client

Azure CLI¶

A q process can run shell commands by calling the function system. The example below shows how we can get the list of virtual machines. We assume the Azure CLI is installed on the script-runner machine.

(38env) stran@amon:~$ q
KDB+ 4.0 2020.07.15 Copyright (C) 1993-2020 Kx Systems
l64/ 4(16)core 7959MB stran amon 127.0.1.1 EXPIRE ..

q)system "az vm list --output table"
"Name             ResourceGroup           Location    Zones"
"---------------  ----------------------  ----------  -------"
"staging-bastion  STAGING-RESOURCE-GROUP  westeurope"
"tempVM           STAGING-RESOURCE-GROUP  westeurope"
"windowsvm        WINDOWSVM_GROUP         westeurope"

This example shows how the Azure CLI can be invoked from q using the system function. Unfortunately, this approach requires string manipulation, so it is not always convenient.

EmbedPy¶

Azure provides a Python client library to interact with Azure services. Using embedPy, a q process can load a Python environment and easily query the list of virtual machines for a given Azure tenant.

(38env) stran@amon:~/development/Azure$ q
KDB+ 4.0 2020.07.15 Copyright (C) 1993-2020 Kx Systems
l64/ 4(16)core 7959MB stran amon 127.0.1.1 EXPIRE ..

q)\l p.q
q)p)from azure.identity import ClientSecretCredential
q)p)from azure.mgmt.compute import ComputeManagementClient
q)p)Subscription_Id = "xxxxxxxx"
q)p)Tenant_Id = "xxxxxxxxx"
q)p)Client_Id = "xxxxxxxxx"
q)p)Secret = "xxxxxxxxx"
q)p)credential = ClientSecretCredential(tenant_id=Tenant_Id, client_id=Client_Id, client_secret=Secret)
q)p)compute_client = ComputeManagementClient(credential, Subscription_Id)
q)p)for vm in compute_client.virtual_machines.list_all(): print(vm.name)
staging-bastion
tempVM
windowsvm
q)

Kurl REST API¶

Finally, you can send HTTP requests to the Azure REST API endpoints.
KX Insights provides a native q REST API called Kurl. Kurl provides ease-of-use cloud integration by registering Azure authentication information. When running on a cloud instance, and a role is available, Kurl discovers and registers the instance metadata credentials. When running outside the cloud, OAuth2, ENV, and file-based credential methods are supported. Kurl takes care of your credentials and properly formats the requests.

In the code below, we call the Azure Resource Manager REST API to pull the list of VMs for a specific tenant. The example uses a simple Bearer token for authorization.

(38env) stran@amon:~$ q
KDB+ 4.1t 2021.07.12 Copyright (C) 1993-2021 Kx Systems
l64/ 4(16)core 7959MB stran amon 127.0.1.1 EXPIRE ..

q)url:"https://management.azure.com/subscriptions"
q)url,:"/c4f7ecef-da9e-4336-a9d6-d11d5838caff/resources"
q)url,:"?api-version=2021-04-01"
q)url,:"&%24filter=resourceType%20eq%20%27microsoft.compute%2Fvirtualmachines%27"
q)params:``headers!(::;enlist["Authorization"]!enlist "bearer XXXXXXXXXXXXXXX")
q)resp:.kurl.sync (`$url;`GET;params)
q)t:(uj) over enlist each (.j.k resp[1])[`value]
q)select name, location, tags from t
name              location     tags
-----------------------------------------------------------------------
"staging-bastion" "westeurope" `cluster_name`name!("staging";"staging")
"tempVM"          "westeurope" `cluster_name`name!("";"")
"windowsvm"       "westeurope" `cluster_name`name!("";"")
q)

Package, manage, and deploy¶

QPacker (QP) is a tool to help developers package, manage and deploy kdb+ applications to the cloud. It automates the creation of containers and virtual machines using a simple configuration file, qp.json. It packages kdb+ applications with common shared code dependencies, such as Python and C. QPacker can build and run containers locally as well as push to container registries such as DockerHub and Azure Container Registry.
Software is often built by disparate teams, who may individually have remit over a particular component, and package that component for consumption by others. QPacker stores all artefacts for a project in a QPK file. While this file is intended for binary dependencies, it is also portable across environments.

QPacker can interface with Hashicorp Packer to generate virtual-machine images for Azure. These VM images can then be used as templates for a VM instance running in the cloud. When a cloud target is passed to QPacker (qp build -azure), an image is generated for each application defined in the top-level qp.json file. The QPK file resulting from each application is installed into the image and integrated with Systemd to allow the startq.sh launch script to start the application on boot.

Azure Functions¶

Function as a service (FaaS) lets developers create an application without considering the complexity of building and maintaining the infrastructure that runs it. Cloud providers natively support only a handful of programming languages. Azure’s FaaS offering, Functions, supports Bash scripts, which can start any executable, including a q script. One use case is to have Azure Functions start a gateway to execute a client query. This provides cost transparency, zero cost when the service is not used, and full isolation of client queries.

Access management¶

We distinguish application-level and infrastructure-level access control. Application-level access management controls who can access kdb+ components and run commands. The TP, RDB, and HDB are generally restricted to kdb+ infrastructure administrators only, and the gateway is the access point for users. One responsibility of the gateway is to check whether the user can access the tables (columns and rows) they are querying. This generally requires checking the user ID (returned by .z.u) against some organizational entitlement database, cached locally in the gateway and refreshed periodically.
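The entitlement check described above can be sketched in q; the table, column and function names here are purely illustrative, not part of any KX product:

```q
/ illustrative entitlement cache, refreshed periodically from an external source
entitle:([user:`alice`bob] tabs:(`trade`quote;enlist `trade))
/ gateway-side guard: allow a query only if the caller (.z.u) is entitled to the table
canQuery:{[t] t in entitle[.z.u;`tabs]}
checkAccess:{[t] $[canQuery t; t; '"access denied: ",string t]}
```

A real gateway would extend this to column- and row-level checks and would refresh the cache from the organizational entitlement store on a timer.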
Secure access – Azure Bastion¶

Azure Bastion, a fully platform-managed PaaS service that you provision inside your virtual network, lets you manage your kdb+ Azure Virtual Machines through an RDP or SSH session directly in the Azure portal, using a single-click seamless experience. Azure Bastion provides secure and auditable instance management for your kdb+ tick deployment without the need to open inbound ports, maintain bastion hosts, or manage SSH keys.

Use this to control access to the KX gateway. This is a key task for the administrators of the KX system: both user and API access to the entire system is controlled through the KX gateway process.

Hardware specifications¶

A number of Azure VM types are especially performant for kdb+ workloads. Azure offers five different sub-groups of VMs in the memory-optimized family, each with a high memory-to-vCPU ratio. Although the DSv2-series 11-15 and Esv4-series have a comparable vCPU-to-memory ratio, DSv2R instances support a maximum uncached disk throughput of 62.5K IOPS and an expected network bandwidth of 24.4 Gbps, roughly twice the disk throughput and network bandwidth of Esv4 instances.

The Azure Hypervisor, based on Windows Hyper-V, is the native hypervisor powering the Azure Cloud Service platform, providing the building blocks for delivering Azure Virtual Machine types with a selection of compute, storage, memory, and networking options.

| service | Azure VM type | storage | CPU, memory, I/O |
|---|---|---|---|
| tickerplant | Memory optimized: Dv4/DSv4, Ev4/Esv4, Ev5/Esv5, M; HPC-optimized: HBv3 | Azure Managed Premium SSD/Ultra Disk, Lustre FS? | High Perf, Medium, Medium |
| realtime database | Memory optimized: Dv4/DSv4, Ev4/Esv4, Ev5/Esv5, M; HPC-optimized: HBv3 | – | High Perf, High Capacity, Medium |
| historical database | Memory optimized: Dv4/DSv4, Ev4/Esv4, Ev5/Esv5, M | Azure Managed Premium SSD/Ultra Disk, Lustre FS? | Medium Perf, Medium Memory, High IO |
| complex event processing (CEP) | Memory optimized: Dv4/DSv4, Ev4/Esv4, Ev5/Esv5, M | – | Medium Perf, Medium Memory, High IO |
| gateway | Memory optimized: Dv4/DSv4, Ev4/Esv4, Ev5/Esv5, M | – | Medium Perf, Medium Memory, High IO |

Resources¶

- GitHub repository with standard tick.q scripts
- Building Real-time Tick Subscribers
- Data recovery for kdb+ tick
- Disaster-recovery planning for kdb+ tick systems
- Intraday writedown solutions
- Query Routing: a kdb+ framework for a scalable load-balanced system
- Order Book: a kdb+ intraday storage and access methodology
- kdb+tick profiling for throughput optimization
- KX Cloud Edition
kdb+ and FIX messaging¶

Electronic trading volumes have increased significantly in recent years, prompting financial institutions, both buy- and sell-side, to invest in increasingly sophisticated Order Management Systems (OMS). An OMS efficiently manages the execution of orders, using a set of pre-defined conditions to obtain the best price of execution. OMSs typically use the Financial Information eXchange (FIX) protocol, which has become the industry standard for electronic trade messaging since it was first developed in 1992.

The demand for post-trade analytics and compliance requirements (for example, proving a client order was filled at the best possible price) creates a need to retain all the FIX messages produced by an OMS. For large volumes of data this can prove extremely challenging; however, kdb+ provides an ideal platform to capture and process FIX messages. It allows efficient querying of large volumes of historical data and, in conjunction with a kdb+ tick set-up, can produce powerful real-time post-trade analytics for front-office users.

This paper introduces the key steps to capture a FIX message feed from an OMS and understand the data contained within each message. We produce an example that demonstrates a kdb+ set-up that captures a FIX feed and produces a final-order-state table. All tests were run using kdb+ version 3.1 (2013.12.27).

FIX messages¶

FIX message format¶

FIX messages consist of a series of key-value pairs that contain all the information for a particular state of a transaction. Each tag relates to a field defined in the FIX specification for a given system. In FIX 4.4, tags 1-956 are predefined and values for these fields must comply with the values outlined in the FIX protocol. Outside of this range custom fields may be defined; these may be unique to the trading system or firm. Some common tags are tabulated below.
1  Account        29  LastCapacity
6  AvgPx          30  LastMkt
8  BeginString    31  LastPx
9  BodyLength     32  LastQty
10 CheckSum       34  MsgSeqNum
11 ClOrdID        35  MsgType
12 Commission     37  OrderID
13 CommType       38  OrderQty
14 CumQty         39  OrdStatus
15 Currency       49  SenderCompID
17 ExecID         52  SendingTime
19 ExecRefID      56  TargetCompID
21 HandlInst      151 LeavesQty

Some common FIX tags and their respective fields

A FIX message comprises a header, body and trailer. All messages must begin with a header consisting of the BeginString (8), BodyLength (9), MsgType (35), SenderCompID (49), TargetCompID (56), MsgSeqNum (34) and SendingTime (52) tags. BeginString states the FIX version used; BodyLength is the character count of the message from tag 35 onwards, including all delimiters; and MsgType gives the type of message, for instance New Order, Execution Report, etc. SenderCompID and TargetCompID identify the firms sending and receiving the message respectively. The message must finish with tag CheckSum (10); this is the modulo-256 sum of all characters in the message up to and including the delimiter before tag 10. The body of the message consists of all other relevant tags, depending on the message type.

FIX messages are delimited by ASCII SOH (Start of Heading); however, as this is unprintable, we use | as a delimiter in this paper. Below are example FIX messages that we will use throughout this whitepaper.
8=FIX.4.4|9=178|35=D|49=A|56=B|1=accountA|6=0|11=00000001|12=0.0002|13=2|14=|15=GBp|17=|1

8=FIX.4.4|9=178|35=8|49=A|56=B|1=accountA|6=0|11=00000001|12=0.0002|13=2|14=|15=GBp|17=|19=|21=|29=|30=|31=|32=|37=|38=10000|39=0|41=|44=|48=VOD.L|50=AB|52=20131218-09:01:13|54=1|55=VOD|58=|59=1|60=20131218-09:01:13|10=168

8=FIX.4.4|9=178|35=8|49=A|56=B|1=accountA|6=229.5|11=00000001|12=|13=|14=1500|15=GBp|17=100000001|19=|21=1|29=1|30=XLON|31=229.5|32=1500|37=|38=10000|39=1|41=|44=|48=VOD.L|50=AB|52=20131218-09:02:01|54=1|55=VOD|58=|59=1|60=20131218-09:02:01|10=193

8=FIX.4.4|9=178|35=8|49=A|56=B|1=accountA|6=229.6125|11=00000001|12=|13=|14=6000|15=GBp|17=100000002|19=|21=1|29=1|30=XLON|31=229.65|32=4500|37=|38=10000|39=1|41=|44=|48=VOD.L|50=AB|52=20131218-09:01:03|54=1|55=VOD|58=|59=1|60=20131218-09:01:03|10=197

8=FIX.4.4|9=178|35=8|49=A|56=B|1=accountA|6=229.6353846|11=00000001|12=|13=|14=6500|15=GBp|17=100000003|19=|21=1|29=1|30=XLON|31=229.91|32=500|37=|38=10000|39=1|41=|44=|48=VOD.L|50=AB|52=20131218-09:01:14|54=1|55=VOD|58=|59=1|60=20131218-09:01:14|10=199

8=FIX.4.4|9=178|35=8|49=A|56=B|1=accountA|6=229.7496933|11=00000001|12=|13=|14=8150|15=GBp|17=100000004|19=|21=1|29=1|30=XLON|31=230.2|32=1650|37=|38=10000|39=1|41=|44=|48=VOD.L|50=AB|52=20131218-09:01:15|54=1|55=VOD|58=|59=1|60=20131218-09:01:15|10=199

8=FIX.4.4|9=178|35=8|49=A|56=B|1=accountA|6=229.6295|11=00000001|12=|13=|14=10000|15=GBp|17=100000005|19=|21=1|29=1|30=XLON|31=229.1|32=1850|37=|38=10000|39=2|41=|44=|48=VOD.L|50=AB|52=20131218-09:01:46|54=1|55=VOD|58=|59=1|60=20131218-09:01:46|10=197

Feed handler¶

A feed handler may be used to deliver the messages to kdb+. The feed handler should receive the flow of FIX messages from the OMS, parse the messages to extract the required fields and send them to the kdb+ tickerplant. Feed handlers are generally written in Java or C++ and are widely available, for example from KX.
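A feed handler will typically validate each message's trailer before parsing it. The CheckSum (10) arithmetic is easy to reproduce in q: sum the bytes of the message up to and including the delimiter that precedes the 10= tag, take the result modulo 256, and zero-pad to three digits. Note that the examples in this paper use | in place of the SOH byte, so checksums computed over them will not match the printed values:

```q
/ modulo-256 byte sum, zero-padded to three digits
cksum:{-3#"000",string(sum "i"$x)mod 256}
cksum "8=FIX.4.4|9=5|35=0|"   / checksum over everything before the 10= tag
```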
For the example provided in this white paper we load a file of FIX messages into a q feed handler. Our feed handler reads each FIX message from the file, extracts the tags and casts them to the desired q type. The FIX tag and field names are stored in a FIX specification, which should include all possible tags from the OMS, including any custom tags unique to our setup. The FIX specification allows us to create reference dictionaries to map the tags to the correct column names.

q)fixTagToName
1 | Account
6 | AvgPx
8 | BeginString
11| ClOrdID
12| Commission
13| CommType
14| CumQty
...

We include functions to parse the FIX messages and extract the desired tags. These functions can also be included in the RDB to allow us to extract additional information from the raw FIX message for fields not included in our schema.

getAllTags:{[msg](!)."S=|"0:msg}
getTag:{[tag;msg](getAllTags[msg])[tag]}

We read the file containing the FIX messages, parse each message to extract the information and flip into a table.

fixTbl:(uj/) {flip fixTagToName[key d]!value enlist each d:getAllTags x} each fixMsgs

We need to extract the desired fields and cast to the correct type. Functions are used to match the schema of our FIX messages to a predefined schema in the RDB.

colConv:{[intype;outtype]
  $[(intype in ("C";"c"))&(outtype in ("C";"c")); eval';
    (intype in ("C";"c")); upper[outtype]$;
    (outtype in ("C";"c")); string;
    upper[outtype]$string]}

matchToSchema:{[t;schema]
  c:inter[cols t;cols schema];
  metsch:exec "C"^first t by c from meta schema;
  mett:exec "C"^first t by c from meta t;
  ?[t;();0b;c!{[y;z;x](colConv[y[x];z[x]];x)}[mett;metsch] each c]}

We add the full FIX message to the table as a column of strings. This ensures no data is lost from the original message that was received, and information can easily be obtained when necessary. The FIX message is then sent to the tickerplant.
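Before wiring these parsers into a feed, getAllTags can be checked interactively on a message fragment; the "S=|" load string tells 0: to split key=value pairs on | and read the keys as symbols:

```q
getAllTags:{[msg](!)."S=|"0:msg}
/ tags become symbol keys, values remain strings until cast later
getAllTags "35=D|49=A|56=B|38=10000"
```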
genFixMsgs:{[]
  // read FIX message file
  fixMsgs:read0 hsym `$path,"/fixMsgs.txt";
  // extract each tag, map to name and convert to table
  fixTbl:(uj/) {flip fixTagToName[key d]!value enlist each d:getAllTags x} each fixMsgs;
  // cast fixTbl to correct types
  t:matchToSchema[fixTbl;fixmsgs];
  // add the original FIX message as a column
  update FixMessage:fixMsgs from t}

runFIXFeed:{[]
  t:genFixMsgs[];
  tick_handle["upd";`fixmsgs;t];}

FIX tags¶

In this section we look at some of the most important FIX tags.

MsgType¶

MsgType (tag 35) is a required field in the FIX message. It defines the type of message received, for example order, execution, allocation, heartbeat, Indication of Interest, etc. For the purpose of this paper we limit ourselves to handling the following message types, which will be the most common from an OMS.

| code | meaning |
|---|---|
| 8 | Execution report |
| D | Order – single |
| G | Order cancel/replace request |
| F | Order cancel request |

Every time we receive a new order, the first message should contain MsgType D. We should only receive one D message per order. If this has to be amended at any stage we should receive an order-replace request (MsgType G) to replace the original order. As the order executes we will receive execution reports (MsgType 8) for each execution. These are linked back to the original order through one of the ID fields, generally ClOrdID. The execution message contains some important updates to the overall state of the order, particularly CumQty and AvgPx.

If the order is cancelled before the full order quantity is executed, a Cancel Request (MsgType F) message is sent. This can be rejected with an Order Cancel Reject (MsgType 9), in which case the order will continue to execute. It is important to note that a cancel only cancels any outstanding shares not yet executed, not the full order.

OrdStatus¶

OrdStatus tells us the current state the order is in.
It is an important indicator in cases where the order has not been filled, showing whether it is still executing, cancelled, done for the day (for multi-day orders), etc. The valid values are:

0 New                    7 Stopped
1 Partially filled       8 Rejected
2 Filled                 9 Suspended
3 Done for day           A Pending New
4 Canceled               B Calculated
5 Replaced               C Expired
6 Pending Cancel/Replace

Commission¶

In FIX there are two fields needed to obtain the correct commission on an order: Commission (12) and CommType (13). Commission and CommType both return a numerical value; the latter a number defined as follows:

1 per unit (implying shares, par, currency, etc)
2 percentage
3 absolute (total monetary amount)
4 (for CIV buy orders) percentage waived - cash discount
5 (for CIV buy orders) percentage waived - enhanced units
6 points per bond or contract

We will only be concerned with the first three cases for our example in this paper. We define a function to calculate the commission value:

calcComm:{[comval;comtyp;px;qty]
  $[comtyp=`1; comval*qty;
    comtyp=`2; comval*px*qty;
    comtyp=`3; comval]}

LastCapacity¶

LastCapacity tells us the broker capacity in the execution. It indicates whether a fill on an order was executed as principal or agency. A principal transaction occurs when the broker fills part of the order from its own inventory, while an agency transaction involves the broker filling the order on the market. It is vital when calculating benchmarks or client loss ratios to distinguish between principal and agency flow. The valid values are:

1 Agent
2 Cross as agent
3 Cross as principal
4 Principal

Example – order state¶

Approach¶

Our aim is to create a final-state table for all orders. In our example the RDB will subscribe to the tickerplant, receive all the messages and generate an order state.
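The calcComm function from the Commission section above can be exercised against the three supported CommType cases, using illustrative prices and quantities:

```q
calcComm:{[comval;comtyp;px;qty]
  $[comtyp=`1; comval*qty;      / per unit
    comtyp=`2; comval*px*qty;   / percentage of consideration
    comtyp=`3; comval]}         / absolute monetary amount
calcComm[0.0005;`1;229.5;10000]    / 0.0005 per share on 10000 shares = 5
calcComm[0.0002;`2;229.6295;10000] / 0.0002 * price * quantity = 459.259
calcComm[700f;`3;229.5;10000]      / absolute: 700
```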
For large volumes this could be separated into two processes: the RDB should just capture all messages from the tickerplant and store them in a single table, while a separate process can be set up to subscribe to this table and generate the order and execution tables. This example details an approach to handling the most common messages expected from an OMS. The standard fields expected from an OMS are included, along with some derived fields.

Schema¶

We set up the schema below for the fixmsgs table. It contains columns for every tag defined by our FIX spec, as well as a column called FixMessage, which contains the full FIX message as a string, and a column containing the tickerplant time. The FixMessage field is important as any information in the FIX message missing from our schema can still be extracted.

fixmsgs:([]
  Account:`$(); AvgPx:`float$(); ClOrdID:(); Commission:`float$(); CommType:`$();
  CumQty:`float$(); Currency:`$(); ExecID:(); ExecRefID:(); HandlInst:`$();
  LastCapacity:`$(); LastMkt:`$(); LastPx:`float$(); LastQty:`int$(); LeavesQty:`float$();
  MsgType:`$(); OrderID:(); OrderQty:`int$(); OrdStatus:`$(); OrigClOrdID:();
  Price:`float$(); SecurityID:`$(); SenderSubID:`$(); SendingTime:`datetime$();
  Side:`$(); Symbol:`$(); Text:(); TimeInForce:`$(); TransactTime:`datetime$();
  FixMessage:(); Time:`datetime$())

The order schema contains the core fields from the fixmsgs schema as well as two derived fields: OrderTime and AmendTime. These fields are not included in the FIX spec but will be required by end users, and as such are added in the RDB. The order table is keyed on OrderID. In practice ClOrdID, or a combination of ClOrdID and OrigClOrdID, may be needed. If an order is cancelled and replaced, the OrigClOrdID contains the ClOrdID of the previous version of the order. Only the final version is required in the final state, so we need to track these orders.
order:([OrderID:()]
  ClOrdID:(); OrigClOrdID:(); SecurityID:`$(); Symbol:`$(); Side:`$();
  OrderQty:`int$(); CumQty:`float$(); LeavesQty:`float$(); AvgPx:`float$();
  Currency:`$(); Commission:`float$(); CommType:`$(); CommValue:`float$();
  Account:`$(); MsgType:`$(); OrdStatus:`$(); OrderTime:`datetime$();
  TransactTime:`datetime$(); AmendTime:`datetime$(); TimeInForce:`$())

Processing orders¶

We define the following upd function on the RDB:

upd:{[t;x]
  t insert x;
  x:`TransactTime xasc x;
  updNewOrder[`order;select from x where MsgType in `D];
  x:select from x where not MsgType in `D;
  {$[(first x`MsgType)=`8; updExecOrder[`order;x];
     (first x`MsgType)=`G; updAmendOrder[`order;x];
     (first x`MsgType) in `9`F; updCancelOrder[`order;x];
     :()];
  } each (where 0b=(=':)x`MsgType) cut x}

And a series of functions to handle each MsgType:

updNewOrder:{[t;x] ...}
updAmendOrder:{[t;x] ...}
updCancelOrder:{[t;x] ...}
updExecOrder:{[t;x] ...}

We first ensure the messages are ordered correctly, according to TransactTime. This is so the messages are processed in the order they were generated, which is important when looking at the final state of an order. New orders are processed first, since we should only ever receive one D message per order.

updNewOrder[`order;select from x where MsgType in `D]

For all subsequent updates to each order we need to ensure that all amendments, cancellations and executions are handled in the correct order. We separate the remaining messages into chunks of common MsgType and process each chunk sequentially. This is particularly important in the case where we receive an amended order in the middle of a group of executions, and is essential for the final order state to show the correct TransactTime, MsgType and OrdStatus of the final order.
{$[(first x`MsgType)=`8;
   updExecOrder[`order;select from x where MsgType in `8];
   updAmendOrder[`order;select from x where MsgType in `G`F]]
} each (where 0b=(=':)x`MsgType) cut x

New orders¶

Whenever a new order is received we must ensure it is entered into our final-state table. We define the following function:

updNewOrder:{[t;x]
  x:update OrderTime:TransactTime from x;
  t insert inter[cols t;cols x]#x;}

For each order, users will want to know the time the order was received. TransactTime is not sufficient here, since it will be overwritten in the final-state table by subsequent updates. We introduce a custom field called OrderTime. This contains the TransactTime of the new-order message and will not be updated by any other messages.

For a new-order message we want to insert all the columns provided in the FIX message. We extract all common columns between our message and the schema. We also note the order table is keyed on OrderID.

t insert inter[cols t;cols x]#x

We receive the following new-order FIX messages from the OMS.

8=FIX.4.4|9=178|35=D|49=A|56=B|1=accountA|6=0|11=0000001|12=0.0002|13=2|14=|15=GBp|17=|19=|21=|29=|30=|31=|32=|151=10000|37=00000001|38=10000|39=|41=|44=|48=VOD.L|50=AB|52=20131218-09:01:00|54=1|55=VOD|58=|59=1|60=20131218-09:01:00|10=184

8=FIX.4.4|9=178|35=D|49=A|56=B|1=accountB|6=0|11=0000002|12=0.0002|13=2|14=|15=GBp|17=|19=|21=|29=|30=|31=|32=|151=4000|37=00000002|38=4000|39=|41=|44=|48=RIO.L|50=AD|52=20131218-10:24:07|54=2|55=RIO|58=|59=1|60=20131218-10:24:07|10=182

8=FIX.4.4|9=178|35=D|49=A|56=B|1=accountA|6=0|11=0000003|12=0.0002|13=2|14=|15=GBp|17=|19=|21=|29=|30=|31=|32=|151=20100|37=00000003|38=20100|39=|41=|44=|48=BARC.L|50=AR|52=20131218-11:18:22|54=1|55=BARC|58=|59=1|60=20131218-11:18:22|10=186

8=FIX.4.4|9=178|35=D|49=A|56=B|1=accountC|6=0|11=0000004|12=0.0002|13=2|14=|15=

The order state shows a series of unfilled orders. The CumQty and OrdStatus are initially null, as they are not present on the new-order message.
They will be populated by subsequent execution updates.

q)select OrderID, MsgType, OrdStatus, SecurityID, Account, OrderQty, CumQty, Commission from order
OrderID    MsgType OrdStatus SecurityID Account  OrderQty CumQty Commission
---------------------------------------------------------------------------
"00000001" D                 VOD.L      accountA 10000           0.0002
"00000002" D                 RIO.L      accountB 4000            0.0002
"00000003" D                 BARC.L     accountA 20100           0.0002
"00000004" D                 EDF.PA     accountC 15000           0.0002
"00000005" D                 VOD.L      accountD 3130            0.0002

Amendments and cancellations¶

Any value of the order may be amended by sending a message with MsgType G. This could reflect a correction to the commission value, a change in the order quantity, etc. The function to update amendments differs slightly from that for new orders. A field to display the latest amend time is added; this provides the end user with the TransactTime of the last change to the order. Every amend message should have been preceded by a new-order message, so the amendment is upserted (rather than inserted) into the order-state table. A production system could include some sanity checks to ensure we have received an order for any amendment.

updAmendOrder:{[t;x]
  x:update AmendTime:TransactTime from x;
  t upsert inter[cols t;cols x]#x;}

The following example shows an update to the commission value. We have received a new order with commission specified in percent. An update modifies this to an absolute value. The amendment is reflected in the order state and the total value of the commission is extracted using the calcComm function outlined earlier.
8=FIX.4.4|9=178|35=G|1=accountA|6=253.8854627|11=0000003|12=700|13=3|14=20100|15=GBp|17=|19=|21=|29=|30=|31=|32=|151=0|37=00000003|38=20100|39=2|41=|44=|48=BARC.L|50=AR|52=20131218-16:33:12|54=1|55=BARC|58=|59=1|60=20131218-16:33:12|10=195

q)select OrderID,MsgType,Commission,CommType from fixmsgs where OrderID like "00000003",MsgType in `D`G`F`9
OrderID    MsgType Commission CommType
--------------------------------------
"00000003" D       0.0002     2
"00000003" G       700        3

q)select OrderID, MsgType, CumQty, AvgPx, Commission, CommType, CommValue:calcComm'[Commission;CommType;AvgPx;CumQty] from order where OrderID like "00000003"
OrderID    MsgType CumQty AvgPx    Commission CommType CommValue
----------------------------------------------------------------
"00000003" G       20100  253.8855 700        3        700

An Order Cancel Request (MsgType F) indicates the cancellation of any outstanding unfilled order quantity. It can be rejected with an Order Cancel Reject (MsgType 9). Along with the order-cancel message we should get an Execution Report to confirm the cancellation, with OrdStatus 4 to indicate the order is cancelled. As such, the Execution Report alone may be sufficient to indicate a cancellation to end users, with the Order Cancel Request and Order Cancel Reject omitted from the order-state logic. For this example we upsert only the MsgType and AmendTime from the cancel messages.

updCancelOrder:{[t;x]
  x:update AmendTime:TransactTime from x;
  t upsert `OrderID xkey select OrderID,MsgType,AmendTime from x;}

When the order is cancelled we receive the following FIX message to request a cancel. The order table shows an order that is not fully filled, but cancelled with nothing left to fill.
8=FIX.4.4|9=178|35=F|1=accountC|6=25.3156|11=0000004|12=|13=|14=12500|15=EUR|17=100000018|19=|21=3|29=1|30=XPAR|31=0|32=0|151=2500|37=00000004|38=15000|39=|41=|44=|48=EDF.PA|50=CD|52=20131218-13:33:11|54=1|55=EDF|58=|59=1|60=20131218-13:33:11|10=206

q)select OrderID,MsgType,OrdStatus,OrderQty,CumQty from order where MsgType=`F
OrderID    MsgType OrdStatus OrderQty CumQty
--------------------------------------------
"00000004" F       1         15000    12500

The execution report should follow the cancel request to confirm the order has been cancelled and update the status of the order. The confirmation updates the OrdStatus and changes the LeavesQty to reflect the cancellation. We will see how to handle the execution report in the next section.

8=FIX.4.4|9=178|35=8|1=accountC|6=25.3156|11=0000004|12=|13=|14=12500|15=EUR|17=100000018|19=|21=3|29=1|30=XPAR|31=0|32=0|151=2500|37=00000004|38=15000|39=4|41=|44=|48=EDF.PA|50=CD|52=20131218-13:33:11|54=1|55=EDF|58=|59=1|60=20131218-13:33:11|151=0|10=210

q)select OrderID,MsgType,OrdStatus,OrderQty,CumQty,LeavesQty from order where MsgType=`F
OrderID    MsgType OrdStatus OrderQty CumQty LeavesQty
------------------------------------------------------
"00000004" 8       4         15000    12500  0

Execution reports¶

Execution reports (MsgType 8) are sent every time there is a change in the state of the order. We are only interested in certain fields from execution messages. In our case we want to update OrderID, MsgType, OrdStatus, LastQty, LastPx, AvgPx, CumQty, LeavesQty and LastMkt in the order table. AvgPx, CumQty and LeavesQty are derived columns, giving the latest information for the full order. They should be calculated by the OMS and upserted straight into the order state. LastQty contains the quantity executed on the last fill, and LastPx the price of the last fill. It is important always to take the latest OrdStatus from the execution messages; this ensures the order state always reflects the current state of the order.
updExecOrder:{[t;x] t upsert select OrderID, MsgType, OrdStatus, LastQty, LastPx, AvgPx, CumQty, LeavesQty, LastMkt from x; } The following messages show all the execution reports received for one order. The first message is a confirmation of the new order and sets the OrdStatus to 0. The subsequent messages show each fill on the order. The OrdStatus is set to 1 for each fill until order is complete, when we receive an OrdStatus of 2. 8=FIX.4.4|9=178|35=8|49=A|56=B|1=accountA|6=0| 11=0000001|12=0.0002|13=2|14=|15=GBp|17=|19=| 21=|29=|30=|31=|32=|151=10000|37=00000001|38=10000|39=0| 41=|44=|48=VOD.L|50=AB|52=20131218-09:01:00|54=1|55=VOD|58=|59=1| 60=20131218-09:01:00|10=185 8=FIX.4.4|9=178|35=8|49=A|56=B|1=accountA|6=0| 11=0000001|12=0.0002|13=2|14=|15=GBp|17=|19=| 21=|29=|30=|31=|32=|151=10000|37=00000001|38=10000|39=0| 41=|44=|48=VOD.L|50=AB|52=20131218-09:01:03|54=1|55=VOD|58=|59=1| q60=20131218-09:01:03|10=185 8=FIX.4.4|9=178|35=8|49=A|56=B|1=accountA|6=229.5| 11=0000001|12=|13=|14=1500|15=GBp|17=100000001|19=| 21=1|29=1|30=XLON|31=229.5|32=1500|151=8500|37=00000001|38=10000|39=1| 41=|44=|48=VOD.L|50=AB|52=20131218-09:01:11|54=1|55=VOD|58=|59=1| 60=20131218-09:01:11|10=209 8=FIX.4.4|9=178|35=8|49=A|56=B|1=accountA|6=229.6125| 11=0000001|12=|13=|14=6000|15=GBp|17=100000002|19=| 21=1|29=1|30=XLON|31=229.65|32=4500|151=4000|37=00000001|38=10000|39=1| 41=|44=|48=VOD.L|50=AB|52=20131218-09:01:13|54=1|55=VOD|58=|59=1| 60=20131218-09:01:13|10=213 8=FIX.4.4|9=178|35=8|49=A|56=B|1=accountA|6=229.6353846| 11=0000001|12=|13=|14=6500|15=GBp|17=1##|19=| 21=1|29=1|30=XLON|31=229.91|32=500|151=3500|37=0000 0001|38=10000|39=1| 41=|44=|48=VOD.L|50=AB|52=20131218-09:01:14|54=1|55=VOD|58=|59=1| 60=20131218-09:01:14|10=215 8=FIX.4.4|9=178|35=8|49=A|56=B|1=accountA|6=229.7496933| 11=0000001|12=|13=|14=8150|15=GBp|17=100000004|19=| 21=1|29=1|30=XLON|31=230.2|32=1650|151=1850|37=00000001|38=10000|39=1| 41=|44=|48=VOD.L|50=AB|52=20131218-09:01:15|54=1|55=VOD|58=|59=1| 
60=20131218-09:01:15|10=215
8=FIX.4.4|9=178|35=8|49=A|56=B|1=accountA|6=229.6295|11=0000001|12=|13=|14=10000|15=GBp|17=100000005|19=|21=1|29=1|30=XLON|31=229.1|32=1850|151=0|37=00000001|38=10000|39=2|41=|44=|48=VOD.L|50=AB|52=20131218-09:01:46|54=1|55=VOD|58=|59=1|60=20131218-09:01:46|10=210

The final table shows this order (OrderID "00000001") as fully filled. We can also see the cancelled order ("00000004") reflected with OrdStatus 4. The order we amended ("00000003") shows an amended commission value of 700.

q)select OrderID, SecurityID, Side, MsgType, OrdStatus, OrderQty, CumQty, AvgPx, CommValue:calcComm'[Commission;CommType;AvgPx;CumQty] from order
OrderID    SecurityID Side MsgType OrdStatus OrderQty CumQty AvgPx    CommValue
-------------------------------------------------------------------------------
"00000001" VOD.L      1    8       2         10000    10000  229.6295 459.259
"00000002" RIO.L      2    8       2         4000     400    3253.537 260.283
"00000003" BARC.L     1    G       2         20100    20100  253.8855 700
"00000004" EDF.PA     1    8       4         15000    12500  25.3156  63.289
"00000005" VOD.L      2    8       2         3130     3130   229.7559 143.8272

Author¶

Damien Barker is a financial engineer who has worked as a consultant for some of the world's largest financial institutions. Based in London, Damien is currently working on trading and analytics applications at a US investment bank.
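The derived-column arithmetic described above — CumQty accumulating fills, AvgPx as the volume-weighted fill price, LeavesQty as the unfilled remainder, OrdStatus always taken from the latest report — can be sketched outside q. All names here are illustrative, not part of the paper's OMS:

```python
# Illustrative sketch (not the paper's OMS) of the derived order state
# carried on each execution report: CumQty, AvgPx and LeavesQty.
def apply_fill(state, last_qty, last_px, ord_status):
    # volume-weighted average price over all fills so far
    notional = state["AvgPx"] * state["CumQty"] + last_px * last_qty
    state["CumQty"] += last_qty
    state["AvgPx"] = notional / state["CumQty"]
    state["LeavesQty"] = state["OrderQty"] - state["CumQty"]
    state["OrdStatus"] = ord_status     # always keep the latest status
    return state

order = {"OrderQty": 10000, "CumQty": 0, "AvgPx": 0.0,
         "LeavesQty": 10000, "OrdStatus": "0"}
apply_fill(order, 1500, 229.5, "1")     # first fill (tags 32/31)
apply_fill(order, 4500, 229.65, "1")    # second fill
```

After the two fills this reproduces the values carried in the fourth report above: CumQty (14) of 6000, AvgPx (6) of 229.6125 and LeavesQty (151) of 4000.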
if[not[app.describeOnly] and not app.passOnly; / Only want to print this when running to see results .tst.callbacks.expecRan:{[s;e]; app.expectationsRan+:1; r:e[`result]; if[r ~ `pass; app.expectationsPassed+:1]; if[r in `testFail`fuzzFail; app.expectationsFailed+:1]; if[r like "*Error"; app.expectationsErrored+:1]; if[.tst.output.interactive; 1 $[r ~ `pass;"."; r in `testFail`fuzzFail;"F"; r ~ `beforeError;"B"; r ~ `afterError;"A"; "E"]; ]; if[(app.failFast or app.failHard) and not r ~ `pass; s[`expectations]:enlist e; 1 "\n",.tst.output.spec s; if[app.failHard;.tst.halt:1b]; if[app.exit and not app.failHard;exit 1]; ]; } ]; \d . (.tst.loadTests hsym `$) each .tst.app.args; \d .tst if[app.failHard;.tst.app.specs[;`failHard]: 1b]; if[not app.runPerformance;.tst.app.specs[;`expectations]: {x .[;();_;]/ where x[;`type] = `perf} each app.specs[;`expectations]]; if[0 <> count app.runSpecs;.tst.app.specs: app.specs where (or) over app.specs[;`title] like/: app.runSpecs]; if[0 <> count app.excludeSpecs;.tst.app.specs: app.specs where not (or) over app.specs[;`title] like/: app.excludeSpecs]; app.results: $[not app.describeOnly;.tst.runSpec each app.specs;app.specs] if[not .tst.halt; app.passed:all `pass = app.results[;`result]; if[not app.passOnly; if[.tst.output.interactive and not app.describeOnly;-1 "\n"]; if[.tst.output.always or not app.passed; -1 {-1 _ x} .tst.output.top app.results; ]; if[not app.describeOnly; if[.tst.output.interactive; -1 "For ", string[count app.specs], " specifications, ", string[app.expectationsRan]," expectations were run."; -1 string[app.expectationsPassed]," passed, ",string[app.expectationsFailed]," failed. 
",string[app.expectationsErrored]," errors."; ]; ]; ]; if[app.exit; exit `int$not app.passed]; ]; ================================================================================ FILE: qspec_lib_fixture.q SIZE: 2,984 characters ================================================================================ / Need to manage directories or only attempt to use absolute paths (latter is probably easier) .tst.fixtureAs:{[fixtureName;name]; dirPath: (` vs .tst.tstPath) 0; fixtureInDir:{$[any mp:x = (` vs' ps:(key y))[;0];` sv y,first ps where mp;`]}; fixture: $[not ` ~ fp:fixtureInDir[fixtureName;dirPath]; .tst.loadFixture[fp;name]; (`fixtures in key dirPath) and not ` ~ fp:fixtureInDir[fixtureName;` sv dirPath,`fixtures]; .tst.loadFixture[fp;name]; '"Error loading fixture '", (string fixtureName), "', not found in:\n\t", (1 _ string dirPath),"\n\t", (1 _ string ` sv dirPath,`fixtures)]; fixture ^ name } .tst.loadFixture:{[path;name]; $[2 = count fixtureName:` vs (` vs path) 1; / If there is an extension on the file path of the fixture .tst.loadFixtureTxt[path;name]; -11h = type key path; .tst.loadFixtureFile[path;name]; all -11h = (type key@) each ` sv' path,'key path; / If the path is a directory of files (splayed dir) .tst.loadFixtureFile[path;name]; .tst.loadFixtureDir[path;name]]; first fixtureName } .tst.fixture:.tst.fixtureAs[;`] .tst.currentDirFixture:` .tst.loadFixtureDir:{[f;name]; fixtureName: (` vs f) 1; dirFixtureLoaded: not ` ~ .tst.currentDirFixture; if[not dirFixtureLoaded;.tst.saveDir[];]; if[not fixtureName ~ .tst.currentDirFixture; if[dirFixtureLoaded;.tst.removeDirVars[];]; system "l ", 1 _ string f; .tst.currentDirFixture: fixtureName; ]; } .tst.loadFixtureTxt:{[f;name]; fname: ((` vs (` vs f) 1) 0) ^ name; .tst.mock[fname;(raze l[0;1] vs l[0];enlist l[0;1]) 0: 1 _ l: read0 f]; fname } .tst.loadFixtureFile:{[f;name]; .tst.mock[fname:((` vs f) 1) ^ name;get f]; fname } .tst.savedDir:.tst.defaultSavedDir:`directory`vars!("";(`,())!(),(::)) 
.tst.saveDir:{ if[not () ~ dirVars: .tst.findDirVars[]; .tst.savedDir:`directory`vars!(system "cd";(!).(::;get each)@\:` sv' `.,'dirVars); .tst.removeDirVars dirVars]; } .tst.removeDirVars:{![`.;();0b;] $[(::) ~ x;.tst.findDirVars[];x]} .tst.restoreDir:{ if[not ` ~ .tst.currentDirFixture; .tst.removeDirVars[]; .tst.currentDirFixture:`]; if[not "" ~ .tst.savedDir.directory; system "l ", .tst.savedDir.directory; (key .tst.savedDir.vars) set' value .tst.savedDir.vars; .tst.savedDir: .tst.defaultSavedDir;] } / Get a list of files (and thus variables) from the partition directory that do not match special partition directory files: / ie: Exclude the par.txt file and any partition directories (contained in the list .Q.ps), include the partition variable (.Q.pf) and the known partition tables (.Q.pt) / These will be the variables to delete from the top level namespace when we swap out a partition directory fixture .tst.findDirVars:{ $[count where -1h = (type .Q.qp get@) each ` sv' `.,'tables `.; /.Q.qp returns a boolean only when a table is a partition table or a splayed table distinct @[get;`.Q.pf;()],@[get;`.Q.pt;()],pvals where not any (pvals:key `:.) 
like/:(string @[get;`.Q.pv;()]),enlist "par.txt"; ()] } ================================================================================ FILE: qspec_lib_init.q SIZE: 598 characters ================================================================================ .utl.require .utl.PKGLOADING,"/mock.q" .utl.require .utl.PKGLOADING,"/fixture.q" .utl.require .utl.PKGLOADING,"/tests/internals.q" .utl.require .utl.PKGLOADING,"/tests/assertions.q" .utl.require .utl.PKGLOADING,"/tests/ui.q" .utl.require .utl.PKGLOADING,"/tests/spec.q" .utl.require .utl.PKGLOADING,"/tests/expec.q" .utl.require .utl.PKGLOADING,"/tests/fuzz.q" .utl.require .utl.PKGLOADING,"/loader.q" .tst.PKGNAME: .utl.PKGLOADING .tst.loadOutputModule:{[module]; if[not module in ("text";"xunit";"junit"); '"Unknown OutputModule ",module]; .utl.require .tst.PKGNAME,"/output/",module,".q" } ================================================================================ FILE: qspec_lib_loader.q SIZE: 416 characters ================================================================================ \d .tst loadTests:{[paths]; .utl.require each findTests[paths]} findTests:{[paths]; distinct raze suffixMatch[".q"] each distinct (),paths } suffixMatch:{[suffix;path]; if[path like "*",suffix;:enlist path]; f: ` sv' path,'f where not (f:(),key path) like ".*"; d: f where 11h = (type key@) each f; f: f where f like "*",suffix; raze f, .z.s[suffix] each d } testFilePath:{` sv (` vs .tst.tstPath)[0],x} ================================================================================ FILE: qspec_lib_mock.q SIZE: 1,136 characters ================================================================================ \d .tst initStore:store:(enlist `)!enlist (::) removeList:() / Used to replace the variable specified by name with newVal. Existing values will be clobbered / until restored. 
Standard variable re-assignment caveats apply / CAUTION: Mocking out the mock functions and variables is inadvisable mock:{[name;newVal]; name:$[$[1 = c:count vn:` vs name;1b;not null first vn]; / Create fully qualified name if given a local one ` sv .tst.context,name; (2 = c) and ` ~ first vn; '"Can't mock top-level namespaces!"; name]; / Early abort if name will be removed later if[name in removeList; :name set newVal]; if[`dne ~ @[get;name;`dne]; removeList,:name; :name set newVal]; if[not name in key store; store[name]:get name]; name set newVal } / Restores the environment to the previous state before any .tst.mock calls were made restore:{ / Restore all fully qualified symbols (set') . (key;value) @\: 1 _ store; `.tst.store set initStore; / Drop each fully qualified symbol from its respective namespace if[count removeList;(.[;();_;]') . flip ((` sv -1 _;last) @\: ` vs) each removeList]; `.tst.removeList set (); } ================================================================================ FILE: qspec_lib_output_junit.q SIZE: 2,840 characters ================================================================================ .utl.require .tst.PKGNAME,"/output/xml.q" \d .tst printJUnitTime:{string[`int$`second$x],$["000" ~ ns:3#((9 - count n)#"0"),9#n:string nano:(`long$x) mod 1000000000;"";".",ns]} expecTypes:`test`fuzz`perf!("should";"it holds that";"performs") output:()!() output[`top]:{[specs] xml.node["testsuites";()!()] raze output.spec each specs } output[`spec]:{[spec]; e:spec`expectations; attrs:`name`skipped`tests`errors`failures`time!(spec`title;0;count e;sum e[;`result] like "*Error";sum e[;`result]=`testFail;printJUnitTime sum e[;`time]); xml.node["testsuite";attrs;-1 _ ` sv output[`expectation] each e] } output[`expectation]:{[e]; label: expecTypes[e`type]," ",name:e[`desc]; outstr:output[e`type][e]; atr:`name`time!(label;printJUnitTime e[`time]); //if[e[`result] like "*Error";'blah;]; xml.node["testcase";atr] $[(e[`result] like "*Error") or 
count e`failures; output[e`type][e]; "" ] } output[`code]:{[e]; o:""; if[not "{}" ~ last value e[`before];o,:"Before code: \n", (last value e[`before]),"\n"]; o,:"Test code: \n",(last value e[`code]),"\n"; if[not "{}" ~ last value e[`after];o,:"After code: \n", (last value e[`after]),"\n"]; o } output[`anyFailures]:{[t];(`failures in key t) and count t[`failures]} output[`assertsRun]:{[t]; (string t[`assertsRun]), $[1 = t[`assertsRun];" assertion was";" assertions were"]," run.\n" } codeOutput:{[e] (output[`assertsRun] e),output.code e} output[`error]:{[e]; o:$[count e[`errorText]; xml.node["error";`type`message!(e[`errorText];xml.safeString[e`result], " occurred in test execution");xml.cdata codeOutput e]; "" ]; o } output[`test]:{[t]; o:""; o,:output.error[t]; if[output[`anyFailures] t; o,:raze {xml.node["failure";`type`message!(y;"Assertion failure occurred during test");xml.cdata codeOutput x]}[t] each t`failures; ]; o } output[`fuzzLimit]:10; output[`fuzz]:{[t]; o:""; o,:output.error[t]; / If the fuzz assertion errors out after tests have been run, but not all failure processing has completed, the output will not print correctly / Consider trying to figure out how to print the fuzz that the test failed on (store last fuzz?) 
if[(o~"") and output[`anyFailures] t; o,:raze {[t;f] h:"Maximum accepted failure rate: ", (string t[`maxFailRate]), "\n"; h,:"Failure rate was ", (string t[`failRate]), " for ", (string t[`runs]), " runs\n"; h,:"Displaying ", (string displayFuzz:min (.tst.output.fuzzLimit;count t[`fuzzFailureMessages])), " of ", (string count t[`fuzzFailureMessages]), " fuzz failure messages\n"; h,:raze (raze displayFuzz # t[`fuzzFailureMessages]),\:"\n"; xml.node["failure";`type`message!(f;"Fuzz failure occurred during test");xml.cdata h,codeOutput t] }[t] each t`failures; ]; o } output[`perf]:{[p]; } output[`always]:1b output[`interactive]:0b ================================================================================ FILE: qspec_lib_output_text.q SIZE: 2,307 characters ================================================================================ \d .tst
replay:{[tabs;realsubs;schemalist;logfilelist] // realsubs is a dict of `subtabs`errtabs`instrs // schemalist is a list of (tablename;schema) // logfilelist is a list of (log count; logfile) .lg.o[`subscribe;"replaying the log file(s)"]; // store the orig version of upd origupd:@[value;`..upd;{{[x;y]}}]; // only use tables user has access to subtabs:realsubs[`subtabs]; if[count where nullschema:0=count each schemalist; tabs:(schemalist where not nullschema)[;0]; subtabs:tabs inter realsubs[`subtabs]]; // set the replayupd function to be upd globally if[not (tabs;realsubs[`instrs])~(`;`); .lg.o[`subscribe;"using the .sub.replayupd function as not replaying all tables or instruments"]; @[`.;`upd;:;.sub.replayupd[origupd;subtabs;realsubs[`instrs]]]]; {[d] @[{.lg.o[`subscribe;"replaying log file ",.Q.s1 x]; -11!x;};d;{.lg.e[`subscribe;"could not replay the log file: ", x]}]}each logfilelist; // reset the upd function back to original upd @[`.;`upd;:;origupd]; .lg.o[`subscribe;"finished log file replay"]; // return updated version of realsubs @[realsubs;`subtabs;:;subtabs] } subscribe:{[tabs;instrs;setschema;replaylog;proc] // if proc dictionary is empty then exit - no connection if[0=count proc;.lg.o[`subscribe;"no connections made"]; :()]; // check required flags are set, and add a definition to the reconnection logic // when the process is notified of a new connection, it will try and resubscribe if[(not .sub.reconnectinit)&.sub.AUTORECONNECT; $[.servers.enabled; [.servers.connectcustom:{x@y;.sub.autoreconnect[y]}[.servers.connectcustom]; .sub.reconnectinit:1b]; .lg.o[`subscribe;"autoreconnect was set to true but server functionality is disabled - unable to use autoreconnect"]]; ]; // work out from the remote connection what type of tickerplant we are subscribing to // default to `standard tptype:@[proc`w;({@[value;`tptype;`standard]};`);`]; if[null tptype; .lg.e[`subscribe;e:"could not determine tickerplant type"]; 'e]; // depending on the type of tickerplant being 
subscribed to, change the functions for requesting // the tables and subscriptions $[tptype=`standard; [tablesfunc:{key `.u.w}; subfunc:{`schemalist`logfilelist`rowcounts`date!(.u.sub\:[x;y];enlist(.u`i`L);(.u `icounts);(.u `d))}]; tptype in `chained`segmented; [tablesfunc:`tablelist; subfunc:`subdetails]; [.lg.e[`subscribe;e:"unrecognised tickerplant type: ",string tptype]; 'e]]; // pull out the full list of tables to subscribe to utabs:@[proc`w;(tablesfunc;`);()]; // reduce down the subscription list realsubs:reducesubs[tabs;utabs;instrs;proc]; // check if anything to subscribe to, and jump out if[0=count realsubs`subtabs; .lg.o[`subscribe;"all tables have already been subscribed to"]; :()]; // pull out subscription details from the TP details:@[proc`w;(subfunc;realsubs[`subtabs];realsubs[`instrs]);{.lg.e[`subscribe;"subscribe failed : ",x];()}]; if[count details; if[setschema;createtables[details[`schemalist]]]; if[replaylog;realsubs:replay[tabs;realsubs;details[`schemalist];details[`logfilelist]]]; .lg.o[`subscribe;"subscription successful"]; updatesubscriptions[proc;;realsubs[`instrs]]each realsubs[`subtabs]]; // return the names of the tables that have been subscribed for and // the date from the name of the tickerplant log file (assuming the tp log has a name like `: sym2014.01.01 // plus .u.i and .u.icounts if existing on TP - details[1;0] is .u.i, details[2] is .u.icounts (or null) logdate:0Nd; if[tptype in `standard`chained; d:(`subtables`tplogdate!(details[`schemalist][;0];(first "D" $ -10 sublist string last first details[`logfilelist])^logdate)); :d,{(where 101 = type each x)_x}(`i`icounts`d)!(details[`logfilelist][0;0];details[`rowcounts];details[`date])]; if[tptype~`segmented; retdic:`logdir`subtables!(details[`logdir];details[`schemalist][;0]); :retdic,{(where 101 = type each x)_x}`i`icounts`d`tplogdate!details[`logfilelist`rowcounts`date`date]; ] } // wrapper function around upd which is used to only replay syms and tables from the log file that // 
the subscriber has requested replayupd:{[f;tabs;syms;t;x] // escape if the table is not one of the subscription tables if[not (t in tabs) or tabs ~ `;:()]; // if subscribing for all syms then call upd and then escape if[(syms ~ `)or 99=type syms; f[t;x];:()]; // filter down on syms // assuming the log is storing messages (x) as arrays as opposed to tables c:cols[`. t]; // convert x into a table x:select from $[type[x] in 98 99h; x; 0>type first x;enlist c!x;flip c!x] where sym in syms; // call upd on the data f[t;x] } checksubscriptions:{update active:0b from `.sub.SUBSCRIPTIONS where not w in key .z.W;} retrysubscription:{[row] subscribe[row`table;$[((),`) ~ insts:row`instruments;`;insts];0b;0b;3#row]; } // if something becomes available again try to reconnect to any previously subscribed tables/instruments autoreconnect:{[rows] s:select from SUBSCRIPTIONS where ([]procname;proctype)in (select procname, proctype from rows), not active; s:s lj 2!select procname,proctype,w from rows; if[count s;.sub.retrysubscription each s]; } pc:{[result;W] update active:0b from `.sub.SUBSCRIPTIONS where w=W;result} // set .z.pc handler to update the subscriptions table .dotz.set[`.z.pc;{.sub.pc[x y;y]}@[value;.dotz.getcommand[`.z.pc];{[x]}]]; // if timer is set, trigger reconnections $[.timer.enabled and checksubscriptionperiod > 0; .timer.rep[.proc.cp[];0Wp;checksubscriptionperiod;(`.sub.checksubscriptions`);0h;"check all subscriptions are still active";1b]; checksubscriptionperiod > 0; .lg.e[`subscribe;"checksubscriptionperiod is set but timer is not enabled"]; ()] ================================================================================ FILE: TorQ_code_common_timer.q SIZE: 5,024 characters ================================================================================ // Functionality to extend the timer \d .timer enabled:@[value;`enabled;1b] // whether the timer is enabled debug:@[value;`debug;0b] // print when the timer runs any function logcall:(not 
@[value;`.proc.lowpowermode;0b]) & @[value;`logcall;1b] // log each timer call by passing it through the 0 handle nextscheduledefault:@[value;`nextscheduledefault;2h] // the default way to schedule the next timer // Assume there is a function f which should run at time T0, actually runs at time T1, and finishes at time T2 // if mode 0, nextrun is scheduled for T0+period // if mode 1, nextrun is scheduled for T1+period // if mode 2, nextrun is scheduled for T2+period id:0 getID:{:id+::1} // Store a table of timer values timer:([id:`int$()] // the id of the timer timerchange:`timestamp$(); // when the function was added to the timer periodstart:`timestamp$(); // the first time to fire the timer periodend:`timestamp$(); // the last time to fire the timer period:`timespan$(); // how often the timer is run funcparam:(); // the function and parameters to run lastrun:`timestamp$(); // the last run time nextrun:`timestamp$(); // the next scheduled run time active:`boolean$(); // whether the timer is active nextschedule:`short$(); // determines how the next schedule time should be calculated description:()); // a free text description // utility function to check funcparam comes in the correct format check:{[fp;dupcheck] if[dupcheck; if[count select from timer where fp~/:funcparam; '"duplicate timer already exists for function ",(-3!fp),". Use .timer.rep or .timer.one with dupcheck set to false to force the value"]]; $[0=count fp; '"funcparam must not be an empty list"; 10h=type fp; '"funcparam must not be string. 
Use (value;\"stringvalue\") instead"; fp]} // add a repeatingtimer rep:{[start;end;period;funcparam;nextsch;descrip;dupcheck] if[not nextsch in `short$til 3; '"nextsch mode can only be one of ",-3!`short$til 3]; `.timer.timer upsert (getID[];cp;start;0Wp^end;period;check[funcparam;dupcheck];0Np;$[start<cp;period*ceiling(cp-start)%period;0D]+start:(cp:.proc.cp[])^start;1b;nextsch;descrip);} // add a one off timer one:{[runtime;funcparam;descrip;dupcheck] `.timer.timer upsert (getID[];.proc.cp[];.proc.cp[];0Np;0Nn;check[funcparam;dupcheck];0Np;runtime;1b;0h;descrip);} // projection to add a default repeating timer. Scheduling mode 2 is the safest - least likely to back up repeat:rep[;;;;nextscheduledefault;;1b] once:one[;;;1b] // Remove a row from the timer remove:{[timerid] delete from `.timer.timer where id=timerid} removefunc:{[fp] delete from `.timer.timer where fp~/:funcparam} // run a timer function and reschedule if required run:{ // Pull out the rows to fire // Assume we only use period start/end when creating the next run time // sort asc by lastrun so the timers which are due and were fired longest ago are given priority torun:`lastrun xasc 0!select from timer where active,nextrun<x; runandreschedule each torun} nextruntime:-0Wp // run a timer function and reschedule it if required runandreschedule:{ // if debug mode, print out what we are doing if[debug; .lg.o[`timer;"running timer ID ",(string x`id),". Function is ",-3!x`funcparam]]; start:.proc.cp[]; @[$[logcall;0;value];x`funcparam;{update active:0b from `.timer.timer where id=x`id; .lg.e[`timer;"timer ID ",(string x`id)," failed with error ",y,". 
The function will not be rescheduled"]}[x]]; // work out the next run time n:x[`period]+(x[`nextrun];start;.proc.cp[]) x`nextschedule; // check if the next run time falls within the scheduled period // either update the nextrun info, or switch off the timer $[n within x`periodstart`periodend; update lastrun:start,nextrun:n from `.timer.timer where id=x`id; [if[debug;.lg.o[`timer;"setting timer ID ",(string x`id)," to inactive as next schedule time is outside of scheduled period"]]; update lastrun:start,active:0b from `.timer.timer where id=x`id]]; .timer.nextruntime:exec min[nextrun] from .timer.timer; } //Set .z.ts if[.timer.enabled; .dotz.set[`.z.ts;$[@[{value x;1b};.dotz.getcommand[`.z.ts];0b]; {[x;y] .timer.run now:.proc.cp[]; x@y}[value .dotz.getcommand[`.z.ts]]; {if[.proc.cp[]>.timer.nextruntime;.timer.run[.proc.cp[]]]}]];
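The three nextschedule modes documented in the timer comments above select T0 (scheduled start), T1 (actual start) or T2 (finish) as the base for the next run, exactly as the q indexing expression (x[`nextrun];start;.proc.cp[]) x`nextschedule does. A minimal Python sketch of that selection, with illustrative integer timestamps:

```python
# Sketch of the three timer scheduling modes described above:
# mode 0 -> T0+period, mode 1 -> T1+period, mode 2 -> T2+period.
def next_run(t0_scheduled, t1_started, t2_finished, period, mode):
    # mirrors q's (x[`nextrun];start;.proc.cp[]) x`nextschedule indexing
    return (t0_scheduled, t1_started, t2_finished)[mode] + period

# a run scheduled at t=100 that starts late (t=103) and finishes at t=108
runs = [next_run(100, 103, 108, 10, m) for m in (0, 1, 2)]
```

Mode 2 schedules from the finish time, which is why the code above calls it the safest default: a slow function can never back up the queue.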
Dictionary programs¶

From GeeksforGeeks Python Programming Examples

Follow links to the originals for more details on the problem and Python solutions.

Sort dictionary by keys or values¶

Sort keys ascending¶

>>> kv = {2:56, 1:2, 5:12, 4:24, 6:18, 3:323}
>>> sorted(kv.keys())
[1, 2, 3, 4, 5, 6]

q)kv:2 1 5 4 6 3!56 2 12 24 18 323
q)asc key kv
`s#1 2 3 4 5 6

A dictionary is a mapping between two lists: the keys and the values. Keys are commonly of the same datatype; as are values. So most dictionaries are a mapping between two vectors. (Homogeneous lists.) Above, dictionary kv is formed from two vectors by the Dict operator !.

A list of key-value pairs can be flipped into two lists, and passed to (!). to form a dictionary.

q)(!). flip(2 56;1 2;5 12;4 24;6 18;3 323)
2| 56
1| 2
5| 12
4| 24
6| 18
3| 323

Sort entries ascending by key¶

>>> [[k, kv[k]] for k in sorted(kv.keys())]
[[1, 2], [2, 56], [3, 323], [4, 24], [5, 12], [6, 18]]

q)k!kv k:asc key kv
1| 2
2| 56
3| 323
4| 24
5| 12
6| 18

Sort entries ascending by value¶

>>> sorted(kv.items(), key = lambda x:(x[1], x[0]))
[(1, 2), (5, 12), (6, 18), (4, 24), (2, 56), (3, 323)]

q)asc kv
1| 2
5| 12
6| 18
4| 24
2| 56
3| 323

The value of kv is the dictionary’s values.

q)value kv
56 2 12 24 18 323

So an ascending sort of the dictionary returns it in ascending order of values.

Sum of values¶

>>> d = {'a': 100, 'b':200, 'c':300}
>>> sum(d.values())
600

Dictionaries are first-class objects in q, and keywords apply to their values.
q)d:`a`b`c!100 200 300
q)sum d
600

Delete an entry¶

>>> d = {"Arushi" : 22, "Anuradha" : 21, "Mani" : 21, "Haritha" : 21}
>>> # functional removal
>>> {key:val for key, val in d.items() if key != 'Mani'}
{'Arushi': 22, 'Anuradha': 21, 'Haritha': 21}
>>> # removal in place
>>> d.pop('Mani')
21
>>> d
{'Anuradha': 21, 'Haritha': 21, 'Arushi': 22}

q)d:`Anuradha`Haritha`Arushi`Mani!21 21 22 21
q)delete Mani from d / functional removal
Anuradha| 21
Haritha | 21
Arushi  | 22
q)delete Haritha from `d / removal in place
`d
q)d
Anuradha| 21
Arushi  | 22
Mani    | 21

Removal in place in q is effectively restricted to global tables. Within functions, use functional methods.

Sort list of dictionaries by value¶

>>> lis = [{ "name" : "Nandini", "age" : 20},
... { "name" : "Manjeet", "age" : 20 },
... { "name" : "Nikhil" , "age" : 19 }]
>>>
>>> sorted(lis, key=itemgetter('age', 'name'))
[{'name': 'Nikhil', 'age': 19}, {'name': 'Manjeet', 'age': 20}, {'name': 'Nandini', 'age': 20}]
>>> sorted(lis, key=itemgetter('age'),reverse = True)
[{'name': 'Nandini', 'age': 20}, {'name': 'Manjeet', 'age': 20}, {'name': 'Nikhil', 'age': 19}]

A list of q same-key dictionaries is… a table.

q)show lis:(`name`age!(`Nandini;20); `name`age!(`Manjeet;20); `name`age!(`Nikhil;19))
name    age
-----------
Nandini 20
Manjeet 20
Nikhil  19
q)lis iasc lis`age / sort ascending by age
name    age
-----------
Nikhil  19
Nandini 20
Manjeet 20
q)lis{x iasc x y}/`name`age / sort by name within age
name    age
-----------
Nikhil  19
Manjeet 20
Nandini 20

Merge two dictionaries¶

Using Python 2

def merge(dict1, dict2):
    d = {}
    d.update(dict1)
    d.update(dict2)
    return d

>>> d1 = {'a': 10, 'b': 8, 'c': 42}
>>> d2 = {'d': 6, 'c': 4}
>>> merge(d1, d2)
{'a': 10, 'b': 8, 'c': 4, 'd': 6}

or in Python 3

>>> d1 = {'a': 10, 'b': 8, 'c': 42}
>>> d2 = {'d': 6, 'c': 4}
>>> {**d1, **d2}
{'a': 10, 'b': 8, 'c': 4, 'd': 6}

The Join operator (,) in q has upsert semantics.
q)d1:`a`b`c!10 8 42
q)d2:`d`c!6 4
q)d1,d2
a| 10
b| 8
c| 4
d| 6

Grade calculator¶

grades.py

jack = {
    "name":"Jack Frost",
    "assignment" : [80, 50, 40, 20],
    "test" : [75, 75],
    "lab" : [78.20, 77.20]
}
james = {
    "name":"James Potter",
    "assignment" : [82, 56, 44, 30],
    "test" : [80, 80],
    "lab" : [67.90, 78.72]
}
dylan = {
    "name" : "Dylan Rhodes",
    "assignment" : [77, 82, 23, 39],
    "test" : [78, 77],
    "lab" : [80, 80]
}
jess = {
    "name" : "Jessica Stone",
    "assignment" : [67, 55, 77, 21],
    "test" : [40, 50],
    "lab" : [69, 44.56]
}
tom = {
    "name" : "Tom Hanks",
    "assignment" : [29, 89, 60, 56],
    "test" : [65, 56],
    "lab" : [50, 40.6]
}

def get_average(marks):
    total_sum = sum(marks)
    total_sum = float(total_sum)
    return total_sum / len(marks)

def calculate_total_average(students):
    assignment = get_average(students["assignment"])
    test = get_average(students["test"])
    lab = get_average(students["lab"])
    # Result based on weightings
    return (0.1 * assignment + 0.7 * test + 0.2 * lab)

def assign_letter_grade(score):
    if score >= 90: return "A"
    elif score >= 80: return "B"
    elif score >= 70: return "C"
    elif score >= 60: return "D"
    else : return "E"

def class_average_is(student_list):
    result_list = []
    for student in student_list:
        stud_avg = calculate_total_average(student)
        result_list.append(stud_avg)
    return get_average(result_list)

students = [jack, james, dylan, jess, tom]

for i in students :
    print(i["name"])
    print("=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=")
    print("Average marks of %s is : %s " %(i["name"], calculate_total_average(i)))
    print("Letter Grade of %s is : %s" %(i["name"], assign_letter_grade(calculate_total_average(i))))
    print()

class_av = class_average_is(students)
print( "Class Average is %s" %(class_av))
print("Letter Grade of the class is %s " %(assign_letter_grade(class_av)))

$ python3 grades.py
Jack Frost
=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=
Average marks of Jack Frost is : 72.79
Letter Grade of Jack Frost is : C

James Potter
=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=
Average marks of James Potter is : 75.962
Letter Grade of James Potter is : C

Dylan Rhodes
=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=
Average marks of Dylan Rhodes is : 75.775
Letter Grade of Dylan Rhodes is : C

Jessica Stone
=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=
Average marks of Jessica Stone is : 48.356
Letter Grade of Jessica Stone is : E

Tom Hanks
=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=
Average marks of Tom Hanks is : 57.26
Letter Grade of Tom Hanks is : E

Class Average is 72.79
Letter Grade of the class is C

Median not average

The output above displays the class median score and letter grade, not the average. Oops. Shorter programs are easier to get right.

grades.q

/ grade calculator
students:flip`name`assignment`test`lab!flip(
  (`JackFrost;    80 50 40 20; 75 75; 78.20 77.20);
  (`JamesPotter;  82 56 44 30; 80 80; 67.90 78.72);
  (`DylanRhodes;  77 82 23 39; 78 77; 80 80);
  (`JessicaStone; 67 55 77 21; 40 50; 69 44.56);
  (`TomHanks;     29 89 60 56; 65 56; 50 40.6) )
students[`mark]:sum .1 .7 .2*(avg'')students `assignment`test`lab
lg:{"EDCBA"sum 60 70 80 90<\:x} / letter grade from mark
update letter:lg mark from `students;
show students
"Class average: ",string ca:avg students`mark
"Class letter grade: ",lg ca

q)\l grades.q
name         assignment  test  lab        mark   letter
-------------------------------------------------------
JackFrost    80 50 40 20 75 75 78.2 77.2  72.79  C
JamesPotter  82 56 44 30 80 80 67.9 78.72 75.962 C
DylanRhodes  77 82 23 39 78 77 80 80      75.775 C
JessicaStone 67 55 77 21 40 50 69 44.56   48.356 E
TomHanks     29 89 60 56 65 56 50 40.6    57.26  E
"Class average: 66.0286"
"Class letter grade: D"

Mirror characters in a string¶

def mirrorChars(s, k):
    original = 'abcdefghijklmnopqrstuvwxyz'
    reverse = 'zyxwvutsrqponmlkjihgfedcba'
    m = dict(zip(original,reverse))
    lst = list(s)
    ti = range(k-1, len(lst))
    for i in ti:
        lst[i] = m[lst[i]]
    return ''.join(lst)

>>> mirrorChars('paradox', 3)
'paizwlc'

mirrorChars:{[s;k]
  m:{x!reverse x}.Q.a;   / mirror dictionary
  ti:(k-1)_ til count s; / target indexes
  @[s;ti;m] }

q)mirrorChars["paradox";3]
"paizwlc"

Python and q solutions implement the same strategy:

- write a mirror dictionary m
- identify the indexes to be targeted ti
- replace the characters at those indexes with their mirrors

The Python has the further steps of converting the string to a list and back again.

Count frequency¶

>>> lst = ([1, 1, 1, 5, 5, 3, 1, 3, 3, 1, 4, 4, 4, 2, 2, 2, 2])
>>> from collections import Counter
>>> Counter(lst)
Counter({1: 5, 2: 4, 3: 3, 4: 3, 5: 2})

q)lst:1 1 1 5 5 3 1 3 3 1 4 4 4 2 2 2 2
q)count each group lst
1| 5
5| 2
3| 3
4| 3
2| 4

Tuples to dictionary¶

>>> tups = [("akash", 10), ("gaurav", 12), ("anand", 14), ("suraj", 20),
... ("akhil", 25), ("ashish", 30)]
>>> {t[0]:t[1] for t in tups}
{'akash': 10, 'gaurav': 12, 'anand': 14, 'suraj': 20, 'akhil': 25, 'ashish': 30}

q)tups:(("akash";10);("gaurav";12);("anand";14);("suraj";20);("akhil";25);("ashish";30))
q)(!).flip tups
"akash" | 10
"gaurav"| 12
"anand" | 14
"suraj" | 20
"akhil" | 25
"ashish"| 30

Here we flip the tuples to get two lists, which we pass to Apply (.) as the arguments to Dict (!).

The heading suggests a more general problem than turning a list of pairs into a dictionary. In q, the general case, with tuples of unspecified length, is handled by keyed tables.
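For tuples longer than pairs, a Python analogue of that keyed-table approach keys the first element to the remaining fields. The data below is illustrative, extending the tuples above with an extra column:

```python
# Illustrative sketch: tuples of unspecified length reduced to a keyed
# mapping -- first element as the key, remaining elements as the "row",
# the Python analogue of a q keyed table.
tups = [("akash", 10, 170), ("gaurav", 12, 165), ("anand", 14, 175)]
keyed = {t[0]: t[1:] for t in tups}
```

Each value is then a tuple of the non-key columns, so a lookup returns the whole remaining row rather than a single scalar.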
// @kind function // @category utility // @desc Retrieve previously generated model from disk // @param config {dictionary} Information about a previous run of AutoML // including the feature extraction procedure used and the best model // produced // @returns {model} Model retrieved from disk utils.loadModel:{[config] modelLibrary:config`modelLib; loadFunction:$[modelLibrary~`sklearn; .p.import[`joblib][`:load]; modelLibrary~`keras; $[check.keras[]; .p.import[`keras.models][`:load_model]; '"Keras model could not be loaded" ]; modelLibrary~`torch; $[0~checkimport 1; .p.import[`torch][`:load]; '"Torch model could not be loaded" ]; modelLibrary~`theano; $[0~checkimport 5; .p.import[`joblib][`:load]; '"Theano model could not be loaded" ]; '"Model Library must be one of 'sklearn', 'keras', 'torch' or 'theano'" ]; modelPath:config[`modelsSavePath],string config`modelName; modelFile:$[modelLibrary in`sklearn`theano; modelPath; modelLibrary in`keras; modelPath,".h5"; modelLibrary~`torch; modelPath,".pt"; '"Unsupported model type provided" ]; loadFunction pydstr modelFile } // @kind function // @category utility // @desc Generate the path to a model based on user-defined dictionary // input. This assumes no knowledge of the configuration, rather this is the // gateway to retrieve the configuration and models. // @param dict {dictionary} Configuration detailing where to retrieve the // model which must contain one of the following: // 1. Dictionary mapping `startDate`startTime to the date and time // associated with the model run. // 2. Dictionary mapping `savedModelName to a model named for a run // previously executed. 
// @returns {char[]} Path to the model information utils.modelPath:{[dict] pathStem:path,"/outputs/"; model:$[all `startDate`startTime in key dict;utils.nearestModel[dict];dict]; keyDict:key model; pathStem,$[all `startDate`startTime in keyDict; $[all(-14h;-19h)=type each dict`startDate`startTime; "dateTimeModels/", ssr[string[model`startDate],"/run_",string[model`startTime],"/";":";"."]; '"Types provided for date/time retrieval must be a date and", " time respectively" ]; `savedModelName in keyDict; $[10h=type model`savedModelName; "namedModels/",model[`savedModelName],"/"; -11h=type model`savedModeName; "namedModels/",string[model`savedModelName],"/"; '"Types provided for model name based retrieval must be a string/symbol" ]; '"A user must define model start date/time or model name."; ] } // @kind function // @category utility // @desc Extract model meta while checking that the directory for the // specified model exists // @param modelDetails {dictionary} Details of current model // @param pathToMeta {symbol} Path to previous model metadata hsym // @returns {dictionary} Returns either extracted model metadata or errors out utils.extractModelMeta:{[modelDetails;pathToMeta] details:raze modelDetails; modelName:$[10h=type raze value modelDetails;;{sv[" - ";string x]}]details; errFunc:{[modelName;err]'"Model ",modelName," does not exist\n"}modelName; @[get;pathToMeta;errFunc] } // @kind data // @category utility // @desc Dictionary outlining the keys which must be equivalent for // data retrieval in order for a dataset not to be loaded twice (assumes // tabular return under equivalence) // @type dictionary utils.dataType:`ipc`binary`csv! (`port`select;`directory`fileName;`directory`fileName) // @kind data // @category utility // @desc Dictionary with console print statements to reduce clutter // @type dictionary utils.printDict:(!) . 
flip( (`describe;"The following is a breakdown of information for each of the ", "relevant columns in the dataset"); (`errColumns;"The following columns were removed due to type restrictions", " for "); (`preproc;"Data preprocessing complete, starting feature creation"); (`sigFeat;"Feature creation and significance testing complete"); (`totalFeat;"Total number of significant features being passed to the ", "models = "); (`select;"Starting initial model selection - allow ample time for large", " datasets"); (`scoreFunc;"Scores for all models using "); (`bestModel;"Best scoring model = "); (`modelFit;"Continuing to final model fitting on testing set"); (`hyperParam;"Continuing to hyperparameter search and final model fitting ", "on testing set"); (`kerasClass;"Test set does not contain examples of each class removing ", "multi-class keras models"); (`torchModels;"Attempting to run Torch models without Torch installed, ", "removing Torch models"); (`theanoModels;"Attempting to run Theano models without Theano installed, ", "removing Theano models"); (`latexError;"The following error occurred when attempting to run latex", " report generation:\n"); (`score;"Best model fitting now complete - final score on testing set = "); (`confMatrix;"Confusion matrix for testing set:"); (`graph;"Saving down graphs to "); (`report;"Saving down procedure report to "); (`meta;"Saving down model parameters to "); (`model;"Saving down model to ")) // @kind data // @category utility // @desc Dictionary of warning print statements that can be turned // on/off. If two elements are within a key,first element is the warning // given when ignoreWarnings=2, the second is the warning given when // ignoreWarnings=1. // @type dictionary utils.printWarnings:(!) . 
flip( (`configExists;("A configuration file of this name already exists"; "A configuration file of this name already exists and will be ", "overwritten")); (`savePathExists;("The savePath chosen already exists, this run will be", " exited"; "The savePath chosen already exists and will be overwritten")); (`loggingPathExists;("The logging path chosen already exists, this run ", "will be exited"; "The logging path chosen already exists and will be overwritten")); (`printDefault;"If saveOption is 0, logging or printing to screen must be ", "enabled. Defaulting to .automl.utils.printing:1b"); (`pythonHashSeed;"For full reproducibility between q processes of the NLP ", "word2vec implementation, the PYTHONHASHSEED environment variable must ", "be set upon initialization of q. See ", "https://code.kx.com/q/ml/automl/ug/options/#seed for details."); (`neuralNetWarning;("Limiting the models being applied. No longer running ", "neural networks or SVMs. Upper limit for number of targets set to: "; "It is advised to remove any neural network or SVM based models from ", "model evaluation. Currently running with in a number of data points in", " excess of: ")) ) // @kind data // @category utility // @desc Decide how warning statements should be handles. // 0=No warning or action taken // 1=Warning given but no action taken. // 2=Warning given and appropriate action taken. 
// @type int utils.ignoreWarnings:2 // @kind data // @category utility // @desc Default printing and logging functionality // @type boolean utils.printing:1b utils.logging :0b // @kind function // @category api // @desc Print string to stdout or log file // @param filename {symbol} Filename to apply to log of outputs to file // @param val {string} Item that is to be displayed to standard out of any type // @param nline1 {int} Number of new line breaks before the text that are // needed to 'pretty print' the display // @param nline2 {int} Number of new line breaks after the text that are needed // to 'pretty print' the display // @return {::} String is printed to std or to log file utils.printFunction:{[filename;val;nline1;nline2] if[not 10h~type val;val:.Q.s val]; newLine1:nline1#"\n"; newLine2:nline2#"\n"; printString:newLine1,val,newLine2; if[utils.logging; h:hopen hsym`$filename; h printString; hclose h; ]; if[utils.printing;-1 printString]; } // @kind function // @category utility // @desc Retrieve the model which is closest in time to // the user specified `startDate`startTime where nearest is // here defined at the closest preceding model // @param dict {dictionary} information about the start date and // start time of the model to be retrieved mapping `startDate`startTime // to their associated values // @returns {dictionary} The model whose start date and time most closely // matches the input utils.nearestModel:{[dict] timeMatch:sum dict`startDate`startTime; datedTimed :utils.getTimes[]; namedModels:utils.parseNamedFiles[]; if[(();())~(datedTimed;namedModels); '"No named or dated and timed models in outputs folder,", " please generate models prior to model retrieval" ]; allTimes:asc raze datedTimed,key namedModels; binLoc:bin[allTimes;timeMatch]; if[-1=binLoc;binLoc:binr[allTimes;timeMatch]]; nearestTime:allTimes binLoc; modelName:namedModels nearestTime; if[not (""~modelName)|()~modelName; :enlist[`savedModelName]!enlist neg[1]_2_modelName]; 
`startDate`startTime!("d";"t")$\:nearestTime } // @kind function // @category utility // @desc Retrieve the timestamp associated // with all dated/timed models generated historically // @return {timestamp[]} The timestamps associated with // each of the previously generated non named models utils.getTimes:{ dateTimeFiles:key hsym`$path,"/outputs/dateTimeModels/"; $[count dateTimeFiles;utils.parseModelTimes each dateTimeFiles;()] } // @kind function // @category utility // @desc Generate a timestamp for each timed file within the // outputs folder // @param folder {symbol} name of a dated folder within the outputs directory // @return {timestamp} an individual timestamp denoting the date+time of a run utils.parseModelTimes:{[folder] fileNames:string key hsym`$path,"/outputs/dateTimeModels/",string folder; "P"$string[folder],/:"D",/:{@[;2 5;:;":"] 4_x}each fileNames,\:"000000" } // @kind function // @category utility // @desc Retrieve the dictionary mapping timestamp of // model generation to the name of the associated model // @return {dictionary} A mapping between the timestamp associated with // start date/time and the name of the model produced utils.parseNamedFiles:{ (!).("P*";"|")0:hsym`$path,"/outputs/timeNameMapping.txt" } // @kind function // @category utility // @desc Delete files and folders recursively // @param filepath {symbol} File handle for file or directory to delete // @return {::|err} Null on success, an error if attempting to delete // folders outside of automl utils.deleteRecursively:{[filepath] if[not filepath>hsym`$path;'"Delete path outside of scope of automl"]; orderedPaths:{$[11h=type d:key x;raze x,.z.s each` sv/:x,/:d;d]}filepath; hdel each desc orderedPaths; } // @kind function // @category utility // @desc Delete models based on user provided information // surrounding the date and time of model generation // @param config {dictionary} User provided config containing, start date/time // information these can be date/time types in the 
former case or a // wildcarded string // @param pathStem {string} the start of all paths to be constructed, this // is in the general case .automl.path,"/outputs/" // @return {::|err} Null on success, error if attempting to delete folders // which do not have a match utils.deleteDateTimeModel:{[config;pathStem] dateInfo:config`startDate; timeInfo:config`startTime; pathStem,:"dateTimeModels/"; allDates:key hsym`$pathStem; relevantDates:utils.getRelevantDates[dateInfo;allDates]; dateCheck:(1=count relevantDates)&0>type relevantDates; relevantDates:string $[dateCheck;enlist;]relevantDates; datePaths:(pathStem,/:relevantDates),\:"/"; fileList:raze{x,/:string key hsym`$x}each datePaths; relevantFiles:utils.getRelevantFiles[timeInfo;fileList]; utils.deleteRecursively each hsym`$relevantFiles; emptyPath:where 0=count each key each datePaths:hsym`$datePaths; if[count emptyPath;hdel each datePaths emptyPath]; }
.dotz.set[`.z.pw;p0[`pw;value .dotz.getcommand[`.z.pw];;]]; .dotz.set[`.z.po;p1[`po;value .dotz.getcommand[`.z.po];]]; .dotz.set[`.z.pc;p1[`pc;value .dotz.getcommand[`.z.pc];]]; .dotz.set[`.z.wo;p1[`wo;value .dotz.getcommand[`.z.wo];]]; .dotz.set[`.z.wc;p1[`wc;value .dotz.getcommand[`.z.wc];]]; .dotz.set[`.z.ws;p2[`ws;value .dotz.getcommand[`.z.ws];]]; .dotz.set[`.z.exit;p2[`exit;value .dotz.getcommand[`.z.exit];]]; .dotz.set[`.z.pg;p2[`pg;value .dotz.getcommand[`.z.pg];]]; .dotz.set[`.z.pi;p2[`pi;value .dotz.getcommand[`.z.pi];]]; .dotz.set[`.z.ph;p2[`ph;value .dotz.getcommand[`.z.ph];]]; .dotz.set[`.z.pp;p2[`pp;value .dotz.getcommand[`.z.pp];]]; .dotz.set[`.z.ps;p3[`ps;value .dotz.getcommand[`.z.ps];]];] ================================================================================ FILE: TorQ_code_handlers_permissions.q SIZE: 10,977 characters ================================================================================ \d .pm if[@[1b; `.access.enabled;0b]; ('"controlaccess.q already active";exit 1) ] enabled:@[value;`enabled;0b] // whether permissions are enabled maxsize:@[value;`maxsize;200000000] // the maximum size of any returned result set readonly:@[value;`.readonly.enabled;0b] val:$[readonly;reval;eval] valp:$[readonly;{reval parse x};value] / constants ALL:`$"*"; / used to indicate wildcard/superuser access to functions/data err.:(::); err[`func]:{"pm: user role does not permit running function [",string[x],"]"} err[`selt]:{"pm: no read permission on [",string[x],"]"} err[`selx]:{"pm: unsupported select statement, superuser only"} err[`updt]:{"pm: no write permission on [",string[x],"]"} err[`expr]:{"pm: unsupported expression, superuser only"} err[`quer]:{"pm: free text queries not permissioned for this user"} err[`size]:{"pm: returned value exceeds maximum permitted size"} / determine whether the system outputs booleans (permission check only) or evaluates query runmode:@[value;`runmode;1b] / determine whether unlisted variables are 
auto-allowlisted permissivemode:@[value; `permissivemode; 0b] / schema user:([id:`symbol$()]authtype:`symbol$();hashtype:`symbol$();password:()) groupinfo:([name:`symbol$()]description:()) roleinfo:([name:`symbol$()]description:()) usergroup:([]user:`symbol$();groupname:`symbol$()) userrole:([]user:`symbol$();role:`symbol$()) functiongroup:([]function:`symbol$();fgroup:`symbol$()) access:([]object:`symbol$();entity:`symbol$();level:`symbol$()) function:([]object:`symbol$();role:`symbol$();paramcheck:()) virtualtable:([name:`symbol$()]table:`symbol$();whereclause:()) publictrack:([name:`symbol$()] handle:`int$()) / api adduser:{[u;a;h;p] if[u in key groupinfo;'"pm: cannot add user with same name as existing group"]; user,:(u;a;h;p)} removeuser:{[u]user::.[user;();_;u]} addgroup:{[n;d] if[n in key user;'"pm: cannot add group with same name as existing user"]; groupinfo,:(n;d)} removegroup:{[n]groupinfo::.[groupinfo;();_;n]} addrole:{[n;d]roleinfo,:(n;d)} removerole:{[n]roleinfo::.[roleinfo;();_;n]} addtogroup:{[u;g] if[not g in key groupinfo;'"pm: no such group, .pm.addgroup first"]; if[not (u;g) in usergroup;usergroup,:(u;g)];} removefromgroup:{[u;g]if[(u;g) in usergroup;usergroup::.[usergroup;();_;usergroup?(u;g)]]} assignrole:{[u;r] if[not r in key roleinfo;'"pm: no such role, .pm.addrole first"]; if[not (u;r) in userrole;userrole,:(u;r)];} unassignrole:{[u;r]if[(u;r) in userrole;userrole::.[userrole;();_;userrole?(u;r)]]} addfunction:{[f;g]if[not (f;g) in functiongroup;functiongroup,:(f;g)];} removefunction:{[f;g]if[(f;g) in functiongroup;functiongroup::.[functiongroup;();_;functiongroup?(f;g)]]} grantaccess:{[o;e;l]if[not (o;e;l) in access;access,:(o;e;l)]} revokeaccess:{[o;e;l]if[(o;e;l) in access;access::.[access;();_;access?(o;e;l)]]} grantfunction:{[o;r;p]if[not (o;r;p) in function;function,:(o;r;p)]} revokefunction:{[o;r]if[(o;r) in t:`object`role#function;function::.[function;();_;t?(o;r)]]} createvirtualtable:{[n;t;w]if[not n in key 
virtualtable;virtualtable,:(n;t;w)]} removevirtualtable:{[n]if[n in key virtualtable;virtualtable::.[virtualtable;();_;n]]} addpublic:{[u;h]publictrack::publictrack upsert (u;h)} removepublic:{[u]publictrack::.[publictrack;();_;u]} cloneuser:{[u;unew;p] adduser[unew;ul[0] ;ul[1]; value (string (ul:raze exec authtype,hashtype from user where id=u)[1]), " string `", p]; addtogroup[unew;` sv value(1!usergroup)[u]]; assignrole[unew;` sv value(1!userrole)[u]]} / permissions check functions / making a dictionary of the parameters and the argument values pdict:{[f;a] d:enlist[`]!enlist[::]; d:d,$[not ca:count a; (); f~`select; (); (1=count a) and (99h=type first a); first a; /if projection first obtain a list of function and fixed parameters (fnfp) 104h=type value f; [fnfp:value value f; (value[fnfp 0][1])!fnfp[1],a]; /get paramaters and make a dictionary with the arguments 101h<>type fp:value[value[f]][1]; fp!a; ((),(`$string til ca))!a ]; d} fchk:{[u;f;a] r:exec role from userrole where user=u; / list of roles this user has o:ALL,f,exec fgroup from functiongroup where function=f; / the func and any groups that contain it c:exec paramcheck from function where (object in o) and (role in r); k:@[;pdict[f;a];::] each c; / try param check functions matched for roles k:`boolean$@[k;where not -1h=type each k;:;0b]; / errors or non-boolean results treated as false max k} / any successful check is sufficient - e.g. 
superuser trumps failed paramcheck from another role achk:{[u;t;rw;pr] if[fchk[u;ALL;()]; :1b]; if[pr and not t in key 1!access; :1b]; t: ALL,t; g:raze over (exec groupname by user from usergroup)\[u]; / groups can contain groups - chase all exec 0<count i from access where object in t, entity in g, level in (`read`write!(`read`write;`write))[rw]} / expression identification xqu:{(first[x] in (?;!)) and (count[x]>=5)} / Query xdq:{first[x] in .q} / Dot Q isq:{(first[x] in (?;!)) and (count[x]>=5)} query:{[u;q;b;pr] if[not fchk[u;`select;()]; $[b;'err[`quer][]; :0b]]; / must have 'select' access to run free form queries / update or delete in place if[((!)~q[0])and(11h=type q[1]); if[not achk[u;first q[1];`write;pr]; $[b;'err[`updt][first q 1]; :0b]]; $[b; :qexe q; :1b]; ]; / nested query if[isq q 1; $[b; :qexe @[q;1;.z.s[u;;b;pr]]; :1b]]; / select on named table if[11h=abs type q 1; t:first q 1; / virtual select if[t in key virtualtable; vt:virtualtable[t]; q:@[q;1;:;vt`table]; q:@[q;2;:;enlist first[q 2],vt`whereclause]; ]; if[not achk[u;t;`read;pr]; $[b; 'err[`selt][t]; :0b]]; $[b; :qexe q; :1b]; ]; / default - not specifally handled, require superuser if[not fchk[u;ALL;()]; $[b; 'err[`selx][]; :0b]]; $[b; :qexe q; :1b]} dotqd:enlist[`]!enlist{[u;e;b;pr]if[not (fchk[u;ALL;()] or fchk[u;`$string(first e);()]);$[b;'err[`expr][]];:0b];$[b;qexe e;1b]}; dotqd[`lj`ij`pj`uj]:{[u;e;b;pr] $[b;val @[e;1 2;expr[u]];1b]} dotqd[`aj`ej]:{[u;e;b;pr] $[b;val @[e;2 3;expr[u]];1b]} dotqd[`wj`wj1]:{[u;e;b;pr] $[b;val @[e;2;expr[u]];1b]} dotqf:{[u;q;b;pr] qf:.q?(q[0]); p:$[null p:dotqd qf;dotqd`;p]; p[u;q;b;pr]} / flatten an arbitrary data structure, maintaining any strings flatten:{raze $[10h=type x;enlist enlist x;1=count x;x;.z.s'[x]]} / string non-strings, maintain strings str:{$[10h=type x;;string]x}' lamq:{[u;e;b;pr] / get names of all defined variables to look for references to in expression rt:raze .api.varnames[;"v";1b]'[.api.allns[]]; / allow public tables to always be 
accessed rt:rt except distinct exec object from access where entity=`public; / flatten expression & tokenize to extract any possible variable references pq:`$distinct -4!raze(str flatten e),'" "; / filter expression tokens to those matching defined variables rqt:rt inter pq; prohibited:rqt where not achk[u;;`read;pr] each rqt; if[count prohibited;'" | " sv .pm.err[`selt] each prohibited]; $[b; :exe e; :1b]} exe:{v:$[(104<>a)&100<a:abs type first x;val;valp]x; if[maxsize<-22!v; 'err[`size][]]; v} qexe:{v:val x; if[maxsize<-22!v; 'err[`size][]]; v} / check if arg is symbol, and if so if type is <100h i.e. variable - if name invalid, return read error isvar:{$[-11h<>type x;0b;100h>type @[get;x;{[x;y]'err[`selt][x]}[x]]]} mainexpr:{[u;e;b;pr] / store initial expression to use with value ie:e; e:$[10=type e;parse e;e]; / variable reference if[isvar f:first e; if[not achk[u;f;`read;pr]; $[b;'err[`selt][f]; :0b]]; :$[b;qexe $[f in key virtualtable;exec (?;table;enlist whereclause;0b;()) from virtualtable[f];e];1b]; ]; / named function calls if[-11h=type f; if[not fchk[u;f;1_ e]; $[b;'err[`func][f]; :0b]]; $[b; :exe ie; :1b]; ]; / queries - select/update/delete if[isq e; :query[u;e;b;pr]]; / .q keywords if[xdq e;:dotqf[u;e;b;pr]]; / lambdas - value any dict args before razing if[any (100 104h)in type each raze @[e;where 99h=type'[e];value]; :lamq[u;ie;b;pr]]; / if we get down this far we don't have specific handling for the expression - require superuser if[not (fchk[u;ALL;()] or fchk[u;`$string(first e);()]); $[b;'err[`expr][f]; :0b]]; $[b; exe ie; 1b]} / projection to determine if function will check and execute or return bool, and in second arg run in permissive mode expr:mainexpr[;;runmode;permissivemode] allowed:mainexpr[;;0b;0b]
defeps:(!) . flip ( (L2R_LR;0.01); (L2R_L2LOSS_SVC;0.01); (L2R_L2LOSS_SVR;0.001); (L2R_L2LOSS_SVC_DUAL;0.1); (L2R_L1LOSS_SVC_DUAL;0.1); (MCSVM_CS;0.1); (L2R_LR_DUAL;0.1); (L1R_L2LOSS_SVC;0.01); (L1R_LR;0.01); (L2R_L1LOSS_SVR_DUAL;0.1); (L2R_L2LOSS_SVR_DUAL;0.1)) defparam:{[prob;param] if[0f>=param`eps;param[`eps]:defeps param`solver_type]; param} sparse:{{("i"$1+i)!x i:where not 0f=x} each flip x} prob:{`x`y!(sparse x;y)} read_problem:{[s] i:s?\:" "; y:i#'s; x:{(!/)"I: "0:x _y}'[1+i;s]; if[3.5>.z.K;x:("i"$key x)!value x]; `bias`x`y!-1f,"F"$(x;y)} write_problem:{ s:(("+";"")0>x`y),'string x`y; s:s,'" ",/:{" " sv ":" sv' string flip(key x;value x)} each x`x; s:s,\:" "; s} ================================================================================ FILE: funq_linreg.q SIZE: 3,690 characters ================================================================================ \c 20 100 \l funq.q plt:.ut.plot[30;15;.ut.c10] -1"generating 2 sets of independent normal random variables"; / NOTE: matrix variables are uppercase -1 .ut.box["**"]( "suppress the desire to flip matrices"; "matlab/octave/r all store data in columns"; "the following matrix *is* a two column matrix in q"); show X:(.ml.bm 10000?) each 1 1f / perhaps q needs the ability to tag matrices so they can be displayed / (not stored) flipped -1"plotting uncorrelations x,y"; show plt[sum] X -1"using $ to generate correlated x and y"; rho:.8 / correlation X[1]:(rho;sqrt 1f-rho*rho)$X -1"plotting correlations x,y"; show plt[sum] X -1 .ut.box["**"] ( "mmu is usually used for matrix multiplication"; "$ is usually used for vector dot product"; "but they can be used interchangeably"); Y:-1#X X:1#X -1"linear algebra often involves an operation such as"; -1"Y times X transpose or Y*X'. 
Matlab and Octave can parse"; -1"this syntax and perform the multiplication/transpose"; -1"by a change of indexation rather than physically moving the data"; -1"to get this same effect in q, we can change the"; -1"operation from 'Y mmu flip X' to 'X$/:Y'"; -1"timing with the flip"; \ts:100 Y mmu flip X -1"and without"; \ts:100 X$/:Y -1"fitting a line *without* intercept"; show THETA:Y lsq 1#X -1"to fit intercept, prepend a vector of 1s"; show .ml.prepend[1f] X -1"fitting a line with intercept"; show THETA:Y lsq .ml.prepend[1f] 1#X -1"plotting data with fitted line"; show plt[avg] .ml.append[0N;X,Y],'.ml.append[1]X,.ml.plin[X] THETA; -1"fitting with normal equations (fast but not numerically stable)"; .ml.normeq[Y;.ml.prepend[1f] X] if[2<count key `.qml; -1"qml uses QR decomposition for a more numerically stable fit"; 0N!.qml.mlsqx[`flip;.ml.prepend[1f] X;Y]; ]; -1"its nice to have closed form solution, but what if we don't?"; -1"we can use gradient descent as well"; alpha:.1 / learning rate THETA:enlist theta:2#0f / initial values -1"by passing a learning rate and function to compute the gradient"; -1".ml.gd will take one step in the steepest direction"; mf:.ml.gd[alpha;.ml.lingrad[();Y;X]] mf THETA -1"we can then use q's iteration controls"; -1"to run a fixed number of iterations"; 2 mf/ THETA -1"iterate until the cost is within a tolerance"; cf:.ml.lincost[();X;Y] (.4<0N!cf::) mf/ THETA -1"or even until convergence"; mf over THETA -1"to iterate until cost reductions taper off, we need our own function"; -1"we can change the logging behavior by changing the file handle"; -1"no logging"; first .ml.iter[0N;.01;cf;mf] THETA -1"in-place progress"; first .ml.iter[1;.01;cf;mf] THETA -1"new-line progress"; first .ml.iter[-1;.01;cf;mf] THETA -1"by passing an integer for the limit, we can run n iterations"; first .ml.iter[-1;20;cf;mf] THETA l:1000f / l2 regularization factor -1"we can reduce over-fitting by adding l2 regularization"; gf:.ml.lingrad[.ml.l2[l];Y;X] first 
.ml.iter[1;.01;cf;.ml.gd[alpha;gf]] THETA -1"we can also use the fmincg minimizer to obtain optimal theta values"; cgf:.ml.lincostgrad[.ml.l2[l];Y;X] first .fmincg.fmincg[1000;cgf;theta] -1"linear regression with l2 regularization has a closed-form solution"; -1"called ridge regression"; -1"in this example, we fit an un-regularized intercept"; .ml.ridge[0f,count[X]#l;Y;.ml.prepend[1f]X] -1"let's check that we've implemented the gradient calculations correctly"; cf:.ml.lincost[.ml.l2[l];Y;X]enlist:: gf:first .ml.lingrad[.ml.l2[l];Y;X]enlist:: .ut.assert . .ut.rnd[1e-6] .ml.checkgrad[1e-4;cf;gf;theta] cgf:.ml.lincostgrad[.ml.l2[l];Y;X] cf:first cgf:: gf:last cgf:: .ut.assert . .ut.rnd[1e-6] .ml.checkgrad[1e-4;cf;gf;theta] ================================================================================ FILE: funq_liver.q SIZE: 328 characters ================================================================================ liver.f:"bupa.data" liver.b:"http://archive.ics.uci.edu/ml/machine-learning-databases/" liver.b,:"liver-disorders/" -1"[down]loading liver data set"; .ut.download[liver.b;;"";""] liver.f; liver.XY:((6#"E"),"H";",")0:`$liver.f liver.X:-1_liver.XY liver.c:`mcv`alkphos`sgpt`sgot`gammagt`drinks`train liver.t:flip liver.c!liver.XY ================================================================================ FILE: funq_logreg.q SIZE: 3,371 characters ================================================================================ \c 20 100 \l funq.q \l wdbc.q -1"partitioning wdbc data into train and test"; show d:.ut.part[`train`test!3 1;0N?] 
"f"$update "M"=diagnosis from wdbc.t y:first get first `Y`X set' 0 1 cut value flip d`train yt:first get first `Yt`Xt set' 0 1 cut value flip d`test -1"the sigmoid function is used to represent a binary outcome"; plt:.ut.plot[30;15;.ut.c10;sum] show plt .ml.sigmoid .1*-50+til 100 / logistic regression cost -1"to use gradient descent, we must first define a cost function"; THETA:enlist theta:(1+count X)#0f; -1"compute cost of initial theta estimate"; .ml.logcost[();Y;X;THETA] if[2<count key `.qml; -1"qml comes with a minimizer that can be called"; -1"with just this cost function:"; opts:`iter,1000,`full`quiet; /`rk`slp`tol,1e-8 0N!first 1_.qml.minx[opts;.ml.logcost[();Y;X]enlist::;THETA]; ]; -1"we can also define a gradient function to make this process faster"; .ml.loggrad[();Y;X;THETA] -1"check that we've implemented the gradient correctly"; rf:.ml.l2[1] cf:.ml.logcost[rf;Y;X]enlist:: gf:first .ml.loggrad[rf;Y;X]enlist:: .ut.assert . .ut.rnd[1e-6] .ml.checkgrad[1e-4;cf;gf;theta] cgf:.ml.logcostgrad[rf;Y;X] cf:first cgf:: gf:last cgf:: .ut.assert . 
.ut.rnd[1e-6] .ml.checkgrad[1e-4;cf;gf;theta] if[2<count key `.qml; -1"qml can also use both the cost and gradient to improve performance"; 0N!first 1_.qml.minx[opts;.ml.logcostgradf[();Y;X];THETA]; ]; -1"but the gradient calculation often shares computations with the cost"; -1"providing a single function that calculates both is more efficient"; -1".fmincg.fmincg (function minimization conjugate gradient) permits this"; -1 .ut.box["**"]"use '\\r' to create a progress bar with in-place updates"; theta:first .fmincg.fmincg[1000;.ml.logcostgrad[();Y;X];theta] -1"compute cost of initial theta estimate"; .ml.logcost[();Y;X;enlist theta] -1"test model's accuracy"; avg yt="i"$p:first .ml.plog[Xt;enlist theta] -1"lets add some regularization"; theta:(1+count X)#0f; theta:first .fmincg.fmincg[1000;.ml.logcostgrad[.ml.l1[10];Y;X];theta] -1"test model's accuracy"; avg yt="i"$p:first .ml.plog[Xt;enlist theta] show .ut.totals[`TOTAL] .ml.cm["i"$yt;"i"$p] -1"demonstrate a few binary classification evaluation metrics"; -1"how well did we fit the data"; tptnfpfn:.ml.tptnfpfn . "i"$(yt;p) -1"accuracy: ", string .ml.accuracy . tptnfpfn; -1"precision: ", string .ml.precision . tptnfpfn; -1"recall: ", string .ml.recall . tptnfpfn; -1"F1 (harmonic mean between precision and recall): ", string .ml.f1 . tptnfpfn; -1"FMI (geometric mean between precision and recall): ", string .ml.fmi . tptnfpfn; -1"jaccard (0 <-> 1 similarity measure): ", string .ml.jaccard . tptnfpfn; -1"MCC (-1 <-> 1 correlation measure): ", string .ml.mcc . tptnfpfn; -1"plot receiver operating characteristic (ROC) curve"; show .ut.plt roc:2#.ml.roc[yt;p] -1"area under the curve (AUC)"; .ml.auc . 
2#roc fprtprf:(0 0 .5 .5 1;0 .5 .5 1 1;0w .8 .4 .35 .1) -1"confirm accurate roc results"; .ut.assert[fprtprf] .ml.roc[0 0 1 1;.1 .4 .35 .8] -1"use random values to confirm large vectors don't explode memory"; y:100000?0b p:100000?1f show .ut.plt roc:2#.ml.roc[y;p] -1"confirm auc for random data is .5"; .ut.assert[.5] .ut.rnd[.01] .ml.auc . roc ================================================================================ FILE: funq_mansfield.q SIZE: 339 characters ================================================================================ / mansfield park mansfield.f:"141.txt" mansfield.b:"https://www.gutenberg.org/files/141/old/" -1"[down]loading mansfield park text"; .ut.download[mansfield.b;;"";""] mansfield.f; mansfield.txt:read0 `$mansfield.f mansfield.chapters:1_"CHAPTER" vs "\n" sv 35_-373_mansfield.txt mansfield.s:{(2+first x ss"\n\n")_x} each mansfield.chapters ================================================================================ FILE: funq_markov.q SIZE: 830 characters ================================================================================ \l funq.q \l iris.q / markov clustering / https://www.cs.ucsb.edu/~xyan/classes/CS595D-2009winter/MCL_Presentation2.pdf / example from mcl man page / http://micans.org/mcl/man/mcl.html t:flip `k1`k2`v!"ssf"$\:() t,:`cat`hat,0.2 t,:`hat`bat,0.16 t,:`bat`cat,1.0 t,:`bat`bit,0.125 t,:`bit`fit,0.25 t,:`fit`hit,0.5 t,:`hit`bit,0.16 / take max of bidirectional links, enumerate keys k:() m:.ml.inflate[1;0f] .ml.addloop m|:flip m:.ml.full enlist[2#count k],exec (v;`k?k1;`k?k2) from t .ut.assert[(`hat`bat`cat;`bit`fit`hit)] (get`k!) each .ml.interpret .ml.mcl[2;1.5;0f] over m

Q Code Pretraining Corpus

This dataset provides a corpus of Q programming language code and documentation, curated for pretraining large language models and code models.

📊 Dataset Overview

  • Total Data: Over 1.6 million Q tokens, 5+ million characters
  • Documents: 342 training chunks, 39 validation chunks
  • Source Types:
    • Open-source Q repositories (MIT/Apache 2.0 licenses)
    • Official KDB+/Q documentation and tutorials
    • Hand-curated code snippets and scripts
  • Format: Cleaned, deduplicated, chunked for efficient pretraining

🎯 Key Features

  • Q-Only: All data is pure Q language (no mixed Python or non-code noise)
  • Permissive Licensing: All source code is MIT or Apache 2.0, suitable for both research and commercial use
  • Coverage: Includes code from analytics, time-series, database queries, and utilities
  • Filtered & Scored: LLM-assisted quality scoring plus manual review for top-tier data fidelity
  • Chunked & Ready: Delivered as 4k-token chunks for immediate use with Hugging Face, TRL, or custom pipelines
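
The 4k-token chunking mentioned above can be sketched as follows. This is a minimal illustration, not the actual preparation pipeline: the real corpus was presumably chunked with a model tokenizer, so `str.split` here is only a stand-in.

```python
# Sketch of fixed-size chunking (illustrative; whitespace split stands in
# for a real tokenizer).

def chunk_text(text, chunk_size=4096):
    """Split a document into chunks of at most `chunk_size` tokens."""
    tokens = text.split()  # stand-in tokenizer
    return [" ".join(tokens[i:i + chunk_size])
            for i in range(0, len(tokens), chunk_size)]

doc = " ".join(["f:{x*x}"] * 10000)   # toy corpus of 10,000 "tokens"
chunks = chunk_text(doc)
print(len(chunks))                    # 3 chunks: 4096 + 4096 + 1808
```

Each resulting chunk maps directly to one `{"text": ...}` record in the dataset.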

🏗️ Dataset Structure

Each record is a single text chunk containing Q code or documentation.

Splits:

  • train: Main corpus for pretraining (342 chunks)
  • validation: Holdout set for evaluation (39 chunks)

Sample record:

{
    "text": str   # Raw Q code or documentation chunk
}

🧑‍💻 Usage

Loading the Dataset

from datasets import load_dataset

# Load the full Q pretraining dataset
dataset = load_dataset("morganstanley/q_pretrained_dataset")

# Access splits
train_data = dataset["train"]
val_data = dataset["validation"]

Example: Previewing Data

sample = dataset["train"][0]
print(sample["text"])

Training Usage

This dataset is designed for language-model pretraining with next-token prediction or masked language modeling objectives.
It works with Hugging Face Transformers, TRL, or custom training frameworks.
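
The next-token-prediction objective can be sketched in a few lines: inputs are a block of token ids and labels are the same block shifted left by one. (Real training would be handled by a framework such as Transformers; this only illustrates how the chunks feed the objective.)

```python
# Minimal sketch of causal-LM (next-token) data preparation.

def make_lm_pairs(token_ids, block_size=8):
    """Slice a long id sequence into (input, label) blocks for causal LM."""
    pairs = []
    for i in range(0, len(token_ids) - block_size, block_size):
        block = token_ids[i:i + block_size + 1]  # one extra id for the shift
        pairs.append((block[:-1], block[1:]))    # labels = inputs shifted by 1
    return pairs

ids = list(range(20))  # toy token ids
pairs = make_lm_pairs(ids)
print(pairs[0])        # ([0, 1, ..., 7], [1, 2, ..., 8])
```

In practice the tokenized dataset chunks would be concatenated and sliced this way by the training framework's data collator.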

🔤 About Q Programming Language

Q is a vector and array programming language developed by Kx Systems for high-performance analytics, finance, and time-series applications.

It features:

  • Concise, functional, array-oriented syntax
  • Powerful built-in operators for large-scale data manipulation
  • Industry adoption in trading, banking, and real-time analytics

📁 Source Repositories

Major open-source Q repos included:

  • DataIntellectTech/TorQ
  • psaris/qtips
  • psaris/funq
  • KxSystems/ml
  • finos/kdb
  • LeslieGoldsmith/qprof
  • jonathonmcmurray/reQ
  • ...and more

All with permissive licenses (MIT or Apache 2.0).

📈 Data Preparation & Filtering

  • Automated Scoring: Qwen-2.5-32B was used to score each file (0–10) for quality and relevance; only files scoring ≥4 were included.
  • Manual Review: Additional cleaning to remove non-Q files or low-value content.
  • Deduplication: Duplicate and boilerplate code removed.
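
The score-threshold filtering and deduplication steps above can be sketched as below. The scores here are illustrative placeholders; the card states the real scores came from Qwen-2.5-32B, and the actual dedup method is not specified, so content hashing is an assumption.

```python
# Hedged sketch of score filtering (keep files scoring >= 4) plus
# exact-duplicate removal via content hashing (hashing is an assumption).
import hashlib

def filter_and_dedup(files, min_score=4):
    """Keep (text, score) pairs scoring >= min_score, dropping exact dupes."""
    seen, kept = set(), []
    for text, score in files:
        digest = hashlib.sha256(text.encode()).hexdigest()
        if score >= min_score and digest not in seen:
            seen.add(digest)
            kept.append(text)
    return kept

files = [("f:{x*x}", 7), ("f:{x*x}", 9), ("junk", 2), ("g:{x+1}", 5)]
print(filter_and_dedup(files))  # ['f:{x*x}', 'g:{x+1}']
```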

📝 Citation

If you use this dataset in your research, please cite:

@dataset{q_pretraining_corpus_2024,
    title={Q Code Pretraining Corpus},
    author={Brendan Rappazzo Hogan},
    year={2024},
    url={https://huggingface.co/datasets/bhogan/q-pretraining-corpus},
    note={Dataset for domain-adaptive pretraining of language models on the Q programming language}
}

Associated Paper: [Link to paper will be added here]
