[erlang-questions] Market data, trading strategies and dropping the last element of a list

Christian S chsu79@REDACTED
Tue Dec 19 15:35:17 CET 2006


On 12/19/06, Joel Reymont <joelr1@REDACTED> wrote:
> Each trading strategy would need to keep the last N quotes for
> several securities (MSFT, IBM, GOOG, etc.) and this N can be up to
> several hundred. Ideally, strategies would subscribe to quotes and
> specify how many quotes they want to keep in the buffer. I would then
> supply functions to retrieve a quote up to M quotes back. This way
> you could do close('MSFT', 100) to retrieve the closing price of MSFT
> 100 quotes back.

What information is associated with each quote?

Only point-in-time and the price?
Volume?
Buyer and Seller?
Transaction id?

100 quotes, or even 1000 quotes, doesn't strike me as very
much data.

> I will likely store market data in disk_logs. I suppose I could
> somehow slide through binaries that I'm storing in the disk_log but I
> would like to insulate trading strategies from having to deal with
> parsing of binaries.

I would use binaries. They are compact, and the records in question
are fixed-size and never change once written (new records do get
appended, though, and that needs special casing). Random access into
a binary can be O(1). The problem is growing them.

You collect the most recent quotes in a list, compact the list into a
binary every N quotes or so, and put those binaries of N quotes into a
list of binaries, which you could in turn compact and/or push out to
mnesia to be replicated to other nodes.
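A minimal sketch of that buffering scheme, assuming a hypothetical
fixed-size quote of a 32-bit id and a 64-bit float price (the module
name, record layout, and chunk size are all illustrative, not from the
original post):

```erlang
-module(quote_buffer).
-export([new/1, add/2, nth_back/2]).

%% Assumed fixed-size quote encoding: 32-bit id + 64-bit float = 12 bytes.
-define(QUOTE_SIZE, 12).

%% Buffer state: {ChunkSize, RecentQuotes (newest first), ListLen, Binaries (newest first)}
new(ChunkSize) -> {ChunkSize, [], 0, []}.

add({N, Recent, Len, Bins}, Quote = {_Id, _Price}) when Len + 1 =:= N ->
    %% Every N quotes, compact the list into one binary (oldest quote first).
    Bin = << <<Id:32, Price/float>> || {Id, Price} <- lists:reverse([Quote | Recent]) >>,
    {N, [], 0, [Bin | Bins]};
add({N, Recent, Len, Bins}, Quote) ->
    {N, [Quote | Recent], Len + 1, Bins}.

%% Fetch the quote M positions back (0 = most recent).
nth_back({_N, Recent, Len, _Bins}, M) when M < Len ->
    lists:nth(M + 1, Recent);
nth_back({_N, _Recent, Len, Bins}, M) ->
    nth_in_bins(Bins, M - Len).

nth_in_bins([Bin | Rest], M) ->
    Count = byte_size(Bin) div ?QUOTE_SIZE,
    case M < Count of
        true ->
            %% O(1) random access into the binary; newest quote sits last.
            Offset = (Count - 1 - M) * ?QUOTE_SIZE,
            <<_:Offset/binary, Id:32, Price/float, _/binary>> = Bin,
            {Id, Price};
        false ->
            nth_in_bins(Rest, M - Count)
    end.
```

A strategy could then answer close('MSFT', 100)-style lookups by
calling nth_back/2 without ever touching the binary format itself.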

That way a request to the subscriber for the 100 last trades in that
security would be answered as "here are the (at most) N last trades;
the rest can be obtained by calling this function that I supply (which
performs an mnesia lookup)".

Alternatively, a subscriber for a security would keep its last N
quotes updated in an mnesia table called "security", looking like
  {SecurityId :: term(), Quotes :: list(), HistoryCount :: integer()},
and then you would be able to look up {SecurityId, HistoryCount} in an
mnesia table called "security_history", looking like
  {{SecurityId, HistoryCount}, NextHistoryCount :: integer(),
   HistoryData :: binary()}.
To find the next block of quote history you look under the key
{SecurityId, NextHistoryCount}, and so on.
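Walking that chain could look roughly like the sketch below. The
record names mirror the table layouts above; the termination condition
(a NextHistoryCount of 0 meaning "no older block") is my assumption,
and dirty reads stand in for a proper transaction:

```erlang
%% Hypothetical records matching the table layouts described above.
-record(security, {id, quotes = [], history_count = 0}).
-record(security_history, {key,          %% {SecurityId, HistoryCount}
                           next_count,   %% key of the next (older) block
                           data}).       %% binary block of quotes

%% Collect all history blocks for a security, newest first.
history(SecId) ->
    [#security{history_count = HC}] = mnesia:dirty_read(security, SecId),
    history_blocks(SecId, HC, []).

history_blocks(_SecId, 0, Acc) ->
    %% Assumption: a next_count of 0 marks the end of the chain.
    lists:reverse(Acc);
history_blocks(SecId, Count, Acc) ->
    [#security_history{next_count = Next, data = Bin}] =
        mnesia:dirty_read(security_history, {SecId, Count}),
    history_blocks(SecId, Next, [Bin | Acc]).
```

The chained-key design means each lookup fetches exactly one block, so
a strategy only pays for as much history as it actually walks back.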

Do you think a single replicated "security" mnesia table could sustain
enough write-locked transactions to keep up with the number of trades
on a large stock market like NASDAQ? Using table fragmentation one
could break it down across several nodes if that is too much.


An mnesia table subscriber could be used to kill off security_history
rows that are too old, keeping the database more compact.
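One way to do that pruning, sketched under the assumption that
HistoryCount increments monotonically per security (the KeepBlocks
window and process structure are illustrative, not from the post):

```erlang
%% Hypothetical pruner: subscribes to security_history writes and deletes
%% the block that falls KeepBlocks generations behind each new one.
start_pruner(KeepBlocks) ->
    spawn(fun() ->
        %% Receive {mnesia_table_event, ...} messages for this table.
        mnesia:subscribe({table, security_history, simple}),
        pruner_loop(KeepBlocks)
    end).

pruner_loop(KeepBlocks) ->
    receive
        {mnesia_table_event,
         {write, {security_history, {SecId, Count}, _Next, _Bin}, _Tid}}
          when Count > KeepBlocks ->
            %% Assumption: counts increase by one per block, so the row
            %% KeepBlocks behind the newest is now safe to drop.
            mnesia:dirty_delete(security_history, {SecId, Count - KeepBlocks});
        _Other ->
            ok
    end,
    pruner_loop(KeepBlocks).
```

Since the subscriber runs outside the writing transaction, the deletes
never add latency to the hot quote-recording path.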



More information about the erlang-questions mailing list