@jon I just re-read your reply. I’ve experienced the same time and tedium requirements applying machine learning to prediction of financial markets. I was literally obsessed for the first year, staying up all night several nights a week. I’m sure I grew a few grey hairs in the process.
I finally decided to automate many of the tedious, time-consuming tasks via a genetic algorithm, and it’s paid off: more experiments now run on their own without manual parameter tuning.
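To give a feel for what I mean by evolving parameters instead of hand-tuning them, here’s a minimal sketch of the genetic-algorithm loop. Everything in it is illustrative, not my actual system: the fitness function just scores distance to a toy optimum (in reality it would be backtest accuracy), and the parameter names `learning_rate` and `window` are made up for the example.

```python
import random

# Toy optimum the GA should discover. In a real setup, fitness would come
# from running a backtest, not from comparing against known-good values.
TARGET = {"learning_rate": 0.01, "window": 50}

def fitness(params):
    # Higher is better: negative distance to the (toy) optimal parameters.
    return (-abs(params["learning_rate"] - TARGET["learning_rate"])
            - abs(params["window"] - TARGET["window"]) / 100.0)

def random_params():
    return {"learning_rate": random.uniform(0.0001, 0.1),
            "window": random.randint(5, 200)}

def crossover(a, b):
    # Uniform crossover: each gene comes from one parent at random.
    return {k: random.choice([a[k], b[k]]) for k in a}

def mutate(p, rate=0.2):
    q = dict(p)
    if random.random() < rate:
        q["learning_rate"] *= random.uniform(0.5, 2.0)
    if random.random() < rate:
        q["window"] = max(5, q["window"] + random.randint(-10, 10))
    return q

def evolve(generations=30, pop_size=40):
    pop = [random_params() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 4]  # keep the best quarter
        children = [
            mutate(crossover(random.choice(elite), random.choice(elite)))
            for _ in range(pop_size - len(elite))
        ]
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
```

The payoff is that the same loop runs unattended across many experiments at once, which is exactly the tedium it removed for me.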
One additional success it led to was based, at least in part, on a theory that many stocks/ETFs/currencies might move through periods where they become more or less predictable. For example, it makes sense that stocks/funds based on corn would tend to fall around harvest time due to increased supply (an oversimplification, I know, but just for example’s sake). One could analyze weather patterns to predict a higher probability of severe crop-damaging storms in the Midwest, yielding higher predictability of corn prices over long time periods.
But, there’s always the random chance a freak storm based on who knows what destroys crops at a whim, introducing seemingly random patterns to a well understood system.
The presumption in this example is that corn prices might be moderately to highly predictable during normal weather patterns, but random storms or pest/disease outbreaks might introduce unforeseen variables, resulting in seemingly “random” price swings or “unpredictability”.
With that theory in mind, it seemed I needed to find a way to detect whether a stock was in a predictable period or not. That proved quite difficult, so I finally decided to predict every stock on the market and create a metric showing how predictable a particular stock was over the specified period of time. That way, I could just take the top 10 or 20 stocks for the latest time period and voilà, they’d be guaranteed to be the most predictable (by that metric, at least).
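The rank-and-take-the-top-N idea can be sketched in a few lines. This is a hypothetical stand-in, not the production metric: I’m scoring predictability as `1 / (1 + mean absolute error)` between a model’s predictions and actual prices over the latest window, and the symbols and numbers are invented for the example.

```python
def predictability(pairs):
    # pairs: list of (predicted, actual) prices over the evaluation window.
    # One simple metric: 1 / (1 + mean absolute error). Higher = more predictable.
    mae = sum(abs(p - a) for p, a in pairs) / len(pairs)
    return 1.0 / (1.0 + mae)

def top_predictable(results, n=10):
    # results: {symbol: [(predicted, actual), ...]}
    ranked = sorted(results, key=lambda s: predictability(results[s]), reverse=True)
    return ranked[:n]

# Toy data: symbol "AAA" is tracked closely by its model, "BBB" poorly.
results = {
    "AAA": [(10.0, 10.1), (10.2, 10.2), (10.4, 10.3)],
    "BBB": [(10.0, 12.5), (10.2, 8.0), (10.4, 13.9)],
}
print(top_predictable(results, n=1))  # → ['AAA']
```

Whatever metric you choose, the key point is that ranking sidesteps the hard problem of detecting predictable regimes directly: you just keep whatever is currently scoring best.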
The next challenge was how to predict the entire stock market in real time, continuously, while autonomously adapting to changing market conditions. As it turns out, that problem is solved by two things:
- lots and lots of servers
- tons of real-time and historical market data
Or in other words, a big budget to buy all the servers and data. Couple a big budget with the right technology and you have Aqua. Actually, Aqua was just a code name used on this forum. The actual name of the product we’ll be releasing is DaviidAI (re-branded from DaviidtheQuant). The idea behind the name is DaviidAI VS the Goliaths on Wall Street. Round one, ready fight!
In the next week we’ll be publishing a Kickstarter campaign to raise funding for the servers we need and for additional feeds of real-time stock data. I’ll post a link to the Kickstarter here. We’re pretty excited about it! And we’re all too happy to plug Numenta if @rhyolight approves!