Okay, I'll research temporal pooling a bit.
I should be able to tell if the neural net evolution code is working out by tomorrow. If so, I'll put the code in the public domain and will likely do a Java version.
What parts of AI are actually working out well at the moment?
Evolutionary algorithms have been working well on engineering problems for a number of years.
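For concreteness, here is a minimal sketch of the sort of evolutionary algorithm that gets used on engineering-style objectives: a (1+1) evolution strategy with Gaussian mutation. The objective and all names are toy/hypothetical, not anyone's actual engineering code.

```python
import random

def one_plus_one_es(f, x, sigma=0.1, iters=2000, seed=0):
    """Minimal (1+1) evolution strategy: mutate the current solution,
    keep the child only if it is no worse than the parent."""
    rng = random.Random(seed)
    fx = f(x)
    for _ in range(iters):
        child = [xi + rng.gauss(0, sigma) for xi in x]
        fc = f(child)
        if fc <= fx:
            x, fx = child, fc
    return x, fx

# Toy objective: sphere function, minimum at the origin.
sphere = lambda v: sum(vi * vi for vi in v)
best, best_val = one_plus_one_es(sphere, [5.0, -3.0, 2.0])
```

Real engineering uses would swap in a simulator or cost model for `sphere`, and usually adapt `sigma` over time instead of keeping it fixed.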
Deep neural networks clearly are working. What is less clear is the best way to train them. There is a ton of engineering work you could do to find far better ways than gradient descent. I like the idea of getting creative in, say, 100 dimensions at a time, relying on the overcompleteness of the net to allow more freedom of choice. That way little clusters of intelligence form inside a larger net.
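One way to read "getting creative in 100 dimensions at a time" is as a hill-climbing step restricted to a small random subspace of the weights. This is a hypothetical sketch of that reading, not the author's actual code; the weight vector is flattened to a plain list and the loss is a black box.

```python
import random

def subspace_evolve_step(weights, loss, k=100, sigma=0.05, rng=random):
    """Perturb only k randomly chosen weights; keep the change if loss improves.

    The idea: in an overcomplete net, a step restricted to a small random
    subspace still has enough freedom to find improvements, and different
    subspaces can specialize into little clusters of competence.
    """
    idx = rng.sample(range(len(weights)), min(k, len(weights)))
    trial = list(weights)
    for i in idx:
        trial[i] += rng.gauss(0, sigma)
    return trial if loss(trial) < loss(weights) else weights
```

In a real net you would cache `loss(weights)` instead of recomputing it each step, and evaluate the loss on minibatches rather than the full data.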
Not many people may be looking at what I call manifold learning, learning driven purely by the input data, but I think it could be very useful for an agent trying to plan its future actions.
You could imagine a deep neural net being allowed to access a manifold memory system rather than having to try to design its own memory storage/access behavior. That is a much simpler problem.
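A toy version of that "manifold memory system" might just be a store of observed state vectors with nearest-neighbour recall, so the net reads back similar past states instead of learning its own storage/access scheme. This is a sketch under that assumption (all names hypothetical); a real system would use an approximate index rather than brute force.

```python
import math

class ManifoldMemory:
    """Toy manifold memory: store observed state vectors and answer
    nearest-neighbour queries, so a planning net can read similar past
    states instead of designing its own memory addressing."""

    def __init__(self):
        self.points = []

    def store(self, state):
        # Copy so later mutation of the caller's list can't corrupt memory.
        self.points.append(list(state))

    def recall(self, query, k=3):
        # Brute-force k-nearest-neighbour lookup by Euclidean distance.
        def dist(p):
            return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, query)))
        return sorted(self.points, key=dist)[:k]
```

A net would then treat `recall` as an external read operation, which is a much simpler interface than learning write/read addressing end to end.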
Reinforcement learning and temporal pooling I haven't coded, so I can't say.