Is this a good idea?

Sparse RBM with boosting is just an algorithm I made up by mashing together a bunch of techniques I particularly like. There's nothing special about it; it's pretty much a spatial pooler, except it learns like an RBM. I don't claim it's better than autoencoders; it's just a special case of a symmetric autoencoder with a single, very wide hidden layer and K-WTA as a bottleneck.
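To make that description concrete, here is a minimal sketch of a symmetric (tied-weight) autoencoder with one wide hidden layer and a K-winners-take-all bottleneck. All names, sizes, and the simplified learning rule are my own illustrative assumptions, not the author's actual code; in particular the weight update uses only the decoder term of the gradient, which is a common shortcut with tied weights.

```python
import numpy as np

rng = np.random.default_rng(0)

n_visible, n_hidden, k = 16, 128, 8            # very wide hidden layer, k active units
W = rng.normal(0.0, 0.1, (n_visible, n_hidden))  # single tied (symmetric) weight matrix

def kwta(a, k):
    """Zero all but the k largest activations (K-winners-take-all)."""
    out = np.zeros_like(a)
    winners = np.argsort(a)[-k:]
    out[winners] = a[winners]
    return out

def forward(x):
    h = kwta(x @ W, k)   # encode, sparsified by the K-WTA bottleneck
    x_hat = h @ W.T      # decode with the transposed (tied) weights
    return h, x_hat

lr = 0.01
for _ in range(500):
    x = (rng.random(n_visible) < 0.3).astype(float)  # random sparse binary input
    h, x_hat = forward(x)
    err = x - x_hat
    W += lr * np.outer(err, h)  # approximate reconstruction-error gradient (decoder term only)
```

The K-WTA step is what makes the hidden code sparse: exactly k of the 128 units can be nonzero for any input, which plays the same role as the sparsity constraint in a spatial pooler.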


Sorry it’s been so long; I’ve been laser-focused on my Satori project. This architecture is designed for deterministic environments. I don’t know if or where I specified that, but it’s really important.

Since our goal is really complex (building the smallest unit of hierarchy-manifesting intelligence nodes), we reduce complexity everywhere else. We start with the assumption that we’re only mapping a deterministic state space, since that’s much simpler than a non-deterministic one.

As a result, errors in all layers, especially the lower ones, are reduced to being non-existent or trivial.

Since we’re mapping a static state space, being in a particular location (state) means we will always move to the same deterministic next state, so the transition structure is very easy to learn perfectly.
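A toy example makes the point: when each state has exactly one successor, the transition function can be learned perfectly by simple memorization after a single visit to each state. The environment and function names below are illustrative assumptions, not part of the architecture.

```python
# Learned model of a deterministic state space: state -> next state.
transitions = {}

def observe(state, next_state):
    # One observation pins down the successor; no averaging or error
    # correction is needed because the environment never contradicts it.
    transitions[state] = next_state

def predict(state):
    return transitions.get(state)

# A tiny deterministic environment where states cycle 0 -> 1 -> 2 -> 0.
env = {0: 1, 1: 2, 2: 0}
state = 0
for _ in range(10):
    next_state = env[state]
    observe(state, next_state)
    state = next_state
```

After one pass through the cycle the model's predictions match the environment exactly, which is the sense in which a deterministic state space is "really easy to learn perfectly"; a stochastic environment would instead require estimating transition probabilities.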
