Hi
In the paper "The HTM Spatial Pooler—A Neocortical Algorithm for Online Sparse Distributed Coding," a figure is given as follows. Can anyone analyze these charts? It is surprising that the error in the "No SP learning" condition is lower than in the "SP learning, no boosting" condition. Does anyone know the reason for this? Does this hold across the whole dataset, and does it apply to other datasets as well?
In general, is "No SP learning" more effective than "SP learning, no boosting"?