I haven’t read the official TM learning algorithm; I’ve just followed what Matt describes in HTM School. Maybe that’s the cause?
This is the current TM core code:
xt::xarray<bool> active_cells = applyBurst(predictive_cells_, x); //If no cell in a column is predictive, burst the column
xt::xarray<uint32_t> overlap = cells_.calcOverlap(active_cells, connected_permanence_); //Run through all connections and count, for every cell, how many of its connected cells are active
predictive_cells_ = (overlap > connected_thr_); //Filter out badly predicted cells; all cells passing this stage are predictive
if(learn) {
xt::xarray<bool> apply_learning = selectLearningCell(active_cells); //Select a random cell if the column is bursting
xt::xarray<bool> last_active = active_cells_;
cells_.learnCorrilation(last_active, apply_learning, permanence_incerment_, permanence_decerment_); //inc/dec the connection strengths
cells_.growSynapse(last_active, apply_learning, initial_permanence_); //Grow new connections to all active cells that aren't yet connected
}
active_cells_ = active_cells; //Store the new active cells
return xt::sum(predictive_cells_, -1); //Since only one predictive cell is needed for a column to be predicting, sum up the cells in each column to generate the prediction
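To make the bursting step concrete, here is a minimal sketch of what I understand applyBurst to do, written with plain std::vector instead of xt::xarray so it stands alone (the function name and flat column-major layout are my assumptions, not the actual implementation):

```cpp
#include <cstddef>
#include <vector>

// Sketch only: cells are stored flat, column-major, cells_per_column cells per
// column. For each active column: if any cell in it is predictive, activate
// just the predictive cells; otherwise burst (activate every cell in it).
std::vector<bool> applyBurstSketch(const std::vector<bool>& predictive,
                                   const std::vector<bool>& active_columns,
                                   std::size_t cells_per_column) {
    std::vector<bool> active(predictive.size(), false);
    for (std::size_t col = 0; col < active_columns.size(); ++col) {
        if (!active_columns[col]) continue; // inactive column: no cells fire
        bool any_predicted = false;
        for (std::size_t c = 0; c < cells_per_column; ++c)
            if (predictive[col * cells_per_column + c]) any_predicted = true;
        for (std::size_t c = 0; c < cells_per_column; ++c) {
            std::size_t idx = col * cells_per_column + c;
            // Correctly predicted column: keep only predictive cells.
            // Unpredicted column: burst, i.e. every cell becomes active.
            active[idx] = any_predicted ? static_cast<bool>(predictive[idx]) : true;
        }
    }
    return active;
}
```

With 2 columns of 2 cells and only cell 0 predictive, an active column 0 activates just that cell while active column 1 bursts both of its cells.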
My algorithm tries to connect a learning cell to all previously active cells (including bursting ones), as Matt states in HTM School. This creates a lot of unneeded connections, but I can always remove them later on.
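For comparison, my understanding is that the official TM (as described in Numenta's BAMI pseudocode) limits synapse growth to the previous *winner* cells rather than every previously active cell; in a bursting column only one chosen cell is a winner, so a learning cell gains at most one new synapse per bursting column instead of cells_per_column. A tiny sketch of that filter (function name is hypothetical):

```cpp
#include <cstddef>
#include <vector>

// Sketch: candidate presynaptic cells for new synapses are the cells that
// were both active AND winners on the previous step. In a bursting column
// all cells are in prev_active, but only one is in prev_winner, so the
// candidate set stays small compared to connecting to everything active.
std::vector<std::size_t> growthCandidates(const std::vector<bool>& prev_active,
                                          const std::vector<bool>& prev_winner) {
    std::vector<std::size_t> candidates;
    for (std::size_t i = 0; i < prev_active.size(); ++i)
        if (prev_active[i] && prev_winner[i])
            candidates.push_back(i);
    return candidates;
}
```

For a single bursting column with 4 cells (all previously active, one winner), connecting to all active cells grows 4 synapses; the winner-only rule grows 1.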