I’ve been trying to re-implement the Thousand Brains Theory so people could experiment with it, but without success. After tens, if not a hundred, hours of reading the related lectures and papers, I still have no idea how TBT models cross cortical-column communication (and how exactly it works), how the displacement cells are emulated, etc…
So far I’ve gone through
I think I’m missing some critical part. Any ideas?
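For what it’s worth, my reading of the cross-column mechanism in the 2017 columns paper is that it boils down to lateral voting: each column’s output layer holds a union of candidate objects consistent with what that column has sensed, and lateral connections narrow each column’s union toward objects its neighbors also consider possible. Here’s a minimal sketch of that idea in plain Python (all names are illustrative, not from any Numenta codebase, and real columns would use SDRs rather than sets):

```python
# Sketch of TBT-style lateral voting between cortical columns.
# Each column senses a different feature of the same object and keeps a
# set of candidate objects consistent with its own sensation; lateral
# "voting" is modeled as intersecting the columns' candidate sets.

def recognize(column_observations, object_models):
    """column_observations: list of features, one per column.
    object_models: dict mapping object name -> set of its features.
    Returns the set of objects consistent with every column's vote."""
    # Step 1: each column independently forms its candidate set.
    candidates = []
    for feat in column_observations:
        candidates.append(
            {name for name, feats in object_models.items() if feat in feats}
        )
    # Step 2: lateral voting -- keep only objects every column agrees on.
    return set.intersection(*candidates)

objects = {
    "mug":  {"handle", "rim", "flat_bottom"},
    "bowl": {"rim", "flat_bottom"},
    "pen":  {"clip", "tip"},
}
# Three columns touch three different parts of the same object.
print(recognize(["handle", "rim", "flat_bottom"], objects))  # -> {'mug'}
```

No single column here can disambiguate the object ("rim" alone matches both mug and bowl), but the vote across columns can, which is the point of the lateral connections.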
Not sure how much this’ll help, but I’ve found it illuminating on how inter-layer and inter-column connections are implemented – through the Network API at least.
Several examples that test aspects of TBT have been ported from numenta/research to the htm.core implementation.
Thank you! I’ll look into the code. It seems very clean and expressive. What material did you use to implement these?
Not my implementation. I just ported the code directly from numenta/research/projects.
I don’t remember which projects specifically, but I chose the ones that had the most up-to-date TBT implementations. It shouldn’t be hard to find which ones, since the examples use the same package names.
@marty1885 I am trying to use the “thing classification” code (a.k.a. the l2l4l6 experiment) for a 2D object recognition project, plus visualization with pandaVis. So far the grid-cell union narrows when an object is recognized. I’m using just one macrocolumn, without lateral connections.
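In case it helps anyone following along, the union-narrowing behavior can be understood as filtering a set of (object, location) hypotheses: each movement shifts the candidates by the displacement (path integration), and each sensation discards candidates whose predicted feature doesn’t match. A toy sketch of that loop (plain Python with sets instead of grid-cell SDRs; all names are illustrative, and this is not the htm.core implementation):

```python
# Sketch of how a location-layer union narrows during recognition in a
# single column: candidate (object, location) pairs are shifted by each
# movement and filtered by the feature actually sensed there.

def narrow_union(candidates, movement, sensed_feature, feature_map):
    """candidates: set of (object, (x, y)) hypotheses.
    movement: (dx, dy) displacement of the sensor.
    feature_map: dict mapping (object, (x, y)) -> feature at that spot.
    Returns the hypotheses still consistent after moving and sensing."""
    dx, dy = movement
    shifted = {(obj, (x + dx, y + dy)) for obj, (x, y) in candidates}
    return {h for h in shifted if feature_map.get(h) == sensed_feature}

# Two toy 2D objects described as features at grid locations.
feature_map = {
    ("square", (0, 0)): "corner", ("square", (1, 0)): "corner",
    ("square", (0, 1)): "corner", ("square", (1, 1)): "corner",
    ("tee",    (0, 0)): "corner", ("tee",    (1, 0)): "edge",
}
# First sensation is a corner: start with every matching location.
union = {h for h, f in feature_map.items() if f == "corner"}
# Move right by one and sense another corner: only the square survives.
union = narrow_union(union, (1, 0), "corner", feature_map)
print({obj for obj, _ in union})  # -> {'square'}
```

The real model does the shift with grid-cell modules and the filtering with sensory-driven activity in L4, but the set-intersection intuition is the same.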
This code uses the Network API, which is very nice.
If you also use NAPI, then you and others can use pandaVis, since it generates the visualization automatically.
I will make a tutorial video about this; I’m struggling to find time, but it should be very soon.
Here is the promised video tutorial: