The model is heavily inspired by HTM, especially the mini-column structure. However, it uses a completely different representation (a concept-centered representation) instead of SDRs. Each event corresponds to a single mini-column, and the model can handle uncertainty through its logic.
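To make the representational difference concrete, here is a minimal, purely illustrative sketch (not the actual model; the class name `ConceptColumnModel` and the transition-counting logic are my own assumptions for illustration). The idea it shows: instead of an SDR, where an event activates a sparse set of columns, each event maps to exactly one mini-column, and uncertainty is expressed as a distribution over predicted next columns.

```python
from collections import defaultdict

# Hypothetical sketch: each event (concept) gets exactly one mini-column,
# unlike an SDR where an event activates a sparse set of columns.
class ConceptColumnModel:
    def __init__(self):
        self.column_of = {}  # event -> mini-column index
        # prev column -> next column -> observed count
        self.transitions = defaultdict(lambda: defaultdict(int))

    def _column(self, event):
        # Allocate a new mini-column the first time an event is seen.
        if event not in self.column_of:
            self.column_of[event] = len(self.column_of)
        return self.column_of[event]

    def learn(self, sequence):
        # Count observed transitions between consecutive events' columns.
        cols = [self._column(e) for e in sequence]
        for prev, nxt in zip(cols, cols[1:]):
            self.transitions[prev][nxt] += 1

    def predict(self, event):
        # Uncertainty expressed as a probability distribution over next columns.
        counts = self.transitions[self._column(event)]
        total = sum(counts.values())
        return {c: n / total for c, n in counts.items()} if total else {}

model = ConceptColumnModel()
model.learn(list("ABCD"))
model.learn(list("ABCE"))
print(model.predict("C"))  # e.g. {3: 0.5, 4: 0.5} -> uncertain between D and E
```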
Comments and criticisms are welcome and appreciated!
Hey this is cool! Did you do it as part of a PhD program? Just curious.
I have some nitpicks if you want some feedback, but I have never published a paper, so feel free to ignore this random person on the internet.
The abstract says “works well”, but I would recommend putting a number there: accuracy or whatever metric you feel is appropriate. That helps readers see where it sits at the macro level among other algorithms.
In the results section, I did not see a table. Gotta have a table!
More diagrams where possible. Andrew Ng teaches a class on “how to read a paper” where he gives the class a hard five-minute limit. If someone skims this for 5 minutes, they will basically just look at the diagrams. So that would be nice.
Were you able to compare the same task against another algorithm? Sorry, I skimmed really quickly (and missed the results table).
Anyway, great job putting this together. I wish I had the grit to follow through like you have!
Hey, thanks for the feedback~
For the PhD program: maybe. Actually, I’m quite interested in the sensorimotor side of intelligence, and sequence learning is the foundation of perception (based on some justifications). I can see quite a different way to do perception from mainstream approaches like convolutional neural networks. Grid cells in Hawkins’ theory inspired me a lot; I believe he and his team are on the right track.
Regarding your points: in the manuscript, the figures are on the last pages. There is a diagram illustrating the working mechanism of the model, and several figures showing the accuracies.
It is tested on the sequence prediction task, the same one used in (Hawkins & Ahmad, 2016).
For convenience, some of the results are pasted here.
P.S. Theoretically only 50% of the events are predictable, using the same setting as (Hawkins & Ahmad, 2016), though some patterns from the random events are also learned (which is why it reaches a higher accuracy).
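In case it helps anyone reproduce the setup, here is a rough sketch of the kind of stream I mean. It is my own approximation of a (Hawkins & Ahmad, 2016)-style task, not the exact generator used in the manuscript: fixed high-order sequences alternate with random symbols, so only the elements inside the fixed sequences (roughly half the stream) are predictable in theory.

```python
import random

# Rough approximation of a (Hawkins & Ahmad, 2016)-style prediction stream:
# known sequences are interleaved with random symbols, so in theory only the
# elements inside the known sequences (~50% of the stream) are predictable.
SEQUENCES = [list("ABCD"), list("ABCH"), list("EFGH"), list("EFGD")]  # shared prefixes -> high-order
NOISE_ALPHABET = list("abcdefghij")

def generate_stream(n_blocks, noise_len=4, seed=0):
    rng = random.Random(seed)
    stream, predictable = [], []
    for _ in range(n_blocks):
        seq = rng.choice(SEQUENCES)
        stream += seq
        predictable += [True] * len(seq)     # structured, learnable part
        noise = [rng.choice(NOISE_ALPHABET) for _ in range(noise_len)]
        stream += noise
        predictable += [False] * len(noise)  # random, unpredictable part
    return stream, predictable

stream, predictable = generate_stream(n_blocks=1000)
print(f"{sum(predictable) / len(predictable):.0%} of events are predictable in theory")
```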