2D Object Recognition Project

I just watched this video, cool vision!

One question about it:

I thought layer one was mostly informed by the hierarchy; that is, regions above this region would communicate their expectation of future states (their prediction) down to this region, as a union of active cells in layer one. Are you suggesting…

  1. that isn’t the case: the union is exclusively produced by the horizontal voting of regions? or…
  2. that is the case, but the union is heavily influenced by horizontal voting? or…
  3. that is the case, and we’re simply not going to model that hierarchical aspect yet?

Thanks!


There is no hierarchy in this proposed model, and the same is true of the Columns and Columns+ papers: these present an idea about just one level of the hierarchy. We are not saying hierarchy doesn’t exist; we are just not attempting to explain it.

So we are suggesting #3 above. :nerd_face:


Please like this post if you would watch someone else in the community doing a Twitch stream on this project while I’m busy elsewhere!


I listed what I expect to be the essential requirements for feeling the shape of a cup or other object. More specifically, it looks like we need to start with a primary motor cortex column for each hemisphere. To add eyes and other complex sensors, a primary somatosensory cortex column can later be added to derive these needed essentials. For now, vestibular and other signals can be taken from the precisely calculated program variables already used to draw the environment. A toy bit-layout sketch follows the list below.

Sensory In
  Vestibular system
    Linear displacement (speed); most simply, the distance from the previous location to the current one.
    Rotational displacement; most simply, the positive or negative change in angle since the previous timestep.
  Touch
    A bit that changes state when the agent bumps into or applies force against a solid.
  Motor, main drive
    1 bit Forward and 1 bit Reverse interoceptive feedback; typically a motor stall, where the agent must reverse out.
    1 bit Left and 1 bit Right interoceptive feedback; typically a motor stall, where the agent must turn the other way.
    Optionally, the 4 motor bits (see below) and/or speed, or a sequence of readings to recall unique routines.

Motor Out
  Motor, main drive
    1 bit Forward and 1 bit Reverse thrust through a speed range. Subtracting the bits gives a +1, 0, -1 shift direction.
    1 bit Left and 1 bit Right thrust through an (optional) speed range. Subtracting the bits gives a +1, 0, -1 turn direction.
      Note: the bilateral columns each have only one possible motor direction; they already oppose each other (-1).
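
Here is a toy sketch of how those inputs and outputs could be packed for the simulated agent; every name and field width below is illustrative, not a settled design:

from dataclasses import dataclass

# Toy layout for the I/O listed above; names and widths are illustrative only.

@dataclass
class SensoryIn:
    linear_displacement: float  # distance from the previous location to the current one
    rotation_delta: int         # +1 / 0 / -1 change in angle since the previous timestep
    touch: bool                 # bumped into, or pressing against, a solid
    stall_forward: bool         # motor stall going forward: must reverse out
    stall_reverse: bool
    stall_left: bool            # motor stall turning: must turn the other way
    stall_right: bool

@dataclass
class MotorOut:
    forward: bool
    reverse: bool
    left: bool
    right: bool

def shift_direction(m: MotorOut) -> int:
    # "Subtracting the bits": Forward minus Reverse gives +1, 0, or -1.
    return int(m.forward) - int(m.reverse)

def turn_direction(m: MotorOut) -> int:
    # Left minus Right gives +1, 0, or -1.
    return int(m.left) - int(m.right)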

Hi,
this is a very cool research project. I’ve been keeping an eye on it for a long time, but I need to catch up with what you have here!

Just FYI, we have a grid cell encoder in htm.core, for both C++ and Python. It also comes with nice visualizations.
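
Usage is roughly like this (a quick sketch; the parameter values below are just examples):

from htm.encoders.grid_cell_encoder import GridCellEncoder

# Size, sparsity, and module periods here are example values only.
gce = GridCellEncoder(size=200, sparsity=0.25,
                      periods=[6, 8.5, 12, 17, 24])
location_sdr = gce.encode([3, 4])  # encode an (x, y) position into an SDR
print(location_sdr)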
Would be great if you could validate it for us and use it.
Cheers, breznak


Hello,
I have enhanced the agent with four sensors, the environment with boundary checks, etc…
Started PR7.

First test result: the agent starts at some position and moves 5 times to the right; the UP sensor is encoded into an SDR with a category encoder and then fed into the sensory layer as proximal input. Here is the result:
[image: HTM2D result]
Note: the SP has learning switched off.
So it seems good :slight_smile:
Using htm.core, I want to move this a little bit further and observe which parts we are missing.
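
For reference, the sensory path is roughly this (a sketch assuming the htm.core Python API; parameter values are illustrative, not the ones from the PR):

from htm.bindings.sdr import SDR
from htm.bindings.encoders import ScalarEncoder, ScalarEncoderParameters
from htm.bindings.algorithms import SpatialPooler

# Category encoder for the UP sensor (0 = nothing, 1 = wall); example values.
p = ScalarEncoderParameters()
p.category = True
p.minimum = 0
p.maximum = 1
p.activeBits = 5
enc = ScalarEncoder(p)

sp = SpatialPooler(inputDimensions=[enc.size], columnDimensions=[1024])

sensed = enc.encode(1)              # UP sensor reports a wall
active = SDR(sp.getColumnDimensions())
sp.compute(sensed, False, active)   # learning switched off, as in the test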


I have used the grid cell encoder output as the direct representation of the LL (Location layer) and wired it up to the secondary distal input of the SL (Sensor layer).
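
In code, that wiring is roughly this (a sketch; I am assuming htm.core’s externalPredictiveInputs support in TemporalMemory, and all dimensions and parameter values are illustrative only):

from htm.bindings.sdr import SDR
from htm.bindings.algorithms import TemporalMemory
from htm.encoders.grid_cell_encoder import GridCellEncoder

gce = GridCellEncoder(size=200, sparsity=0.25, periods=[6, 8.5, 12, 17, 24])

# SL: a TM whose distal context comes from the external location SDR.
tm = TemporalMemory(columnDimensions=[1024], cellsPerColumn=8,
                    externalPredictiveInputs=gce.size)

active_columns = SDR([1024])   # would come from the SP in the real model
active_columns.randomize(0.02)

loc = gce.encode([3, 4])       # LL representation = GCE output, as above
tm.compute(active_columns, True, loc, loc)  # loc as both active and winner cells
print(tm.anomaly)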

I got this result:

There is always an anomaly spike at [7,4] when going to the RIGHT,
and at [4,4] when going to the LEFT.

My first thought was that it is because of the repeating-inputs problem, so I call tm.reset() when the agent comes back to the place where it started ([3,4]).

Then I always got an anomaly of 1.0 on the step right after the reset(). Is that expected?
When I ignore the 1.0 after each reset, the anomaly looks like this:
[image: anomaly plot]
Anomaly always at [7,4] when going to the RIGHT.

The dimensions of the SL and LL and the other parameters are set up really roughly… any recommendations for the dimensions of the layers?

Also, about the LL: right now it is just the agent’s position encoded by the GCE, but as I understand it, it should be an SP with the GCE on its proximal input, right? And I shouldn’t encode the agent’s position but rather its (incremental) movement, keeping the actual position, de facto, inside the LL.
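
To illustrate, a toy sketch of that path-integration idea (skipping the SP stage; all names here are made up):

from htm.encoders.grid_cell_encoder import GridCellEncoder

class ToyLocationLayer:
    # Keeps the position internally; the input is incremental movement,
    # and only the encoded location SDR is exposed to the rest of the model.
    def __init__(self, start=(3, 4)):
        self.x, self.y = start
        self.gce = GridCellEncoder(size=200, sparsity=0.25,
                                   periods=[6, 8.5, 12, 17, 24])

    def move(self, dx, dy):
        self.x += dx
        self.y += dy
        return self.gce.encode([self.x, self.y])

ll = ToyLocationLayer()
loc_sdr = ll.move(1, 0)   # one step to the RIGHT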

The code is on this branch.
Thanks for any help or recommendations


This is expected. After a reset() the TM has no prior context, so no cells are predictive, and the first input after the reset is always scored as a 1.0 anomaly.
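
You can see it with a minimal htm.core sketch (parameter values are arbitrary):

from htm.bindings.sdr import SDR
from htm.bindings.algorithms import TemporalMemory

tm = TemporalMemory(columnDimensions=[64], cellsPerColumn=4)
cols = SDR([64])
cols.randomize(0.05)
for _ in range(10):
    tm.compute(cols, True)   # the repeated input becomes predicted
tm.reset()
tm.compute(cols, True)
print(tm.anomaly)            # 1.0: nothing is predictive right after a reset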

Regarding your other questions, I am sorry to put them off, but they deserve more attention than I can give them today. I will be less busy tomorrow and will get your PR running properly and respond to your questions. I like what you are doing, and I can see some experiments we might start running this way.


Hey @Zbysekz, nice work! I ran your PR and reviewed the test changes. Looks great! Thanks for hooking up htm.core. I merged your PR after updating spaces to tabs. Please give us another PR with your more recent work, and I will review and run it as well.


Ok, thanks Matt, I know that you are busy with BHTM, so I really appreciate it :slight_smile: I will start the PR for the latest work, but about the formatting… I will use tabs instead of spaces, but I am not sure about flake8 and black… I am using Spyder3 as my Python IDE and it seems that I can’t set this up. What are you using?


I added a requirements file and used pip inside Anaconda instead of pipenv, and that worked fine.


Ok, fine. But what about black? When I run it from the command line, it formats the file with a 4-space indent instead of tabs, and it seems that this can’t be configured. I just want us to be aligned on formatting to prevent any unnecessary future work. I personally don’t care whether we use black, tabs, or spaces.

I wish! This is low on my list right now. I’m still trying to keep up with Numenta research on Deep Learning applications. One of these days I’ll be back at building up docs. But I can’t ignore a dedicated community member helping me by doing exactly what I was asking for in my live streams and hooking up htm.core to an agent. You are the best!

I don’t know what “black” is… I thought there were just tabs and spaces? I don’t think I added that dependency… it was @codeallthethingz! :innocent: Will, do you remember why you added this package?

“I don’t know what ‘black’ is”

“black” is “the uncompromising Python code formatter”. It formats Python code in a sensible way, without configuration options. All black-formatted code looks the same.

We use it at work to remove any questions about Python format and style; we accept whatever black returns. In our experience at my job, no one loves the format black uses, but no one hates any one part of it enough to complain.


I’d only add to @Balladeer’s great response that a standard command-line formatter is a benefit for open-source projects because you don’t have to worry about format wars in PRs: you can set up the build server to fail pull requests that don’t meet the standard, and then the person creating the PR can fix them without going through that part of a code review.
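
For example, the build can run black in check mode; it fails with a non-zero exit code without modifying any files:

black --check .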


I am for using the black formatter. Do you agree, @rhyolight? I can simply format all the files just by typing
black path/to/file.py

And I can modify the readme to inform users that they need to set up their IDE for a 4-space indent.


Ok let’s do it, thanks for the info everyone.

Saw this fly across my news stream today.

What immediately jumps out is that they tried hard to keep it biologically based, while only dealing with one layer of the cortical column (it mentions one layer out of six)… HTM might be able to take this up and run with it, or Numenta might want to reach out and say a friendly hello.


This is the latest state of the experiment.

I wanted to figure out the repetitive anomaly peak, but it’s quite hard to see what is going on, so I decided to brush the dust off the 3D HTM visualization tool that I played with a few months ago. It’s here: https://github.com/Zbysekz/HTMpandaVis
It will take me some time to put it into a usable state, but it’s pretty close, and I hope it can be helpful not just for me but for anybody experimenting with HTM systems.


Wow, cool!
Too bad I didn’t know about your 3D visualization; it could have saved me the effort of building my own: https://github.com/alior101/LayersVisualizer