Help with an HTM implementation

Hi all, my name is Jason Toy. I gave a talk with Numenta a couple of days ago; here are my slides from the talk. The talk was on a project I’ve been working on called TouchNet, a 3D dataset and touch simulator for researching sensorimotor touch systems.

I wrote a reinforcement learning agent that learns to touch and discriminate objects as an example of how to use the platform. I’m planning to release the software as open source in a couple of weeks, and I want to include an HTM version as well.

I was wondering if anyone is available to help me write this. Feel free to message me or email me directly (username AT username DOT net) if you would like to help.


I moved this from #nupic:developers into #nupic because it will get a wider audience there.

If anyone has been following along with the research code on the SMI circuit described in A Theory of How Columns in the Neocortex Enable Learning the Structure of the World, here is your chance to contribute.


I’m sorry.


@lscheinkman and I have been talking about this, and we wanted to ask about some details of building an API client. For HTM, the API will need to be quite different from the RL API. We have no concept of reward, a reinforcement loop, or even feedback right now. We will need to deal strictly in locations and sensations somehow. 3D coordinates can be used for locations, but sensations are going to depend on the world and on how collisions are translated into sensations.
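To make that concrete, here is a rough sketch of the per-step data an HTM client would work with. Every name below is a placeholder I made up for discussion, not anything TouchNet or NuPIC currently defines; the only point is that each step produces a (location, sensation) pair rather than an (observation, reward) pair.

```python
# Rough sketch only -- every name here is a placeholder for discussion, not part
# of TouchNet or NuPIC. Each step hands the HTM model a (location, sensation)
# pair rather than an (observation, reward) pair.
from collections import namedtuple

TouchStep = namedtuple("TouchStep", ["location", "sensation"])

def sensation_from_collision(normal, depth):
    """Toy encoder: bucket the contact depth and the dominant normal axis into a
    couple of active bits. A real encoder would use much richer collision data
    (curvature, texture, force) from the simulator."""
    axis = max(range(3), key=lambda i: abs(normal[i]))   # 0 = x, 1 = y, 2 = z
    depth_bucket = min(int(depth * 100), 9)
    return (axis, 3 + depth_bucket)

# One step of the hypothetical loop: after a motor command moves the sensor, we
# read back where it ended up and what it touched.
step = TouchStep(location=(0.10, 0.25, 0.00),
                 sensation=sensation_from_collision(normal=(0.0, 0.0, 1.0), depth=0.02))
```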

Do you have any insight about these problems? Thanks again @jtoy for your work.

I have been reading the docs, trying to understand how an integration would work. If HTM could be plugged into an RL framework somehow, people could at least try to compare performance. We could attempt to make an RL API for HTM. Here is a simple RL agent I wrote against the TouchNet environment (the code is still not production-ready): https://github.com/jtoy/touchnet/blob/master/agents/pytorch_cnn.py
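As a starting point for that comparison, here is one possible shape for an adapter that lets an HTM model sit behind an RL-agent-style interface. This is only a sketch under some assumptions: the act()/observe() method names and the pluggable encoder are invented for illustration, and it uses NuPIC’s TemporalMemory with random motor commands, since HTM has no policy mechanism for selecting actions yet.

```python
# Sketch of an adapter that exposes an RL-agent-style interface but drives an HTM
# model underneath. Assumptions: the act()/observe() method names, the pluggable
# encoder, and the toy default encoding are made up for illustration; only
# TemporalMemory itself comes from NuPIC.
import random

from nupic.algorithms.temporal_memory import TemporalMemory


class HTMAgentAdapter(object):
    """Looks like an RL agent from the outside, but ignores reward internally."""

    def __init__(self, numColumns=2048, numActions=6, encoder=None):
        self.tm = TemporalMemory(columnDimensions=(numColumns,))
        self.numActions = numActions
        # The encoder turns a raw observation into sorted active column indices.
        # The default is only a toy hash so the sketch runs end to end.
        self.encoder = encoder or (lambda obs: sorted(set(int(x) % numColumns for x in obs)))

    def reset(self):
        self.tm.reset()

    def act(self, observation):
        activeColumns = self.encoder(observation)
        self.tm.compute(activeColumns, learn=True)
        # HTM has no policy mechanism yet, so motor commands are chosen at random;
        # the temporal memory is only learning a predictive model of the sensations.
        return random.randrange(self.numActions)

    def observe(self, reward, done):
        # Reward is accepted only so an RL harness can call the same interface.
        pass
```

In a harness like that, the existing RL agents and an HTM model could at least be run on the same TouchNet episodes and compared on object discrimination, even though the HTM side is not optimizing any reward.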


I personally can’t imagine a functional SMI system (one that generates motor commands) without RL, since RL is the mechanism that actually determines which motor commands to activate in a given context (otherwise actions are simply random or have to be pre-programmed). I think this is why there is so much recent interest in this area in the community (at least that is why I’m interested).

If you build it, they will come.
