HTM Hackers' Hangout - July 5, 2019

Matt, I was very excited by what I found at the Google link for Torch. My thanks to everyone who helped provide this:

The code ran for me!

Unfortunately, the four here experienced an error that I expect you already know about:

I was impressed that (after Run All, or after running the associated code above it) I could change a variable (for example fill_alpha=0.6 to fill_alpha=0.1) and then click the corner of the cell to rerun just that block of code, like in this charting example:
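Something along these lines, as a rough sketch of the kind of cell I mean. I don't know which plotting library the notebook actually uses, so this is a Bokeh version with placeholder data; only the fill_alpha idea carries over:

```python
# Hypothetical Colab charting cell: edit fill_alpha and rerun just this cell.
from bokeh.io import output_notebook
from bokeh.plotting import figure, show

output_notebook()  # render the chart inline in the notebook

p = figure(title="fill_alpha demo")
# Change fill_alpha (e.g. 0.6 -> 0.1) and rerun only this cell to see
# the markers fade, without needing another full Run All.
p.circle([1, 2, 3, 4, 5], [6, 7, 2, 4, 5],
         size=20, fill_color="navy", fill_alpha=0.6)
show(p)
```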

I think this programming environment is exactly what I needed.

Even though I love the hard-core neuroscience research (even into RNA World origins), for you the focus probably does need to be on the machine learning toys like the new Google-powered Torch. I'm now most focused on stacking spatial poolers, as in this thread:
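To make concrete what I mean by stacking, here is a minimal toy sketch in plain NumPy. It is my own simplification, not the actual NuPIC/htm.core SpatialPooler (learning, boosting, and topology are all left out): each stage is just a k-winners-take-all over overlap scores, and the sparse output of the first stage is fed straight in as the input of the second.

```python
import numpy as np

def toy_pooler_connections(n_inputs, n_columns, rng):
    """Fixed random 0/1 'proximal' connections for one toy pooling stage."""
    return (rng.random((n_columns, n_inputs)) < 0.5).astype(np.int32)

def pool(connections, input_sdr, sparsity):
    """k-winners-take-all over overlap scores: the core of spatial pooling."""
    overlaps = connections @ input_sdr                 # overlap per column
    k = max(1, int(sparsity * connections.shape[0]))   # number of winners
    output = np.zeros(connections.shape[0], dtype=np.int32)
    output[np.argsort(overlaps)[-k:]] = 1              # top-k columns win
    return output

rng = np.random.default_rng(0)
input_sdr = (rng.random(1024) < 0.02).astype(np.int32)  # fake encoder output

sp1 = toy_pooler_connections(1024, 2048, rng)            # first stage
sp2 = toy_pooler_connections(2048, 2048, rng)            # stacked on top

layer1 = pool(sp1, input_sdr, 0.02)   # first pooling stage
layer2 = pool(sp2, layer1, 0.02)      # second stage sees layer 1's SDR
print(layer1.sum(), layer2.sum())     # both stay at ~2% sparsity
```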

Using the system for motion in an environment that neuroscientists use for testing live animals keeps the model connected to neuroscience, while at the same time making progress on the long-ago-started four-sensor HTM entity that learns environments we map out for it. I expected that at some point it would become so easy to make a self-exploring critter/finger that it would no longer be worth writing code that moves it around and fakes the exploration.

Starting with invisible surfaces would make it learning by touch. Putting a handle on one side of the circular arena makes it as though the critter is stuck inside a coffee mug. Making a hole in the circle and placing food outside would allow it to get out and learn the cup's external shape. From there it could use four or more of the most detailed 2D maps in the 3D stack for a view from above, and could navigate up and over the rim of the cup to get out. Instead of a buggy-looking critter like the one I ended up with, you could just show a finger, which for testing purposes must still watch out for a moving shock zone at the bottom while daring itself to get zapped a whole bunch of times.
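Roughly what I picture for the touch-only part, as a sketch with made-up names and numbers rather than existing code (the handle, the food, and the 3D map stack are left out): a circular wall that can only be sensed by bumping into it, an escape hole, four touch sensors, and a shock zone that slowly drifts around the floor.

```python
import math
import random

ARENA_RADIUS = 1.0          # circular "coffee mug" wall, invisible to vision
HOLE_ANGLE = 0.0            # direction of the escape hole (radians)
HOLE_WIDTH = 0.3            # angular width of the hole
SENSOR_ANGLES = [0.0, math.pi / 2, math.pi, 3 * math.pi / 2]  # 4 touch sensors
SENSOR_REACH = 0.05         # how far each sensor sticks out from the body

def angle_diff(a, b):
    """Smallest absolute difference between two angles."""
    return abs((a - b + math.pi) % (2 * math.pi) - math.pi)

def touch_bits(x, y, heading):
    """One bit per sensor: 1 if that sensor tip is pressed against the wall."""
    bits = []
    for a in SENSOR_ANGLES:
        sx = x + SENSOR_REACH * math.cos(heading + a)
        sy = y + SENSOR_REACH * math.sin(heading + a)
        at_wall = math.hypot(sx, sy) >= ARENA_RADIUS
        at_hole = angle_diff(math.atan2(sy, sx), HOLE_ANGLE) < HOLE_WIDTH / 2
        bits.append(1 if at_wall and not at_hole else 0)
    return bits

def in_shock_zone(x, y, t):
    """A 60-degree wedge of the floor that slowly drifts around over time."""
    return angle_diff(math.atan2(y, x), 0.001 * t) < math.pi / 6

# Tiny random-walk rollout: the critter/finger wanders, reads its 4 touch
# bits, bounces off the invisible wall, and counts how often it gets zapped.
x = y = heading = 0.0
zaps = 0
for t in range(1000):
    heading += random.uniform(-0.3, 0.3)
    x += 0.01 * math.cos(heading)
    y += 0.01 * math.sin(heading)
    if any(touch_bits(x, y, heading)):       # bumped the wall: step back
        x -= 0.01 * math.cos(heading)
        y -= 0.01 * math.sin(heading)
    if in_shock_zone(x, y, t):
        zaps += 1                            # a real model would learn to avoid this
print("times zapped:", zaps)
```

The random walk is only there to drive the loop; the interesting part would be swapping it out for the HTM entity's own motor output so it dares itself near the zone instead of wandering blindly.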
