HTM Hackers' Hangout - July 5, 2019

HTM Hackers’ Hangout is a live monthly Google Hangout held for our online community. Anyone is free to join in the discussion either by connecting directly to the hangout or commenting on the YouTube video during the live stream.

9AM PDT

I am open to suggestions for topics. I would like to chat a bit about:

  • Numenta’s research direction (more focus on machine learning)
  • nupic.torch progress (new examples from @lscheinkman)
  • spammers on the forum
  • Twitch vs YouTube (and my plans going forward)
  • BHTMS progress

Please post below anything you would like to chat about. Who is planning on joining in live?

1 Like

Summary of research meetings on capsules vs HTM?
Grandma capsules?
Really?

1 Like

I haven’t asked @mrcslws if he can make it yet, so we’ll see.

3 Likes

The last live stream sort of contained the summary.

Both @mrcslws and @lucasosouza should be able to join us.

4 Likes

Many here are still trying to work out HTM.

Consider that for many, capsules are cool because of the halo around Hinton. This could be a path to HTM for the capsule crowd.

I think a “compare and contrast” would give “the rest of us” an explanation that helps evangelize how HTM is different and what its advantages are. I would start with the hackers' hangout to frame the issues and perhaps follow it up with an HTM School lesson.

For that matter, a compare and contrast with DL could be an interesting HTM School session.

Just saying …

5 Likes

I would very much like to hear a discussion about classifying low-level input into more abstract, higher-level objects. In short, how are new objects learned (e.g. apple -> fruit, banana -> fruit, corn -> vegetable)?

2 Likes

Live stream has been activated. You can join here to get into the conversation, or just watch live and join in the chat room there. The stream starts in 20 minutes, but I’ll be hanging out to chat beforehand.

3 Likes

Points for consideration in the Hackers' Hangout:

  1. Capsules are all about levels and much of the power is in combining the low-level vectors into higher-level representations.

  2. I can see that, at the level of the capsule, there is almost a direct one-to-one correspondence with HTM macro columns. Yes, one uses SDRs and the other uses vectors, but that is almost a quibble: it is arguing about data representation rather than the underlying process (see the toy sketch after this list).

  3. Where I see capsules falling down is the lack of formation of a distributed representation. The current formulation still has a reduction pyramid to grandmother cells. Yuch!

  4. Numenta could very well run into the same problems if they do not implement the H of HTM with distributed representations.
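
To make point 2 concrete, here is a toy Python sketch (my own illustration, nobody's actual code) of the two similarity measures: cosine between short dense capsule-style pose vectors, versus bit overlap between large sparse SDRs. Both say "these two things are nearly the same"; only the encoding differs.

```python
import numpy as np

rng = np.random.default_rng(42)

# Capsule-style: a short dense pose vector; similarity is cosine.
pose_a = rng.normal(size=8)
pose_b = pose_a + 0.1 * rng.normal(size=8)      # slightly different view
cosine = pose_a @ pose_b / (np.linalg.norm(pose_a) * np.linalg.norm(pose_b))

# HTM-style: a large sparse binary SDR; similarity is bit overlap.
n, w = 2048, 40                                  # 2048 bits, ~2% active
active_a = rng.choice(n, size=w, replace=False)
sdr_a = np.zeros(n, dtype=bool)
sdr_a[active_a] = True

# A "similar" SDR keeps 30 of the 40 active bits and moves the rest.
keep = active_a[:30]
moved = rng.choice(np.setdiff1d(np.arange(n), active_a), size=10, replace=False)
sdr_b = np.zeros(n, dtype=bool)
sdr_b[np.concatenate([keep, moved])] = True

print(f"cosine(pose_a, pose_b) = {cosine:.3f}")                   # near 1.0
print(f"overlap(sdr_a, sdr_b)  = {np.sum(sdr_a & sdr_b)} / {w}")  # 30 / 40
```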

3 Likes

I realise now how foolish my question was. After all the papers and videos Numenta has produced, I should have known this. But it is important for me to understand where my thinking error was.

It makes sense that the (noisy) sensory information needs to be stabilized through a spatial pooler before it can be considered a feature as far as HTM is concerned.
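
To check my understanding, here is a toy numpy sketch of that stabilization (my own simplification, not the real NuPIC SpatialPooler): columns that repeatedly win strengthen their synapses to the active input bits, so the same set of columns keeps winning even when a large fraction of the input bits are flipped.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_cols, k = 512, 256, 10      # input bits, columns, winning columns

# Synapse permanences; a synapse is "connected" when permanence > 0.5.
perm = rng.uniform(0.35, 0.65, size=(n_cols, n_in))

def active_columns(x):
    overlaps = (perm > 0.5) @ x       # connected-synapse overlap per column
    return np.argsort(overlaps)[-k:]  # k winners take all

pattern = np.zeros(n_in, dtype=int)
pattern[rng.choice(n_in, size=40, replace=False)] = 1

# Hebbian learning: winning columns grow synapses to active input bits
# and shrink synapses to inactive ones.
for _ in range(10):
    winners = active_columns(pattern)
    perm[winners] += np.where(pattern == 1, 0.05, -0.05)
    perm = perm.clip(0.0, 1.0)

stable = set(active_columns(pattern))

# Flip 16 random input bits (40% of the pattern width) to simulate noise.
noisy = pattern.copy()
noisy[rng.choice(n_in, size=16, replace=False)] ^= 1

same = len(stable & set(active_columns(noisy)))
print(f"winning columns unchanged under noise: {same} of {k}")
```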

Thanks for the clarification.

4 Likes

It was not a foolish question if the answer clarified something in your mind.

3 Likes

Thank you Matt/Marcus for addressing my question. I hate that I missed the stream (I had the time wrong), but I did watch the recording, and your response to my question was extremely helpful. The hard thing about classification, and a problem for modern-day solutions, is concept drift. I have been studying classifiers versus concept drift for several years now, and for my dissertation I have proposed building a sequence classifier, built on an HTM system, that can classify an executable file as stalling code or not. I just submitted the proposal to my dissertation chair yesterday. I’m sure I’ll have the opportunity to modify the proposal in the coming days. Stick around, things are sure to get interesting 🙂

3 Likes

Matt, I was very excited by what I found at the Google link for Torch. My thanks to everyone who helped provide this:

The code ran for me!

Unfortunately, the four here experienced an error I expect you already know about:

I was impressed that (after Run All, or after running the code above it) I could change a variable (for example fill_alpha=0.6 to fill_alpha=0.1) and then click the corner of the cell to rerun just that block of code, as in this charting example:
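
For anyone who wants to try the same trick, here is my guess at a minimal version of that kind of chart cell (fill_alpha is a Bokeh glyph property, so I am assuming the chart was built with Bokeh):

```python
# A standalone cell for a Colab-style notebook: change fill_alpha
# below and rerun just this cell to see the fill transparency change.
import numpy as np
from bokeh.io import output_notebook
from bokeh.plotting import figure, show

output_notebook()          # render the chart inline in the notebook

x = np.linspace(0, 10, 50)
y = np.sin(x)

p = figure(title="fill_alpha demo")
p.circle(x, y, size=12, fill_alpha=0.6)   # try 0.1 instead of 0.6
show(p)
```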

I think this programming environment is exactly what I needed.

Even though I love the hard-core neuroscience research (even into RNA World origins), for you the focus probably does need to be on machine learning toys like the new Google-powered Torch. I’m now most focused on stacking spatial poolers, as in this thread:
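
As a rough sketch of what I mean by stacking (my own toy formulation, not the code from that thread), the SDR output of one pooling stage simply becomes the input bits of the next:

```python
import numpy as np

rng = np.random.default_rng(1)

def make_stage(n_in, n_cols, k):
    """One toy pooling stage: fixed random connections, k winners take all."""
    connections = rng.random((n_cols, n_in)) < 0.5
    def stage(x):
        out = np.zeros(n_cols, dtype=int)
        out[np.argsort(connections @ x)[-k:]] = 1
        return out
    return stage

# Two stacked stages: 512 input bits -> 256 columns -> 128 columns.
stage1 = make_stage(512, 256, 20)
stage2 = make_stage(256, 128, 10)

x = np.zeros(512, dtype=int)
x[rng.choice(512, size=40, replace=False)] = 1

top = stage2(stage1(x))              # stage 1's SDR is stage 2's input
print("active bits per level:", x.sum(), stage1(x).sum(), top.sum())
```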

Using the system for motion in an environment that neuroscientists use for testing live animals keeps the model connected to neuroscience, while at the same time making progress on the four-sensor HTM entity, started long ago, that learns environments we map out for it. I expected that at some point it would become so easy to make a self-exploring critter/finger that it would no longer be worth writing code that moves it around and fakes it.

Starting with invisible surfaces would be learning by touch. Putting a handle on one side of the circular arena makes it like being stuck inside a coffee mug. Making a hole in the circle and placing food outside would allow it to get out and learn the cup’s external shape. From there it can use four or more of the most detailed 2D maps in the 3D stack for a view from above, and it can navigate up and over the rim of the cup to get out. Instead of a buggy-looking critter like I ended up with, you could just show a finger, which for testing purposes must still watch out for a moving shock zone at the bottom while daring itself to get zapped a whole bunch of times.

3 Likes