Project : Full-layer V1 using HTM insights


#22

The David Marr Vision book; this has all been done before.
You owe it to yourselves to read it to keep from reinventing the wheel:
http://kryakin.site/David%20Marr-Vision.pdf


#23

Yes, that’s the keeper. If it survives the test of time, it will have been a tremendous help in understanding how a 2D map becomes applied to our 3D world. Thanks for adding the paper to this discussion too!

I’m not sure what to make of that one. It’s loaded with information, but after scanning for what I need in this case to see accounted for by the word “wave”, the only waves mentioned are sine-wave examples. From what I can see it is missing the kind of spatial reasoning I have been experimenting with, and it appears to assume a symbolic information-processing system.

It may sound like I’m being overly demanding, but if the virtual cortical sheet is not lighting up with the wave action shown in more recent papers (results obtained with new voltage-sensitive agents), and cannot perform as expected in a two-frame place avoidance test, then it’s not a model that will impress modern neuroscience, and the search must go on.


#24

That would be quite typical of me, I believe: jumping to an interpretation from a first photographic impression. I should have pondered that some more; I’m sorry.
Yet I must say, I was quite impressed by the behavior of that lab rat you modeled, and so I’ve spent quite some time trying to decipher how you did it and what it was all about. In the end I’m not sure I’ve understood much of it, so I’m still struggling to follow that approach. Please bear with my slow grasp of the matter. Adding to my confusion is that at times you can get quite metaphoric (e.g., zombie bots); please consider that English is not my mother tongue :wink:

I’ll try to give my understanding another shot and answer your post later in the day (and also your reply on your oscillatory topic). At the moment I’m starting to read Marr’s book. Thank you again for that gem, @Bitking.


#25

The oscillation has always been in the background of most neuroscience.
It is only very recently that this has been understood to be part of a distinct wave pattern.

While that is important, the older work in which the oscillatory features were a background item is still fully relevant.

I would be wary of forcing wave behavior over focusing on the local functions. I personally think that much of the wave behavior comes from Thalamo-cortico-thalamic connections.


#26

Been off for a while, finished reading David Marr’s Vision, watched several more from the MIT course, and looked at some papers.

That book, Vision, is where Marr develops his proposition of three levels of analysis, which I had to wrestle with somewhat before I was able to integrate it as an interesting and useful viewpoint, or more precisely, an interesting and useful method for expressing (and being aware of) different viewpoints when studying processes.

Carefully studying the visual system as he did is precisely what I wish to avoid here. But his work on vision, derived from intriguing experiments in psychophysics, is definitely something to read. Most of it is concerned with how to solve several functional aspects of human vision and with his proposed framework for doing so. Much relates to the definition of a “primal sketch” and the study of possible ways of computing it, all the while solving for, e.g., stereoscopy concerns, and remaining amenable to higher-level understandings such as adding an egocentric “depth” to the primal sketch (what he calls 2.5D) and inferring whole surfaces, before switching to full-blown 3D semantics, which we could perhaps call an allocentric representation here.

I’ve also finally dug out some of the work that was trying to explore V1 dynamics in the same way as I intended. I found these two (related) papers: “Receptive Field and Feature Map Formation in the Primary Visual Cortex via Hebbian Learning with Inhibitory Feedback” and “The Dynamics of Image Processing by Feature Maps in the Primary Visual Cortex”. I haven’t yet read every detail of them, but a few things already stand out:

  • What’s encouraging is that the from-scratch formation of orientation selectivity in V1 cells seems quite doable, from simple visual stimuli, and in reach of “quite standard ANN models” (a minimal sketch follows this list).
  • As a downside, however, that is the very same point: if quite standard ANN models already capture it, reproducing it would not by itself showcase much that is specific to the approach I had in mind.
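To make that first bullet concrete, here is a minimal sketch of what I mean (my own toy code, not the papers’ actual model, and every name and parameter here is only illustrative): a single linear unit trained with Oja’s rule on noisy oriented-bar images ends up preferring the trained orientation, which is the kind of building block the cited papers then extend with many units and inhibitory feedback to form a map.

```python
# Minimal sketch (my toy code, not the papers' model): a single linear unit
# trained with Oja's rule on noisy oriented-bar images develops an oriented
# receptive field. The papers add many units plus inhibitory feedback on top
# of this to obtain a full feature map.
import numpy as np

rng = np.random.default_rng(0)
SIZE = 9  # 9x9 input patch

def bar(angle_deg, size=SIZE):
    """Binary image of a bar through the patch centre at the given angle."""
    ys, xs = np.mgrid[0:size, 0:size] - (size - 1) / 2.0
    a = np.deg2rad(angle_deg)
    dist = np.abs(xs * np.sin(a) - ys * np.cos(a))  # distance to the bar's axis
    return (dist < 0.7).astype(float).ravel()

w = rng.normal(scale=0.01, size=SIZE * SIZE)
eta = 0.01
for _ in range(5000):
    x = bar(45) + rng.normal(scale=0.2, size=SIZE * SIZE)  # 45-degree bars + noise
    y = w @ x
    w += eta * y * (x - y * w)  # Oja's rule: Hebbian growth with weight decay

# The unit now responds more to the trained orientation than to the orthogonal one.
print("response to  45-degree bar:", w @ bar(45))
print("response to 135-degree bar:", w @ bar(135))
```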

So… what to do? Maybe, after I learn more about it all, there will be other well-known V1 features which are not captured by these; or should I look at another approach and try to model concerns more related to SMI, like saccades? Or explore more of a hierarchy? I don’t know, really.


#27

On the subject of oscillation I recommend the book
Rhythms of the Brain by György Buzsáki
Oxford University Press, 2006


#28

How about a model like this for personal computers?

It would be nice to compare notes with the NEST group. At 2:40 in this video, a researcher is modeling the visual cortex of a monkey:

My thought is to keep processing time to a minimum by using the usual 2D shock-zone environment, which may be tilted to match a 3D terrain. This seems closest to how our brain works at the level of the 2D cortical sheet network. In either case we have many questions.

I would like to invite a guest neuroscientist or two to explain how their model works. We can all go from there. Your thoughts?

Since all “scientific theory” is tentative, whatever develops as a whole in the Numenta forum is still “HTM theory”. The guests would be working on supercomputer-sized neuroscientific models that ultimately have to get into the finest neural detail, which is not the same as HTM theory, where the added challenge is modeling a whole cortical sheet inside our desktop-sized computers. There is no competition that I know of to worry about.


#29

Seems nice!

I definitely think the more exchange between such communities, the better, but I’m Mr. Nobody here. Since their project has been maintained for years, I believe Numenta is already aware it exists.

They claim to support a large number of neuron models, so I guess HTM is amenable to NEST, but NuPIC itself is Python-based, so I don’t know what benefits such a port could bring (other than the obvious positive effects of idea exchange and goodwill synergies). But it seems like exactly what @kaikun referred to in the quote you posted, so if there are any volunteers for it already, this could be neat.

As for myself, I’ll try to have a look at some of these links, and the brain-to-robot stuff of the Neurorobotics Platform excites my curiosity; however, I know I have great difficulty functioning fluently as a library user. And all those references to “Cloud”, “Multi-Device”, “Global”, “Customizable” keywords are a no-go for me: even if I realize they offer immense flexibility to a fair number of people, to my mind they indicate that any issue I could encounter while using the system would turn into an OS-level configuration issue, for which I have a deep, almost Pavlovian fear.
So, even though I still haven’t understood your code, Gary, I’m far more confident in my ability to follow your bitfield kind of reasoning than in my ability to use anything like this ^^.

Regards,
Guillaume


#30

Here are some notes from what I learned just by reading about the problem:

  • The main difference is that NEST works with spiking neural networks (SNNs), which makes everything HTM simplifies down to binary computations much more computationally complex (a rough sketch follows this list).
  • Nevertheless, HTM theory comes from neuroscience and the algorithms are designed in a way that should work for SNNs, but the adaptation is non-trivial.
  • However, in the past SNNs have lacked scalability, and essential features like sufficient plasticity customization were missing. (This is a general problem in the scientific community and a reason why people often switched away from SNNs in, e.g., robotics, if they weren’t experimentalists.)
  • On the other hand, an implementation in NEST can also be compatible with neuromorphic hardware projects like SpiNNaker or BrainScaleS. This makes it really interesting: even though they add complexity, they are ultimately designed to run in parallel, which is hard to achieve for HTM on traditional computer/network architectures.
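To make the first bullet a bit more concrete, here is a rough sketch (my own simplification, not NEST or NuPIC code, with made-up parameters): an SNN has to integrate a membrane potential over many small time steps before a spike is emitted, while HTM collapses the equivalent decision into a single binary overlap-and-threshold step.

```python
# Rough sketch (not NEST or NuPIC code) of why the mapping is non-trivial:
# the SNN integrates a membrane potential over many small time steps before
# it emits a spike, while HTM collapses the decision into one binary step.
import numpy as np

rng = np.random.default_rng(1)

# Spiking view: leaky integrate-and-fire over 100 steps of 1 ms.
dt, tau, v_thresh, v_reset = 1.0, 20.0, 1.0, 0.0   # ms, ms, arbitrary units
v, spikes = 0.0, []
input_current = rng.random(100) * 0.12             # small random drive per step
for t, i_t in enumerate(input_current):
    v += dt / tau * (-v) + i_t                     # leak plus input
    if v >= v_thresh:                              # threshold crossing -> spike
        spikes.append(t)
        v = v_reset
print("spike times (ms):", spikes)

# HTM view: one binary decision per time step.
active_inputs = rng.random(1024) < 0.02            # sparse binary input
connected     = rng.random(1024) < 0.5             # connected synapses of one column
overlap = int(np.sum(active_inputs & connected))   # integer overlap score
column_active = overlap >= 8                       # stimulus threshold
print("overlap:", overlap, "column active:", column_active)
```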

Kind regards


#31

About SNN adaptation: what I got, very succinctly, from the NEST framework presentation is a reference to the fact that they would not typically operate on weight-based models, but rely on much more topology-oriented connectivity lists. Topology is not what the canonical HTM library does in its default “global” mode, but it is still, in my view, one of the primary strong points of the HTM state of mind: caring first about the topology of the dendritic tree and not bothering too much about synaptic weights.
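As a toy illustration of what I mean by topology-first (my own sketch, not the actual NuPIC internals): a dendritic segment can simply be a list of presynaptic cell indices, each with a permanence, and matching the segment is just counting how many of those cells are currently active, rather than multiplying a dense weight matrix.

```python
# Toy sketch (not actual NuPIC internals): a dendritic segment stored as a
# connectivity list of (presynaptic cell index -> permanence). Matching is a
# count of connected presynaptic cells that are active, not a weighted sum.
CONNECTED_PERM = 0.5        # permanence at or above this counts as connected
ACTIVATION_THRESHOLD = 3    # active connected synapses needed for a match

segment = {101: 0.62, 257: 0.55, 300: 0.31, 412: 0.70, 998: 0.48}

def segment_matches(segment, active_cells):
    """Count connected synapses onto currently active cells."""
    n = sum(1 for cell, perm in segment.items()
            if perm >= CONNECTED_PERM and cell in active_cells)
    return n >= ACTIVATION_THRESHOLD

active_cells = {101, 257, 412, 500}
print(segment_matches(segment, active_cells))   # True: cells 101, 257, 412 match
```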

As for converting a spike “frequency” to a one-bit signal… after seeing one of their visuals, which looks a great deal like an SDR, could this be, for HTM, simply a matter of tuning the simulation clock so that each spike at precise time t is captured as an “on” bit?
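Something like this is what I have in mind (a tiny sketch of my own, all values hypothetical): spike times coming out of a simulator are binned by the HTM step size, and a cell’s bit is “on” for a step if it spiked at least once inside that step.

```python
# Tiny sketch (my own, hypothetical values): spike times from an SNN run are
# binned by the HTM step size; a cell's bit is "on" if it spiked in that bin.
import numpy as np

STEP_MS = 10.0                          # assumed HTM time step
spike_times_ms = {0: [3.2, 41.7, 55.0], # per-cell spike times from a simulator
                  1: [12.5],
                  2: []}
n_steps = 10

bits = np.zeros((len(spike_times_ms), n_steps), dtype=np.uint8)
for cell, times in spike_times_ms.items():
    for t in times:
        bits[cell, int(t // STEP_MS)] = 1   # any spike in the bin -> "on" bit
print(bits)
```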

Another idea I had a few days ago, for integrating per-cell scalar information (such as spike frequency) as input to an HTM model, was that it could avoid impacting the implementation of the excitatory pathways (i.e., not driving higher excitation to postsynaptic pyramidal cells) and instead control the level, or extent, of the surrounding inhibition.
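Roughly, I picture it like this (a toy sketch of my own idea; every name and parameter is hypothetical): the excitation each column receives stays binary, but the mean input rate in a neighbourhood scales how aggressive the local inhibition is, i.e. how few winners that neighbourhood allows.

```python
# Toy sketch of the idea above (all names and parameters are hypothetical):
# overlap scores stay binary-derived, but the mean input spike rate around a
# column scales the local inhibition, i.e. how few winners are allowed there.
import numpy as np

rng = np.random.default_rng(2)
N_COLUMNS, RADIUS = 64, 4

overlap = rng.integers(0, 20, size=N_COLUMNS)   # overlap with binary inputs
rate = rng.random(N_COLUMNS)                    # per-cell scalar, e.g. spike rate in [0, 1]

active = np.zeros(N_COLUMNS, dtype=bool)
for i in range(N_COLUMNS):
    lo, hi = max(0, i - RADIUS), min(N_COLUMNS, i + RADIUS + 1)
    # higher local rate -> stronger surround inhibition -> fewer winners allowed
    k = max(1, int(round((1.0 - rate[lo:hi].mean()) * 4)))
    kth_best = np.sort(overlap[lo:hi])[-k]
    active[i] = overlap[i] >= kth_best
print("active columns:", np.flatnonzero(active))
```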

[Edit] Oh sorry, @kaikun, I think I finally understand what is at stake here. Is it that SNNs have a progressive increase in depolarization level until they fire at a threshold crossing? Yes, that does seem harder to reconcile with HTM.


#32

That’s how I do it. A bit staying on is the same thing as a neuron spiking as fast as it can. If it’s one spike per hundred time cycles, there is a 1% duty cycle. This is useful for giving things priority: whatever signals most often (such as a hunger bit or other need) gets acted upon the most.
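In code, that reading looks roughly like this (a toy sketch; the names are only illustrative): each tick samples a bit, the mean of the bit stream is the duty cycle, and the need with the highest duty cycle wins.

```python
# Toy sketch of the duty-cycle reading above (names are only illustrative):
# each tick samples a need cell as 1 or 0, the mean of that bit stream is the
# duty cycle, and the need with the highest duty cycle is acted upon first.
import numpy as np

rng = np.random.default_rng(3)
TICKS = 100

needs = {                                  # probability of firing on any tick
    "hunger":  rng.random(TICKS) < 0.30,   # ~30% duty cycle
    "thirst":  rng.random(TICKS) < 0.05,
    "explore": rng.random(TICKS) < 0.01,   # ~one spike per hundred ticks -> ~1%
}

duty = {name: bits.mean() for name, bits in needs.items()}
for name, d in sorted(duty.items(), key=lambda kv: -kv[1]):
    print(f"{name:8s} duty cycle = {d:.0%}")
print("acted upon first:", max(duty, key=duty.get))
```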


#33

I found a PDF version, and scanned through it a little:

That led me to this one that I now maybe half understand:

We really need a visual of these waves, or something.