Semantic relationship requirement for encoders?

I’m very new to all this, so apologies if I’ve missed an obvious explanation somewhere, or if this needs to be moved to a different area.

Can someone explain why we need semantic relationships in encoder output? And how that relates to how initial layers in the cortex work (V1 for instance)? If the cortex works the same everywhere, and features are initially found in the cortex…

My intuition is that the brain must extract features itself, so the order of the input shouldn’t matter. Moving where a feature is found in the bit pattern just changes which neuron/column processes it. But that’s what I thought semantic relationships in encoders forced… logical connections/relationships between the bits inside an SDR.

Yes, you need repeatable ways of encoding information, but after that… I don’t see a reason it should matter whether I used either of the two integer encoding methods I saw in the HTM School videos (one was continuous and directional in the pattern it activated, while the other looked very random and spread out).
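
For concreteness, here is a toy sketch of the two styles (my own simplification, not NuPIC’s actual ScalarEncoder or RDSE implementations): one activates a contiguous block of bits that slides with the value, the other scatters the active bits pseudo-randomly per bucket while still overlapping for nearby values.

```python
# Toy contrast between the two integer encoders from the HTM School videos
# (simplified sketch, not NuPIC's real ScalarEncoder / RDSE implementations).
import hashlib

def contiguous_encode(value, min_val=0, max_val=100, n_bits=400, w=21):
    """Continuous/directional encoder: a block of w adjacent 1 bits that
    slides along the array as the value changes."""
    span = n_bits - w
    start = int(round(span * (value - min_val) / float(max_val - min_val)))
    return set(range(start, start + w))

def scattered_encode(value, n_bits=400, w=21, resolution=1):
    """Random-looking encoder: each bucket hashes to w spread-out bit
    positions; adjacent buckets reuse most of the same hash inputs, so
    nearby values still overlap even though the layout looks random."""
    bucket = int(value // resolution)
    active, i = set(), 0
    while len(active) < w:
        digest = hashlib.md5(str(bucket + i).encode()).hexdigest()
        active.add(int(digest, 16) % n_bits)
        i += 1
    return active

a, b = contiguous_encode(50), contiguous_encode(52)
print("contiguous overlap:", len(a & b))  # large: neighbours share adjacent bits
c, d = scattered_encode(50), scattered_encode(52)
print("scattered overlap:", len(c & d))   # also large, just not adjacent bits
```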


I suspect that the type of information being encoded makes a difference. Some information may be carried by the relationship between parts, as with sound or images, and in those cases it makes a great deal of sense to preserve these relationships.


Another reason is that HTM doesn’t yet include a hierarchy which can form high levels of abstraction. So part of the job of an encoder is to do some of the work that lower hierarchical levels would otherwise do to capture abstract relationships (word SDRs are a good example).


Hey, quick side question @Paul_Lamb: do you (/could I) have access to cortical.io’s word SDR generator? Or do you have your own?

I have been using word SDRs that I generated myself from previous semantic folding experiments. When I first generated them, though, I did some functionality comparisons with cortical.io’s. I just requested a free API key (see here). Take a look at the Retina API documentation on that page, but if I recall correctly I used the /terms API with the term parameter undefined to get a list of all terms, then called it for each term separately to get the fingerprints (each fingerprint is an int array of indexes to the 1 bits of the SDR, IIRC).
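
Roughly how I remember using it, as a sketch (the base URL, retina name, and parameter names below are from memory, so treat them as assumptions and verify against the Retina API documentation):

```python
# Rough sketch of pulling word SDRs from the Retina API. Details are from
# memory: the base URL, retina name, and parameter names are assumptions.
import requests

API_BASE = "http://api.cortical.io/rest"     # assumed base URL
HEADERS = {"api-key": "YOUR_API_KEY"}        # the free key mentioned above
RETINA = "en_associative"                    # assumed retina name

def list_terms(start_index=0, max_results=100):
    """Call /terms with no specific term to page through the term list."""
    resp = requests.get(f"{API_BASE}/terms", headers=HEADERS, params={
        "retina_name": RETINA,
        "start_index": start_index,
        "max_results": max_results,
    })
    resp.raise_for_status()
    return [item["term"] for item in resp.json()]

def get_fingerprint(term):
    """Call /terms for one term and return its fingerprint: an int array of
    the indexes of the 1 bits in that word's SDR."""
    resp = requests.get(f"{API_BASE}/terms", headers=HEADERS, params={
        "retina_name": RETINA,
        "term": term,
        "get_fingerprint": True,
    })
    resp.raise_for_status()
    return resp.json()[0]["fingerprint"]["positions"]
```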


@Bitking Yes, any relationship between information is useful, but wouldn’t seeing the parts at the same time (or close together) also preserve the relationship in your example?

My point was that the only thing I see “semantic relationships in encoder output” giving us is that it forces processing to happen in different locations, or to happen outside the system, assuming the system does what the brain does. Meaning that if I removed that human-generated relationship, a different computing element should see the same information and create a single “feature”, or set of related features, for later elements to use. The system should extract the feature for me for later use within the same system.

@Paul_Lamb
“HTM doesn’t yet include a hierarchy which can form high levels of abstraction. So part of the job of an encoder is to do some of the work that lower hierarchical levels would do to capture abstract relationships (word SDRs are a good example)”

Perfect. Yes, I was about to come back and ask whether the newer idea of the brain using the same mechanism as location identification to organize the world (which likely doesn’t exist in NuPIC yet) was part of the problem. And yes, from the reading I’ve done, it doesn’t appear the system has the ability to build a hierarchy of layers yet.

Another reason I thought this might be required was output. I don’t think I’ve seen a motor output mechanism, a feature extractor (how I picture our names for ideas in the brain: finding the system’s highest processing element that signals something), or a model creator (another way to picture decomposing a complex input into sub-ideas). So we will be the interpreter of any SDR output we need to understand beyond a binary “yes/no”. Sorry if I’m not using community-accepted terms; I’ll try to continue reading. Any logic we encode in that output should help with finding relationships in our seemingly single possible layer. Or maybe I just haven’t gotten to where the output method is described?

It seems like offering flexible output to a system like this will introduce many interesting issues too.

Anyway, thanks all for the thoughts, and working through this with me. I am excited by a lot of what I’ve seen!


Interesting. My intuition for this requirement is that it helps simulate the sensitivity of the biological “sensor cells” in a consistent manner. In a way, the HTM inputs (encoder outputs) are not garbage; each bit has a probable spatial meaning. In other ML systems this is not a requirement, because there is little concept of a biological sensor, and much of the feature extraction is done in different, usually mathematical, ways. In a very simplified sense, I also see the HTM as not extracting features; rather, it only reactively burns in familiar patterns (at least the SP does), but its emergent behavior is “extracting features” and “learning”. Sorry for the poor bio terms.

I always find these simple creatures made by Theo Jansen interesting, as they somehow relate to how the brain/cortex works, in a very simplified sense and at least to my simple understanding.


@EternalStudent - All the features are there.
What’s the problem?

(image: Picasso cubist portrait)

Topology matters.


I agree with @Bitking that topology matters. What makes the universe comprehensible is its remarkable consistency. Of all the (combinatorially large) number of possible things that could happen in any instance, and all of the possible ways your senses can detect and encode environmental stimulus from moment to moment, the fact that only a relatively small handful of events occur in specific sequences and on a regular basis is perhaps key to our ability to successfully represent, model, and make predictions about ourselves, our environment, and the interactions between them.

So, for me the key property that any encoder needs to possess is consistency: identical inputs should yield identical encoded representations (or at least highly similar ones in the statistical sense). Another highly desirable property is locality: inputs which are tightly coupled in space and/or time should have encoded representations which are fairly near to one another. The more unique these features and/or the tighter their coupling, the closer their encoded representations can (and probably should) be, since the presence of one would almost certainly imply a high probability for the presence of the other.

These two properties, taken together, might meet some rough criteria for semantic similarity. In a more formal sense, we might say that our encoded representations exist in some kind of a metric space, and therefore semantically similar inputs should be encoded into representations that are fairly close together in this space (with closeness defined by the connectivity of the space and a specified distance metric). In the end, semantic similarity is just a useful abstraction for describing how we compare and contrast the various properties/features detected by our input sensors or generated by our internal thought processes.
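
As a toy illustration of those two properties (my own sketch, not tied to any particular encoder implementation), consistency and locality can be checked directly on the encoded representations:

```python
# Toy illustration of the two properties described above: consistency and
# locality, with "closeness" measured by a simple distance over active bits.

def encode(value, min_val=0.0, max_val=100.0, n_bits=400, w=21):
    """A minimal scalar encoding: w active bits whose position tracks value."""
    span = n_bits - w
    start = int(round(span * (value - min_val) / (max_val - min_val)))
    return frozenset(range(start, start + w))

def distance(a, b):
    """Symmetric-difference distance: bits active in one encoding but not the
    other. Smaller distance means greater semantic similarity of the inputs."""
    return len(a ^ b)

assert encode(42.0) == encode(42.0)           # consistency: same in, same out
print(distance(encode(42.0), encode(43.0)))   # small: nearby inputs
print(distance(encode(42.0), encode(90.0)))   # large: distant inputs
```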


HTM School has a nice episode on topology.
