I think maybe it is time for you to think a little about nature. I have a theory that the brain seeks increasing novelty in order to pattern novel information, and that increasingly complex behaviour is initiated to learn about the environment. If that were true, there would be a biological imperative to learn. If you look at natural systems, you will see a pattern of self-organizing behaviour emerge. There is a growing body of evidence suggesting that plants learn. What would happen if we designed an encoder to take action-potential signals from a plant and simply looked for patterns that we could try to correlate with time-lapse plant behaviour? Would it pattern basic learning?
One other thing… I think there might be a fundamental law of nature. We could call it the law of increasing novelty: something like entropy, which drives learning in all biology. Did I pique your interest?
One last thing… if my theory of increasing novelty bears any fruit, the fact that the idea itself might be somewhat novel could require some consideration. See my previous post, "Emotional logic control of behaviour-based learning."
Sorry… the thoughts keep coming. It occurs to me that you have done a fabulous job of unraveling the most complex thing nature has ever produced, but have you thought about how nature's simpler things might be doing the same work in more manageable chunks? Plant behaviours happen on a long time scale, and this might slow things down a little for a machine that's learning; it might give us the time to actually see what's happening. It would be akin to having kids pull apart a small engine before they try to fully comprehend a car.
Do you have any articles or videos about this you'd recommend? Is it some sort of stimulus → chemical-response learning? For example, something chomps a leaf, so the plant releases a defensive chemical, and it learns to release that chemical when the insect starts walking on the leaf, perhaps?
As I understand it, plants use chemical/hormonal processes to control their behavior, so rather than the action potentials that neurons use, you'd have to use something else for the encoder. If there is a substitute for action potentials that you can correlate with behavior, you might find something interesting, like a general learning mechanism, but I'm not sure plants would evolve one; they could probably just use a bunch of specific strategies to deal with each situation.
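Whatever the substitute signal turns out to be, if it can be sampled as a slow-changing scalar (a chemical concentration, say), one way to feed it into HTM-style machinery is a scalar encoder that maps nearby values to overlapping bit patterns. This is only a minimal sketch in that spirit; the function name, ranges, and sizes below are hypothetical choices, not any particular library's API.

```python
# Minimal HTM-style scalar encoder sketch (all names and sizes hypothetical).
# It maps a slow-changing reading onto an n-bit array with a contiguous run
# of w active bits, so that nearby values share many active bits.

def encode_scalar(value, min_val=0.0, max_val=1.0, n=64, w=9):
    """Encode a scalar as an n-bit list with a contiguous run of w ones."""
    value = max(min_val, min(max_val, value))          # clamp into range
    buckets = n - w + 1                                # possible start positions
    start = int((value - min_val) / (max_val - min_val) * (buckets - 1))
    bits = [0] * n
    for i in range(start, start + w):
        bits[i] = 1
    return bits

low = encode_scalar(0.10)
mid = encode_scalar(0.12)
far = encode_scalar(0.90)

overlap_near = sum(a & b for a, b in zip(low, mid))
overlap_far = sum(a & b for a, b in zip(low, far))
print(overlap_near > overlap_far)  # True: similar readings share more bits
```

The point of the overlap property is that downstream pattern matching sees similar plant states as similar bit patterns, which is what would let time-lapse behaviour be correlated with the signal at all.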
In my opinion, organisms like plants, bacteria, and simple animals like worms probably don’t use mechanisms which can be applied to other things.
But for most animals, I agree. There are probably some mechanisms which are universal for animal intelligence. By studying insect brains and bird brains, for example, you could probably figure out which mechanisms of intelligence are universal and which are just specializations.
I bet that if neuroscience had chosen to study insect brains in as much detail as mammal brains, we would have a much easier time creating AI, or at least laying the foundations (object recognition, memory, etc.) if insects don't have general intelligence.
Still, there is far more information about the cortex than about non-mammal brains, and it has taken enormous research effort just to start understanding the cortex. From what I've seen, we're a long way from understanding non-mammal brains in enough detail to theorize about their mechanisms of intelligence. I think we'll probably solve general AI before studying other brains in detail, and then use non-mammal brains to further enhance AI and behavior.
A colleague of mine is experimenting with behavior-driven encoders for a potential thesis direction. I can put you in contact with him if you'd like.
The search for common laws and principles of learning in nature is something the HTM community takes very seriously. The notion of a common cortical algorithm is an example.
OK, a few things. First, I think all animal brains work on an algorithm that searches for increasing novelty. Second, the plant-behaviour thing is summed up nicely by a CBC (Canadian Broadcasting Corporation) documentary on The Nature of Things; sorry, I didn't include a link, as I don't think it is available for free outside Canada, but a Google search would probably turn something up. Third, think globally: if nature figured something out once, it probably recycled it in higher and higher animals. I see the same process of concept formation and object recognition at work in my dog as in my own mind.

Finally, I should let you know that my primary goal here is to discover human learning algorithms for the optimization of human learning. As a result, I have chosen to study the link between emotion and behaviour. Before you get turned off by the word "emotion," you should know that I think emotions are fundamental to learning. I'm not sure how you implement it in machine learning, but essentially I think that human or animal emotions are a critical piece which is being overlooked. If emotions drive behaviour (and I think they do!), then emotions are the algorithm that drives behavioural learning, and this would be key. I think emotions are the breadcrumb trail of learning.

Just as we need egocentric and allocentric models of the world, we also need a map of the learning that went into discovering semantically similar objects or ideas; otherwise we would have to learn about a similar object or idea from scratch each time. Think about a ball. Probably you are thinking about balls that are round and spherical; they bounce and roll, and probably they are fun! The first few attributes are physical, but the last, in the case of humans, is the current state of learning about spherical objects called balls. What if I gave you a bowling ball or a cannonball?
Now all of a sudden you are confronted with something semantically similar to a ball, but it doesn't bounce nicely, and if you drop it on your foot, or it is lobbed over your castle wall, it is not fun and does not exhibit the properties associated with a ball. Your emotional response to this will drive your behavioural interaction with this new ball, but it picks up where your last interactions left off rather than trying to construct a new model of a dissimilar ball: one which is not so fun, or which requires a new definition of fun. If you're the one lobbing cannonballs and they are doing what you expect, then it's probably a new way of having fun with a new kind of ball.

In this way, I think emotional responses or attitudes to concepts or ideas could be viewed almost like GPS waypoints along a complicated route, one which could have many permutations or combinations that eventually arrive at the same point. If you are confronted with something new or unexpected, do you go all the way back to the beginning, or do you just go back to the last known position along that route?

Like I said, I'm just trying to figure out how people learn. If this idea is as big as I think it is, then just throw me a bone later on when you figure it out. In the meantime, I would love to hear from someone who can really explain spatial pooling, and specifically why it's called that. I have some ideas but would like to hear from you all first. Thanks, Jake
CBC did a great documentary on smarty plants. Also, Stefano Mancuso did a great TED talk on it, a long time ago now; he has probably got your encoder worked out already, just plug and play. As far as laws of nature and such, I'm pretty sure it's there, but I lack the labs, free time, and math skills to work it out. I will deal with theories I can experiment with in my classroom; I'll leave the proof to people like you. Of course, let me know if I was sort of right. I have a hunch that things are not as complicated as you might think, and that once you find it, it will also apply to worms and even single-celled life. Just a hunch, but maybe it starts a conversation with people far more qualified than me.
To be clear, my opinion is no more valid than yours. I'll check out those videos when I have time, but if you can encode it in a similar way to neurons, much of what I wrote in my previous reply is probably wrong.
Perhaps related on a very deep level:
OK, so in the hexagon visualization, I hope it wasn't lost on people that these circular interference patterns emerged. Has anyone checked whether these interference patterns follow Zipf's law? I bet they do, and I bet that there is literally one algorithm that does it all. Just a hunch.
If we devised an encoder that could capture data such as word exposure, object exposure, and a few other data points of a baby's early life, then, using Zipf's law, we could accurately predict the baby's first word. I think we should be studying babies!
I lack the math skills here, but if a series of grid cells were used in hexagonal interference patterns to essentially filter information using Zipf's law, wouldn't that create an SDR that could instantly be compared to a new input SDR? If the overlap was high enough, a behaviour is initiated to return more information to make a better comparison, and to update if necessary. What if the probability of a match followed the 80:20 rule? I think most of us could agree that an 80 percent probability is a favorable outcome for a prediction. Is this a fundamental cutoff value? There is more here; I'm just not sure what it is.
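The overlap test described above can be sketched in a few lines, assuming SDRs are plain bit lists and using the 80 percent figure from the post as the cutoff; the function names and the behaviour labels are hypothetical, not any standard API.

```python
# Hypothetical sketch of the match test above: compare a stored SDR with a
# new input SDR, and treat >= 80% overlap of the stored pattern's active
# bits as a match; otherwise initiate a "gather more info" behaviour.

def overlap_fraction(sdr_a, sdr_b):
    """Fraction of sdr_a's active bits that are also active in sdr_b."""
    active_a = {i for i, bit in enumerate(sdr_a) if bit}
    active_b = {i for i, bit in enumerate(sdr_b) if bit}
    if not active_a:
        return 0.0
    return len(active_a & active_b) / len(active_a)

def classify(stored, incoming, threshold=0.8):
    """Return 'match' or a request for more information (the 80:20 cutoff)."""
    if overlap_fraction(stored, incoming) >= threshold:
        return "match"
    return "gather more info"

stored    = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
similar   = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0]  # shares 4 of 5 active bits -> 0.8
different = [0, 0, 0, 1, 1, 1, 1, 1, 0, 0]  # shares 2 of 5 active bits -> 0.4

print(classify(stored, similar))    # match
print(classify(stored, different))  # gather more info
```

Real SDRs are much larger and sparser than these toy vectors, which is what makes a high overlap statistically meaningful rather than coincidental.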
Maybe I don't lack the math skills: the law of increasing novelty is the inverse log function of Zipf's law. Somebody please prove it for me; I'm a high school teacher.
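The "inverse log" hunch can at least be stated precisely: Zipf's law says frequency is proportional to 1/rank, so log(frequency) falls linearly with log(rank) with slope -1. The sketch below only checks that slope numerically for an idealized Zipf series; it does not prove anything about a law of increasing novelty.

```python
import math

# Zipf's law: the frequency of the item with rank r is proportional to 1/r.
# On log-log axes this is a straight line with slope -1, which is the precise
# sense in which frequency is an "inverse" function of rank.

ranks = range(1, 101)
freqs = [1.0 / r for r in ranks]  # idealized Zipf frequencies

# Slope between each pair of successive points on log-log axes:
slopes = [
    (math.log(freqs[i + 1]) - math.log(freqs[i]))
    / (math.log(i + 2) - math.log(i + 1))
    for i in range(len(freqs) - 1)
]

print(all(abs(s + 1.0) < 1e-9 for s in slopes))  # True: every slope is -1
```

Empirical data (word counts, for instance) would show slopes scattered around -1 rather than exactly -1, so testing the hexagon interference patterns against Zipf's law would mean fitting that slope, not expecting it exactly.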
Riffing on your 80:20 match thingy …
First: we know that biology tends to find some very good solutions, so whatever it is using is likely to be close to some optimal configuration. Knowing that some natural series appear in nature over and over (Fibonacci, e, log x, …), we should see this kind of pattern if it appears. We could reasonably expect that the fact stores in the cortex give a very good "work performed per unit work done" ratio, in this case in data compression. 80:20 repetition gives pretty good coding density. Compression is somewhat based on central-tendency types of analysis and rotation; in this case, it's about the high-frequency pattern anchors: they form the root of a data directory. As it learns more, the bits on the "edge" of your patterns fill in.
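The coding-density claim can be made concrete with Shannon entropy: a binary source that emits its frequent symbol 80 percent of the time carries about 0.72 bits per symbol, versus 1 bit for a 50:50 source, so an ideal code compresses it by roughly 28 percent. A minimal check:

```python
import math

def entropy(p):
    """Shannon entropy (bits/symbol) of a binary source with P(symbol A) = p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

h_8020 = entropy(0.8)  # skewed source: frequent patterns dominate
h_5050 = entropy(0.5)  # uniform source: no redundancy to exploit

print(round(h_8020, 3))  # 0.722 bits/symbol
print(h_5050)            # 1.0 bits/symbol
```

The more skewed the repetition, the fewer bits per symbol an optimal code needs, which is the sense in which 80:20 repetition buys "pretty good coding density".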
I have often thought that nature learns by delta coding. You resonate with the parts that you know, however well or poorly your internal model allows you to. You get better over time; what you learn is the delta between your internal model and the sensed reality. The difference is the pure error in perception. With a continuous stream of whatever your reality is, you should be filing the rough edges off your internal models. This delta coding is self-scaling: if you see something you don't understand, you still understand it a little.
With the temporal part of HTM and some SDR based neurons, I can see how to code this.
At least, that's what I take from it for the data compression routines I have looked at. I have sketched up a model of my STM-to-grids idea that can learn this in real time. I feel the need to sling some code.
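The delta-coding idea above can be sketched as a running predictor that stores only its prediction errors and moves its internal model a fraction of each error toward reality. The function name and learning rate here are hypothetical choices for illustration, not a model of any specific circuit.

```python
# Hypothetical delta-coding sketch: the model transmits/learns only the
# delta (prediction error) between its internal model and sensed reality,
# then moves the model a fraction of that delta. Over time the deltas
# shrink, i.e. the rough edges get filed off.

def delta_learn(stream, learning_rate=0.2):
    """Track a signal by storing only prediction errors (deltas)."""
    model = 0.0
    deltas = []
    for observed in stream:
        delta = observed - model        # pure error between model and reality
        deltas.append(delta)
        model += learning_rate * delta  # self-scaling step toward reality
    return model, deltas

# A steady "reality" of 10.0: early deltas are large, later ones are tiny.
model, deltas = delta_learn([10.0] * 30)

print(abs(deltas[0]) > abs(deltas[-1]))  # True: errors shrink as model improves
print(round(model, 2))                   # close to 10.0
```

The self-scaling property the post describes shows up directly: a completely unfamiliar input just produces a large delta, while a familiar one produces a small delta, so the same update rule covers both cases.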
OK, so here is a thought: we talk about triangulation of data all the time. What about a hexangulation of data, with the five senses plus time as a sixth sense: a map? Could timing signals vary to create variations in hexagonal grids?
You mentioned timing and waves and such. I have been thinking about timing in neuronal circuits for a while. I put some of those thoughts down here:
On the hexagon thing: I can't quite see how that would work.
Can you give me a code fragment, say in C, that demonstrates that idea?
Or a graphical representation?
This is what was said about "pattern puzzle pieces": they can code whatever senses are routed to the same map patch. I suspect that most of the main hubs have a lot of sensory streams running at them. This drives grid formation.
This might interest you: