The spectrum of computation and memory (centralized, separated, serial -> networked, identical, parallel)

I’ve been thinking a lot lately about how computation is done in the brain very generally.

Generally speaking, computation is done in the brain by changing the network connections. The change in network connections is the computation itself, since there is no central processing unit doing the computing; it’s distributed across the network.
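To make that concrete, here’s a toy sketch (entirely my own, in Python; not a claim about any particular brain algorithm) where the weight matrix is the network’s whole “memory,” and running the network is inseparable from changing those connections:

```python
import numpy as np

# Toy sketch: the weight matrix W is the network's entire "memory",
# and running the network is inseparable from changing W.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 4))

def step(x, W, lr=0.01):
    y = np.tanh(W @ x)            # activity is read out of the connections
    W += lr * np.outer(y, x)      # Hebbian-style change: this IS the computation
    return y

x = rng.normal(size=4)
for _ in range(5):
    x = step(x, W)                # W mutates in place on every call
```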

In trying to wrap my head around what this means I’ve answered two questions on Quora about the topic.

How does bandwidth relate to computation?

Are humans just complex pattern-recognition machines or is there something that inherently separates us from ML?

Perhaps generalization is best done on a network for a reason…

I’d love your help to kind of condense these thoughts down to one idea, one theory or model of what computation is in a network context. It seems to me that a rigorous and universal definition of computation has not been created, let alone a mathematical one. All we have are some languages that can approximate what computation is in a serial fashion (Turing machine, lambda calculus).

These languages seem to assume a central processing unit that does the computation, or they abstract away from the implementation of the computation entirely, dealing in the realm of symbols and logic.
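For contrast, here is the textbook serial picture in miniature: a Turing-machine step in which the transition table (the “processor”) and the tape (the memory) are completely separate objects, and exactly one cell changes per step. The machine below is just an illustrative bit-flipper, nothing more:

```python
# A minimal Turing-machine step: the transition table ("processor")
# is separate from the tape ("memory"), and one cell changes per step.

def tm_step(state, head, tape, rules):
    """rules: (state, symbol) -> (new_state, write_symbol, move)."""
    symbol = tape.get(head, '_')
    new_state, write, move = rules[(state, symbol)]
    tape[head] = write
    head += 1 if move == 'R' else -1
    return new_state, head, tape

# A tiny machine that flips bits until it hits a blank ('_').
rules = {
    ('flip', '0'): ('flip', '1', 'R'),
    ('flip', '1'): ('flip', '0', 'R'),
    ('flip', '_'): ('halt', '_', 'R'),
}
state, head, tape = 'flip', 0, {0: '1', 1: '0', 2: '1'}
while state != 'halt':
    state, head, tape = tm_step(state, head, tape, rules)
print(tape)  # {0: '0', 1: '1', 2: '0', 3: '_'}
```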

However, it seems to me that computational power (the power to change memory) exists along a spectrum from centralized to decentralized, from serial to parallel, and there’s no cohesive description articulating what that spectrum means.

I think if we had this description (mathematically articulated) we’d have the perfect relationship between memory and computation: on one side of the spectrum they are separate, and on the network side they’re one and the same thing. Their interaction with each other is, perhaps, the mathematical description of what intelligence actually is. Or what it ideally is.
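The “one and the same thing” end of the spectrum has a classic illustration in the Hopfield network (my choice of example, not something from the posts above): the stored patterns are the weight matrix, and recalling a pattern from a corrupted cue is nothing but letting those same weights settle:

```python
import numpy as np

# Hopfield-style sketch: the stored patterns ARE the weight matrix,
# and "computing" (recalling from a noisy cue) is just letting the
# same weights settle. Memory and computation are one object, W.
patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]])
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

x = np.array([1, -1, 1, -1, 1, 1])   # noisy cue (last bit flipped)
for _ in range(10):
    x = np.sign(W @ x)               # recall = computation with memory
print(x)                             # converges to the first pattern
```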

I mean, when Jeff says the brain is not a computer, it’s a memory structure, that’s what I’m talking about. It changes over time, and the way it changes is based on its base structure, which is approximated by HTM theory. That structure is highly constrained (relative to the vast combinatorial complexity it could allow) yet highly variable: able to approximate almost any model, that is to say, able to mirror any data. But I’m certain that HTM is a specific case of a general memory-as-computation theory, because HTM had evolutionary pressure to evolve the way it did. That memory-as-computation model lives at one end of the spectrum. If we could define how memory interacts with computation as the two become decoupled, we’d have a complete theory of efficient information processing.

But what do I know?

I’d love to hear your thoughts; it’d be great if someone could simplify this mess. It seems overly complex to me, and I can’t quite grasp it.

The spectrum looks like this in relation to memory and computing power: meaning structure, and how the structure changes over time.

On the left side, computation is serial, so it is straightforward and simple with logic gates; on the other end it’s… complicated.

It seems to me that all along the spectrum, from centralized to decentralized, from separate computation and memory to one and the same in the form of a network, the algorithm for how computation is done on the network (the algorithm for how the network should change) should itself be changing. And I think the most generalizable context in which to view how that computational algorithm should change is a feedback loop with some environment: we should view each computational entity, every memory structure, as a sensory-motor inference engine.
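A bare-bones version of that loop (my own sketch, with a made-up sine-wave environment): the agent’s entire memory is one weight, its “computation” is a prediction, and the memory changes by exactly the amount the prediction was wrong:

```python
import math

# Minimal sensory-motor loop: predict the next sensation, then let
# prediction error drive the memory update (an LMS-style rule).
w = 0.0
prev = 0.0
for t in range(200):
    prediction = w * prev             # infer the next sensation
    sensation = math.sin(0.1 * t)     # what the environment actually does
    error = sensation - prediction    # prediction error closes the loop
    w += 0.1 * error * prev           # memory change driven by the error
    prev = sensation
print(w)  # settles near the best one-step linear predictor (~cos(0.1))
```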

You know, it seems like on the centralized end there’s very little feedback between computation and structure; in other words, the loop is really big, looping through the external environment. Programmers need to be involved to change the way programs are written (how computation is encoded in memory). But on the right-hand side, the feedback loops are many and more interconnected. In other words, one of the key metrics, I think, has to be self-referential feedback loops. But I don’t think that’s the whole story, because the optimal solution is not simply an RNN for everything.
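For reference, here is that self-referential loop reduced to one line of state update in a generic recurrent net (a sketch only; as said above, this alone isn’t being claimed as the answer):

```python
import numpy as np

# Generic recurrent step: the state h is computed from h itself,
# which is the self-referential feedback loop in its smallest form.
rng = np.random.default_rng(1)
W_h = rng.normal(scale=0.5, size=(3, 3))   # recurrent (self-referential) weights
W_x = rng.normal(scale=0.5, size=(3, 2))   # input weights
h = np.zeros(3)
for t in range(10):
    x = rng.normal(size=2)                 # input from the environment
    h = np.tanh(W_h @ h + W_x @ x)         # the loop: h feeds back into h
```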

Anyway, I think now I’m just ranting. Let me know how you view it.

1 Like

It seems we are asking some of the same questions:

1 Like

Maybe. You use a lot fewer words than I do, and they’re all very large. :grinning: So it’s hard for me to understand. :disappointed_relieved:

1 Like

If English is not your first language, have you considered dropping the text into Google Translate?
The words I use are the closest to an exact match to what I am thinking.

1 Like

lol, no, English is my first language. I just don’t know the exact context of any of the words, because I’m not used to using them in my vernacular.

I find that compressing as much meaning as possible into as few words as possible means I’m only understood by people who are on the exact same page as I am. It’s a lot of work to decompress language and unfold it out into its meaning, so I tend to use more small words rather than fewer big ones (plus I don’t know many big words). I try to unfold the ideas as much as possible for people, to make them as easy as possible to consume, and still I feel like I’m rarely thoroughly understood.

It’s just a different communication style; I’m not used to yours, so it’s hard for me to understand. I’m not an academic, I’m just a regular guy of average intelligence. You seem to have a taller language hierarchy than I do.

1 Like

I have been doing this for decades and I am sure that each line would unroll into a post all by itself.
Understanding the whole of neuroscience is not an easy task.
I don’t know that there is a royal road, but I do think that the named computational tasks are the things you are trying to uncover. You are right: it’s not like traditional logic gates. The functions in the first part of the list are very different from what a computer person expects.

If you want to work through the list, I would be willing to help.
Perhaps you would take each item and tell me what you think it might mean, and I can fill in what you are missing. This could be good for others reading the thread.

My list is split into two sections: the first is what the little mini-columns are doing by themselves; the second is how the macro-columns fit together. Keep in mind that this is what is in the nodes that the network is connecting together. The way they are connected (topology) leads to how the nodes work together in a larger sense.

JH is correct that the cortex is doing the same thing everywhere, and the connections between these little computational units (cortical columns) build toward a larger picture.

This fits into your picture that local memory and connections work together.

2 Likes