Agency, Identity and Knowledge Transfer

Sure, it makes sense.
However, I think it would be more productive to accept that the human mind has emergent properties which are not useful from the control-theory perspective. Otherwise, we would need to agree that binge-watching instead of, for example, preparing for an exam is a beneficial adaptive behaviour :slight_smile:

Or we could agree that the basic drives do not always converge on the optimal behavior.
A moth flying around a light bulb is trapped in the outcome of its control loops.

As I just intimated, the loading of data to shape behavior results in a certain degree of plasticity. The genetic innovations of social groups and external memory bring high flexibility and a much more rapid response to changing environmental situations - but also more variation in the ways that behavior can result in sub-optimal expression of basic drives.

1 Like

@rhyolight, are you sure? How exactly do you know? I don’t think this is true.

Let me be very specific about what I think is incorrect here: it’s impossible that the structures you created on the original sensor set have absolutely no use in learning the structure of the environment using the new sensor set. (Maybe that’s not what you’re claiming; maybe you’re claiming they have no direct use, which is probably true - they can’t just be ported over, but they are indirectly useful.)

Those structures are indirectly useful because they’re analogous. They guide your predictions. That’s what intelligence does - reasoning by analogy. So if you find a structure in your brain that seems to have predictive usefulness when creating a new model, you will naturally use it to guide your predictions and therefore your attention (how you go about discovering the new structure).

Just as you show (later in the video) that the agent can use its developed knowledge of how its sensors work to explore a new environment, so too can it use its developed knowledge of its abstract conception of the environment to learn how to explore the same environment with new sensors. They’re two sides of the same coin; they have to be.

That’s not to say it doesn’t have to build a whole new conception - it does. But the existing model is necessarily analogous to what its new conception must be, and therefore useful in helping it learn a new conception of the same environment with new sensors. A generally intelligent agent will capitalize on that analogy, because that’s essentially what it means to be generally intelligent: to recognize the analogy and capitalize on it.
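One toy way to picture this (purely illustrative; none of these names or structures come from the video): an agent that learned a room's adjacency structure with one sensor set can reuse that abstract graph to prioritize which transitions to test when relearning the room with new sensors, instead of exploring blindly.

```python
# Toy sketch: reuse an abstract environment graph learned with old
# sensors to guide exploration with new sensors. All names and the
# graph itself are illustrative assumptions.

old_model = {               # abstract structure: room adjacency
    "A": ["B"],
    "B": ["A", "C"],
    "C": ["B"],
}

def exploration_plan(model, start):
    """Visit rooms in the order the old model predicts they connect,
    so the new sensors are pointed at transitions known to exist."""
    plan, seen, stack = [], set(), [start]
    while stack:
        room = stack.pop()
        if room in seen:
            continue
        seen.add(room)
        plan.append(room)
        stack.extend(model[room])
    return plan

print(exploration_plan(old_model, "A"))  # ['A', 'B', 'C']
```

The old model never supplies the new sensory representations themselves - only a prior over where to look, which is the "indirectly useful" sense argued for above.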


Yes, this is what I was claiming.


I whole-heartedly agree then :slight_smile:

1 Like

I think both are present in people’s minds (sub-optimal control loops, and emergent properties which are not useful from the control-theory perspective), but only the first applies to a moth.

If I may suggest an even simpler example: every six months my ex-wife would completely rearrange the furniture. She is quite good at it and imaginative. (I suck, btw.)
So we see many functions involved, ranging from visualization to long-term memory.
And yet there are many mental constructions we promptly discard, retaining a description rather than any details. We do remember that they are not wanted, though.
These are highly integrated, and yet some things, such as agency or intent, are foundational. So that is a behavioralist view, perhaps. I guess it’s my view prior to reading the documents.


“Rather than the minimizing-energy-expenditure model as described by Friston, Cisek replaces this with control loops; some error signal that is reduced by some response.”
That’s not only my view but my most frequently used definition of consciousness. It means different things in different contexts. Define your terms (for purposes of discussion), eh?
So at the most fundamental level we have a state, intent (rules, etc.) and a loop in an anatomical and therefore purposeful hierarchy.
From there we can look at the influence of energy economy, the medium, memory, prediction vs. correction, and more complex properties specific to mammals.
The repetition of functionality in different contexts is central to biology and the universe more broadly.
Your comment shows how this certainly applies to the brain. Dialog, for example, is very high level and has multiple error signals.
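The "error signal reduced by some response" idea quoted above can be sketched in a few lines. This is a minimal illustration, not anyone's actual model; the setpoint, gain, and step count are arbitrary assumptions:

```python
# Minimal sketch of a control loop: an error signal (setpoint - state)
# is repeatedly reduced by a response (a proportional correction).
# All values here are illustrative assumptions.

def control_loop(state, setpoint, gain=0.5, steps=20):
    """Drive `state` toward `setpoint` by acting against the error."""
    for _ in range(steps):
        error = setpoint - state   # the error signal
        state += gain * error      # the response that reduces it
    return state

print(round(control_loop(state=0.0, setpoint=10.0), 3))  # 10.0
```

With a gain that is too large (say 2.5), the same loop overshoots and diverges - one way to picture the moth circling the bulb: a loop whose response never settles its error.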

Thought experiments can be problematic in both science and philosophy, so I made up a new term: the “canonical case fallacy”.
With the former, you can constrain the thought experiment to rationalize your theory. With the latter, the failure to extend it to more complex examples leads to believing it is universal. Philosophy is composed of countless partial truths converted to ideology.
Would you mind starting a thread on your views around free will sometime?

1 Like

I agree. I am skeptical about everything, including my current points of view.

There is a long thread that grew out of a post by @morning_star called Determinism in which a few of us discuss our views on free will. But it’s about 200 posts long.

That’s what I am looking for, thanks. I am (by design) a determinist and rabid reductionist, allergic to speaking in jargon.
I suggest there is no conflict between free will and agency vs. determinism, but you are better served using “agency”. Essentially, I view stochastic processes (in physics) as emergent phenomena we can’t observe or perhaps calculate. This resolves the contradiction; I don’t have to be correct. Lol.
Linguistically, ‘free will’ is about as ambiguous as the word ‘freedom’. I tend to look at it as equivalent to agency. I will have to make sure I don’t range too far off topic here.

Sure. The sanity check is your friend. While it is our main way of learning, the first thing I look at is any bias in my question or phrasing.

Could you explain what you mean by “heavily redundant” here? It’s a design concern, but it has many aspects.
My own definition evolved over time, but it includes orderly failure, fault tolerance, replication, synchronization and backups.
So it’s cross-cutting, but I can see this work has redundancy intrinsically in the signal. I have had a brief look at sparsity and related videos here.
My intent was to implement (provisionally, Numenta’s) utility functions via an API, but my project argues that science’s eclipsed view ignores biology across domains - as in actively, anthropomorphically, unconsciously (biases) denies it.
Not you guys, though. You’re the cool kids. :sunglasses: :+1: All of this work is biologically driven.
Actually, because of Matt, there are two more important approaches I am considering that would support HTM’s paradigm.
I am trying to visualize an analogy to a federation or democracy. Or how to implement it using your constraints.
Second, I have a layer that could crowd-source (distributed) literal mini-columns - the cells, I mean. In my version they were humans; they would still be tied to them in this case.
(In design phase) a statistically based categorization needing temporal, additive and continuous update? That’s certainly an extremely good match. Plus sparsity. Great.
So while there are disadvantages to being biologically constrained from the start it offers huge benefits in the domain here. In my view to humanity in a medical context.
Although that was a goal, it was easier and more practical to implement it on the system later, in an isolated way.
But could you do it from the ground up? In theory you could and maybe get a better result. I don’t know yet.
So err… the project makes use of archetypical roles when commenting on social media (i.e. here). So uh, that being the case…
We… err again… therefore feel it is imperative that we build a human hive mind to defeat our arch nemesis and source of all evil… Elon Musk…
Now it would be a horror to build an electronic pseudo-human. However, we could implement Numenta’s columns and mini-columns in the design safely.

If you don’t mind, I’ll reply in a few days. I don’t have the heart at the moment.


Of course, Falco. I posted this feedback now with that in mind. Take all the time you need with any of my questions; they have been on the plate for quite a while. No hurry, just a distraction.
I intellectualize when I am stressed and might not understand the boundaries around this. Having followed your comments on YouTube for a while, I thought I would reach out instead of offering a platitude.
I didn’t know your friend, but he was central in a case study on Numenta (in the public domain). So in a very strange way I came to know him quite well.
In three years there are three people I decided I would trust. Out of around two hundred. Your friend Matt was one of the three. You should know that. Take care.

1 Like

First off, what a strange thing to say. And why do you speak in the plural form? How serious do I have to take the rest of your post? I don’t think EM is the source of all evil.

When you consider that everything we know about the world is encoded in the neocortex over trillions of synapses in sparse representations (as Numenta describes based on many neurological observations), then you can imagine that most synapses (i.e. bits, in combination with many others) represent a feature of an object, or a concept. All these bits (of that object’s set of features) must connect at roughly the same time for the idea of that specific object to form in our mind. Since most of these synapses have stochastic behaviors, this system can only work if there is sufficient redundancy.
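A toy way to see why that redundancy matters (an illustration with made-up numbers, not Numenta's actual encoder): even when a quarter of a sparse pattern's active bits fail stochastically, the overlap with the stored pattern stays far above what chance would produce, so the object is still recognized.

```python
import random

# Toy illustration (made-up numbers, not Numenta's actual encoder):
# a sparse pattern of W active bits out of N survives the random
# failure of some of its bits, because the surviving overlap is
# still far larger than any chance overlap between unrelated codes.
N, W = 2048, 40
random.seed(1)

stored = set(random.sample(range(N), W))            # learned pattern
noisy = set(random.sample(sorted(stored), W - 10))  # 10 bits fail

print(len(stored & noisy))  # 30 of 40 bits still overlap
```

By comparison, the expected overlap between two unrelated 40-of-2048 patterns is about W*W/N ≈ 0.78 bits, so a 30-bit overlap is unambiguous evidence of the same object.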

When I wrote that quote, I didn’t know Numenta’s model very well yet, and it has evolved quite a bit since. (I am still far from understanding everything, to be honest.) But one of the questions I have is how exactly new information finds its way through the tree structure of the neocortex to select (so to speak) a synapse to represent this information - and how, later, through the same pathway, this synapse communicates its information to the rest of the brain (to generate behavior).

To use a metaphor from microcircuitry that is quite wrong: what is the address bus of the neocortex?

This thread talks about the difficulty of hypothetically transposing the structure of one brain to another device. I think the problem lies in the meta-data of the address space of the neocortex. If there is a way to understand how this works, then in principle there must be a way to read this meta-data (the address space) together with the data itself, and therefore restore it.

But this is of course very speculative.


2 posts were split to a new topic: Is Elon Musk the source of all evil?

I am very new to this. Thanks, Falco - my understanding of this neural-net redundancy is the same as yours. That’s one thing I was watching for, and I find mathematics-based (NN) sparsity quite amazing, the voting and predictiveness more generally.
There isn’t much talk about the lateral connections, which is where I suspect you will find your answer, if there are any studies Numenta has not yet found. But they said they looked.
I am rooted in biology but my software designs are not, so I would examine that at a more functional (speculative) level, in a few ways.
It’s one of my jokes: it doesn’t have to be correct, it has to match the model. Thanks again.

I am speculating on that, but the idea is poorly formed. What it looks like seems to relate directly to the redundancy too: object identification rests on line segments and textures that are fuzzy.
So here’s the mystery to me. The conscious mental act of categorizing a new object (or sub-category) connects to that state on an appropriately higher (virtual) layer to create that nerve or “cluster”.
However, the flow may actually be feed-forward, where the formation occurs first, giving rise to the idea. Myself, I believe it is fully integrated and you likely can’t disentangle the layers, but you could watch it happen.
I am an extreme reductionist, but that serves better as a view. You have to separate emergent media.

FYI, I have split this into a separate thread. Feel free to discuss evil and Elon Musk over there.