Theory of abstraction

I’ve been thinking about a theory of abstraction in the context of HTM systems. My question is: what research has been done on feeding the output of HTM systems into other HTM systems? I have some ideas that seem theoretically sound to me, but I would like to know if there already exist experimental or computational observations.
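To make the question concrete, here is a toy sketch of the kind of stacking I mean (plain Python; `make_toy_pooler`, the sizes, and the winner-take-all rule are all illustrative stand-ins, not real HTM algorithms):

```python
import random

random.seed(42)

N = 1024   # cells per toy region (illustrative size)
K = 20     # active cells per SDR (~2% sparsity)

def make_toy_pooler(n_cells=N, k_active=K, n_inputs=N):
    """A crude stand-in for an HTM region: each cell has a fixed random
    set of input connections, and the k cells with the most overlap
    with the input SDR become active."""
    synapses = [set(random.sample(range(n_inputs), 64)) for _ in range(n_cells)]
    def pool(input_sdr):
        overlaps = sorted(
            ((len(syn & input_sdr), cell) for cell, syn in enumerate(synapses)),
            reverse=True,
        )
        return {cell for _, cell in overlaps[:k_active]}
    return pool

region_1 = make_toy_pooler()   # "sensory" region
region_2 = make_toy_pooler()   # region whose only input is region 1's output

sensor_sdr = set(random.sample(range(N), K))  # a fake sensory SDR
level_1 = region_1(sensor_sdr)                # first-level representation
level_2 = region_2(level_1)                   # an HTM's output fed into another HTM

print(sorted(level_2))
```

The part I’m asking about is what the level-2 representations end up encoding once the stages actually learn; this toy version has no learning at all.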

Are you talking about abstract compsci models?

1 Like

No, mental abstractions in terms of SDR systems.

Does hierarchy fit what you are talking about? Numenta’s current focus is a single cortical region, although they did some experiments with hierarchy before the current focus on objects and locations.

Do you mean something closer to intelligence than object recognition?

2 Likes

Abstraction is a hierarchy above object recognition, but I wouldn’t classify feeding an HTM into an HTM as an abstraction.

I think you’re talking about abstraction as in tying information to things that don’t have a literal space in the real world.

In literal object representation, we can tie the sensory features we’ve observed in any space to allocentric object space. For example, if suddenly all coffee cups in the world had a new bumpy texture, over time your model of coffee cups would slowly change as you handled more and different cups with this texture in the real world.

I think about abstract concepts in the exact same way, except we create them; they are not observed directly by sensors. We use our experience with objects in space to imagine new objects that we have not observed in space. The first example we see of this in archeology is the Lion Man. The artist took two known concepts that had been observed in nature and created something new in an allocentric space. Then they literally created it in reality, because that is what artists do.

5 Likes

So within the scribble below is my idea for Artificial General Intelligence. There are still a few questions I have concerning how concept-formation can be done using SDRs. Right now I’m interested in the idea of Disjoint Unions of Sets, Logarithms, Integrals, and Spatial Transformations, with the idea being that somehow a spatial relationship is learned about related SDRs that causes Ampliation.
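For example, here is a minimal sketch of the set side of that idea (pure Python; the `lion` and `man` SDRs and the subsampling step are illustrative assumptions, and a plain union plus subsampling is only one crude reading of the disjoint-union idea):

```python
import random

random.seed(0)

N = 2048  # SDR space size (illustrative)
K = 40    # active bits per concept (~2% sparsity)

# Two "observed" concept SDRs, stand-ins for representations learned
# from real sensory experience.
lion = set(random.sample(range(N), K))
man  = set(random.sample(range(N), K))

# Naive union of the two concepts: denser than either parent.
combined = lion | man

# Subsample back down to K bits so the created concept keeps the same
# sparsity as its parents while still overlapping both of them.
lion_man = set(random.sample(sorted(combined), K))

def overlap(a, b):
    return len(a & b)

print("lion vs man:     ", overlap(lion, man))       # near 0 by chance
print("lion_man vs lion:", overlap(lion_man, lion))  # roughly K/2
print("lion_man vs man: ", overlap(lion_man, man))   # roughly K/2
```

The combined SDR overlaps both parents heavily while matching neither exactly, which is roughly the property a created concept like the lion man should have; the spatial-relationship part is what I haven’t figured out.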

Any discussion is appreciated. I’ve been thinking about this for a while and would like to publish something eventually, but I’ve never published anything before, and I haven’t yet figured out the specifics of this crucial part.

1 Like

Your schema looks as clear as mine.

3 Likes

I really wish this forum had a ‘Ha Ha’ button.

1 Like

Clarity seems to disappear as you take a complex system and begin to look at it from multiple perspectives at the same time. I suppose the complications would decrease as the perspectives do, but where’s the fun in that? :sweat_smile:

It does seem very abstract to me.
Ayn Rand is clearly an ego worthy of consideration.
I would have expected a reference to Chaos Theory but perhaps that is too obvious?
I’m still working through the stunning inclusion of Stokes’ Theorem - inspired genius or utter madness?

1 Like

Why not both?

The below is from here

“In 1969 in a symposium on schizophrenia and the double bind at the National Institute of Mental Health, the cybernetician and ethnographer Gregory Bateson stood before an audience of some of the most prominent psychiatrists and psychologists in the world and proceeded to discuss the mental life of animals. This was not a question of expertise; Bateson was known as the inventor of the term “double bind” and a pioneer in creating models to treat addiction and wartime trauma, but he did not wish to discuss those cases. Rather, he invoked, by example, a porpoise.[1]

This porpoise had been trained at a Navy research facility to perform tricks and other trained acts in return for fish. One day, her trainers started a new regimen. They deprived her of food unless she produced a new trick. Starved if she repeated the same act, but also if she did not perform, the porpoise was trapped. This experiment was repeated with numerous porpoises, usually culminating in extreme aggression, and a descent into what from an anthropomorphic perspective might be labeled disaffection, confusion, antisocial, and violent behavior. Bateson, with his usual lack of reservation, was ready to label these dolphins as suffering the paranoid form of schizophrenia. The anthropologist was at pains to remind his audience, however, that before rushing to conclusions about genetic predeterminacy or innate typologies, the good doctors should recall that these psychotic porpoises were acting very reasonably and rationally. In fact, they were doing exactly what their training as animals in a navy laboratory would lead them to do. Their problem was that they had two conflicting signals. They had been taught to obey and be rewarded. But now obedience bought punishment and so did disobedience. The poor animals, having no perspective on their situation as laboratory experiments, were naturally breaking apart—fissuring their personalities (and Bateson thought they had them) in efforts to be both rebellious and compliant, but above all to act as they had been taught. The moral of the story being that to act rationally in a set pattern following given rules might also be to act psychotically.

This one porpoise, however, appeared to possess a good memory. She was capable of other things. Bateson related how, between the fourteenth and fifteenth demonstration, the porpoise “appeared much excited,” and for her final performance she gave an “elaborate” display, including multiple pieces of behavior of which four were “entirely new—never before observed in this species of animal.” These were not solely genetically endowed abilities; they were learned, the result of an experiment in time. This process in which the subject—whether a patient or a dolphin—uses the memories of other interactions and other situations to transform his or her actions within the immediate scenario can become the very seat of innovation. The dolphin’s ego (in so far as we decide she has one) was sufficiently weakened to be reformed, developing new attachments to objects in its environment and to memories in its past. This rewired network of relations can lead to emergence through the recontextualization of the situation within which the confused and conflicted animal finds itself:”

1 Like

OK, now you have my attention. I found the following posted (over a decade ago) on the Julian Jaynes Society forum. Bateson is new to me; his work and thought do not impact my own thinking other than to support and confirm what I believe about how it works, which is mainly influenced by Jaynes.

Bateson’s Definition of Mind Extremely Useful Here…

by aesthetician » Mon Nov 30, 2009 9:22 am

I studied with Gregory Bateson in the Seventies. I think it would behoove many Jaynes scholars to review his definition of “mind” (best book for this is Mind and Nature: A Necessary Unity). It would appear that Bateson and Jaynes had no contact–pity, because they seem to have been reaching out towards one another (Bateson as scientist reaching out towards philosophy and psychology; Jaynes as philosopher/psychologist reaching out towards science) in their respective and monumentally original work. And both observed schizophrenics and schizophrenia in establishing substantial portions of their seminal theories and hypotheses.

For Jaynes scholars I think it would be particularly useful because Bateson beautifully shows how every entity that can be identified as a “mind” (individual human, species, community, culture, planet, solar system, etc.) DOES employ and exhibit consciousness–down to the subatomic level, beginning with the ability to distinguish between Is/Is Not, which is the primary and primeval basis for the differentiation between Self and Other.

In this light, it becomes easier to understand and perhaps even resolve many arguments about the definition of “consciousness” and one begins to perceive, experience and apprehend “consciousness” as a CONTINUUM THAT OPERATES IN RELATED BUT DIFFERING BANDWIDTHS OF ENERGY–EACH ONE WITH ITS PARTICULAR FORM OF LOGIC that resonates (interacts or intersects energetically), but does not necessarily equate, with the other bandwidths of consciousness.

In the light of Bateson’s work, Jaynes’ work becomes much less “antagonistic” to other theories, for a start. And one can begin to perceive Jaynes’ definition of consciousness as a latest form of it, rather than a completely “new” one.

1 Like

Random question: is it possible for two random variables to be independent and yet have a causal relationship between them?

1 Like

Jaynes and Bateson are cool. John Boyd’s paper

DESTRUCTION AND CREATION

While not directly linked, it does add some interesting perspective. It’s a short 8-page paper, and the section under the heading “CREATING CONCEPTS” especially seems to reinforce Bateson, which reinforces Jaynes.

Not sure of the specific relevance to the topic, but I’ll try to answer the question.

First, what do we mean by “randomness”? There are things that operate in ways which, given the information we have about observed phenomena, appear to be best understood under statistical mechanics. However, that may be more a consequence of philosophical choices made by mathematicians and scientists during the early 20th century, and of the efficacy that came with statistical modeling.

Now we can talk about contextual randomness. For instance, why is a coin flip, which has a 50% chance of heads or tails, said to be random? Because by abstracting away all of the other factors that lead a specific coin to land the way it does, one can talk about generalized coin flips. In this generalization one ignores the fact that each coin, when flipped, is determined by all of the relevant factors.

When talking about randomness, one treats the range of relevant factors behind each individual outcome as unimportant. It is, funnily enough, an act of abstraction.

The answer to your question has to do with hidden variables, and with the fact that causation applies to real entities (which are deterministic, not random). To know whether two variables are truly independent, you have to rule out confounding. I’d look into Mill’s methods for more on that.
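As a toy numerical illustration (a standard textbook-style construction, not something from this thread; the variables `x`, `z`, `y` and the simulation are illustrative assumptions): let Z be a hidden fair ±1 coin and let Y = X · Z. Then Y is literally computed from X, yet X and Y come out statistically independent, because the hidden Z scrambles the visible correlation:

```python
import random

random.seed(1)

n = 100_000
count_x1 = count_y1 = count_y1_and_x1 = 0

for _ in range(n):
    x = random.choice([-1, 1])   # the observed cause
    z = random.choice([-1, 1])   # a hidden fair coin, independent of x
    y = x * z                    # y is a deterministic function of x and z
    count_x1 += (x == 1)
    count_y1 += (y == 1)
    count_y1_and_x1 += (x == 1 and y == 1)

print("P(Y=1)     ~", count_y1 / n)                 # ~0.5
print("P(Y=1|X=1) ~", count_y1_and_x1 / count_x1)   # also ~0.5
# The conditional matches the marginal, so X and Y measure as
# independent even though X causally participates in producing Y.
```

So statistical independence alone cannot rule out a causal link; the hidden factor has to be accounted for, which is the hidden-variables point above.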

Not sure if I’ve answered the question appropriately, but hopefully it’s something to think about.

2 Likes

I just read Stanley Mulaik’s paper The Metaphoric Origins of Objectivity, Subjectivity, and Consciousness in the Direct Perception of Reality and found it to be the best overview of Jaynes’s concept of consciousness in terms of metaphoric structure, as compared to Lakoff & Johnson’s ToM based on metaphor.

As Jaynes has stated:

Consciousness is primarily an analog ‘I’ ‘narratizing’ in a ‘mind-space,’ whose features are built up on the basis of metaphors. Present computer programs do not work on the basis of metaphors. Even if computers could simulate metaphoric processes, they still would not have the complex repertoire of physical behavior activities over time to utilize as metaphiers to bring consciousness into being. Computers, therefore, are not — and cannot be — conscious.

What we need for AGI is a conscious machine; that’s sentience. To get there, we need a robot with a ‘metaphoric brain’.

I agree 100%. However, metaphor is a secondary, not primary, function of language. Concept-formation is primary; dead metaphor is a way in which old concepts are used in new domains to create new ones.

Metaphor is not a function of language so much as it is a vehicle or constructor of language. As for concepts, I go with JJ:

A further major confusion about consciousness is the belief that it is specifically and uniquely the place where concepts are formed.

Concepts are simply classes of behaviorally equivalent things. Root concepts are prior to experience. They are fundamental to the aptic structures that allow behavior to occur at all.

Aptic structures are the neurological basis of aptitudes that are composed of an innate evolved aptic paradigm plus the results of experience in development. The term is the heart of an unpublished essay of mine and is meant to replace such problematic words as instincts. They are organizations of the brain, always partially innate, that make the organism apt to behave in a certain way under certain conditions.

Concepts are one of the things in Jaynes’s philosophy that he is actually wrong about. He is referring to a phenomenon that may be real, but it isn’t concepts. However, he is correct in understanding that there is a teleological and neurological basis for concepts. I think he’s actually referring to proto-conceptual thought in animals and mistaking that for conceptualization in and of itself.