Chaos/reservoir computing and sequential cognitive models like HTM

An Algorithmic God | Gregory Chaitin | Inference.

Information and computation are better than matter and energy.

Matter and energy have a universal, definitive “interpretation” (i.e. laws of physics), provided by the god of our universe.

Information and computation are open to whatever interpretations the "designer" / "programmer" likes them to have.

To interpret pieces of information, to carry out computations, is effectively to "play god" over the simulated universe which comes into existence in this way.

Should we feel good or bad about this?

1 Like

yeah, so much so that I couldn't even grasp what that looks like.

if I get what you mean correctly, then that would mean that some of the unseen combinations AiBj are false in reality, yet we deem them "correct" and express them in language anyway?

it kinda looks like attempted grokking with insufficient data?

1 Like

I’m not aware of a cause-effect basis in Buddhism. But certainly these ideas of inherent contradictions in “meaning” align well with themes in Eastern philosophy. I’m thinking mostly of Daoism, Yin/Yang.

Western philosophy is catching up. Subjective truth has been an increasing theme for the last 200 years there too (to the detriment of our societies! One thread of this has just been to equate subjective “truth” with political power! We desperately need to anchor truth in physical reality again!)

And also in physics, chaos, QM, and mathematics, as I say.

That, I don't think at all. It may be kind of the opposite. Computation may be the only firm foundation for meaning! It's very easy to generate contradictions in sets. In fact, that they appear naturally has been a big problem for maths! This is at the core of the famous Russell's Paradox, which then led to Goedel's proof. Which was based, to the extent I've gone into the details, on "diagonalization", I think essentially listing and ordering things in different ways. (It's not hard to understand. Intuitively we know that if you take a group of people and order them by height, you'll disorder them wrt say, whatever… golf handicap, and ordering by golf handicap will disorder them wrt height, etc.)
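To make that ordering intuition concrete, here's a trivial sketch (Python, with made-up people and numbers, purely illustrative): sorting the same group by one attribute scrambles it with respect to another.

    # Toy data, invented for illustration: the same people, two attributes.
    people = [
        {"name": "Ann",  "height_cm": 180, "golf_handicap": 5},
        {"name": "Bob",  "height_cm": 165, "golf_handicap": 22},
        {"name": "Cara", "height_cm": 172, "golf_handicap": 14},
    ]

    by_height   = [p["name"] for p in sorted(people, key=lambda p: p["height_cm"])]
    by_handicap = [p["name"] for p in sorted(people, key=lambda p: p["golf_handicap"])]

    print(by_height)    # ['Bob', 'Cara', 'Ann']
    print(by_handicap)  # ['Ann', 'Cara', 'Bob'] -- the two orderings disagree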

Indeed, Chaitin traces Goedel’s proof to the foundation of computational theory. In a sense it may make computation more fundamental than maths. Maths is subjective (depending on the axioms you choose.) Computation is more flexible. It deals with different ways of ordering elements. It’s more grounded in the physical. In that sense it may be more fundamental than maths.

Here’s a very nice talk by him where he traces the idea that Goedel’s proof was the invention of computer programming languages:

A Century of Controversy Over the Foundations of Mathematics

G.J. Chaitin’s 2 March 2000 Carnegie Mellon University School of Computer Science Distinguished Lecture.
http://arxiv.org/html/nlin/0004007

All of which can be seen as a philosophical support for basing cognition on sets which can contradict.

But while tracing philosophical implications is nice, I don't want to detract from the simplicity of the implications for a cognitive model. Some people may be put off by all the woo woo of philosophy. But at a practical level this can be extremely simple.

At a practical level it says that all we may need to do to move forward with our cognitive models is accept that different possible orderings of sets may contradict.

And HTM is well positioned to implement that, because its application of networks to cognition is not trapped within a tradition of "learning".

2 Likes

I missed this bit.

It’s hard for me to know which bit is mysterious. You know people have traditionally structured language in grammar.

Here’s a proposal I put together for Ben Goertzel’s Singularity.Net initiative a few months ago (rejected because they were excluding research!) Perhaps that might provide some clarity on the structure problem:

2 Likes

Ha. Just different orderings @JarvisGoBrr . Here’s another presentation I made years ago which might help:

Among examples I attempted there, I see, were “strong tea”/“powerful tea”. “Strong” and “powerful” will share many contexts, so you might put them in a single semantic class for many purposes. But they don’t share all contexts. “Tea” is one context they don’t share. So ordering the contexts of “powerful” one way, will put it in the same class with “strong”. But ordering them another way will not. The orderings contradict.
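To make that concrete, here's a toy sketch (the context sets are invented for illustration, not corpus counts): group by shared contexts, and the class holds for some contexts and dissolves for others.

    # Invented context sets: which nouns each adjective has been seen to modify.
    contexts = {
        "strong":   {"tea", "argument", "wind", "economy"},
        "powerful": {"argument", "wind", "economy", "engine"},
    }

    shared      = contexts["strong"] & contexts["powerful"]
    only_strong = contexts["strong"] - contexts["powerful"]

    print(shared)       # {'argument', 'wind', 'economy'}: ordered by these, they class together
    print(only_strong)  # {'tea'}: ordered by this context, the class falls apart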

Other examples I've used over the years… A guy Peter Howarth had a nice analysis of "errors" made by non-native learners of English. It said things to me about how we generalize word classes. This paper, I think: Howarth, Peter, "Phraseology and Second Language Proficiency", Applied Linguistics, v19 n1, pp. 24-44, March 1998 (though my examples come from a pre-print).

What interested me was his analysis of two types of collocational disfluencies he characterized as “blends” and “overlaps”.

By “overlaps” he meant an awkward construction which was nevertheless directly motivated by the existence of an overlapping collocation:

“…attempts and researches have been done by psychologist to find…”

*do an attempt
DO a study
MAKE an attempt/a study


e.g. Howarth’s example:

*pay effort
PAY attention/a call
MAKE a call/an effort

Trying to express that as a network:

            attention
          /
      pay
    /     \
(?)        a call
    \     /
      make
          \
            an effort

What the data seems to be saying, is that beginning speakers often analogize constructions based on shared connectivity like that with “a call”.

They seem to be grouped in a category because of a shared prediction.

"pay" predicts "a call", "make" predicts "a call", and if you hypothesize a grouping based on that shared connectivity, then that might explain why beginning speakers tend to produce constructions like "pay effort". As they do in Howarth's data.

You might take that as an example where the word “pay” shares the context “a call” with “make”, but it doesn’t share context “an effort”, and “make” doesn’t share “attention”.

("Blends", by the way, were mix-ups based on more fundamental semantic crossover:

‘*appropriate policy to be taken with regard to inspections’

TAKE steps
ADOPT a policy)

The point Howarth was making was actually that overlaps were more common as early errors than "blends" were. Which supports the basic overlapping-set theory, as opposed to, say, shared embodied reference, but that's a slightly different point.

They are not false, they just depend on context. It’s not false to say that “pay” and “make” share contexts. It is just that they share some contexts and not others. So you can’t “learn” a single class for them. You have to keep all the observations, and then at run time pick out groupings based on the contexts you have at the time.
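A minimal sketch of that "keep the observations, group at run time" idea, reusing the toy pay/make data above (the flat pair representation is just my simplification for illustration):

    # Keep the raw observations as-is: (verb, object) pairs actually seen.
    observed = {
        ("pay", "attention"), ("pay", "a call"),
        ("make", "a call"), ("make", "an effort"),
    }

    def group_for(context):
        """At run time, collect the verbs observed with this context."""
        return {verb for (verb, obj) in observed if obj == context}

    print(group_for("a call"))     # {'pay', 'make'}: grouped in this context
    print(group_for("an effort"))  # {'make'}: the grouping dissolves here
    # A learner who freezes the "a call" grouping into a single fixed class
    # over-generalizes it and produces *"pay effort".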

1 Like

Per my understanding of Daoism, "ever-change" is at the central position, as is contradiction (Yin vs Yang), but the actual "cause" of change is never suggested to be visible to, or perceivable by, beings of the world. "Changes" follow some higher rules (physical, from a human standpoint), not "effects" of prior facts or actions by anyone. So the best you can do is to follow, and possibly to leverage them if you can, while staying well aware that there is always "the other side" of any thing or affair.

Buddhism sees all existences / phenomena as the effects of prior causes, and as the relationship is fixed, it's up to a mind's active decision what to do. Thus Buddhism persuades any mind to behave in ways that cause desirable effects to come back.

Isn’t theoretical computation “a” math by itself?

Theoretically, with a computer (single-threaded, so as to be on a par with a universal Turing machine) and one or more programming languages to write its software, you can create a closed, mathematically well-defined formal world, having its own verification / judgement standards, taking no input/output other than its tape/disk/RAM.

But such a computer system is as useless to average people in daily life as any math. Simon Peyton Jones (a core creator of the Glasgow Haskell Compiler, which is de facto the only living Haskell compiler today) jokingly says Haskell is useless because it lacks "effects".

Well, "effects" today include amazing and interacting with your user(s), sending/receiving network packets to communicate with other computers (cellphones, smart wearables, etc. included), and even coordinating multiple execution cores/threads within a single CPU to prevent race conditions undesirable to users as well as designers.

So "effectful" programming languages (C serving as the stereotype) are never mathematically sound, in the sense of having a self-contained, well-defined set of semantics; too many things are left "undefined". Yet programs written in them run massively today.

And the majority of computer applications (supercomputers excluded, please), i.e. videos, music, social communication, etc. used by average people, are nevertheless misuse, if viewed from the mathematical perspective.

I'd regard computers today more as automation tools and visual/acoustic vessels than as agents of computation.

I don't know set theory well enough: do sets have to be ordered? I know the relational data model well, which enables relational databases today; there you always sort/order data records "at runtime" per expressed intent (or express none and don't care about the ordering).

As a computer software engineer myself, I'm quite used to contradiction-free pieces, e.g. most data structures and algorithms developed today. So I feel quite unnatural understanding you when you talk about contradiction as something commonplace.

My software mind would see contradictions as happening only by violating some well-defined rules, and the usual data structures in the CS domain don't do that, as a given.

1 Like

Yes, the key statement of Daoism I like is:

“the one true Dao is the Dao which cannot be known”, etc. (道可道,非常道?)

“Ever-change” would fit. This says “meaning” is a process, not an artifact. There is also a “process” physics, and even a “process” biology now, which speaks to the same idea.

In the linguistics space you have Paul Hopper, Emergent Grammar, also talking about this ever changing “process”:

"The notion of emergence is a pregnant one. It is not intended to be a standard sense of origins or genealogy, not a historical question of 'how' the grammar came to be the way it 'is', but instead it takes the adjective emergent seriously as a continual movement towards structure, a postponement or 'deferral' of structure, a view of structure as always provisional, always negotiable, and in fact as epiphenomenal, that is, at least as much an effect as a cause."

https://journals.linguisticsociety.org/proceedings/index.php/BLS/article/viewFile/1834/1606

In philosophy, closest to grounding in the physical, might be Thomas Kuhn:

The Structure of Scientific Revolutions, p. 192 (Postscript)
“When I speak of knowledge embedded in shared exemplars, I am not referring to a mode of knowing that is less systematic or less analyzable than knowledge embedded in rules, laws, or criteria of identification. Instead I have in mind a manner of knowing which is misconstrued if reconstructed in terms of rules that are first abstracted from exemplars and thereafter function in their stead.”

Though Wittgenstein comes close, shifting to a basis for meaning in “games” later in his life. Quoted by Kuhn here:

Thomas Kuhn, The Structure of Scientific Revolutions, pp. 44-45:
(Quoting Ludwig Wittgenstein, Philosophical Investigations, trans. G. E. M. Anscombe, pp. 31-36.)

'“What need we know, Wittgenstein asked, in order that we apply terms like ‘chair’, or ‘leaf’, or ‘game’ unequivocally and without provoking argument?”

‘That question is very old and has generally been answered by saying that we must know, consciously or intuitively, what a chair, or a leaf, or game is. We must, that is, grasp some set of attributes that all games and only games have in common. Wittgenstein, however, concluded that, given the way we use language and the sort of world to which we apply it, there need be no such set of characteristics. Though a discussion of some of the attributes shared by a number of games or chairs or leaves often helps us learn how to employ the corresponding term, there is no set of characteristics that is simultaneously applicable to all members of the class and to them alone. Instead, confronted with a previously unobserved activity, we apply the term ‘game’ because what we are seeing bears a close “family resemblance” to a number of the activities that we have previously learned to call by that name. For Wittgenstein, in short, games, and chairs, and leaves are natural families, each constituted by a network of overlapping and crisscross resemblances. The existence of such a network sufficiently accounts for our success in identifying the corresponding object or activity. Only if the families we named overlapped and merged gradually into one another–only, that is, if there were no natural families–would our success identifying and naming provide evidence for a set of common characteristics corresponding to each of the class names we employ.’

In philosophy you can find it all over the place. Even H. G. Wells!

“…My opening scepticism is essentially a doubt of the objective reality of classification.”

https://www.marxists.org/reference/archive/hgwells/1905/modern-utopia/appendix.htm

I can go on and on along the philosophy thread of this! As I say, after I noticed this for what was happening when I tried to learn grammar, it started popping up all over the place.

Better stop there. As I say, I don’t want to detract from the simplicity of its application to AI. The application is very simple. It might be better to focus there.

No doubt. It always reminds me of the preamble I remember from many physics lectures: let us assume the system is linear!! If you make the right assumptions, you can always avoid inconvenient truths!

If you want to see this contradictory ordering dynamic playing out in computer science space, though, you might look at the drift from OOP to functional programming.

Why has functional programming come to dominate object oriented programming in recent years?

Rich Hickey has given some nice talks on why object models are always imperfect, and that has led to a renewed emphasis on ad-hoc orderings of raw data in functional programming.

There’s also this series by Bartosz Milewski which goes into the relationship of programming theory to the mathematical field of category theory:

"Maybe composability is not a property of nature"
Category Theory 1.1: Motivation and Philosophy
Bartosz Milewski

Continuing the category theory theme, in the compositional semantics space, the first other work I came across expressing similar ideas was Bob Coecke. Also a category theory guy.

(Category theory, BTW, being invented to deal with the incompleteness/contradictory character of mathematics demonstrated by Goedel.)

From quantum foundations via natural language meaning to a theory of everything

"In this paper we argue for a paradigmatic shift from ‘reductionism’ to ‘togetherness’.

Being a maths guy, Coecke is very tied up in the parallel to the mathematical abstractions of category theory. And he’s taken the parallel to QM maths so far as to be building a company to analyse language using quantum computing! I don’t think we need to go that far. I think Coecke is squeezing language into a QM formalism, and then using quantum computing to pick it out again!

But the insights of subjectivity of category on the environment, shared with QM, I think are valid.

You can find the QM maths parallel being drawn elsewhere. For instance in this talk with my namesake the famous neurobiologist Walter Freeman:

NONLINEAR BRAIN DYNAMICS AND MANY-BODY FIELD DYNAMICS
Walter J. Freeman and Giuseppe Vitiello

I can go on and on along this angle too.

But like I say, the application to cognitive modeling may be very simple. It might be best to concentrate on that simplicity.

2 Likes

humm, sounds a lot like the ideas from HTM but it makes me think we need information to flow both ways in time, not just from past to future in order for it to work.

which reminds me… doesn't the rat hippocampus replay events backwards sometimes?

1 Like

Yeah, feedback. You’re right. I thought I would need some kind of feedback with the experiment I did too.

But what I found was that networks of word sequences like:

          attention
          /
      pay
    /     \
(?)         a call
    \     /
      make
          \
            an effort

Actually loop back around naturally. They’ll be followed by common words like “the”, which then in the network also precede all those words. So activation naturally fed back, and I got oscillations anyway. It surprised me. I had been wondering what the feedback connectivity would be, and thought it might be a big problem. But as it turned out, the only thing I needed to do to get oscillations, was to apply inhibition. And that wasn’t even very hard. I just connected inhibition everywhere. Then turned the inhibition up and down until it didn’t immediately kill activation, and activation didn’t blow up, and the network oscillated.
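Roughly the kind of loop I mean, as a toy sketch (all the numbers, the update rule and the tiny vocabulary are my own assumptions for illustration, nothing like the real Brainsim II setup): excitatory links follow word order, every unit also feeds one global inhibition term, and you tune a single inhibition gain by hand until activity neither dies nor blows up.

    import random

    # Toy word-sequence network: directed excitatory edges follow observed word order.
    edges = {
        "pay": ["attention", "a call"],
        "make": ["a call", "an effort"],
        "attention": ["the"], "a call": ["the"], "an effort": ["the"],
        "the": ["pay", "make"],   # common words loop activation back around
    }
    words = list(edges)
    act = {w: random.random() for w in words}

    EXCITATION = 0.6   # assumed excitatory weight
    INHIBITION = 0.8   # assumed global inhibition gain; tune up and down by hand

    for step in range(20):
        total = sum(act.values())
        new = {}
        for w in words:
            drive = EXCITATION * sum(act[src] for src in words if w in edges[src])
            new[w] = max(0.0, drive - INHIBITION * total / len(words))
        act = new
        print(step, round(sum(act.values()), 3))   # watch for oscillation vs. decay/blow-up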

That might be naive. It may turn out that connectivity is not enough. For instance, when words are not represented by a single node, but an… SDR… I think that will code for distance of connection too (was it you who was pointing out that would be necessary?), and it might distinguish feedback paths and kill them somehow.

But you learn a lot when you try things. Things I thought would be hard turn out not to be, and you find out what isn’t working and needs to change. Next step, I think what I really want to do is figure out a way to break down the spike time patterns from the raster plot into hierarchies. Then it should be possible to play around and figure out where the ideas are still naive.

1 Like

Any type of hash algorithm before a weighted sum allows the weighted sum to efficiently store <vector, scalar> responses. With reservoir computing the hash algorithm is a locality sensitive hash with various amounts of non-linearity thrown in.
The hash algorithm could have binarized outputs or even, as a generalisation, continuously variable outputs.
Unfortunately the information storage capacity of the weighted sum is really poorly understood, for such a basic foundational thing.
For example, used under capacity, there is repetition-code-type error correction, with a peculiar bias toward the weight vector.
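A minimal sketch of the pattern (the random-projection hash, the sizes and the least-squares readout below are just illustrative choices, not the only way to do it):

    import numpy as np

    rng = np.random.default_rng(0)
    DIM_IN, DIM_HASH = 16, 256          # assumed sizes, just for the sketch

    # Locality-sensitive "hash": random projection followed by a sign nonlinearity.
    projection = rng.normal(size=(DIM_HASH, DIM_IN))
    def lsh(x):
        return np.sign(projection @ x)   # binarized hash output (+1/-1)

    # Store <vector, scalar> responses in a single weighted sum (linear readout),
    # fitted here by least squares over the hashed inputs.
    inputs  = rng.normal(size=(5, DIM_IN))           # five stored vectors
    targets = np.array([1.0, -1.0, 0.5, 2.0, -0.3])  # their scalar responses
    H = np.stack([lsh(x) for x in inputs])
    weights, *_ = np.linalg.lstsq(H, targets, rcond=None)

    def recall(x):
        return weights @ lsh(x)

    print([round(float(recall(x)), 2) for x in inputs])   # approximately the stored scalars
    print(round(float(recall(inputs[0] + 0.05 * rng.normal(size=DIM_IN))), 2))  # nearby input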

1 Like

Thanks for the comment @Cong_Chua_toc_may. I'm also learning about reservoir computing. I think starting as a reservoir computer is a plausible evolutionary path for a system which codes cause and effect. And I like that it introduces the idea that chaotic systems might have useful predictive effect for all that they are chaotic.

But what interests me more than a raw reservoir computer, is how an organism might have evolved to enhance this prediction mechanism.

The idea I’m pushing here is that cognition might have evolved to enhance the initial crude prediction mechanism of a reservoir computer/echo state machine, by clustering or stacking events which occurred in similar contexts. In that case events which share observed predictions could “stack” on top of each other, and generalize to predict new sequences, very much like generalizing to a grammar, but dynamic this time, and capable of context sensitive variation, change, and even contradiction, because the combinations are chaotic.

And that perhaps this mechanism could be as simple as finding oscillation resonances in a sequence network.

In that case it would no longer be a simple hash or lookup. It would be actively structuring a network.

1 Like

I have expertise in programming, and per my experience and understanding, OOP doesn't systematically prevent you from making contradictions in crafting software, while FP does. Within mainstream programming languages, Haskell (Idris and Agda likewise, but more niche) is the one closest to math (mostly category theory, but also its math-flavoured syntax/semantics in general). Monads were developed around it, then people found Monads don't compose (well), and I would suggest it's actually the semantics of formal "effects" that doesn't compose well. OOP doesn't solve the composition problem, it just tolerates / doesn't reveal it.

FP is rather more contradiction-averse than OOP (or more traditional procedural programming, as compared to mathematical programming). FP is overkill in simpler software products, but necessary once the overall complexity exceeds anyone's control, e.g. the Windows™ kernel; you can see the Linux kernel is introducing Rust, which is rather more FP than C/C++.

So it's actually OOP that is more contradiction-friendly than FP, I would bet. But you'll have to think about complexity management once you have embraced contradictions.

1 Like

Impressive! As a native Chinese speaker, here are my favorite translations (plus the next clause):

https://pages.ucsd.edu/~dkjordan/chin/LaoJuang/DDJTenTranslations.html

道,可道,非常道。
名,可名,非常名。

无名天地之始﹔有名万物之母。

Translation 9

YÁNG Lìpíng 杨立平
2005 The Tao inspiration: essence of Lao Zi’s wisdom. Singapore: Asiapac. P. 14.

Tao, if articulable, is not the eternal Tao.
The name, if can be named, is not the eternal name.

Heaven and earth start with no name.
The named is the mother of everything under the sun.

Translation 4

WU John C.H.
1961 Lao Tzu: Tao Teh Ching. New York: St. Johns University Press. P. 3.

Tao can be talked about, but not the Eternal Tao.
Names can be named, but not the Eternal Name.

As the origin of heaven-and-earth, it is nameless.
As “the Mother” of all things, it is nameable.

And I'd regard the following as a great explanation rather than simply a translation:

Translation 5

BAHM, Archie J.
1958 Tao Teh King by Lao Tzu interpreted as nature and intelligence. New York: Frederick Ungar. P. 11.

Nature can never be completely described, for such a description of nature would have to duplicate Nature.

No name can fully express what it represents.

It is Nature itself, and not any part (or name or description) abstracted from Nature, which is the ultimate source of all that happens, all that comes and goes, begins and ends, is and is not.

But to describe Nature as “the ultimate source of all” is still only a description, and such a description is not Nature itself.
Yet since, in order to speak of it, we must use words, we shall have to describe it as “the ultimate source of all.”

The "naming (名)" part very much concerns language, math included, I'd suppose.

1 Like

Well, I would guess the "contradiction-friendly" aspect you see is that FP can flexibly generate contradictions. So you don't notice they are contradictions. They are just the right structuring of the data for the problem being addressed.

I'm sure I've seen a statement attributed to Rich Hickey, "It's the data, stupid!", emphasizing that only the raw data can express its own full complexity. Compare this with the thread of "embodiment" in AI.

My ideas about grammar are also “embodied”. They say that grammar is only fully “embodied” in the body (corpus) of text.

The contrast between my argument and yours might come down to an interpretation of the word “contradiction” as being something you notice or don’t notice. If it’s only a contradiction when you notice it, then you might associate it more with OOP. If you’re forced to resolve your code into objects, the contradictions will be clear in the object. If you’re not, they will just be different ways of ordering the data, and you might see that as fp having fewer contradictions.

I wish I could find the first talk I heard by Rich Hickey on this. It really struck me at the time that he was saying no resolution of code into objects could be complete. That struck me hard, because it was a theme I had come to myself for natural language grammar. I recall the Hickey talk being one contrasting “simple” and “easy”. That “simple” was hard, and “easy” too often led to complexity. But he may have given many talks on that theme.

Nice.

无名天地之始﹔有名万物之母, as “The named is the mother of everything under the sun”?

I hadn’t seen that. I like it. It’s an interesting continuation I wasn’t aware of.

It reminds me of something I came across in a Twitter thread about Hindu philosophy the other day. The self as removal from “undifferentiated oneness”:

“…for Abhinava, the act of categorizing something is the ultimate act of freedom. It is an invention, an artistic act, an act of play - of creating and separating things as an act of will. It is Shiva freely creating the world out of undifferentiated oneness.”

Maybe that is what is being said with 无名天地之始﹔有名万物之母, “The named is the mother of everything under the sun”, too.

Which might be seen as a contrast with the "cannot be named" idea above. But seen in another light it emphasizes the creativity of the naming process, the process of resolving the world into objects. So you can say the process of resolving the world into names, or objects, cannot ever be complete. But it is always a creative act, and the very incompleteness of it is the well of that constant creativity.

That makes the Daoist statement about naming perhaps more of a positive act than Translation 5 does. Naming must always fail to be complete. But it is not a failure. It is the very act of creativity. The "ultimate act of freedom" in the Abhinava statement. No less important for the fact of always being fated to be incomplete.

The Abhinava comment struck me, and stayed with me, because I saw a parallel with what I was saying about sources of creativity coming from indeterminacy in ways of structuring natural language.

I think these are important insights about creativity. And that they relate to the limitations on “learning” I’m seeing, firstly when trying to learn language grammar, but applied more broadly to cognitive categories.

We must see that "learning" can never be complete, but that this is a good thing. I'm guessing that it will turn out to be at the very core of what will come to be our understanding of creativity (and also actually free will, and consciousness.)

But let me emphasize again that while I find these philosophical parallels encouraging, and motivating, they are not necessary to the practical problem which presents itself!

The practical details from the point of view of building a better language model, can be very simple!

2 Likes

Simple yet “hard”? :slight_smile:

My argument w.r.t. how contradiction is dealt with by FP vs OOP would flip the two terms as you described them, but that's way too off topic here, and anyway:

I would suggest that both OOP and FP are embodied in programming "languages", all of which have incompleteness as an innate property by virtue of being a "language". Resolution into "functions" (or more generally "data", as LISPs – Hickey's Clojure is "a" LISP – famously treat program code as data) cannot be complete either, can it? A program can only be considered "complete" when the computer running it is taken as the only existence, forming a closed system. When a human programmer reads/writes the source code (or even the compiled machine code), he/she is consuming/authoring language text, maybe simulating a computer's run in his/her mind along the way, but the real, complete "program" is the process of it being run by a computer. Programming languages are used by human programmers and compiler tools (even including microcode in CPU hardware) to talk about how a computer is supposed to proceed step by step; they have a similar relationship to the computer as natural language has to the physical world.

1 Like

This is not a trait associated with functional programming in general; from what I witness, it is about the philosophy of homoiconicity, held iconically by LISPers – LISPs are functional, but there are many other PLs that are nevertheless functional yet not LISPs.

The S-expression is the only primitive syntax of LISP. It has the beauty of simplicity (in the idea of the syntax, not so much in the excessive parentheses you turn out to have to write for a program), i.e. one "simple" (tree) structure for infinitely complex (networked) semantics. People say "Truly, this was the language from which the gods wrought the Universe", maybe just because it was the language of a previous AI hype.
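Just to make the "one simple tree structure" point concrete, here's a toy S-expression reader, sketched in Python for illustration (real LISP readers do far more):

    def parse(tokens):
        """Read one S-expression from a token list into a nested Python list (a tree)."""
        token = tokens.pop(0)
        if token == "(":
            node = []
            while tokens[0] != ")":
                node.append(parse(tokens))
            tokens.pop(0)            # drop the closing ")"
            return node
        return token                 # an atom

    source = "(define (square x) (* x x))"
    tokens = source.replace("(", " ( ").replace(")", " ) ").split()
    print(parse(tokens))   # ['define', ['square', 'x'], ['*', 'x', 'x']]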

As a programming language designer myself, I would say the semantics is much more crucial than the (surface) syntax; you usually trade simplicity in syntax for pragmatics, and people do the same spontaneously in forming natural languages.

Syntactical contradictions are glitches; it's semantic contradictions that can actually fail you. I'm afraid a LISP language just refuses to formalize any semantics at the language level (i.e. it leaves that for lib/app programmers to coordinate among themselves), thus giving a fake feeling that there is no contradiction (though merely syntactically).

At large, no FP language does this unless it is a LISP.

1 Like

Ah, now you see, I might quite like that :slight_smile: I don’t want semantics to be formalized either.

I think that a lack of formalized semantics is the big difference between formal/programming languages, and natural language.

My hunch is that syntactical contradictions will only be glitches if you insist on using syntax symbolically. I have a hunch the Lisp pairwise hierarchy might fit my conception of how meaning creation is going on. Although my pairwise combination would be forced by overlaps of sets within pairwise elements, not by formal rules even at the pair level.

Along this line of thought, as a programming language designer, you might be interested in another project which is also resolving at the pairwise combination level for AI. Although not driven by overlaps of internal sets the way I want to do it.

That's OpenCog. You may know that project has been around since the Ark. Through several AI "floods" anyway :-b Mostly driven by the vision of Ben Goertzel.

Ben has a maths background, and his project has retained a strong maths flavour through AI periods of symbolism, statistics, connectionism.

I think he conceives the current situation in AI as being limited by the inability of deep learning to resolve to transparent meaning representation structures. And hopes that OpenCog can be the next stage because it does focus on internal cognitive graph meaning representation, and always has.

Anyway, short version, they are totally refactoring their… data structures(?) And as part of that they are building a new programming language they are calling Hyperon (working off a fundamental-particle naming convention).

As I understand it, this new programming language will be quite Lisp-like. Reflecting the fundamental operation of their new code refactor, which will all be built around reducing all cognition to a "recursive discrete decision process".

“in which the key decisions involve sampling from probability distributions over metagraphs and enacting sets of combinatory operations on selected sub-metagraphs.”

So a simple iterated pairwise "merge", something in the style of Lisp. And I believe they are tending towards fairly Lisp-like surface syntax.

Patterns of Cognition: Cognitive Algorithms as Galois Connections Fulfilled by Chronomorphisms On Probabilistically Typed Metagraphs
Ben Goertzel

I believe they hope this code refactor will make their system faster, and catalyse the same kind of advance that GPUs did for connectionist models 15 years ago.

Now, I think they are wrong. I don’t think formal symbolism in the graph is necessary, or even desirable. I do agree that some kind of transparency to graph meaning representations is necessary. But I don’t think the path to it is through elaborate symbolic formulation. Rather I think the path to it is through the resolution of (chaotic) contradictions in context.

But their basic formulation, a formal-syntax-free(??) recursive (pairwise) combinatory operation, quite fits with my graph-structuring conception of the problem.

Also interesting for me is that Ben does see a role for some kind of chaotic recombination. In fact he wrote a book on it, Chaotic Logic, around 1994. And in principle he doesn't disagree with me that the internal representation might itself appear as the result of a chaotic process.

In discussions last year he responded to my suggestions quite positively:

“For f’ing decades, which is ridiculous, it’s been like, OK, I want to explore these chaotic dynamics and emergent strange attractors, but I want to explore them in a very fleshed out system, with a rich representational capability, interacting with a complex world, and then we still haven’t gotten to that system … Of course, an alternative approach could be taken as you’ve been attempting, of … starting with the chaotic dynamics but in a simpler setting. … But I think we have agreed over the decades that to get to human level AGI you need structure emerging from chaos. You need a system with complex chaotic dynamics, you need structured strange attractors there, you need the system’s own pattern recognition to be recognizing the patterns in these structured strange attractors, and then you have that virtuous cycle.”

But in the short term, yeah, they are pretty much committed to this approach of refactoring to another formal programming language.

Anyway, as I say, if you work on programming language design, you might find the design of their new cognitive graph manipulation language interesting.

2 Likes

Oh, I didn’t see this. Ha ha. Yes, probably “hard” in the sense it’s taken me 30 years to see the simplicity!!

Yes, it might be getting off topic.

Or at least far from the possibly greater simplicity and more pressing need of implementation!

I understand you to be making an argument for "incomplete" to mean something inherent to the fact of being a language. And that depends on a definition of "language" as something that refers to something else. So we're stacking technical definitions to make a point here. It may be possible to argue that. It depends what technical definition of "incomplete" you choose.

I think in maths there is one technical definition, which is not always satisfied. So at least in the sense that a formal system is a language, there may be a sense it can be “complete”, providing it is simple enough! I believe the technical definition in Goedel’s proof was only satisfied for sufficiently powerful formal systems. Which is to say that it was the sufficiently powerful ones which were incomplete by that technical definition. So by that definition, a sufficiently simple formal system might be a language, and yet also be complete.

But that’s a technical definition centering on proof within a system.

I think my definition can be the same. But to see how it can be the same you probably have to reformulate it in the sense of having no proof because of a randomness in the system.

To my mind the mathematical absence of proof comes down to a randomness in the system. A lack of constraint. Like the example of parallel lines. You can't prove parallel lines never meet, not because your reasoning isn't powerful enough. It's because this is a choice you get to make. So it's a richness, actually.

I suppose I’m saying that locking your code into objects, locks more such choices. Removes more choices. Removes any number of choices on that level. Just as fixing a set of axioms shackles you to a particular set of choices in maths.

So I guess when I’m talking about incompleteness, I’m talking about losing the power to make certain kinds of choices. And OOP, because it imposes more structure, inherently removes more of those choices.

I don’t think a limitation centering on removal of choices in that way need be general to all languages. Though it may be a limitation general to all programming languages, yes. So in that sense you may be right. Both OOP and FP paradigms of programming languages may have degrees of “incompleteness” in that sense. They all impose some structure. And to the extent they impose structure, they’re limiting the choices in the way I associate with incompleteness. Indeed, I’ve often thought that retaining the power to make those choices, and actually totally restructure your language on the fly, will be one way to see the distinction between programming languages and natural language, as well as “meaning” in the fullest human sense.

There may be a way in the future to make a programming language too, that functions not by attaching symbolism to structure, and allows the language itself to totally restructure on the fly (perhaps that’s a way to see what Domas was doing Chaos/reservoir computing and sequential cognitive models like HTM - #24 by robf) Then that programming language might also be “complete” in the same sense I see for natural language. In the sense of retaining all the choices for different ways of structuring itself (although there may be a remaining sense of incompleteness in the choice of structuring parameter which restructures the language on the fly! That may be one way even that kind of language will restrict our choices too, yes.)

2 Likes

And probably, yes, “hard” in the sense that there are coding challenges to implement it.

Hard in the sense the most obvious path forward defeats my enthusiasm to learn enough skills in WPF (Windows Presentation Foundation) C# GUI programming, to try and add some kind of hierarchy breakdown for the raster plot using Brainsim II.

Or finding another neurosimulator platform with more easily modifiable functionality.

Or hard in the sense I fear a realistic implementation may require parallel hardware on the order of multiple tens of thousands of processors.

Being the issues which are most holding me up at the moment.

If I can get some money together, I might pay someone to do the necessary .NET, WPF, C#, GUI stuff on Brainsim II.

And I’m always open to trying out other platforms. I’d particularly like to try some of the current spiking hardware betas, like Intel Loihi.

But perhaps because of cost from their side too, Intel Loihi doesn't seem very approachable.

So the next step might be getting some money together, and paying someone to hack .NET, WPF, C#, GUI on Brainsim II.

3 Likes

This may be a digression too far, but one point I thought György Buzsáki was trying to make was the deflation of an 'object model' in human vision and thought processing.
The fact that ML is generally approached by programmers, I think, makes this problem worse (maybe due to earlier UML and OOP influences).
His view appeared closer to Bateson's "the difference that makes a difference", which is just splitting the data/world any way that works.
This chimes with your view of living with the contradictions.

1 Like