Chaos/reservoir computing and sequential cognitive models like HTM

Notice also, though, that it is the way they handle novelty which is the big issue with transformers.

This is also my focus. How to deal with novelty. How can we constrain novelty to be “meaningful”? What makes novelty “meaningful”?

I think novelty is at the core of it. Transformers just interpolate between learned forms. At the limit (“not a lot of words said about the thing I am asking”, not enough data points to get a good similarity overlap?) their interpolation is random.

By contrast, I make novelty central. It’s not the point at which similarity metrics start to fade and become random anymore. “Meaning” is no longer only by reference to what is already known, equated to a similarity metric with a standard/label. The essential operation isn’t comparison, but recombination. I don’t need a lot of existing data points to specify a concept. I actively generate new “concepts” as new combinations of contexts. I replace “learning” with the construction of novel (even chaotic) groupings. And I do that by dynamically projecting out different groupings of contexts.

The fact that shared context is in itself meaningful makes all these new combinations of contexts inherently meaningful, even though they are new.
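
To make that concrete, here is a minimal sketch of the kind of operation I mean (the context data and the helper name project_grouping are invented purely for illustration):

```python
# "Concepts" as ad-hoc groupings projected out of shared contexts.
# Nothing is learned or stored: each grouping is rebuilt from the raw
# observations on demand, so different projections can coexist and
# even contradict. All context data here is invented for illustration.

contexts = {
    "coffee": {"drink", "hot", "morning"},
    "tea":    {"drink", "hot", "afternoon"},
    "beer":   {"drink", "cold", "evening"},
}

def project_grouping(seed, min_shared):
    """Project out the group of words sharing >= min_shared contexts with seed."""
    return {w for w, ctx in contexts.items()
            if len(ctx & contexts[seed]) >= min_shared}

print(project_grouping("coffee", 1))  # {'coffee', 'tea', 'beer'} -- one grouping
print(project_grouping("coffee", 2))  # {'coffee', 'tea'}         -- another
```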

2 Likes

I observed that as well, but it quickly shifts context as it makes things up. Still impressive though.

1 Like

I find this very inspiring! Yet the problem definition may/shall shift to “making the meaning of machine models converge to human-compatible comprehension”?

Novel utilities are great things so long as they serve human purposes, or are at least comprehensible/controllable by humans.


Apparently we can label things according to our knowledge, though I’m afraid we don’t really know “how” such knowledge is represented/possessed in our minds, and thus how to grant nonhuman mechanisms the same ability “to know” the “label” for specific inputs.

Numeric methods (among which back-propagation is the classic) can make machines mimic our reasoning when trained on labels we give them. Nevertheless, it seems “similarity” should be inherently multi-dimensional, with each “scalar numeric distance metric” reflecting one particular aspect of similarity (as far as an observing subject is concerned). So can I say:

“higher intelligence” can discover/know the “similarity-metric-dimensions”, while “lower intelligence” just computes each dimension’s “scalar-metric-function”?

“what to measure” w.r.t. similarity would ultimately fall onto “purpose making”? I.e., what the real concerns are and what they are not, e.g. survival, pleasure, joy.
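
To illustrate the multi-dimensional point, a toy sketch (all numbers and feature names invented): which item is “most similar” changes with the dimension you choose to measure:

```python
# Each scalar distance metric reflects one aspect of similarity, so the
# "nearest" item depends on which dimensions you consult. All data invented.

animals = {
    #            size, speed, sociability
    "wolf":     (0.6, 0.8, 0.9),
    "cheetah":  (0.5, 1.0, 0.2),
    "elephant": (1.0, 0.3, 0.8),
}

def nearest(query, dims):
    """Nearest animal to query, measuring only the listed dimensions."""
    return min(animals, key=lambda a: sum((animals[a][d] - query[d]) ** 2
                                          for d in dims))

dog = (0.4, 0.6, 1.0)
print(nearest(dog, dims=[0]))  # size only        -> 'cheetah'
print(nearest(dog, dims=[1]))  # speed only       -> 'wolf'
print(nearest(dog, dims=[2]))  # sociability only -> 'wolf'
```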

1 Like

True. But dealing with language helps here. Because with language you are driven to find groupings which are meaningful for humans. I would say that’s because language is the data which is (most directly?) generated by the brain itself. So it makes sense that its structure matches the way the brain likes to structure data.

Also, the brain is structuring this stuff, language, with the express purpose of making it meaningful to another individual. Another reason for it to structure language in the way the brain finds meaningful.

I don’t think there is anything particularly mysterious about how the brain finds meaning in things, anyway. Or perhaps it seems that way to me because I’m used to the way language manifests structure. Either way, it seems obvious to me that the brain structures the world in ways which help it predict cause and effect: grouping things which share contexts/predictions.

We don’t know. But I’m guessing “sets of things that share cause effect predictions.”

It works for language anyway. The sets it generates are quite meaningful. Lots of words with the same or similar meaning, etc.

The only trick which has foxed us historically is that those sets turn out to contradict! That observation broke linguistics in the 1950s! It’s still in pieces.

Cause effect prediction seems to fit a plausible high priority “purpose making” quite well to my mind.

Anyway, it works as a principle to structure language in ways that match “meaningful” groupings that humans make over the same data.

Yes. I agree. Mere mimicry is a low level of intelligence. If it is intelligence at all. And that is what our tech is doing at the moment. Intelligence proper must be connected with novelty, multi-dimensionality, and especially finding new dimensions. It is my feeling that this multi-dimensionality is somehow connected to the generation of contradictions. I think these contradictions will turn out to be a feature not a bug. If we didn’t have them then at some point all meaning might be complete, and there would be nothing more to know. But you can’t “learn” this, or especially you can’t discover/know new “dimensions” unless you have a relational principle. Well… I don’t know. Perhaps you could “learn” the relational principle. But if it generates contradictions, then “learning” is going to meet a problem. Things will wash out. Unless you keep enormous amounts of context information to separate all the contradictions. And you certainly couldn’t “learn” all such sets, if you just kept on finding new ones!

1 Like

The foundation of language is grounded in the brain’s structures that are most closely associated with motor-sensory aspects of objects. The Chomsky “built-in” language features are certainly built around this grounding. While you are at it, look at figure two of the linked Pulvermüller paper and reflect on how it suggests an RNN architecture.

Very nice paper Mark. Thanks for that. That’s the most interesting paper I’ve read in a long time! : - ) I’ve seen very few (if any!) which map language mechanisms to neural connectivity.

I wouldn’t argue with where semantic meaning is stored, especially in the sense of qualia, and the “feeling of being there” which @complyue described as the subjectively intuitive quality of elements we “recombine” when we “think”.

It’s how they recombine which interests me.

Actually, what I’m arguing has a very close analogue in that paper. Exactly figure 2, which you cite below as having a resemblance to an RNN:

Very nice. That figure 2 is work taken from exactly the paper I identified as the most interesting (for me) cited in that review.

Discrete combinatorial circuits emerging in neural networks: A mechanism for rules of grammar in the human brain? Friedemann Pulvermüller, Andreas Knoblauch

Difficult to find without a paywall, everywhere has scumbag “publishers” trying to clip a ticket… but I finally found it in… Russia! (Bless them for this less violent example of international delinquency!)

https://sci-hub.ru/10.1016/j.neunet.2009.01.009

They say:

“In this present work, we demonstrate that brain-inspired networks of artificial neurons with strong auto-associative links can learn, by Hebbian learning, discrete neuronal representations that can function as a basis of syntactic rule application and generalization.”

Similar to what I’m saying.

And how do they do that?

“Rule generalization: Given that a, b are lexical categories and Ai, Bj lexical atoms
a = {A1, A2, . . . , Ai, . . . , Am}
b = {B1, B2, . . . , Bj, . . . , Bn},
the rule that a sequence ab is acceptable can be generalized from a set of l encountered strings AiBj even if the input is sparse”

So, working from the other direction, I understand this to say that given an observed set of encountered strings AiBj, even if it is “sparse” (so not observed for all i and j), you might license other combinations AiBj, even ones not observed.

This appears to be just my AX from {AB, CX, CB}, taken from my discussion in this thread above (oh, actually not in this thread? In @JarvisGoBrr 's “elaborator stack machine” thread: The elaborator stack machine - #5 by robf)
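
A minimal sketch of that generalization, using my {AB, CX, CB} example (the helper is purely illustrative):

```python
# Sparse-pair generalization: if two left-elements ever share a right-hand
# context, let each inherit the other's observed right-hand contexts.
# From the observed pairs {AB, CB, CX}, the unobserved AX is licensed.

observed = {("A", "B"), ("C", "B"), ("C", "X")}

def generalized(pairs):
    lefts  = {l for l, _ in pairs}
    rights = {r for _, r in pairs}
    out = set(pairs)
    for l1 in lefts:
        for l2 in lefts:
            # Do l1 and l2 share at least one right-hand context?
            if any((l1, r) in pairs and (l2, r) in pairs for r in rights):
                out |= {(l1, r) for r in rights if (l2, r) in pairs}
    return out

print(generalized(observed) - observed)  # {('A', 'X')}
```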

It’s all very nice. I hadn’t seen a neural mechanism conjectured on this basis before. And it is very similar to mine. (Which is actually not surprising, because the whole distributional semantics idea has been the basis of attempts to learn grammar for years. It’s in their glossary.)

The difference from me seems to be that they conjecture a neural coding mechanism for abstracted categories based on sequence detector neurons. Observed sequences are explicitly coded in sequence detector neurons between word representations, and then categories are abstracted by forming interconnections between the sequence detector neurons. So an abstract syntactic category is represented physically in interconnected sequence detector neurons. This might indeed have a close RNN analogue.

By contrast, while I agree that syntactic categories can be generated from overlapping sets of their AiBj type (this is just distributional semantics), I think those overlapping sets will contradict. That’s the big difference. I think the sets will contradict. So it will not be possible to represent them using fixed sets of sequence detector neurons. Instead I say the “rule” will need to be continually rebuilt anew, by projecting out different overlaps between the sequence sets, which vary according to context (plausibly, using oscillations to identify closely overlapping sets.)

I would justify that constant re-organization idea by citing evidence that neural activations for sequences of words are not static combinations of their individual word activations plus any mutual activation between a static set of sequence detector neurons. My favourite example is in work discussed by Tom Mitchell here: the complete reorganization of activation patterns. Surprising! It indicates that patterns of neural activation for word combinations seem to completely re-organize the individual word activation patterns:

Prof. Tom Mitchell - Neural Representations of Language Meaning

But the basic idea of a separate “combinatorial semantics” area, where new categories are synthesized by combinations of sets of observed sequences, is the same in my thesis and in the RNN-like mechanism described in this paper. Very nice. Good to see we’re on the same track.

To repeat, the only difference between what I am suggesting and what is happening in this paper, and probably happening in transformers, is that I say the sets will contradict. So we must model them by finding context-appropriate set associations at run time. Perhaps by as simple a mechanism as setting the sequence network oscillating, and seeing which sets of observed sequences synchronize.

Nobody has imagined contradictory meaning sets before. And they still don’t. Which is holding us back. But trying to abstract language structure drives you to it. If you have eyes to see it. And once you’ve seen these contradictions appear in language structure, you start finding them all over the place. (For instance there’s an analogue in maths. Chaitin: “Incompleteness is the first step toward a mathematical theory of creativity…” An Algorithmic God | Gregory Chaitin | Inference)

1 Like

It’s still rather mysterious to me. I know language DOES manifest structure, but those structures feel quite concealed to me; looking forward to hearing more about “the way” from you!

I think Buddhism is at an extreme in believing in cause and effect (karma). As the relationship does not appear perfectly absolute and exact when seen within one man’s entire lifetime, they turn to samsara for an explanation, which has no scientific proof to date.

Besides the relief of certain mental pains by such a belief, I wonder if Buddhists are actually correct that cause & effect is an essential primitive of the conceivable world, leaving science to catch up, maybe after hundreds or thousands of years?

Buddhism is at least great in settling contradictions that arise mentally.

I perceive this as a really insightful description of how our AIs are doing today. They are kind of free of the mathematically strict semantics implemented by digital computers, which in turn are based on contradiction-averse math, but they still rely on such computer systems to prepare/store/process their input/label/output data.

Maybe contradictions can only be a “feature” when represented by data/structures other than digital forms?

1 Like

An Algorithmic God | Gregory Chaitin | Inference.

Information and computation are better than matter and energy.

Matter and energy have a universal, definitive “interpretation” (i.e. laws of physics), provided by the god of our universe.

Information and computation are open to various interpretations, however the “designer”/“programmer” likes them to be.

To interpret pieces of information, and to carry out computations, we would effectively “play god” over the simulated universe, which comes into existence in this way.

Should we feel good or bad about this?

1 Like

yeah, so much that I couldn’t even grasp what that looks like.

if I get what you mean correctly, then that would mean that some of the unseen combinations AiBj are false in reality, yet we deem them “correct” and express them in language anyway?

it kinda looks like an attempted grokking with insufficient data?

1 Like

I’m not aware of a cause-effect basis in Buddhism. But certainly these ideas of inherent contradictions in “meaning” align well with themes in Eastern philosophy. I’m thinking mostly of Daoism, Yin/Yang.

Western philosophy is catching up. Subjective truth has been an increasing theme for the last 200 years there too (to the detriment of our societies! One thread of this has just been to equate subjective “truth” with political power! We desperately need to anchor truth in physical reality again!)

And also in physics, chaos, QM, and mathematics, as I say.

That, I don’t think at all. It may be kind of the opposite. Computation may be the only firm foundation for meaning! It’s very easy to generate contradictions in sets. In fact, that they appear naturally has been a big problem for maths! This is at the core of the famous Russell’s Paradox, which then leads to Goedel’s proof. Which was based, to the extent I’ve gone into the details, on “diagonalization” of matrices, I think just summing things in different ways. (It’s not hard to understand. Intuitively we know that if you take a group of people and order them by height, you’ll disorder them w.r.t., say, whatever… golf handicap, and ordering by golf handicap will disorder them w.r.t. height, etc.)
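
The intuition as a trivial sketch (numbers invented): the same set, ordered two ways, yields two sequences that contradict each other:

```python
# Ordering a set by one key disorders it with respect to another.
people = [("Ann", 170, 12), ("Bob", 185, 4), ("Cy", 160, 20)]  # (name, height_cm, golf_handicap)

by_height   = [n for n, h, g in sorted(people, key=lambda p: p[1])]
by_handicap = [n for n, h, g in sorted(people, key=lambda p: p[2])]

print(by_height)    # ['Cy', 'Ann', 'Bob']
print(by_handicap)  # ['Bob', 'Ann', 'Cy'] -- the exact reverse, here
```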

Indeed, Chaitin traces Goedel’s proof to the foundation of computational theory. In a sense this may make computation more fundamental than maths. Maths is subjective (depending on the axioms you choose). Computation is more flexible. It deals with different ways of ordering elements. It’s more grounded in the physical.

Here’s a very nice talk by him where he traces the idea that Goedel’s proof was the invention of computer programming languages:

A Century of Controversy Over the Foundations of Mathematics

G.J. Chaitin’s 2 March 2000 Carnegie Mellon University School of Computer Science Distinguished Lecture.
http://arxiv.org/html/nlin/0004007

All of which can be seen as a philosophical support for basing cognition on sets which can contradict.

But while tracing philosophical implications is nice, I don’t want to detract from the simplicity of implication for a cognitive model. Some people may be put off by all the woo woo of philosophy. But at a practical level this can be extremely simple.

At a practical level it says that all we may need to do to move forward with our cognitive models is accept that different possible orderings of sets may contradict.

And HTM is well positioned to implement that. Because its application of networks to cognition is not trapped within a tradition of “learning”.

2 Likes

I missed this bit.

It’s hard for me to know which bit is mysterious. You know, people have traditionally structured language with grammar.

Here’s a proposal I put together for Ben Goertzel’s Singularity.Net initiative a few months ago (rejected because they were excluding research!) Perhaps that might provide some clarity on the structure problem:

2 Likes

Ha. Just different orderings, @JarvisGoBrr . Here’s another presentation I made years ago which might help:

Among examples I attempted there, I see, were “strong tea”/“powerful tea”. “Strong” and “powerful” will share many contexts, so you might put them in a single semantic class for many purposes. But they don’t share all contexts. “Tea” is one context they don’t share. So ordering the contexts of “powerful” one way will put it in the same class with “strong”. But ordering them another way will not. The orderings contradict.
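
As a toy sketch (context sets invented for illustration):

```python
# "strong" and "powerful" share many contexts but not all, so whether they
# class together depends on which contexts you project out.
contexts = {
    "strong":   {"argument", "wind", "coffee", "tea"},
    "powerful": {"argument", "wind", "engine", "computer"},
}

shared = contexts["strong"] & contexts["powerful"]
print(shared)                          # {'argument', 'wind'} (order may vary):
                                       # project these and the two words group together
print("tea" in contexts["powerful"])   # False: project "tea" and the group splits
```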

Other examples I’ve used over the years… A guy, Peter Howarth, had a nice analysis of “errors” made by non-native learners of English. It said things to me about how we generalize word classes. This paper, I think: Phraseology and Second Language Proficiency. Howarth, Peter. Applied Linguistics, v19 n1 p24-44 Mar 1998 (though my examples come from a pre-print.)

What interested me was his analysis of two types of collocational disfluencies he characterized as “blends” and “overlaps”.

By “overlaps” he meant an awkward construction which was nevertheless directly motivated by the existence of an overlapping collocation:

“…attempts and researches have been done by psychologist to find…”

*do an attempt
DO a study
MAKE an attempt/a study


e.g. Howarth’s example:

*pay effort
PAY attention/a call
MAKE a call/an effort

Trying to express that as a network:

            attention
          /
      pay
    /     \
(?)        a call
    \     /
      make
          \
            an effort

What the data seems to be saying is that beginning speakers often analogize constructions based on shared connectivity like that with “a call”.

They seem to be grouped in a category because of a shared prediction.

“pay” predicts “a call”, “make” predicts “a call”, and if you hypothesize a grouping based on that shared connectivity, then that might explain why beginning speakers tend to produce constructions like “pay effort”. As they do in Howarth’s data.

You might take that as an example where the word “pay” shares the context “a call” with “make”, but it doesn’t share context “an effort”, and “make” doesn’t share “attention”.

(Blends" by the way, were mix ups based on more fundamental semantic crossover:

‘*appropriate policy to be taken with regard to inspections’

TAKE steps
ADOPT a policy

The point Howarth was making was actually that overlaps were more common early errors than “blends”. Which supports the basic overlapping set theory, as opposed to, say, shared embodied reference. But that’s a slightly different point.)

They are not false, they just depend on context. It’s not false to say that “pay” and “make” share contexts. It is just that they share some contexts and not others. So you can’t “learn” a single class for them. You have to keep all the observations, and then at run time pick out groupings based on the contexts you have at the time.
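
A minimal sketch of that “keep the observations, group at run time” idea, reusing the pay/make data above (simplified and illustrative only):

```python
# Keep the raw observations; build a word class only at query time,
# relative to the context at hand. No single stored class exists.

observed = {
    ("pay",  "attention"), ("pay",  "a call"),
    ("make", "a call"),    ("make", "an effort"),
}

def class_for(context):
    """All left-words observed with this context: a class that exists
    only relative to this context."""
    return {l for l, r in observed if r == context}

print(class_for("a call"))     # {'pay', 'make'} -- grouped in this context...
print(class_for("an effort"))  # {'make'}        -- ...but not in this one

# The learner's "*pay effort" error is what you get by applying the
# "a call" grouping inside the "an effort" context.
```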

1 Like

Per my understanding of Daoism, “ever-change” is at the central position, as is contradiction (Yin vs Yang), but the actual “cause” (of change) is never suggested to be visible to / perceivable by beings of the world. “Changes” follow some higher rules (physical, to humans); they are not “effects” of prior facts or actions by anyone. So the best you can do is to follow, and possibly to leverage if you can, just being well informed that there’s always “the other side” of any thing or affair.

Buddhism sees all existences/phenomena as the effects of prior causes, and as the relationship is fixed, it’s up to a mind’s active decision what to do. Thus Buddhism persuades any mind to behave in causing manners, so that desirable effects come back.

Isn’t theoretical computation “a” math by itself?

Theoretically, with a (single-threaded, so as to be on par with a universal Turing machine) computer and one or more programming languages to write its software, you can create a closed, mathematically well-defined formal world, having its own verification/judgement standards, taking no input/output other than its tape/disk/RAM.

But such a computer system is as useless as any math to average daily people in real life. Simon Peyton Jones (who is a core creator of the Glasgow Haskell Compiler, which is de facto the only living Haskell compiler today) jokingly says Haskell is useless for its lack of “effects”.

Well, “effects” today include amazing / interacting with your user(s), sending/receiving network packets to communicate with other computers (cellphones, smart wearables etc. included), and even coordinating multiple execution cores/threads within a single CPU to prevent race conditions undesirable to the users as well as the designers.

So “effectful” programming languages (C serving as the stereotype) are never mathematically sound, in the sense of having a self-contained, well-defined set of semantics; just too many things are “undefined”. Yet programs written in them run massively today.

And the majority of computer applications (supercomputers excluded, please): videos, music, social communication etc., those used by average people, are nevertheless misuse, if viewed from the mathematical perspective.

I’d regard computers today more as automation tools and visual/acoustic vessels than as agents of computation.

I don’t know set theory well enough: do sets have to be ordered? I know the relational data model well, which enables relational databases today; there you always sort/order data records “at runtime” per expressed intent (or don’t care about the ordering, by expressing none).

As a computer software engineer myself, I’m quite used to contradiction-free pieces, e.g. most data structures & algorithms developed today. So I feel quite unnatural in understanding you when you talk about contradiction as something commonplace.

My software mind would feel contradictions only happen by violating some well-defined rules, and usual data structures in the CS domain are taken for granted not to do that.

1 Like

Yes, the key statement of Daoism I like is:

“the one true Dao is the Dao which cannot be known”, etc. (道可道,非常道?)

“Ever-change” would fit. This says “meaning” is a process, not an artifact. There is also a “process” physics, and even a “process” biology now, which speaks to the same idea.

In the linguistics space you have Paul Hopper, Emergent Grammar, also talking about this ever changing “process”:

“The notion of emergence is a pregnant one. It is not intended to be a standard sense of origins or genealogy, not a historical question of ‘how’ the grammar came to be the way it ‘is’, but instead it takes the adjective emergent seriously as a continual movement towards structure, a postponement or ‘deferral’ of structure, a view of structure as always provisional, always negotiable, and in fact as epiphenomenal, that is, at least as much an effect as a cause.”

https://journals.linguisticsociety.org/proceedings/index.php/BLS/article/viewFile/1834/1606

In philosophy, the closest to grounding in the physical might be Thomas Kuhn:

Structure of Scientific Revolutions, p.g. 192 (Postscript)
“When I speak of knowledge embedded in shared exemplars, I am not referring to a mode of knowing that is less systematic or less analyzable than knowledge embedded in rules, laws, or criteria of identification. Instead I have in mind a manner of knowing which is misconstrued if reconstructed in terms of rules that are first abstracted from exemplars and thereafter function in their stead.”

Though Wittgenstein comes close, shifting to a basis for meaning in “games” later in his life. Quoted by Kuhn here:

Thomas Kuhn, The Structure of Scientific Revolutions, p.g. 44-45:
(Quoting Ludwig Wittgenstein, Philosophical Investigations, trans. G. E. M. Anscombe, pp 31-36.)

“What need we know, Wittgenstein asked, in order that we apply terms like ‘chair’, or ‘leaf’, or ‘game’ unequivocally and without provoking argument?”

‘That question is very old and has generally been answered by saying that we must know, consciously or intuitively, what a chair, or a leaf, or game is. We must, that is, grasp some set of attributes that all games and only games have in common. Wittgenstein, however, concluded that, given the way we use language and the sort of world to which we apply it, there need be no such set of characteristics. Though a discussion of some of the attributes shared by a number of games or chairs or leaves often helps us learn how to employ the corresponding term, there is no set of characteristics that is simultaneously applicable to all members of the class and to them alone. Instead, confronted with a previously unobserved activity, we apply the term ‘game’ because what we are seeing bears a close “family resemblance” to a number of the activities that we have previously learned to call by that name. For Wittgenstein, in short, games, and chairs, and leaves are natural families, each constituted by a network of overlapping and crisscross resemblances. The existence of such a network sufficiently accounts for our success in identifying the corresponding object or activity. Only if the families we named overlapped and merged gradually into one another–only, that is, if there were no natural families–would our success identifying and naming provide evidence for a set of common characteristics corresponding to each of the class names we employ.’

In philosophy you can find it all over the place. Even H. G. Wells!

“…My opening scepticism is essentially a doubt of the objective reality of classification.”

https://www.marxists.org/reference/archive/hgwells/1905/modern-utopia/appendix.htm

I can go on and on along the philosophy thread of this! As I say, after I noticed this in what was happening when I tried to learn grammar, it started popping up all over the place.

Better stop there. As I say, I don’t want to detract from the simplicity of its application to AI. The application is very simple. It might be better to focus there.

No doubt. It always reminds me of the preamble I remember from many physics lectures: let us assume the system is linear!! If you make the right assumptions, you can always avoid inconvenient truths!

If you want to see this contradictory ordering dynamic playing out in computer science space, though, you might look at the drift from OOP to functional programming.

Why has functional programming come to dominate object oriented programming in recent years?

Rich Hickey has given some nice talks on why object models are always imperfect, and that has led to a renewed emphasis on ad-hoc orderings of raw data in functional programming.

There’s also this series by Bartosz Milewski which goes into the relationship of programming theory to the mathematical field of category theory:

“Maybe composability is not a property of nature”
Category Theory 1.1: Motivation and Philosophy
Bartosz Milewski

Continuing the category theory theme, in the compositional semantics space, the first other work I came across expressing similar ideas was Bob Coecke. Also a category theory guy.

(Category theory, BTW, being invented to deal with the incompleteness/contradictory character of mathematics demonstrated by Goedel.)

From quantum foundations via natural language meaning to a theory of everything

"In this paper we argue for a paradigmatic shift from ‘reductionism’ to ‘togetherness’.

Being a maths guy, Coecke is very tied up in the parallel to the mathematical abstractions of category theory. And he’s taken the parallel to QM maths so far as to be building a company to analyse language using quantum computing! I don’t think we need to go that far. I think Coecke is squeezing language into a QM formalism, and then using quantum computing to pick it out again!

But the insight, shared with QM, that categories are subjective, dependent on the environment, I think is valid.

You can find the QM maths parallel being drawn elsewhere. For instance in this talk with my namesake the famous neurobiologist Walter Freeman:

NONLINEAR BRAIN DYNAMICS AND MANY-BODY FIELD DYNAMICS
Walter J. Freeman and Giuseppe Vitiello

I can go on and on along this angle too.

But like I say, the application to cognitive modeling may be very simple. It might be best to concentrate on that simplicity.

2 Likes

hmm, sounds a lot like the ideas from HTM, but it makes me think we need information to flow both ways in time, not just from past to future, in order for it to work.

which reminds me… doesn’t the rat hippocampus replay events backwards sometimes?

1 Like

Yeah, feedback. You’re right. I thought I would need some kind of feedback with the experiment I did too.

But what I found was that networks of word sequences like:

            attention
          /
      pay
    /     \
(?)        a call
    \     /
      make
          \
            an effort

Actually loop back around naturally. They’ll be followed by common words like “the”, which then in the network also precede all those words. So activation naturally fed back, and I got oscillations anyway. It surprised me. I had been wondering what the feedback connectivity would be, and thought it might be a big problem. But as it turned out, the only thing I needed to do to get oscillations, was to apply inhibition. And that wasn’t even very hard. I just connected inhibition everywhere. Then turned the inhibition up and down until it didn’t immediately kill activation, and activation didn’t blow up, and the network oscillated.
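
For anyone who wants to play with the idea, here is a toy sketch of that setup (not my actual experiment: the network, constants, and update rule are all invented for illustration):

```python
import numpy as np

# Spread activation over a small word-sequence network. Common words like
# "the" close the loop, and a single global inhibition knob, tuned by hand,
# is enough to make the activity oscillate rather than die or blow up.

words = ["pay", "make", "a call", "attention", "an effort", "the"]
edges = [("pay", "attention"), ("pay", "a call"), ("make", "a call"),
         ("make", "an effort"), ("a call", "the"), ("the", "pay"), ("the", "make")]

n = len(words)
W = np.zeros((n, n))
for a, b in edges:
    W[words.index(b), words.index(a)] = 1.0  # sequence link a -> b

x = np.zeros(n)
x[words.index("pay")] = 1.0   # kick the network
inhibition = 0.8              # the knob turned up and down by hand

trace = []
for _ in range(12):
    drive = W @ x
    x = np.maximum(0.0, drive - inhibition * x.sum() / n)  # global inhibition
    x = x / (x.sum() + 1e-9)                               # keep activity bounded
    trace.append(x[words.index("the")])

print([round(v, 2) for v in trace])
# -> [0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0]
# "the" rises and falls with period 3: activation loops around and oscillates.
```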

That might be naive. It may turn out that connectivity is not enough. For instance, when words are not represented by a single node, but an… SDR… I think that will code for distance of connection too (was it you who was pointing out that would be necessary?), and it might distinguish feedback paths and kill them somehow.

But you learn a lot when you try things. Things I thought would be hard turn out not to be, and you find out what isn’t working and needs to change. Next step, I think what I really want to do is figure out a way to break down the spike time patterns from the raster plot into hierarchies. Then it should be possible to play around and figure out where the ideas are still naive.

1 Like

Any type of hash algorithm before a weighted sum allows the weighted sum to efficiently store <vector, scalar> responses. With reservoir computing the hash algorithm is a locality sensitive hash with various amounts of non-linearity thrown in.
The hash algorithm could have binarized outputs or even, as a generalisation, continuously variable outputs.
Unfortunately the information storage capacity of the weighted sum is really poorly understood, for such a basic foundational thing.
For example, used under capacity there is repetition-code-type error correction, with a peculiar bias toward the weight vector.
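
A minimal sketch of the pattern (a random-projection locality-sensitive hash in front of one weighted sum; all dimensions and data invented):

```python
import numpy as np

# Hash-then-weighted-sum: a random-projection locality-sensitive hash with
# binarized outputs feeds a single weighted sum, which then stores
# <vector, scalar> responses. Used under capacity, the fit is exact.

rng = np.random.default_rng(0)
d, h = 8, 64                              # input dim, hash/feature dim

P = rng.standard_normal((h, d))           # fixed random projection (the hash)
lsh = lambda X: np.sign(X @ P.T)          # binarized (+1/-1) outputs

X = rng.standard_normal((20, d))          # 20 <vector, scalar> pairs to store
y = rng.standard_normal(20)
w = np.linalg.lstsq(lsh(X), y, rcond=None)[0]   # fit the weighted sum

print(np.allclose(lsh(X) @ w, y))         # True: responses stored exactly
# Nearby inputs flip few hash bits, so responses generalize locally.
```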

1 Like

Thanks for the comment @Cong_Chua_toc_may. I’m also learning about reservoir computing. I think starting as a reservoir computer is a plausible evolutionary path for a system which codes cause and effect. And I like that it introduces the idea that chaotic systems might have useful predictive effect, for all that they are chaotic.

But what interests me more than a raw reservoir computer, is how an organism might have evolved to enhance this prediction mechanism.

The idea I’m pushing here is that cognition might have evolved to enhance the initial crude prediction mechanism of a reservoir computer/echo state machine, by clustering or stacking events which occurred in similar contexts. In that case events which share observed predictions could “stack” on top of each other, and generalize to predict new sequences, very much like generalizing to a grammar, but dynamic this time, and capable of context sensitive variation, change, and even contradiction, because the combinations are chaotic.

And perhaps this mechanism could be as simple as finding oscillation resonances in a sequence network.

In that case it would no longer be a simple hash or lookup. It would be actively structuring a network.

1 Like

I have expertise in programming, and per my experience and understanding, OOP doesn’t systematically prevent you from making contradictions in crafting software, while FP does. Among mainstream programming languages, Haskell (Idris and Agda likewise, but more niche) is the one closest to math (mostly category theory, but also its math-smelling syntax/semantics in general). Monads were developed around it; then people found monads don’t compose (well), and I would suggest that’s actually because the semantics of formal “effects” don’t compose well. OOP doesn’t solve the composition problem, it just tolerates / doesn’t reveal it.

FP is more contradiction-averse than OOP (or more traditional procedural programming, as compared to mathematical programming). FP is overkill in simpler software products, but necessary after the overall complexity exceeds anyone’s control, e.g. the Windows™ kernel; you can see the Linux kernel is introducing Rust, which is rather more FP than C/C++.

So it’s actually OOP that is more contradiction-friendly than FP, I would bet. But you’ll have to think about complexity management after embracing contradictions.

1 Like

Impressive! As a native Chinese speaker, my favorite translations (plus the next clause):

https://pages.ucsd.edu/~dkjordan/chin/LaoJuang/DDJTenTranslations.html

道,可道,非常道。
名,可名,非常名。

无名天地之始﹔有名万物之母。

Translation 9

YÁNG Lìpíng 杨立平
2005 The Tao inspiration: essence of Lao Zi’s wisdom. Singapore: Asiapac. P. 14.

Tao, if articulable, is not the eternal Tao.
The name, if can be named, is not the eternal name.

Heaven and earth start with no name.
The named is the mother of everything under the sun.

Translation 4

WU John C.H.
1961 Lao Tzu: Tao Teh Ching. New York: St. Johns University Press. P. 3.

Tao can be talked about, but not the Eternal Tao.
Names can be named, but not the Eternal Name.

As the origin of heaven-and-earth, it is nameless.
As “the Mother” of all things, it is nameable.

And I’d regard the following as a great explanation rather than simply a translation:

Translation 5

BAHM, Archie J.
1958 Tao Teh King by Lao Tzu interpreted as nature and intelligence. New York: Frederick Ungar. P. 11.

Nature can never be completely described, for such a description of nature would have to duplicate Nature.

No name can fully express what it represents.

It is Nature itself, and not any part (or name or description) abstracted from Nature, which is the ultimate source of all that happens, all that comes and goes, begins and ends, is and is not.

But to describe Nature as “the ultimate source of all” is still only a description, and such a description is not Nature itself.
Yet since, in order to speak of it, we must use words, we shall have to describe it as “the ultimate source of all.”

The “naming (名)” part certainly concerns language, math included, I’d suppose.

1 Like