Chaos/reservoir computing and sequential cognitive models like HTM

Ha. Just different orderings @JarvisGoBrr . Here’s another presentation I made years ago which might help:

Among the examples I attempted there, I see, were “strong tea”/“powerful tea”. “Strong” and “powerful” share many contexts, so you might put them in a single semantic class for many purposes. But they don’t share all contexts. “Tea” is one context they don’t share. So ordering the contexts of “powerful” one way will put it in the same class as “strong”, but ordering them another way will not. The orderings contradict.
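If it helps, here’s a toy sketch of that idea in Python (the context sets are made up for illustration, not taken from any corpus):

```python
# Toy illustration: "strong" and "powerful" share many contexts but not all,
# so whether they land in one class depends on which contexts you group by.
contexts = {
    "strong":   {"argument", "wind", "economy", "tea"},
    "powerful": {"argument", "wind", "economy", "engine"},
}

shared   = contexts["strong"] & contexts["powerful"]   # {"argument", "wind", "economy"}
unshared = contexts["strong"] ^ contexts["powerful"]   # {"tea", "engine"}

def same_class(w1, w2, given_context):
    # Grouping relative to one context: both words must have been seen in it.
    return given_context in contexts[w1] and given_context in contexts[w2]

print(same_class("strong", "powerful", "argument"))  # True  -> one class here
print(same_class("strong", "powerful", "tea"))       # False -> not here; the orderings contradict
```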

Other examples I’ve used over the years… A guy called Peter Howarth had a nice analysis of “errors” made by non-native learners of English. It said things to me about how we generalize word classes. The paper, I think, is: Howarth, Peter. “Phraseology and Second Language Proficiency.” Applied Linguistics, v19 n1, p24-44, Mar 1998 (though my examples come from a pre-print).

What interested me was his analysis of two types of collocational disfluencies he characterized as “blends” and “overlaps”.

By “overlaps” he meant an awkward construction which was nevertheless directly motivated by the existence of an overlapping collocation:

“…attempts and researches have been done by psychologist to find…”

*do an attempt
DO a study
MAKE an attempt/a study


Or another of Howarth’s examples:

*pay effort
PAY attention/a call
MAKE a call/an effort

Trying to express that as a network:

            attention
          /
      pay
    /     \
(?)        a call
    \     /
      make
          \
            an effort

What the data seems to be saying is that beginning speakers often analogize constructions based on shared connectivity like that with “a call”.

They seem to be grouped in a category because of a shared prediction.

“pay” predicts “a call”, “make” predicts “a call”, and if you hypothesize a grouping based on that shared connectivity, then that might explain why beginning speakers tend to produce constructions like “pay effort”, as they do in Howarth’s data.

You might take that as an example where the word “pay” shares the context “a call” with “make”, but doesn’t share the context “an effort”, and “make” doesn’t share “attention”.
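Roughly, as a sketch in Python (the collocation sets are just the ones from the example above; the over-generalization rule is my own guess at the mechanism, not Howarth’s):

```python
# Observed verb -> object collocations from the example above.
collocations = {
    "pay":  {"attention", "a call"},
    "make": {"a call", "an effort"},
}

def analogized(verb, other_verb):
    """If two verbs share an object, a beginner may over-generalize:
    whatever the other verb takes, this verb can take too."""
    if collocations[verb] & collocations[other_verb]:        # shared: {"a call"}
        return collocations[other_verb] - collocations[verb]
    return set()

print(analogized("pay", "make"))   # {'an effort'} -> the "*pay effort" type of error
print(analogized("make", "pay"))   # {'attention'} -> "*make attention"
```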

(Blends" by the way, were mix ups based on more fundamental semantic crossover:

‘*appropriate policy to be taken with regard to inspections’

TAKE steps
ADOPT a policy

The point Howarth was making was actually that overlaps were more common early errors than “blends”, which supports the basic overlapping-set theory, as opposed to, say, shared embodied reference. But that’s a slightly different point.

Those groupings are not false, they just depend on context. It’s not false to say that “pay” and “make” share contexts. It is just that they share some contexts and not others. So you can’t “learn” a single class for them. You have to keep all the observations, and then at run time pick out groupings based on the contexts you have at the time.
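A toy sketch of what “keep all the observations and group at run time” might look like (illustrative only, using the same little data set as above):

```python
# Keep the raw observations; form a class only when a context is given.
observations = {
    ("pay", "attention"), ("pay", "a call"),
    ("make", "a call"), ("make", "an effort"),
}

def class_for(context):
    """Words grouped together *for this context*, picked out at run time."""
    return {word for (word, ctx) in observations if ctx == context}

print(class_for("a call"))     # {'pay', 'make'} -- shared context, one class
print(class_for("an effort"))  # {'make'}        -- "pay" doesn't belong here
```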
