Here is a series of questions/thoughts/analogies that may lead nowhere, but here goes:
A point attractor has the following properties: it has a basin of attraction in which any starting point eventually ends up at the fixed point, and from any starting point in the basin there is a continuous trajectory that is traveled to reach the fixed point, where the trajectory stops. A backprop neural net that converges on a solution is an example of traveling a trajectory of weights. A generative vision model that is not in learning mode, but just recognizing an object, will travel a trajectory of neuron activations until it settles on a guess of what it is seeing. Numenta's object-recognition model also has some similarities to a point attractor. If you are feeling a cup, starting at a particular point on it, it will take several movements followed by touch to realize you are touching a cup. The pattern of neural activations that represents a cup is reached after several movements. Like a basin of attraction, there are several possible starting points on the cup, each with a corresponding neural pattern, and each eventually ends up (by a process of intersection with predictions) at the pattern for a cup.
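To make the analogy concrete, here is a minimal sketch of a point attractor in the backprop sense: gradient descent on a quadratic bowl, where the origin is the fixed point and every starting point in the basin travels a trajectory down to it. The function, step size, and starting points are arbitrary choices for illustration, not anything from Numenta's model.

```python
# Sketch: gradient descent on f(x, y) = x^2 + y^2.
# The origin is the fixed point; any start in the basin
# follows a trajectory down to it, like backprop following
# a trajectory of weights.

def gradient(p):
    x, y = p
    return (2 * x, 2 * y)  # gradient of x^2 + y^2

def descend(start, lr=0.1, steps=100):
    """Iterate p_{k+1} = p_k - lr * grad f(p_k), recording the trajectory."""
    trajectory = [start]
    p = start
    for _ in range(steps):
        gx, gy = gradient(p)
        p = (p[0] - lr * gx, p[1] - lr * gy)
        trajectory.append(p)
    return trajectory

# Different starting points in the basin end at the same fixed point.
for start in [(3.0, -2.0), (-1.5, 0.5)]:
    x, y = descend(start)[-1]
    print(f"start={start} -> end=({x:.4f}, {y:.4f})")
```

Every trajectory contracts toward (0, 0); the "phase diagram" for this system would just be arrows pointing inward toward the origin.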
I'm wondering: what is the difference between this type of convergence to one pattern and convergence to a point attractor? Could one use techniques from attractor phase diagrams to depict the convergence to a solution? Or would such diagrams be pointless, since you can use a sequence of intersection diagrams to show the convergence?
There are other types of attractors too, such as "strange attractors", and one can speculate that a train of thought hops from object to object, or attractor to attractor.
I also think of Numenta's convergence as a game of 20 questions: for instance, "is it red", "is it round", "is it soft" might result in a sparse pattern for "tomato", assuming all the answers were "yes". When you start out with "is it red", an affirmative answer might still leave many possibilities, like an apple, a red balloon, etc. Further questions (intersections) narrow it down.
A final thought: real neurons fatigue after a period of firing. What would this do to a sparse pattern that represented an object?
This would allow a competing, "almost recognized" pattern to take the stage when two patterns overlap heavily.
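Here is one toy way that hand-off could play out: cells that just fired accumulate fatigue and stop contributing to overlap, so the runner-up pattern wins the next competition. The cell indices, patterns, and fatigue rule are all invented for illustration.

```python
# Sketch: fatigue letting a competing, highly overlapping
# pattern take the stage.

CUP = {1, 2, 3, 4, 5, 6}
MUG = {3, 4, 5, 6, 7, 8}       # heavily overlaps CUP
INPUT = {1, 2, 3, 4, 5, 6, 7}  # what the senses currently report

def winner(patterns, active_input, fatigued):
    """Pick the stored pattern with the most overlap, ignoring fatigued cells."""
    usable = active_input - fatigued
    return max(patterns, key=lambda name: len(patterns[name] & usable))

patterns = {"cup": CUP, "mug": MUG}
fatigued = set()
for step in range(2):
    name = winner(patterns, INPUT, fatigued)
    print(f"step {step}: {name}")
    fatigued |= patterns[name] & INPUT  # cells that just fired tire out
```

The cup pattern wins first; once its active cells fatigue, the overlapping mug pattern takes over the representation.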
As for your question about trajectories and basins, those apply to continuous-valued representations. Plug chunky binary values into the scenarios you propose and see what changes.
Try starting from the point of view of a basket of features representing an object with pattern completion, and see how things work differently. You have clouds of learned points with varying degrees of overlap with the input pattern. The set with the highest overlap wins the competition to complete the pattern and represent the object. The attractor in this scenario is the completed pattern.
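That competition can be sketched in a few lines: binary ("chunky") stored patterns are compared against a partial input, and the one with the highest overlap completes itself and stands in for the object. The stored bit sets are arbitrary choices for illustration, not actual SDRs from HTM.

```python
# Sketch: pattern completion as an attractor. The stored pattern
# with the most overlap on a partial input wins and is completed.

STORED = {
    "cup":  frozenset({2, 5, 9, 12, 17}),
    "bowl": frozenset({2, 5, 11, 14, 19}),
    "pen":  frozenset({1, 6, 8, 13, 18}),
}

def complete(partial):
    """Return (name, full pattern) for the stored pattern with highest overlap."""
    name = max(STORED, key=lambda n: len(STORED[n] & partial))
    return name, STORED[name]

name, full = complete({2, 5, 9})  # a degraded / partial input
print(name, sorted(full))
```

The partial input {2, 5, 9} overlaps the cup pattern most, so the full cup pattern is restored; the completed pattern is the "attractor" the partial input falls into.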
Oh, on the pattern-completion point: this is how lateral voting ends up working in the more recent versions of HTM.
I don't know attractor theory well, but can I ask a few questions? Maybe they will help the comparison.

What happens to the fixed point when parameters change on the fly? The neocortex can still recognise the apple when it changes color (in new lighting, for instance), or is made of a different material (like a statue, "suspended by art"), or becomes a conceptual idea (a description of components), or even something completely abstract (as in a cubist painting). Or when it breaks down: an apple with a few bites taken out is still recognised as an apple. Would the attractor field still resolve to the same fixed point in such cases?

Does attractor theory work with continuously streaming input? Would it work with a video?

What happens to the fixed point when the object you're observing becomes part of an aggregate, especially when the different subparts become entangled? If I make a tail out of putty and attach it to the putty dog I made earlier, I still recognise the tail, and I still know that it is made of putty.
The way I see this is like a string of asynchronously blinking Christmas lights: while some lights are on and others are off, I always see the string, even if the string is swinging in the wind.
In answer to one of your questions: remember Numenta's coffee-cup example? The logo can be an object on its own, or it can be part of an aggregate (the coffee cup), but its sparse representation doesn't change (I think). As for a partial pattern, like an apple with some bites taken out of it: I have seen articles where attractors are described as autoassociative, so they can complete a pattern. That doesn't account for the apple looking different (with the bites), but it would at least identify it as an apple.