I’m trying to figure out how Reference Frames and Grids help.
The problem, in general, is that apart from Sensor-Motor and Spatial tasks I can hardly find problems that fit into that Framework.
So I’m looking for problems that can use a CC-like algorithm but are not Sensor-Motor or Spatial tasks.
Can you give me one or two?
To start us off, I’ll show you one:
GOAL: Solving equations in the form ax+b=0
GRID:
Location: Transformed expression
Ex.: (ax-b=0)--[+b]-->(ax=b)--[/a]-->(x=b/a)
Ex.: (3x-15=0)--[+15]-->(3x=15)--[/3]-->(x=5)
Ex.: (3x-15=0)--[+15]-->(3x=15)--[*3]-->(9x=45)--[/9]-->(x=5)
Ex.: (2x+x=15)--[sumby(x)]-->(3x=15)--[/3]-->(x=5)
Every expression signifies a location on the grid.
Landmark/Feature: Reward, i.e. was the transformation good or bad?
Actions: {+, -, *bottom, Ax + Bx = (A+B)x}
So we have our Movement commands: the Actions.
We have the Grid, where a Location is a transformed expression.
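Roughly in code the idea looks like this (just a sketch; sympy is my assumption here for convenience, and the helper names are made up):

```python
# A minimal sketch of "Locations = transformed expressions, Actions = movements".
# Assumes sympy is available; the action helpers are illustrative only.
from sympy import symbols, Eq

x = symbols('x')

def add_to_both_sides(eq, term):
    # action "+term": move to a new Location by adding term to both sides
    return Eq(eq.lhs + term, eq.rhs + term)

def divide_both_sides(eq, divisor):
    # action "/divisor": move to a new Location by dividing both sides
    return Eq(eq.lhs / divisor, eq.rhs / divisor)

loc0 = Eq(3*x - 15, 0)               # (3x-15=0)
loc1 = add_to_both_sides(loc0, 15)   # --[+15]--> (3x=15)
loc2 = divide_both_sides(loc1, 3)    # --[/3]-->  (x=5)
print(loc0, '->', loc1, '->', loc2)
```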
Still, there is a slight problem even with this example: first, the Locations are too abstract to be represented by Grid modules; second, they are not known in advance, you need to do the Calculation/Movement first to see the Location.
Probably this is too abstract a problem?
So can you give me an example that is just ONE level of abstraction above a Sensor-Motor task and that will still fit?
Could you give an example of a spatial task that you are able to solve? It seems from the framing of your question that you are looking for goal-oriented behavior (solving an equation, in your example), which would require some type of reward/punishment mechanism. Current HTM theory does not have that capability yet, so I’m just checking whether you’ve gotten that far in your experiments. Framing your framework’s current capabilities will help with defining a task that is less spatial in nature but still fits those capabilities.
I’m still researching, so currently I have only ideas… this time I’m taking more time playing with ideas until I see a clearer path… over the last couple of years I tried so many things, but always piecemeal.
As for Spatial tasks, they by default work in 2D and 3D space, so they are easily translated into Ref Frames and Grids/metrics.
What I’m looking for is how to translate non-spatial problems into Frame-spaces, so that I can use them as an INVARIANT representation… once I do that, I can tailor a CC-like algo.
I’m trying to connect the dots bottom-up and top-down… I know that it probably needs many layers of abstraction, but if I can find simple non-spatial tasks, I can start to fill the abstraction GAP slowly.
Perhaps you are trying to move too quickly here. Have you defined the space in which this conceptual task exists? What does each state (location) look like? How do you identify the allowable transitions between states?
Probably the most general representation for the problem as a whole is a bi-directional graph (Bidirectional assumes that an inverse operation exists for any operation you apply to the expression). Each node in the graph corresponds to a snapshot of the expression you are manipulating, and each edge is a transition or operation applied to the expression. But what does this snapshot look like?
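As a rough sketch (the class and field names here are just placeholders, not any existing API), a small fragment of such a graph could look like this:

```python
# Sketch of the bidirectional state-space graph:
# nodes are expression snapshots, edges are operations paired with their inverses.
from dataclasses import dataclass, field

@dataclass
class Edge:
    operation: str   # e.g. "+15"
    inverse: str     # e.g. "-15" (bidirectional: every operation has an inverse)
    target: str      # expression snapshot reached by applying the operation

@dataclass
class Node:
    expression: str                 # snapshot, e.g. "3x-15=0"
    edges: list = field(default_factory=list)

# A small fragment of the graph around 3x-15=0
graph = {
    "3x-15=0": Node("3x-15=0", [Edge("+15", "-15", "3x=15")]),
    "3x=15":   Node("3x=15",   [Edge("/3",  "*3",  "x=5"),
                                Edge("-15", "+15", "3x-15=0")]),
    "x=5":     Node("x=5",     [Edge("*3",  "/3",  "3x=15")]),
}
```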
Probably the most general representation for a mathematical expression is the parse tree (see for instance the shunting-yard algorithm).
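And a bare-bones shunting-yard pass, stripped down to a few operators just to make the idea concrete (not a full expression parser):

```python
# Minimal shunting-yard: infix tokens -> postfix (RPN),
# which is one small step away from building the parse tree.
def shunting_yard(tokens):
    precedence = {'+': 1, '-': 1, '*': 2, '/': 2}
    output, ops = [], []
    for tok in tokens:
        if tok in precedence:
            # pop operators of equal or higher precedence before pushing this one
            while ops and ops[-1] != '(' and precedence[ops[-1]] >= precedence[tok]:
                output.append(ops.pop())
            ops.append(tok)
        elif tok == '(':
            ops.append(tok)
        elif tok == ')':
            while ops and ops[-1] != '(':
                output.append(ops.pop())
            ops.pop()  # discard the '('
        else:
            output.append(tok)  # operand
    while ops:
        output.append(ops.pop())
    return output

print(shunting_yard(['3', '*', 'x', '-', '15']))  # ['3', 'x', '*', '15', '-']
```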
Now you have two levels of navigation tasks for your agent to explore. The first task involves the agent simply looking around the room (the current state) trying to identify familiar patterns. Certain patterns could potentially have an association with an allowable transformation operation. This would be akin to the agent identifying the doors in the room.
The second task involves the agent choosing which transformation to apply (which door to walk through). Once the choice is made, the environment updates itself according to the logic of the state space graph. If the selected transition is not valid, it would be like the agent walking into a wall: no change to the environment, and maybe some small feedback signal letting it know the outcome was not desirable. If the selected transition is allowed, then the agent will find itself in a new room with a new parse tree structure to examine. At which point the sensory task can begin again.
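In code, the environment side of that loop might look something like this (pure sketch, reusing the little graph fragment from above; the penalty value is arbitrary):

```python
# Environment step logic: a chosen transformation either moves the agent
# to a new "room" (expression state) or behaves like a wall.
WALL_PENALTY = -0.1

def step(graph, current_state, chosen_operation):
    node = graph[current_state]
    for edge in node.edges:
        if edge.operation == chosen_operation:
            return edge.target, 0.0        # valid door: new room, neutral feedback
    return current_state, WALL_PENALTY     # invalid move: bump into a wall
```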
I hope this helps you with making the problem specification a little more concrete. It’s too easy to get bogged down in thought experiments. Start with something simple and see if you can get an agent that can recognize the expression state (2+2) and correctly predict that the applySummation operation will allow it to transition to the expression state (4).
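Concretely, the very first version could be nothing more than a one-step transition memory, a toy stand-in for a learned sequence memory:

```python
# Toy first-order transition memory: after seeing a transition once,
# predict where a given operation leads from a given state.
memory = {}

def observe(state, operation, next_state):
    memory[(state, operation)] = next_state

def predict(state, operation):
    return memory.get((state, operation))   # None means "never seen this door"

observe("(2+2)", "applySummation", "(4)")
print(predict("(2+2)", "applySummation"))   # -> (4)
```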
Thanks, good advice.
Yeah, I just began with the definition of the state-space graph for this task.
Do you have other examples, different from equations?
You could do something similar for navigating through web-pages. Back in the day, we used to call those web-crawlers. It’s the same basic principle, but with DOM trees as your local structure. The transitions are hyperlinks or other interactive elements on the page. Either way, the result should be a response which modifies the current DOM tree or a whole new DOM altogether.
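A tiny sketch of that, using only the standard library to pull the outgoing links (the “doors”) out of a page’s HTML:

```python
# Sketch: a web page as a "room", its hyperlinks as the available transitions.
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # collect href targets from anchor tags
        if tag == 'a':
            for name, value in attrs:
                if name == 'href' and value:
                    self.links.append(value)

def doors(html):
    parser = LinkExtractor()
    parser.feed(html)
    return parser.links  # candidate transitions out of the current DOM "room"

print(doors('<a href="/page2">next</a> <a href="/page3">other</a>'))
# -> ['/page2', '/page3']
```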
There are probably many other examples that I could contrive. What you have to ask yourself is: what problem do I really want to solve? Once you have a well-defined problem, then you can start asking how best to represent that problem in a way that might be amenable to a solution using the tools you have available.