Concurrent HTM experiment in Pony: htm.pony

Hi, as I’ve mentioned in other topics, and for reasons similar to those that led others to write their own HTM implementations, I’ve started my own, however, with a couple of constraints:

  • concurrency should be “easy”, which brings me to the Actor Model and Pony
  • I have limited time, so progress will be slow; at first, I’ll try to reimplement an existing library that has sufficient tests: the Go implementation https://github.com/htm-community/htm/

Here’s the repo: https://github.com/d-led/htm.pony

Where it takes off from there, we’ll see. Anyone willing to learn Pony, play around, or contribute is welcome.

The issues and the readme reflect the current state of the implementation. There are no actors yet, as, at the time of writing, I’ve just finished the scalar encoder. However, as soon as there’s some flow of data to multiple objects (SP?), actors can step in. It’s possible to start with classes and just embed them in actors to run the classes concurrently. The language takes care of the safety.
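For reference, here is a minimal Python sketch of what a scalar encoder does: it maps a value into a fixed-width bit array with a contiguous run of active bits, so that nearby values share bits. This is an illustration of the standard HTM idea, not the htm.pony or Go code; all names and default parameters here are mine.

```python
def encode_scalar(value, min_val=0.0, max_val=100.0, n=64, w=5):
    """Map a scalar into an n-bit array with a contiguous run of w active bits.

    A sketch of the classic HTM scalar encoder idea, not the actual
    htm.pony or htm-community/htm code; parameters are illustrative.
    """
    # Clamp the value into the encoder's range.
    value = max(min_val, min(max_val, value))
    # Number of possible starting positions for the active block.
    buckets = n - w + 1
    # Which bucket (starting position) this value falls into.
    i = int((value - min_val) / (max_val - min_val) * (buckets - 1))
    bits = [0] * n
    for j in range(i, i + w):
        bits[j] = 1
    return bits
```

Similar inputs then overlap in their active bits, which is what the downstream spatial pooler relies on.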

In the Actor Model, each concurrent activity can be modeled as an actor, which will potentially run in parallel with other actors. Actors are lightweight, so millions of actors on one machine are no problem. Each actor itself runs sequentially, so its state is never accessed in parallel from multiple threads. This should align well with a model of the brain, as cells are physically parallel and exchange various kinds of messages. To make this computationally efficient, a sensible subdivision of the problem has to be found.

The Actor Model unifies the APIs for concurrency and distribution, and there are no locks: actors communicate via messages. These translate to simple method-like behaviour calls in Pony.

Playground

actor Thing
  let _env: Env
  var _counter: U64 = 0

  new create(env: Env) =>
    _env = env

  // A behaviour (`be`) is an asynchronous message handler:
  // calling jump() returns immediately, and the actor
  // processes the queued messages one at a time.
  be jump() =>
    _counter = _counter + 1
    _env.out.print(_counter.string() + ": jumping")

actor Main
  new create(env: Env) =>
    let t = Thing(env)
    t.jump()
    env.out.print("Hello, world!")
    t.jump()

Hello, world!
1: jumping
2: jumping

By the way, Pony syntax is similar to Python’s. Also, there are no globals in Pony, which is why you can see the (safe) environment needed to print anything being passed around explicitly. Note too that “Hello, world!” appears before the jumps in the output above: behaviour calls are asynchronous, so Main carries on without waiting.

Other topics for more context:


It worked!

Pony seems to require one actor per cell (or per subpopulation/clique/place/minicolumn/column of cells), or else it defeats the parallelism and other benefits of the language. Placing and networking the required cells into arrays might be made easier by including migration and connection behaviors, then letting the actors find their proper places on their own. Your thoughts?


I don’t see any restrictions the language puts on its use in that sense. What you described sounds like a bit of magic to me :grin:, which means someone will first have to implement it. I’d start small, by separating parallel chunks of computation into actors. That implementation can hopefully serve as a parallelism experimentation workbench to see what works. If millions of parallel “things” are OK on one machine, these can easily be Pony actors, whatever they represent.

I was primarily thinking of the “attractor network” I am modeling, where each (population of cells) somehow has to be placed, then connected, and must later account for neural plasticity. The hexagonal math and subroutines are a chore that is best eliminated. See the second and third projects shown.

This will help test how well the network works using autonomous places, running in parallel. HTM based prediction can later be calculated by each place/actor in the network.

@Gary_Gaulin this definitely sounds interesting and could be programmed with the Actor Model, although the subject is beyond my focus and abilities :smile:

I don’t want to interrupt work on your HTM example. I’m hoping you will have something valuable by the end of this weekend.

While thinking about the simplest possible application that applies to neural biology, this experiment came to mind. There is only one memory unit; it requires only one or two sensory bits to sense warm/cold and humid/dry conditions, and one or two bits to indicate the bodily state of the one cell.

Another set of experiments suggests that slime molds navigate time as well as space, using a rudimentary internal clock to anticipate and prepare for future changes in their environments. Tetsu Saigusa of Hokkaido University and his colleagues—including Nakagaki—placed a polycephalum in a kind of groove in an agar plate stored in a warm and moist environment (slime molds thrive in high humidity). The slime mold crawled along the groove. Every 30 minutes, however, the scientists suddenly dropped the temperature and decreased the humidity, subjecting the polycephalum to unfavorably dry conditions. The slime mold instinctively began to crawl more slowly, saving its energy. After a few trials, Saigusa and his colleagues stopped changing the slime mold’s environment, but every 30 minutes the amoeba’s pace slowed anyway. Eventually it stopped slowing down spontaneously. Slime molds did the same thing at intervals of 60 and 90 minutes, although, on average, only about half of the slime molds tested showed spontaneous slowing in the absence of an environmental change.

Because the slime mold cannot rely on its slime for this trick, Saigusa speculates that it instead depends on an internal mechanism of some kind, perhaps involving the perpetually pulsating gelatinous contents of its one cell, known as cytoplasm. The slime mold’s membrane rhythmically constricts and relaxes, keeping the cytoplasm within flowing. When the amoeba’s membrane encounters food, it pulsates more quickly and expands, allowing more cytoplasm to flow into that region; when it stumbles onto something aversive—such as bright light—its palpitations slow down and cytoplasm moves elsewhere. Somehow, the slime mold may be keeping track of its own rhythmic pulsing, creating a kind of simple clock that would allow it to anticipate future events.

If the Pony model works, then we could ask these researchers for their opinion of the behavior and of what happens when various brain cells are similarly disturbed. So I was not kidding about hoping for something valuable to be possible in a few days, or less. You’ll then, like I was describing, have HTM inside a single cell: a neural model with (as Jeff has described) 1000 Brains capability built right in, or at the very least a novel model of the brain of a social amoeba.


I found a discussion forum for the Pony Language:

As you can see, I was able to think of a typical application; for all languages it’s only a matter of time before someone asks that question. I’ll now be patient.

I also found help with some of the biological details:

Each cell (thus each subpopulation) has its own associative memory. The easiest way to model it is to connect each sensory bit to the address input of a RAM array. To make the system come to life, each unique experience it can possibly have has its own two-bit motor-control data and a two-bit confidence level that increases when nothing bad happens and decreases when something goes wrong after trying a random (or, better, as in HTM, predicted-to-work) motor action. Trial-and-error learning. You will then have combined the model I have been experimenting with most and the HTM model Numenta is working on, for one (not yet mobile) cell.
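The RAM-array scheme described above could be sketched like this. This is a hypothetical Python illustration of my reading of the description, not code from either project; the class, its names, and the bit widths are all my own.

```python
# Sketch of the described associative memory: sensory bits form a RAM
# address; each location stores a 2-bit motor action and a 2-bit
# confidence counter. Hypothetical illustration, not project code.
import random

class AssociativeMemory:
    def __init__(self, address_bits):
        # Dense RAM array: one [motor, confidence] pair per address.
        self.cells = [[0, 0] for _ in range(2 ** address_bits)]

    def address(self, sensory_bits):
        # Pack the sensory bits into an integer RAM address.
        addr = 0
        for bit in sensory_bits:
            addr = (addr << 1) | bit
        return addr

    def recall(self, sensory_bits):
        motor, confidence = self.cells[self.address(sensory_bits)]
        if confidence == 0:
            # No confidence yet: try a random 2-bit motor action.
            motor = random.randrange(4)
        return motor

    def reward(self, sensory_bits, motor, good):
        # Trial and error: store the action, nudge confidence up or down.
        cell = self.cells[self.address(sensory_bits)]
        cell[0] = motor
        cell[1] = min(3, cell[1] + 1) if good else max(0, cell[1] - 1)
```

Each time step the cell recalls an action for its current sensory state, tries it, and then rewards or punishes that memory location depending on the outcome.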

Anything else needed for HTM to function can be assumed to be inside the cell. After (from your as-simple-as-possible example) knowing exactly what to look for, it would be easy for others to search for evidence of it being biologically true.

To be useful, the prediction only has to be better than a randomly generated guess. If the HTM part easily beats random guessing, then you’re done. The most important case is what to do when something new is experienced while traveling at high speed, where the best guess must be to continue using the current motor settings, or else it crashes. In my implementations, the two-bit left/right and forward/reverse motor data bits notch up a throttle representing applied muscle force, i.e. speed. In biology there is a pulse train like this controlling muscles. I purposely made the test critters able to achieve speeds that would be hard for us to fully control, too.

What a fully random system needs right away is a prediction mechanism able to figure out that, when it has a new experience (the memory location is all zeros, addressed for the first time), it should use the current motor settings as a guess for what to do next; that guess is then used in the next step to set the motor settings, even though sometimes they are repeated. What you are modeling can be expected to have the same type of thing happening, which may help combine a typical associative-memory system with HTM.

What you are modeling is not yet mobile, but once you have that part right, adding mobility might take a few hours or less. The basic circuit does not change; it only has more sensors and motors connected to it. Beyond 28 bits of address space a personal computer can run out of memory, but since almost all locations then never get addressed, there are ways to add code (though it will slow down execution) for an as-needed, structured memory.
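The as-needed memory idea in the last sentence maps naturally onto a hash map: allocate a location only when it is first written, and let untouched addresses read as all zeros. A hypothetical Python sketch, not project code:

```python
class SparseMemory:
    """Allocate memory locations only when they are first written.

    Sketch of the 'as-needed' memory mentioned above: with 28+ address
    bits, a dict avoids allocating 2**28 dense entries, at the cost of
    a hash lookup per access. Hypothetical, not project code.
    """
    def __init__(self, default=(0, 0)):
        self.cells = {}          # address -> stored value
        self.default = default   # what an untouched location reads as

    def read(self, addr):
        # Unwritten locations behave like all-zero RAM.
        return self.cells.get(addr, self.default)

    def write(self, addr, value):
        self.cells[addr] = value
```

A location far beyond what a dense array could hold, e.g. `write(2 ** 28 + 5, (3, 1))`, costs only one dictionary entry.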

@Gary_Gaulin perhaps let’s open a separate thread on cells and actors to keep this one focused on Pony. I think it is a good match that should be explored.

Side remark: perhaps notably, the first paper on the Actor Model and the subsequent research by Carl Hewitt seem to come from an Artificial Intelligence background.
From the first 1973 paper:

FW: @jordan.kay :smile:


Pony is a general-purpose language, so, given an existing system algorithm that includes independent/concurrent actors, it should be easy to model each such object with an actor. What the algorithm should describe is what kinds of messages can be exchanged between actors and how they know about each other. Let’s start a separate thread on schooling behavior (I’m not sure this forum is appropriate, as it should probably focus on HTMs) and try to see how it could be modeled. However, I’m sorry to have very limited time and won’t be able to go deep into details.

Given the Actor Model, at first it doesn’t matter which language is used; Pony or Erlang/Elixir are just current implementations. Erlang/Elixir could even be a better match for simulations due to preemptive scheduling.

People are doing it: Neuroevolution in Elixir; Erlang-based Desynchronised Urban Traffic Simulation; Boids, where each bird is a process.


I think you should stay focused on a simple HTM example for Pony. There might already be someone else working on a swarming example for us.
