Programming Environment Options

Hey all,
For the past few weeks I’ve been trying to choose the right language to design my new implementation of HTM around. I have a fair set of diagrams laying out the whole project, and as the demands/specifications accumulate I’m trying to pin down the best way to develop things.

My current suggestion is to avoid Java and Python in favor of Common Lisp for its lightweight memory operations (including bit vectors) - though I’m not sure whether that advantage holds up in practice or is only apparent - and to use CUDA to put the GPU to work (read: abuse it) handling the heavy parallelism in the regions.

I’m really new to development, having only worked on large projects for the last year, so I don’t know the ins and outs of choosing a language. But I do enjoy design, and with a solid blueprint I feel capable of producing a strong machine learning package built for computers rather than for existing software.

So, what do you think of using LISP and CUDA together instead of the current languages? Will HTM written in C remain a dominant force or would there be benefits to moving in a different direction?
Thanks,
Sam Gallagher

If you’re going to be writing an HTM in Lisp, you should definitely check out @floybix’s Comportex, written in Clojure.


Matt,
There is little point in writing something that has already been written! Thanks for the great link - I’m diving in to see what it’s like.
Sam Gallagher


Don’t forget the amazing visualizations provided by Sanity!


Hi Sam. It’s cool that you’re thinking about design of HTM systems. I’d be happy to chat about that, perhaps privately.

I’d advise against putting much work into optimisation (as I imagine CUDA would involve) because the theory and algorithms are still not worked out. The current state of HTM can’t do much of real value so there’s little point in making it go faster. One reason I like Lisp here is that its expressivity allows rapid experimentation with different algorithms.

If you start reading the Comportex code you might find it hard going. I am planning to adopt clojure.spec which should clarify the data flow through functions.


Conversely, there’s no reason not to reach for the low-hanging fruit of GPU implementations while exercising one’s knowledge of current HTM theory. Eventually the theory will need a parallel platform, and it won’t hurt to start coding for one now. I would think simple parallel programming could help with research, or at least add some diversity to HTM implementations. To be fair, I’m also planning to write a simple CUDA implementation of HTM in C, which is why I’m a bit biased towards this approach.
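
To make that concrete, here is a minimal CPU-only NumPy sketch of the column-overlap step of spatial pooling. The sizes, sparsity, and variable names are illustrative assumptions, not taken from any existing HTM codebase; the point is only that each column’s overlap reduces independently, which is exactly the shape of work a CUDA kernel (or a GPU array library) would parallelize.

```python
import numpy as np

# Illustrative sizes only - not taken from any particular HTM implementation.
n_inputs = 1024      # bits in the input SDR
n_columns = 2048     # columns in the region

rng = np.random.default_rng(0)

# Binary connectivity matrix: connections[c, i] == 1 if column c has a
# connected synapse to input bit i.
connections = (rng.random((n_columns, n_inputs)) < 0.02).astype(np.int32)

# A sparse binary input vector (roughly 2% active bits).
input_sdr = (rng.random(n_inputs) < 0.02).astype(np.int32)

# The overlap of every column with the input is a single matrix-vector
# product: each row reduces independently, so the work is embarrassingly
# parallel - the part a GPU kernel would take over.
overlaps = connections @ input_sdr

# Winner-take-all: keep the top-k columns as the active set.
k = 40
active_columns = np.argsort(overlaps)[-k:]
print(sorted(active_columns.tolist()))
```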


I’m doing my implementation in Python; the main reason is that it’s the language most people in the machine learning community know, and there is a boatload of additional tools you will need at some point (numpy, bitarray, spacy, ...).
If you asked me which language I’d prefer, I would have loved to use Elixir instead (the problem is the lack of a good large-bit-array implementation). I’m coming to the point where I need communication between modules, for which the Elixir/Erlang actor model is perfect: if you implement the system as standalone neurons/cells, you get asynchronicity and scalability for free (see the rough sketch after this paragraph).
I’m currently evaluating Python Pulsar, but it is a very ad hoc solution.
As for Java, I would prefer any of the scripting languages over it. Even though I use it at work, Java has never reflected the way I think when solving a problem ;( - it’s just so awkward.
For a short time I was thinking of using Perl, but it is not very popular in this area.
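
Staying in Python for now, here is a rough message-passing sketch using asyncio queues as a stand-in for actor mailboxes. asyncio tasks are not real actors, and none of the names come from an existing HTM codebase - they are made up for illustration - but it shows the shape of module-to-module communication I mean:

```python
import asyncio

# Toy sketch: each "region" is an independent task with its own inbox queue,
# a rough stand-in for an Erlang/Elixir actor mailbox.
async def region(name, inbox, outbox=None):
    while True:
        msg = await inbox.get()
        if msg is None:              # shutdown signal
            if outbox is not None:
                await outbox.put(None)
            break
        # Pretend to do some per-region work on the incoming message.
        processed = f"{name} handled {msg}"
        print(processed)
        if outbox is not None:
            await outbox.put(processed)

async def main():
    l1_inbox, l2_inbox = asyncio.Queue(), asyncio.Queue()
    l1 = asyncio.create_task(region("L1", l1_inbox, l2_inbox))
    l2 = asyncio.create_task(region("L2", l2_inbox))
    for t in range(3):
        await l1_inbox.put(f"input-sdr-{t}")
    await l1_inbox.put(None)
    await asyncio.gather(l1, l2)

asyncio.run(main())
```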

My bottleneck is a fast bit-array solution; most of the research is targeted towards int/real arrays (GPU, CUDA).
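
To make the bottleneck concrete, here is a toy sketch of the overlap operation using the bitarray package mentioned above; the sizes and helper names are made up for illustration, not from any real HTM code. The AND and the popcount both happen in C, which is the kind of primitive that is hard to get fast from plain Python.

```python
import random

from bitarray import bitarray

# Illustrative sizes: a 2048-bit SDR with ~2% active bits.
n_bits = 2048
n_active = 40
random.seed(0)

def random_sdr(n_bits, n_active):
    """Return a bitarray with n_active randomly chosen bits set."""
    sdr = bitarray(n_bits)
    sdr.setall(0)
    for i in random.sample(range(n_bits), n_active):
        sdr[i] = 1
    return sdr

a = random_sdr(n_bits, n_active)
b = random_sdr(n_bits, n_active)

# Overlap = popcount(a AND b).
overlap = (a & b).count()
print(overlap)
```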
