Just wanna share that I’ll be presenting Etaler’s tensor subsystem at SITCON 2020 in August. It’s a local non-academic conference for students to share projects and ideas. I think it’s a great chance to spread HTM around a bit. (I’m bad at words… so) Any suggestions on how to advertise(?) HTM gently?
The two things I think are memorable are the realtime path anomaly detection and the Esperanto NLP I did way back. Are there applications of TBT or other advanced theory right now?
@Bitking FYI, our internal project, a high performance HTM-like system, has demonstrated one-shot learning with resistance to catastrophic forgetting for a classification application (non-images so far). We are in the final stages of getting approval for public release of the software, and we are looking forward to sharing it with the community.
We’d also like to collaborate with @marty1885 on how we can improve our projects separately or jointly since we seem to have similar goals, but took slightly different approaches in a C/C++/Python framework. In particular, @marty1885 made use of multi-threading and architectural separation of the frontend and backend, which is something we haven’t done yet, although we’ve tried our best to make room for its eventual implementation.
We also discontinued our GPU implementation since it was slower than running directly on a CPU core. I think we would need to rewrite our previous OpenCL implementation to make more effective use of the GPU bus and cores. On the upside, dropping it made installing and running our software as easy as installing a Python module with pip.
Anyway, I look forward to hearing more about this presentation, although it seems like it will be in Chinese.
In this paper they do 3D object recognition, showing that more sensors (fingers) sharing information lead to faster recognition (fewer touches) than 1 sensor alone. There’s even a video illustrating the concept (4:30 total).
You might consider extracting your C++ tensor implementation into a separate library. That way, we could play with it a little more or even include it into our own projects.
Your PPT slides were very helpful in understanding your tensor work and why you would use it. Normally, my eyes glaze over when I see anything template-related in C++ code.
Marty, this is really awesome and great work on the slides. I’m wondering about extending the tensor concept to bitarrays (i.e. an array of 32-bit integers where each element in the array represents 32 bit elements). We’ve had a lot of performance gains by applying bit operations to a simple C BitArray class for our BrainBlocks algorithms, but it’s only flat 1D bitarrays.
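To make the packing idea concrete, here’s a minimal sketch of the kind of flat 1D bitarray described above. This is my own illustration, not BrainBlocks’ actual `BitArray` API; the type and method names are hypothetical:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Flat 1D bitarray: each uint32_t word packs 32 bit elements.
struct BitArray {
    std::vector<uint32_t> words;
    explicit BitArray(std::size_t nbits) : words((nbits + 31) / 32, 0) {}

    void set(std::size_t i)        { words[i / 32] |=  (1u << (i % 32)); }
    void reset(std::size_t i)      { words[i / 32] &= ~(1u << (i % 32)); }
    bool test(std::size_t i) const { return (words[i / 32] >> (i % 32)) & 1u; }

    // Bitwise AND (e.g. overlap between two SDRs), processed one 32-bit
    // word at a time -- this word-level parallelism is where the speedup
    // over per-element loops comes from.
    BitArray operator&(const BitArray& o) const {
        BitArray r(words.size() * 32);
        for (std::size_t w = 0; w < words.size(); ++w)
            r.words[w] = words[w] & o.words[w];
        return r;
    }
};
```

Extending this to tensors would presumably mean adding shape/stride bookkeeping on top of the flat word buffer, so the elementwise ops above stay word-at-a-time.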
I’m thinking about that too… You’re welcome to use Etaler’s tensor system in its current form while I figure out how to separate the two parts. (DM me if you have any ideas. I’m kinda stuck on a few parts.)