I am looking for people (C++, Py, …) who would like to participate in the community Reviewers team. I think it's important to have more eyes on the code (not only for bugs, but also for design and usability). But I also hope for quick feedback and rapid development in the community repo.
I've set a merge "rule" that at least one review is required before new code gets in.
All of this discussion is good. But we must keep in mind what this library is for. This is not a product or production application. This is a framework for experimentation.
From my viewpoint this library is intended for people who want to experiment with their own implementations of any of the components.
So the first priority is understandability of how the algorithms in the library work.
The second priority is flexibility in how the parts can be connected.
The third priority is flexibility on platform and programming language of the user.
Performance is important but not at the expense of the other three goals.
We should assume that the number of algorithms and variations on those algorithms will continue to grow as people find new things that work…after all, that is the purpose of the library. Consequently, do not expect any of the API to remain constant. As new ideas are presented we should try to incorporate them. For example, if someone discovers a new way to do SP and has a working implementation in Python we should help them port that to C++ and add it to the library if they are unable to do that themselves. I would not expect to see polished code being offered, and that should be OK. Every module in the library must have a corresponding unit-test module, but don't expect offered modules to come with one, so we must help contributors provide one.
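To make the unit-test expectation concrete, here is a minimal sketch of what a test module accompanying a contributed component might look like, using Python's stdlib `unittest`. `SimpleSpatialPooler` and its `compute()` signature are hypothetical placeholders for illustration, not part of any actual nupic API:

```python
# Sketch of a unit-test module for a hypothetical contributed component.
# SimpleSpatialPooler is a toy stand-in, NOT a real nupic class.
import unittest


class SimpleSpatialPooler:
    """Toy stand-in for a contributed SP implementation."""

    def __init__(self, input_size, column_count):
        self.input_size = input_size
        self.column_count = column_count

    def compute(self, input_bits):
        if len(input_bits) != self.input_size:
            raise ValueError("wrong input size")
        # Trivial deterministic mapping, just enough to test against.
        return sorted(set(b % self.column_count for b in input_bits))


class TestSimpleSpatialPooler(unittest.TestCase):
    def setUp(self):
        self.sp = SimpleSpatialPooler(input_size=3, column_count=8)

    def test_output_is_sorted_and_unique(self):
        active = self.sp.compute([1, 9, 1])
        self.assertEqual(active, sorted(set(active)))

    def test_rejects_wrong_input_size(self):
        with self.assertRaises(ValueError):
            self.sp.compute([1, 2])
```

Run with `python -m unittest <module>`. Even a small test module like this gives reviewers something to exercise the contribution against, which is the main point of the requirement.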
@breznak your thing seems to be C++ optimization…that is great. You can help optimize the submitted modules as long as it does not make them harder to understand or lose flexibility.
Having said that, the actual layout of the library should be focused on how easy it is to understand, even by someone who is not a professional programmer. In my opinion it is not mandatory for the community core library to be a clone of nupic's production core, as long as we can identify the changes they have made so we can incorporate them into our library. I expect there to be considerable deviation.
Thank you for your points, will definitely add them to the poll.
This is not a product or production application. This is a framework for experimentation.
I agree, in a way. I'm open to more rapid and extreme changes, but on the other hand, I'd like to have a fork that is "an actively developed continuation of Numenta's repositories". That way Numenta can try to sync once in a while, if they wish, and people who build their apps on top of it can continue to use a stable, actively developed descendant. So I'll also add:
compatibility (more or less) with the current Numenta API
rapid (vs conservative) development (API breakage)
unit-test coverage (each new feature is tested)
keep C++/Py feature and API parity (vs. letting the repos diverge and live on their own)
Note, I'm collecting ideas to ask about here; it doesn't mean I agree with all the points I'm listing.
library should be focused on how easy it is to understand even by someone that is not a professional programmer
I'm not sure about this one. Either they are scientists who focus mainly on the papers/NeuroSci, or programmers who focus on (and know) the internal workings, or application users, who use products based on HTM (Grok, HTM School, …)…imho.
I would not really care about whitespace and coding style so much (Matt always had to punch me to do that).
…if someone discovers a new way to do SP and has a working implementation in Python we should help them port that to C++ and add it to the library if they are unable to do that themselves
Careful with this: of course we'll do it if we like it or it's an uber-cool feature, but you might soon end up porting code you are not interested in.
More points and ideas for the poll: what do you want from a future nupic?
Great job coordinating, you all. It would be a Very Good Thing to get everyone working on the same forked codebase with a set of objectives. You seem to be heading in the right direction.
Just be careful about letting "new algorithms" into the project. When that happens, be very clear about where they originated and whether or not they are biologically inspired (cite papers). It will help in the future.
@rhyolight Ah, yes, I agree. These should be HTM algorithms. I was thinking in terms of some of the variations of the HTM modules listed in the API specifications…like backtracking TM, and perhaps some more encoders and classifiers, or even some monitoring tools. Hopefully some new things will eventually come out of Numenta's current research that we can add.