HTM.Java test failures because of file permissions

Hi ,

I am doing research on HTM (CLA) for my MS. To explore its implementation, I downloaded the HTM.Java code into Eclipse, configured Gradle on my system, and executed “gradle -Pskipbench check” on the HTM.Java project. The build failed with exceptions in 2 tests. I have tried to resolve these exceptions, but I cannot really understand why they occurred.

Usama Furqan
MS student
Comsats Lahore Pakistan

Hi @Usama_Furqan… Thanks for inviting me, @rhyolight,

The first test assumes the user has permission to write to their home directory (a standard Java assumption), so it looks like you need “write” permission on that directory. The second problem probably stems from the same issue. Have a look at the directory permissions, or the permissions of the user account you are running as.
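To check that assumption quickly, here is a minimal plain-JDK sketch (the class name is mine, not part of HTM.Java) that probes whether the `user.home` directory is actually writable by creating and deleting a temp file there:

```java
import java.io.File;
import java.io.IOException;

public class HomeWriteCheck {
    public static void main(String[] args) {
        File home = new File(System.getProperty("user.home"));
        boolean writable;
        try {
            // Actually creating a file is more reliable than File.canWrite()
            // on some platforms (e.g. Windows ACLs).
            File probe = File.createTempFile("htm-probe", ".tmp", home);
            probe.delete();
            writable = true;
        } catch (IOException e) {
            writable = false;
        }
        System.out.println("home directory writable: " + writable);
    }
}
```

If this prints `false`, the serialization tests will fail for the same reason.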

Welcome aboard! :slight_smile:


Hi @cogmission
You are correct, the first exception was due to a write-permission issue. I resolved it by modifying the ensurePathExists function in the Persistence class, i.e. ensurePathExists checks whether the caller is FastRandomSerializationTest and, if so, sets the path to my C: drive.

The second exception, in TestMain, does not seem to be related to directory permissions, because there are no read or write operations at the point of the exception. The code appears to be doing a comparison.

Can you suggest a solution?
Could the error be caused by a jar?


Hi @Usama_Furqan,

I’m confused by your first solution. If your “home” directory were not writable, then all of the serialization tests that write to a file should fail, not just the FastRandom… test. Execution of HTM.Java’s tests depends on the user being able to write to their own home directory. If you are executing this code on your own machine, I would think you would want to correct the directory permissions, not alter the test.

What operating system are you running?

These tests run on Mac and Linux operating systems, and I would assume Windows too, because I haven’t heard from any Windows users that there are problems. Can any Windows users out there verify this? I would appreciate hearing from you about it.

Since the builds run fine, I would look for OS or user-account subtleties that could alter the output of these tests, rather than change the tests. As far as I have seen, there are no problems with the tests themselves. The second test takes over the System OutputStream (PrintStream), and maybe there is an account setting that prohibits this in some way on the machine or account you are using. Also, if you look at the ComparisonFailure of the 2nd test above, you can see that the two values being compared are arrays, not integers. It’s as if the types are altered on your machine somehow: the output stream seems to be converting longs to integer arrays, even though the values are exactly the same (83200[] == 83200[]).

This is what is producing the output:

long s = 2858730232218250L;
long e = (s >>> 35);
System.out.println("e = " + e);   // prints: e = 83200

…as you can see, the output concatenates a long onto a String ("e = "), and at that point, on your OS, the type seems to be converted.

This test is not important, and if the other 600 or so tests all pass, I wouldn’t worry about this one. The ability to co-opt the OutputStream is not necessary in order to run HTM.Java; that is just an edge case I was testing.
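For reference, "co-opting the OutputStream" means something like the following plain-JDK sketch (the class name is mine; it reuses the shift value from the snippet above). The test redirects `System.out` into a buffer, runs the print, restores the stream, and then compares the captured text:

```java
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;

public class CaptureOut {
    public static void main(String[] args) {
        PrintStream original = System.out;
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        System.setOut(new PrintStream(buf));   // co-opt the system OutputStream

        long s = 2858730232218250L;
        System.out.println("e = " + (s >>> 35));   // goes into buf, not the console

        System.setOut(original);               // restore the real stream
        String captured = buf.toString().trim();
        System.out.println("captured: " + captured);  // prints: captured: e = 83200
    }
}
```

If an environment interfered with `System.setOut`, a comparison like the one in TestMain would fail even though the arithmetic is correct.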


Hi @cogmission

I am also surprised by such strange behavior. If I comment out the mentioned comparison line in TestMain, the build succeeds.

But I am unable to find where I can see the HTM build output or results related to the data provided in \build\resources.
I have noticed that jmh (\build\resources\jmh\jmh_defaults.txt) defines its output location as /reports/jmh/, but both files there (human.txt, results.txt) are empty.
Please help me understand how I can give CSV input to HTM and get output for solving a simple machine-learning problem like classification, association, or anomaly detection.
Please also advise: if working on Windows 7 is not a suitable or preferred approach for HTM, should I try it on Ubuntu 14?



Hi @Usama_Furqan,

Hmmm… According to this, Windows 7 SP1 is compatible with Java SE 8, but obviously there are some issues. Linux might be a better bet.

For help applying HTM.Java to a particular ML problem, please write to the NuPIC lists, as any advice they give you will be directly “translatable” to HTM.Java. I can’t help with actual applications, but I can help translate implementation details to HTM.Java if necessary. General usage should be exactly the same for both, since HTM.Java is a port of NuPIC.

As for specific input, if you look at the Network API tests, you will see examples of files being used as input via a FileSensor, or you can feed data in programmatically using a PublisherSupplier… Please hold on while I find specific examples…



Usage example for a FileSensor (which allows you to point to a CSV file on your drive):

Network net = getLoadedHotGymNetwork_FileSensor();
net.observe().subscribe(new Observer<Inference>() {
    @Override public void onCompleted() {}
    @Override public void onError(Throwable e) { e.printStackTrace(); }
    @Override public void onNext(Inference inf) {
        // Do your work in the subscriber here, for any line-by-line output reactions...
    }
});

Usage example for an ObservableSensor (which allows you to programmatically push data into the Network):

Network network = getLoadedDayOfWeekNetwork();
Publisher pub = network.getPublisher();

for(int cycleCount = 0; cycleCount < NUM_CYCLES; cycleCount++) {
    for(double j = 0; j < INPUT_GROUP_COUNT; j++) {
        pub.onNext("your,csv,string,data,here");  // maybe reading from a list or file?
    }
    network.reset();  // if resetting between input groups of data...
}

You will need to configure the details of the parameters according to your needs…
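If your data lives in a CSV file but you want the programmatic route, the feeding loop above can be sketched with plain JDK I/O. This is only an illustration: the `Consumer<String>` sink stands in for `pub::onNext`, and the file name and sample rows are hypothetical:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;
import java.util.function.Consumer;

public class CsvPump {
    // Feed each non-empty CSV line to a sink; with HTM.Java the sink
    // would be pub::onNext on the Network's Publisher.
    static int pump(Path csv, Consumer<String> sink) throws IOException {
        int fed = 0;
        for (String line : Files.readAllLines(csv)) {
            if (line.isEmpty()) continue;
            sink.accept(line);
            fed++;
        }
        return fed;
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("demo", ".csv");
        Files.write(tmp, Arrays.asList(
            "timestamp,consumption",   // header rows are part of the stream too
            "7/2/10 0:00,21.2",
            "7/2/10 1:00,16.4"));
        int n = pump(tmp, line -> System.out.println("fed: " + line));
        System.out.println("lines fed = " + n);   // prints: lines fed = 3
        Files.delete(tmp);
    }
}
```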

There are many subtle variations in the tests in the Network package; I would refer to those in order to get a “feel” for how to vary aspects of the setup for your needs. As usual, I’m available for questions. For general HTM applicability, the NuPIC lists are your best bet…

For anomaly-detection usage, remove the “alterParameter” line that configures the classifiers; for “prediction” Networks, however, you will need that setting.


Thank you so much. Your explanation is easy to understand and very valuable to me.
I have started working on Ubuntu and will input my CSV data as described.
Thanks again.