Failures running tests with py.test

The VM had 1 GB of RAM… the problem got resolved by increasing it to 2 GB.

Now the installation seems to have succeeded, but the test run is showing a couple of failures.

I ran py.test tests/unit

================================================================================================= FAILURES =================================================================================================
____________________________________________________________________________ SpatialPoolerTest.testComputeParametersValidation _____________________________________________________________________________

self = <tests.unit.nupic.algorithms.spatial_pooler_cpp_unit_test.SpatialPoolerTest testMethod=testComputeParametersValidation>

    def testComputeParametersValidation(self):
      sp = SpatialPooler(inputDimensions=[5], columnDimensions=[5])
      inputGood = np.ones(5, dtype=uintDType)
      outGood = np.zeros(5, dtype=uintDType)
      inputBad = np.ones(5, dtype=realDType)
      inputBad2D = np.ones((5, 5), dtype=realDType)
      outBad = np.zeros(5, dtype=realDType)
      outBad2D = np.zeros((5, 5), dtype=realDType)
    
      # Validate good parameters
      sp.compute(inputGood, False, outGood)
    
      # Validate bad parameters
      with self.assertRaises(RuntimeError):
>       sp.compute(inputBad, False, outBad)
E       AssertionError: RuntimeError not raised

tests/unit/nupic/algorithms/spatial_pooler_cpp_unit_test.py:195: AssertionError
___________________________________________________________________________________ DateEncoderTest.testHolidayMultiple ____________________________________________________________________________________

self = <tests.unit.nupic.encoders.date_test.DateEncoderTest testMethod=testHolidayMultiple>

    def testHolidayMultiple(self):
      """look at holiday more carefully because of the smooth transition"""
      # use of forced is not recommended, used here for readability, see
      # scalar.py
>     e = DateEncoder(holiday=5, forced=True, holidays=[(12, 25), (2018, 4, 1), (2017, 4, 16)])
E     TypeError: __init__() got an unexpected keyword argument 'holidays'

tests/unit/nupic/encoders/date_test.py:164: TypeError
______________________________________________________________________________ AnomalyLikelihoodRegionTest.testSerialization _______________________________________________________________________________

self = <tests.unit.nupic.regions.anomaly_likelihood_region_test.AnomalyLikelihoodRegionTest testMethod=testSerialization>

    @unittest.skipUnless(
      capnp, "pycapnp is not installed, skipping serialization test.")
    def testSerialization(self):
      """ test to ensure serialization preserves the state of the region
            correctly. """
      anomalyLikelihoodRegion1 = AnomalyLikelihoodRegion()
      inputs = AnomalyLikelihoodRegion.getSpec()['inputs']
      outputs = AnomalyLikelihoodRegion.getSpec()['outputs']
      parameters = AnomalyLikelihoodRegion.getSpec()['parameters']
    
      # Make sure to calculate distribution by passing the probation period
      learningPeriod = parameters['learningPeriod']['defaultValue']
      reestimationPeriod = parameters['reestimationPeriod']['defaultValue']
      probation = learningPeriod + reestimationPeriod
      for _ in xrange(0, probation + 1):
        inputs['rawAnomalyScore'] = numpy.array([random.random()])
        inputs['metricValue'] = numpy.array([random.random()])
        anomalyLikelihoodRegion1.compute(inputs, outputs)
        score1 = outputs['anomalyLikelihood'][0]
    
      proto1 = AnomalyLikelihoodRegionProto.new_message()
>     anomalyLikelihoodRegion1.write(proto1)

tests/unit/nupic/regions/anomaly_likelihood_region_test.py:105: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
../.local/lib/python2.7/site-packages/nupic/regions/anomaly_likelihood_region.py:139: in write
    self.anomalyLikelihood.write(proto)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <nupic.algorithms.anomaly_likelihood.AnomalyLikelihood object at 0x7ff722c59990>
proto = <nupic.regions.AnomalyLikelihoodRegion_capnp:AnomalyLikelihoodRegionProto buil...eriod = 0, learningPeriod = 0, reestimationPeriod = 0, historicWindowSize = 0)>

    def write(self, proto):
      """ capnp serialization method for the anomaly likelihood object
    
        :param proto: (Object) capnp proto object specified in
                              nupic.regions.AnomalyLikelihoodRegion.capnp
        """
      proto.iteration = self._iteration
    
      pHistScores = proto.init('historicalScores', len(self._historicalScores))
      for i, score in enumerate(list(self._historicalScores)):
        _, value, anomalyScore = score
        record = pHistScores[i]
        record.value = float(value)
        record.anomalyScore = float(anomalyScore)
    
      if self._distribution:
>       proto.distribution.name = self._distribution["distributionParams"]["name"]
E       KeyError: 'distributionParams'

../.local/lib/python2.7/site-packages/nupic/algorithms/anomaly_likelihood.py:321: KeyError
========================================================================================== pytest-warning summary ==========================================================================================
WC1 /home/sanjukta/nupic/tests/unit/nupic/engine/network_test.py cannot collect test class 'TestNode' because it has a __init__ constructor
WC1 /home/sanjukta/nupic/tests/unit/nupic/frameworks/opf/htmpredictionmodel_classifier_helper_test.py cannot collect test class 'TestOptionParser' because it has a __init__ constructor
WC1 /home/sanjukta/nupic/tests/unit/nupic/regions/knn_anomaly_classifier_region_test.py cannot collect test class 'TestOptionParser' because it has a __init__ constructor
WC1 /home/sanjukta/nupic/tests/unit/nupic/support/decorators_test.py cannot collect test class 'TestParentException' because it has a __init__ constructor
WC1 /home/sanjukta/nupic/tests/unit/nupic/support/decorators_test.py cannot collect test class 'TestChildException' because it has a __init__ constructor
============================================================= 3 failed, 708 passed, 17 skipped, 8 xfailed, 5 pytest-warnings in 732.31 seconds =============================================================
sanjukta@sanjukta-ubuntu16:~/nupic$

Try running the tests using scripts/run_nupic_tests.py -u

Thank you, Matt, for your timely response.

I ran scripts/run_nupic_tests.py -u, and it is showing a couple of failures.

3 failed, 708 passed, 17 skipped, 8 xfailed, 5 pytest-warnings in 291.17 seconds

The error log is at https://gist.github.com/millan123/e52f59aa1a3527c265bc13b41c8fd820

I also have another query… am I supposed to run any other tests? Is there any test covering MySQL connectivity?

You don’t need MySQL unless you run swarms, so no. There is a scripts/test_db.py script that will test your MySQL setup before swarming.
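
If you do set up MySQL later, you can sanity-check the connection with something like this from the NuPIC checkout directory (assuming a default local MySQL configuration):

python scripts/test_db.py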

One of those test failures indicates you are running the tests from the master branch, but you have an older stable version of NuPIC installed. If you want those tests to run and pass, you’ll need to install from local source, not from the binary release. You should be able to do this by running python setup.py develop --user from the NuPIC checkout directory.
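
For example, assuming your checkout lives at ~/nupic:

cd ~/nupic
python setup.py develop --user

After that, the nupic package on your Python path points at your checkout, so the tests and the installed code come from the same source tree.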

Yes, they are integration tests, you can run them like this:

./scripts/run_nupic_tests.py -i

The -u is unit and -i is integration.

I ran this: python setup.py develop --user, and it completed without any errors.
After that I ran ./scripts/run_nupic_tests.py -i, but it is showing the exact same errors as yesterday… no changes. I compared yesterday’s and today’s logs; both are the same.

Just to be clear, when you ran python ./scripts/run_nupic_tests.py -i, you got unit test output in the logs? That does not happen when I run it. My output starts out like this:

python scripts/run_nupic_tests.py -i
============================= test session starts ==============================
platform darwin -- Python 2.7.10, pytest-3.0.7, py-1.5.2, pluggy-0.4.0 -- /usr/bin/python
cachedir: .cache
rootdir: /Users/mtaylor/nta/nupic, inifile:
plugins: xdist-1.16.0, cov-2.5.0
collected 114 items

tests/integration/nupic/algorithms/extensive_tm_cpp_test.py::ExtensiveTemporalMemoryTestCPP::testB1 <- tests/integration/nupic/algorithms/extensive_tm_test_base.py PASSED

Can you check the paths of the tests in your logs?

Matt… I am also getting it the same way, as shown below. But I explicitly redirected the output to a file to compare whether anything had changed.
Sorry for the typo in my previous post; I ran the unit tests, not the integration tests.

sanjukta@sanjukta-ubuntu16:~/nupic$ ./scripts/run_nupic_tests.py -u
=========================================================================================== test session starts ============================================================================================
platform linux2 -- Python 2.7.12, pytest-3.0.7, py-1.5.3, pluggy-0.4.0 -- /usr/bin/python
cachedir: .cache
rootdir: /home/sanjukta/nupic, inifile:
plugins: xdist-1.16.0, cov-2.5.0
collected 736 items 

tests/unit/nupic/serializable_test.py::SerializableTest::testABCProtocolEnforced PASSED
tests/unit/nupic/serializable_test.py::SerializableTest::testReadFromAndWriteToFile PASSED
tests/unit/nupic/utils_test.py::UtilsTest::testEquals PASSED
tests/unit/nupic/utils_test.py::UtilsTest::testMovingAverage PASSED
tests/unit/nupic/utils_test.py::UtilsTest::testMovingAverageInstance PASSED
tests/unit/nupic/utils_test.py::UtilsTest::testMovingAverageReadWrite PASSED
tests/unit/nupic/utils_test.py::UtilsTest::testMovingAverageSlidingWindowInit PASSED
tests/unit/nupic/utils_test.py::UtilsTest::testSerialization PASSED
tests/unit/nupic/algorithms/anomaly_likelihood_jeff_test.py::ArtificialAnomalyTest::testCaseContinuousBunchesOfSpikes PASSED
tests/unit/nupic/algorithms/anomaly_likelihood_jeff_test.py::ArtificialAnomalyTest::testCaseIncreasedAnomalyScore PASSED
tests/unit/nupic/algorithms/anomaly_likelihood_jeff_test.py::ArtificialAnomalyTest::testCaseIncreasedSpikeFrequency PASSED
tests/unit/nupic/algorithms/anomaly_likelihood_jeff_test.py::ArtificialAnomalyTest::testCaseMissingBunchesOfSpikes <- ../.local/lib/python2.7/site-packages/unittest2/case.py SKIPPED
tests/unit/nupic/algorithms/anomaly_likelihood_jeff_test.py::ArtificialAnomalyTest::testCaseMissingSpike <- ../.local/lib/python2.7/site-packages/unittest2/case.py SKIPPED
tests/unit/nupic/algorithms/anomaly_likelihood_jeff_test.py::ArtificialAnomalyTest::testCaseSingleSpike PASSED

Ok, I think we are miscommunicating. I was just telling you that there are MySQL tests if you run -i. I can’t tell whether you ran them successfully. Did the integration tests pass? If not, please paste the failures.

There were 4 errors during the integration tests… but after fixing the MySQL connectivity by changing the root user’s authentication plugin from auth_socket to mysql_native_password, the integration tests passed without any failures.
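
For reference, the change was roughly the following, run in the mysql client as an administrative user (the password here is a placeholder; pick your own):

mysql> ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'yourpassword';
mysql> FLUSH PRIVILEGES;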

Integration test result:
=================== 108 passed, 6 skipped in 379.44 seconds ====================

However, the number of unit test failures has increased to 45.
Unit test result:
45 failed, 666 passed, 17 skipped, 8 xfailed, 5 pytest-warnings in 312.94 seconds

Should I be worried about these failures in unit test?

If you have 45 failing unit tests, there is usually something wrong with your test environment (py.test version? multiple versions installed?).
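
For example, to see which py.test you are picking up and what version it is:

which py.test
py.test --version
pip freeze | grep pytest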

Matt… I am not sure how to get the version number; please guide me. Are you looking for the output below?

sanjukta@sanjukta-ubuntu16:~/nupic/scripts$ git log -1
commit a9b9f9f1c74196bd800b3528c0ffd2e1b13ab165

sanjukta@sanjukta-ubuntu16:~$ pip freeze | grep nupic
nupic==1.0.3
nupic.bindings==1.0.0
sanjukta@sanjukta-ubuntu16:~$

Update on the unit test results: after discarding the changes to the files in my working directory, the unit tests now show 3 failures. Please suggest how to rectify these 3 failures.

The link below contains the unit test output.

Read: How to check what version of NuPIC is installed

It looks like you installed via pip, which gets you the latest stable release: 1.0.3. But you are running tests from the tip of master in the source code repository, which is version 1.0.4.dev0.

To properly run the tests, you’ll need to install from source code. There are instructions in the readme for how to do this.
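
For example, one quick way to compare the installed release against your source checkout (assuming the checkout is at ~/nupic; the repository keeps its version number in a VERSION file at the root):

pip show nupic | grep Version
cat ~/nupic/VERSION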

Hi,

I am new to this, so please bear with me.

I am facing the same issue here. I think what’s confusing, as a new user, is that in the video that guides you through the installation process (on Mac), Matt first installs using pip, then clones the repo from GitHub and runs the tests. I am guessing that at the time the video was made, both versions were the same.

It looks like you installed via pip, which gets you the latest stable release: 1.0.3. But you are running tests from the tip of master in the source code repository, which is version 1.0.4.dev0.

To properly run the tests, you’ll need to install from source code. There are instructions in the readme for how to do this.

What are the pros and cons of doing this? Why would one need to install it first using pip, and then install it from source?

Alternatively, can an older version (1.0.3 in this case) be checked out? What are the pros and cons of that?

Thank you,
L

I don’t know which video you’re talking about, but it’s old.

You only need to install once. Stable versions installed with pip install nupic are binaries compiled remotely and shipped to your system. If you want to run tests against that specific stable version, you can check out the source code at the tag you want and run the tests from there. Those should pass.
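
For example, assuming the release tag matches the version number:

cd ~/nupic
git checkout 1.0.3
./scripts/run_nupic_tests.py -u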

I don’t know which video you’re talking about, but it’s old.

I was talking about this video:
https://www.youtube.com/watch?v=6OPTMDO17XI

And thank you, now it works!