Eye.py & ChannelEncoder testing

Trying to work through eye.py from htm.core’s retina_encoder branch. Really cool stuff about topology allowing neighboring bits to redundantly encode information. It also uses OpenCV’s optional ‘contrib’ (community-maintained) packages, specifically cv2.bioinspired for the Retina feature.

Trouble is, I can’t seem to import it correctly. I’ve been copying the code into a notebook and executing it piecewise; the classes and methods define fine. But when I run the test code:

eye = Eye(output_diameter=200,  # create Eye object
          sparsityParvo=0.2,
          sparsityMagno=0.02,
          color=True)

it crashes:

AttributeError                            Traceback (most recent call last)
<ipython-input-38-00d5fc620ff0> in <module>
     14           sparsityParvo=0.2,
     15           sparsityMagno=0.02,
---> 16           color=True)
     17 for img_path in images:
     18     eye.reset()

<ipython-input-37-1a3bf6127c1a> in __init__(self, output_diameter, sparsityParvo, sparsityMagno, color)
--> 193         self.retina = cv2.bioinspired.Retina_create(
    194             inputSize            = (self.retina_diameter, self.retina_diameter),
    195             colorMode            = color,

AttributeError: module 'cv2.cv2' has no attribute 'bioinspired'

I copied the import line straight from eye.py (import cv2 # pip install opencv-contrib-python) and ran that PyPI install beforehand, but I still get the error above.

When I try adding from cv2 import bioinspired instead, it throws:
ImportError: cannot import name 'bioinspired' from 'cv2.cv2' (/Users/mark/opt/anaconda3/lib/python3.7/site-packages/cv2/cv2.cpython-37m-darwin.so)
So it’s trying to pull bioinspired out of the compiled extension module in site-packages/cv2 (the “darwin” in the .so name just means it’s the macOS build). I’ve attached an image of my file tree.

I’ve been reading the OpenCV Bioinspired docs, which say to uninstall any prior OpenCV packages and then install the extra modules with pip install opencv-contrib-python. That left me with the same error, so I wonder if my install is simply broken.
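For anyone hitting the same thing, here’s a quick diagnostic I put together to check which cv2 build Python is actually picking up (the function name and return messages are my own, not from eye.py):

```python
import importlib.util

def diagnose_opencv_contrib():
    """Report whether the contrib build of OpenCV (which bundles
    cv2.bioinspired) appears to be importable in this environment."""
    if importlib.util.find_spec("cv2") is None:
        return "cv2 is not installed at all"
    try:
        import cv2
    except ImportError:
        return "cv2 package found but failed to import"
    if hasattr(cv2, "bioinspired"):
        return "contrib build present: cv2.bioinspired is available"
    # A plain opencv-python wheel shadowing opencv-contrib-python is the
    # usual cause: both wheels install a single cv2/ package, so whichever
    # was installed last wins. Reinstalling cleanly usually fixes it:
    #   pip uninstall -y opencv-python opencv-contrib-python
    #   pip install opencv-contrib-python
    return "cv2 found, but without the bioinspired contrib module"

print(diagnose_opencv_contrib())
```

If it reports cv2 without bioinspired, the plain opencv-python wheel is almost certainly shadowing the contrib one, since pip keeps only one cv2 package on disk.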

Has anyone done any work with the Retina or ChannelEncoders?

Found a very silly error: I hadn’t properly defined the ChannelEncoder class. The base-class demonstration runs fine now, and I’ve started running the tests in eye_test.py - seems really cool.

Analyzing this image, ronja_the_cat.jpg (because computers are primarily about cats, as is known), creates a parvo and a magno SDR, the latter of which lands at exactly the sparsity we want:

It opens several windows that scan different areas of the image (10 of them, per the test code I copied), generating “regions of interest” corresponding to the parvo retinal representation (top left):

I don’t get how or why it scans certain areas (or why it ends up focusing on or around the cat’s eye), or how this plays into generating a more balanced SDR for the whole image. Semantically it would make sense that certain areas of an image are more “of interest” than others, but I’d only reach that conclusion after seeing several frames - say, from a security camera feed: I know where the areas of interest are based on where people move, while pixels covering a wall or an exit sign barely vary between images.
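That security-camera intuition can be sketched out directly: rank pixels by how much they vary across frames. This is just my illustration of the idea, not anything eye.py actually does:

```python
import numpy as np

def interest_map(frames):
    """Per-pixel variance across a stack of equally sized grayscale
    frames. High-variance pixels (where people move) are 'of interest';
    a static wall or exit sign scores near zero."""
    stack = np.stack([np.asarray(f, dtype=float) for f in frames])
    return stack.var(axis=0)
```

Eye.py’s scanning is different in kind - it samples fixation points within a single image rather than watching change over time - which is part of why the behavior surprised me.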

The magno SDR does land at ~0.02 sparsity, however, so next I’ll probably run a basic image-classification test (maybe see how it performs on MNIST, since we already have an HTM example that assigns the average values of image_array to sdr.dense).
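For that baseline, all the classifier really needs is a dense 0/1 vector at a fixed sparsity. A minimal sketch of that binarization step (my own helper, assuming a numpy grayscale array - not the encoder’s actual output path):

```python
import numpy as np

def image_to_dense_sdr(image, sparsity=0.02):
    """Binarize a grayscale image so that roughly `sparsity` of the bits
    are active, keeping the brightest pixels - a crude stand-in for the
    magno SDR's fixed ~2% sparsity."""
    flat = np.asarray(image, dtype=float).ravel()
    n_active = max(1, int(round(sparsity * flat.size)))
    # Threshold at the n_active-th largest value (partial sort, O(n)).
    threshold = np.partition(flat, -n_active)[-n_active]
    dense = (flat >= threshold).astype(np.uint8)
    return dense.reshape(np.asarray(image).shape)
```

On a 28x28 MNIST digit this gives ~16 active bits, which could then be fed to a spatial pooler or a simple nearest-neighbor classifier over SDR overlap.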

Wicked cool stuff so far, I encourage anyone to check out the source code of eye.py and how it integrates cv2’s bioinspired module - you’ll probably understand more than I have.

1 Like

The “region of interest” simply refers to the current field of view of the eye-encoder. It does not imply that there is anything interesting at that location, merely that the eye is looking at that area. The eye-encoder should have parameters to control where in the image it is looking. There are methods on the encoder to look at the center of the image or at random points in it, but ultimately it’s up to the user to point the encoder at any specific place.
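The “look at random points” step can be as simple as sampling centers that keep the circular field of view inside the frame. A sketch under my own names (eye.py’s actual methods for this differ):

```python
import random

def random_fixations(img_w, img_h, retina_radius, n, seed=None):
    """Sample n (x, y) fixation centers such that a circular field of
    view with the given radius stays fully inside an img_w x img_h image."""
    rng = random.Random(seed)
    return [(rng.uniform(retina_radius, img_w - retina_radius),
             rng.uniform(retina_radius, img_h - retina_radius))
            for _ in range(n)]
```

Whatever the sampling scheme, nothing about it is content-aware: landing on the cat’s eye is chance, not saliency.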

I can’t help but notice a probable bug in your code:
In class Eye, method compute, you have the lines:

        # apply field of view (FOV), rotation
        self.roi = self.rotate_(self.image, self.orientation) 
        self.roi = Eye._crop_roi(self.roi, self.position, self.retina_diameter, self.scale)

Surely these statements should be reversed? First crop the large image down to just the section you are looking at, and then rotate it. I’m guessing this bug is in the original version of the code as well…
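A tiny toy shows why the order matters: rotating the whole image and then cropping at fixed coordinates is not the same as cropping first and rotating the crop (here np.rot90 stands in for the encoder’s arbitrary-angle rotation):

```python
import numpy as np

def crop(img, top, left, size):
    """Take a size x size window out of img."""
    return img[top:top + size, left:left + size]

img = np.arange(36).reshape(6, 6)

# Rotate the full image, then crop at fixed coordinates...
rotate_then_crop = crop(np.rot90(img), 1, 1, 3)
# ...versus crop at those coordinates first, then rotate the crop.
crop_then_rotate = np.rot90(crop(img, 1, 1, 3))

# The two orders pick out different pixels entirely.
print(np.array_equal(rotate_then_crop, crop_then_rotate))  # False
```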
