Encoding Vision: OpenCL RGC Mimicry: Some Findings

I’ve been playing with trying to mimic the visual system for a while now, and I’m currently programming OpenCL code to mimic the function of the Retinal Ganglion Cells in the eyes. I think I’ve come across a few useful findings:


(Note: this is optimized so I can see things, not for HTM, yet.)

Both of those filters above are programmed to mimic RGCs. The one on the right mimics ‘midget’ RGCs with small input areas. It computes top-hat and black-hat filters, analogous to the RGC center-surround/surround-center activations, and adds or subtracts the result from grey (so I can see what’s happening), roughly like the sketch below. I don’t think it beats any of the other current edge detectors, but I had fun making it.
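A minimal sketch of what each output pixel computes, not the exact kernel from the repo; `CENTER_R`, `SURROUND_R`, and the grey offset are placeholder values:

```c
// On-center/off-surround "midget RGC" sketch (not the repo's exact kernel).
// The center mean minus the surround mean is added to grey so the response
// is visible; negating it gives the off-center (black-hat) version.
__kernel void midget_rgc(read_only image2d_t src, write_only image2d_t dst)
{
    const sampler_t smp = CLK_NORMALIZED_COORDS_FALSE |
                          CLK_ADDRESS_CLAMP_TO_EDGE | CLK_FILTER_NEAREST;
    const int CENTER_R = 1;   // small receptive-field center (placeholder)
    const int SURROUND_R = 3; // wider antagonistic surround (placeholder)

    int2 pos = (int2)(get_global_id(0), get_global_id(1));
    float center = 0.0f, surround = 0.0f;
    int nc = 0, ns = 0;

    for (int dy = -SURROUND_R; dy <= SURROUND_R; dy++)
        for (int dx = -SURROUND_R; dx <= SURROUND_R; dx++) {
            float4 p = read_imagef(src, smp, pos + (int2)(dx, dy));
            float lum = dot(p.xyz, (float3)(0.299f, 0.587f, 0.114f));
            if (abs(dx) <= CENTER_R && abs(dy) <= CENTER_R) { center += lum; nc++; }
            else { surround += lum; ns++; }
        }

    float response = center / nc - surround / ns; // center-surround activation
    float grey = 0.5f + response;                 // added to grey for display
    write_imagef(dst, pos, (float4)(grey, grey, grey, 1.0f));
}
```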

Interestingly, it’s not just edge information, but also some lightness information: a darker object on a light background (like my hair) will have a dark line on the inside and a white line on the outside, while light objects on dark backgrounds show the opposite. This might not seem too interesting, but it’s one property that seems to stay consistent across different RGCs, so it should be useful for combining data.

I find the larger, color RGCs a little more interesting though.

The blurry image on the left runs the same algorithm as the one on the right, but with much larger center and surround sizes, a larger difference between the two, and without optimizing color out. It’s blurry due to the large cell size, but it seems to keep relative rather than absolute color information: if I put a color filter over the camera, roughly the same video shows on the screen before and after. A certain optical illusion also works on it:
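The color variant amounts to keeping the center-surround difference per channel instead of collapsing to luminance. A hedged sketch of that idea (the radii and the sampling stride here are guesses, not the repo’s values):

```c
// Larger "parasol"-style color variant: same center-surround idea, but the
// response is kept per channel, so relative color survives while the
// absolute level is discarded. Radii and stride are placeholder guesses.
__kernel void parasol_rgc(read_only image2d_t src, write_only image2d_t dst)
{
    const sampler_t smp = CLK_NORMALIZED_COORDS_FALSE |
                          CLK_ADDRESS_CLAMP_TO_EDGE | CLK_FILTER_NEAREST;
    const int CENTER_R = 6;    // much larger center...
    const int SURROUND_R = 18; // ...and a proportionally larger surround

    int2 pos = (int2)(get_global_id(0), get_global_id(1));
    float3 center = (float3)(0.0f), surround = (float3)(0.0f);
    int nc = 0, ns = 0;

    for (int dy = -SURROUND_R; dy <= SURROUND_R; dy += 3)     // stride keeps
        for (int dx = -SURROUND_R; dx <= SURROUND_R; dx += 3) { // it cheap
            float3 p = read_imagef(src, smp, pos + (int2)(dx, dy)).xyz;
            if (abs(dx) <= CENTER_R && abs(dy) <= CENTER_R) { center += p; nc++; }
            else { surround += p; ns++; }
        }

    // Per-channel difference added to grey: relative color is preserved.
    float3 outc = 0.5f + (center / (float)nc - surround / (float)ns);
    write_imagef(dst, pos, (float4)(outc, 1.0f));
}
```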

Okay, I might need to tune things a little, but those two inner squares are the same color.

I haven’t tried much larger RGCs yet, because at this point I’d need to change the algorithm to just select surrounding pixels at random, but I imagine they’re good for detecting overall lighting: whether we have a color filter in front of our face, or whether it’s nighttime.

Now I just need to figure out some good algorithms for detecting line orientation like V1 does…
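If I go the standard route, a bank of Gabor filters is the usual model for V1 orientation selectivity. A minimal single-orientation sketch, where `THETA`, `FREQ`, and `SIGMA` are just placeholder tuning values (this isn’t implemented in the repo):

```c
// Single-orientation Gabor response: a Gaussian envelope multiplied by a
// cosine wave aligned with THETA. Running several THETA values gives a
// crude orientation map. All constants here are made-up placeholders.
__kernel void gabor(read_only image2d_t src, write_only image2d_t dst)
{
    const sampler_t smp = CLK_NORMALIZED_COORDS_FALSE |
                          CLK_ADDRESS_CLAMP_TO_EDGE | CLK_FILTER_NEAREST;
    const int R = 6;
    const float THETA = 0.0f;  // preferred orientation, radians
    const float FREQ  = 0.25f; // spatial frequency (cycles/pixel)
    const float SIGMA = 3.0f;  // Gaussian envelope width

    int2 pos = (int2)(get_global_id(0), get_global_id(1));
    float acc = 0.0f;
    for (int dy = -R; dy <= R; dy++)
        for (int dx = -R; dx <= R; dx++) {
            float xr  = dx * cos(THETA) + dy * sin(THETA); // rotated coord
            float env = exp(-(dx * dx + dy * dy) / (2.0f * SIGMA * SIGMA));
            float wave = cos(2.0f * M_PI_F * FREQ * xr);
            float4 p = read_imagef(src, smp, pos + (int2)(dx, dy));
            float lum = dot(p.xyz, (float3)(0.299f, 0.587f, 0.114f));
            acc += lum * env * wave;
        }
    float out = 0.5f + acc * 0.1f; // scaled and offset so it's visible
    write_imagef(dst, pos, (float4)(out, out, out, 1.0f));
}
```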

Edit: improved the color RGC algorithm:


This is cool, and I’m interested in the vision problem as well. Mind explaining a little more how you mimicked RGCs? I’m a little unfamiliar with how center-surround/surround-center activations work.

Sure!

There are actually several types of RGCs: midget, parasol, bistratified, photosensitive, giant, and so on. I focused on midget and parasol cells.

The main differences between those cell types are the reach of their dendritic trees and how often they occur in the retina. Midget cells have a relatively small reach (hence the midget name), but account for 80% of the RGCs, while parasol cells have a large reach (hence the parasol name) and account for 10% of the RGCs.

At the time, I didn’t know about bistratified cells and their different responses to different colors, so I hypothesized that the midget and parasol RGCs somehow connected to different cones or rods, so that one set of RGCs would connect to green cones only, another to red only, and so on. I’m still not sure whether this is the case or not.

However, I also hypothesized that many of the midget cells wouldn’t be able to do this if their reach for input cones was small enough. To see why, here’s an image representing the distribution of the different cone types in the eye:

Since those cones are randomly distributed, with some colors occurring more than others, it would take a relatively large input size to get, say, the color blue, while red would require a smaller one. However, to get just brightness, you could compare neighboring cones regardless of type. Parts of the LGN, which these RGCs send much of their output to, are colorblind, so I assumed many of these RGCs simply combine different cones before sending image information there.
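Here’s a toy kernel (mine, for this explanation, not from the repo) that makes the point: give each pixel a fixed pseudo-random cone type, with made-up ratios loosely biased like the real mosaic, and let a small-reach cell pool whatever it touches. The pooled value carries brightness but essentially no color:

```c
// A small-reach cell pooling a pseudo-random cone mosaic. The hash-based
// cone assignment and the 55/35/10 ratios are invented for illustration.
__kernel void colorblind_rgc(read_only image2d_t src, write_only image2d_t dst)
{
    const sampler_t smp = CLK_NORMALIZED_COORDS_FALSE |
                          CLK_ADDRESS_CLAMP_TO_EDGE | CLK_FILTER_NEAREST;
    const int R = 2; // small reach: too few cones of any one type for color

    int2 pos = (int2)(get_global_id(0), get_global_id(1));
    float sum = 0.0f;
    int n = 0;

    for (int dy = -R; dy <= R; dy++)
        for (int dx = -R; dx <= R; dx++) {
            int2 q = pos + (int2)(dx, dy);
            // Fixed pseudo-random cone type per pixel, red most common,
            // blue rarest, roughly like the real mosaic.
            uint h = (uint)q.x * 374761393u + (uint)q.y * 668265263u;
            h = (h ^ (h >> 13)) * 1274126177u;
            uint t = (h ^ (h >> 16)) % 100u;
            float3 p = read_imagef(src, smp, q).xyz;
            float cone = (t < 55u) ? p.x : (t < 90u) ? p.y : p.z;
            sum += cone;
            n++;
        }

    // Pooling mixed cone types yields brightness, not color.
    float out = sum / n;
    write_imagef(dst, pos, (float4)(out, out, out, 1.0f));
}
```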

With all that in mind, I designed a few OpenCL scripts to transform images so each pixel represented an RGC. Of course, my laptop couldn’t handle each pixel checking hundreds or thousands of surrounding pixels, so I just had each one check a small, random selection of those surrounding pixels.
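The random selection can be done with a cheap per-pixel RNG. A sketch of the idea (the xorshift hash, `SAMPLES`, and `SURROUND_R` are stand-ins for whatever the actual code uses):

```c
// Random-sampling surround: instead of reading every pixel in a huge
// neighborhood, each work-item reads only SAMPLES randomly chosen ones.
uint xorshift(uint s) { s ^= s << 13; s ^= s >> 17; s ^= s << 5; return s; }

__kernel void sampled_mean(read_only image2d_t src, write_only image2d_t dst)
{
    const sampler_t smp = CLK_NORMALIZED_COORDS_FALSE |
                          CLK_ADDRESS_CLAMP_TO_EDGE | CLK_FILTER_NEAREST;
    const int SURROUND_R = 40; // far too big to scan exhaustively
    const int SAMPLES = 32;    // pixels actually read per cell

    int2 pos = (int2)(get_global_id(0), get_global_id(1));
    uint state = (uint)(pos.y * 9781 + pos.x * 6271 + 1); // per-pixel seed

    float3 acc = (float3)(0.0f);
    for (int i = 0; i < SAMPLES; i++) {
        state = xorshift(state);
        int dx = (int)(state % (2u * SURROUND_R + 1u)) - SURROUND_R;
        state = xorshift(state);
        int dy = (int)(state % (2u * SURROUND_R + 1u)) - SURROUND_R;
        acc += read_imagef(src, smp, pos + (int2)(dx, dy)).xyz;
    }
    float3 mean = acc / (float)SAMPLES; // estimated surround average
    write_imagef(dst, pos, (float4)(mean, 1.0f));
}
```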

Now that I think about it, I made a lot of assumptions. I’m curious what someone studying RGCs would think of those assumptions…

Anyway, I was able to replicate another optical illusion with this:

I tried the Pac-Man illusion later, and added a time filter representing cell exhaustion. You should be able to see the yellow dots moving around the circle without having to focus on the center for 30 seconds first.
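The exhaustion filter can be as simple as each cell tracking a running average of its own past activity and subtracting it, so a constant stimulus fades out. A sketch under that assumption (`DECAY` and the persistent fatigue buffer are my guesses, not necessarily the repo’s actual mechanism):

```c
// "Cell exhaustion" time filter: each pixel keeps a fatigue value that
// drifts toward the current input and is subtracted from the response,
// so tired cells respond less to an unchanging stimulus.
__kernel void exhaust(read_only image2d_t src,
                      write_only image2d_t dst,
                      __global float* fatigue, // one float per pixel, kept across frames
                      int width)
{
    const sampler_t smp = CLK_NORMALIZED_COORDS_FALSE |
                          CLK_ADDRESS_CLAMP_TO_EDGE | CLK_FILTER_NEAREST;
    const float DECAY = 0.05f; // how quickly cells tire and recover

    int2 pos = (int2)(get_global_id(0), get_global_id(1));
    int idx = pos.y * width + pos.x;

    float4 p = read_imagef(src, smp, pos);
    float lum = dot(p.xyz, (float3)(0.299f, 0.587f, 0.114f));

    float tired = fatigue[idx];
    float response = lum - tired;          // tired cells respond less
    fatigue[idx] = mix(tired, lum, DECAY); // drift toward current input

    float out = 0.5f + response;
    write_imagef(dst, pos, (float4)(out, out, out, 1.0f));
}
```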

If you want a look, the GitHub repository is here, though it’s still a little messy.