Input:
light sensor (analog photodiode)
MPU-6050 (GY-521 MPU-6050 6-axis Accelerometer Gyroscope Sensor Module 16 Bit AD Converter Data Output I2C)
BMP180 (barometric pressure, temperature and altitude sensor)
DHT22 (temperature and humidity sensor; higher accuracy than the cheaper DHT11, and possibly than the BMP180 above)
ELP Stereo camera:
- encoding the rectified, normalized disparity map for depth info
- blurring most of each image's edges while keeping the center core sharp (similar to human foveal vision), for both the right and left cameras (rough OpenCV sketch after the output list below)
Microphone (analog signal)
Output:
several servos
motors
variable-brightness LED
speaker (with a simple DAC attached … might be overkill and not amount to anything, but might be interesting as well).
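
For the camera preprocessing, I'm picturing something like the rough OpenCV sketch below: block-matching disparity on the already-rectified pair, plus a blur mask that leaves the center core sharp. The parameter values are just placeholders, nothing tuned:

```python
import cv2
import numpy as np

def disparity_map(left_gray, right_gray):
    """Block-matching disparity on rectified grayscale frames,
    normalized to 0-255 so it can be handed to an encoder."""
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disp = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0
    return cv2.normalize(disp, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

def foveate(img, sigma=8.0, core_radius=0.25):
    """Keep the center 'core' of the frame sharp and blur the periphery,
    loosely mimicking foveal vision."""
    h, w = img.shape[:2]
    blurred = cv2.GaussianBlur(img, (0, 0), sigma)
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.sqrt((xx - w / 2) ** 2 + (yy - h / 2) ** 2) / (min(h, w) / 2)
    # 0 = sharp core, 1 = fully blurred periphery
    mask = np.clip((dist - core_radius) / core_radius, 0.0, 1.0)
    if img.ndim == 3:
        mask = mask[..., None]
    return (img * (1 - mask) + blurred * mask).astype(img.dtype)
```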
I haven’t really dug into whether or not NuPIC has encoders already… I probably won’t for a while anyway. The reason for this is that when I first got into deep learning, one of the first series of exercises was coding neural networks from scratch, using minimal libraries, so that I was forced to understand at the lowest level what each algorithm was doing, how data was being handled, how weight distributions were being updated, etc.
I’m looking to do something similar here with HTM, maybe creating a Jupyter notebook tutorial along the way. I’m also going in with the long-term intent of using some of the newer embedded SoCs (such as the STM32 line of chips with 512KB+ memory and 128-256 MHz processors) to do the initial sensor SDR encoding and compression, with a more powerful board simply concatenating those SDRs and doing the main processing on them. I might be wrong, but I have my doubts that NuPIC is aiming its code at embedded modules, whereas embedded chips are specifically an interest of mine. I’m also considering FPGAs… those are coming down in price.
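
To make the "encoders from scratch" idea concrete, here's a minimal sketch of what I have in mind for the scalar sensors: a simple contiguous-bucket encoder (along the lines of NuPIC's classic ScalarEncoder, though this isn't its API) plus naive concatenation of each sensor's SDR on the main board. All names and sizes here are placeholders:

```python
import numpy as np

def scalar_sdr(value, min_val, max_val, size=128, active_bits=11):
    """Encode a scalar reading (e.g. DHT22 temperature) as a binary SDR:
    a contiguous run of `active_bits` ones whose position tracks the value."""
    value = np.clip(value, min_val, max_val)
    buckets = size - active_bits + 1
    start = int(round((value - min_val) / (max_val - min_val) * (buckets - 1)))
    sdr = np.zeros(size, dtype=np.uint8)
    sdr[start:start + active_bits] = 1
    return sdr

def concat_sensors(readings):
    """The 'more powerful board' step: concatenate per-sensor SDRs into one
    input vector. `readings` is a list of (value, min, max) tuples."""
    return np.concatenate([scalar_sdr(v, lo, hi) for v, lo, hi in readings])

# Example: light level (10-bit ADC), temperature (C), pressure (hPa)
combined = concat_sensors([(512, 0, 1023), (21.5, -40, 80), (1013.2, 300, 1100)])
```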
I get the general sense that overall processor speed isn’t as important here as having adequate parallel processing and IO; those seem to be the aspects that currently limit the scaling of HTM. One reason I assume this is that our eyes perceive the relatively slow refresh rates of a television or monitor as genuinely moving objects, which suggests that even the brain has a speed limit on its IO processing, and that it’s the parallel nature, along with low-level prediction/anticipation of input, that really empowers us. Pre-processing all of that parallel work before sending it to the main computer would, I suspect, go a long way toward helping this overall system.
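
As a toy illustration of that hand-off, the embedded encoder board could ship only the indices of the active bits (SDRs are sparse, so the payload stays tiny) and let the host rebuild the full vector. The wire format here is completely made up for illustration:

```python
import struct
import numpy as np

def pack_sdr(sdr):
    """Pack a binary SDR as: uint16 width, uint16 count, then one uint16 per
    active-bit index. For a 2048-bit SDR at 2% sparsity that's ~86 bytes."""
    idx = [int(i) for i in np.flatnonzero(sdr)]
    return struct.pack(f">HH{len(idx)}H", len(sdr), len(idx), *idx)

def unpack_sdr(payload):
    """Host-side inverse of pack_sdr."""
    width, count = struct.unpack_from(">HH", payload, 0)
    idx = struct.unpack_from(f">{count}H", payload, 4)
    sdr = np.zeros(width, dtype=np.uint8)
    sdr[np.asarray(idx, dtype=np.intp)] = 1
    return sdr
```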
I also want to try to carry over some of the efficiency tricks used in deep learning (such as storing known pre-computed values and using hashing to route common input combinations to that lookup table/hashmap/dictionary).
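
For the lookup-table trick, something as simple as a dict keyed on a quantized version of the input tuple (Python hashes the key internally) probably gets most of the benefit; a rough sketch:

```python
# Cache encoder output for input combinations that repeat. Quantizing first
# matters: raw analog readings rarely repeat bit-for-bit.
_encode_cache = {}

def cached_encode(readings, encoder, step=0.5):
    """readings: tuple of raw sensor values; encoder: any function mapping the
    quantized tuple to an SDR. `step` controls how coarsely values are bucketed."""
    key = tuple(round(r / step) * step for r in readings)
    if key not in _encode_cache:
        _encode_cache[key] = encoder(key)
    return _encode_cache[key]
```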
Finally, I want to try to keep the math as simple as possible, both for the sake of computational understanding for newcomers and to reduce the number of clock cycles required to compute all this. Only after getting to an efficient place in Python would I bother to implement it in C++… it might even be worth writing a converter or trying Cython at that point, but that’s down the road.
(I worked with distributed parallel compute systems at a previous job, so I feel familiar with some of the methodologies involved, and think it would be a fun challenge to try this using IoT devices)