Hi all.
I have been experimenting with HTM and applying it to real problems recently, and now I want to use HTM in my final project for my machine learning class in college. The project could be anything related to ML. Honestly, I'm out of new ideas. I can't think of anything besides anomaly detection, sequence prediction, and data classification (using the SP and the SDRClassifier).
One idea: writing a proper audio encoder for HTM. The one Matt demonstrated way back on YouTube lacked a feature that is present in the cochlea, namely a "heat map of intensity" for a particular tone.
Here is a sketch of how this might look in list form:
20hz range: [0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0] - low intensity
20hz range: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0] - high intensity
This form of representation is superior to the classic dial-style output of the scalar encoder, which is essentially a knob like the ones you'd find on a music mixing desk.
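The two intensity lists above could be sketched as a tiny encoder; everything here (the bit count, the minimum width) is an assumption for illustration:

```python
# Hypothetical sketch of the intensity idea above: keep the active bits
# centred on the same positions (preserving the tone's semantics) and
# widen the run of 1s as the intensity ("volume") grows.
def encode_band(n_bits, intensity, min_width=3):
    """Return a list of n_bits 0/1 values; `intensity` in [0, 1]
    controls how wide the centred block of 1s is."""
    max_width = n_bits  # the whole band lights up at maximum intensity
    width = min_width + int(round(intensity * (max_width - min_width)))
    start = (n_bits - width) // 2
    return [1 if start <= i < start + width else 0 for i in range(n_bits)]

low  = encode_band(11, 0.0)  # narrow centred run -> low intensity
high = encode_band(11, 0.5)  # wider run -> high intensity
```

With 11 bits, `encode_band(11, 0.0)` reproduces the low-intensity list and `encode_band(11, 0.5)` the high-intensity one, and overlapping bits between the two codes preserve the "same tone" semantics.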
This preserves the semantics of the tone while embedding its intensity, which could later be used as a kind of location compass in other projects. Unfortunately, you would need a Fourier transform for this kind of encoding. I imagine the human ear (or any other animal's) uses some approximation like this to represent short snippets of sound.
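The Fourier step could look something like this: a sketch, assuming NumPy, that splits an FFT of an audio chunk into fixed-width bands and takes each band's mean magnitude as its intensity (band width, cutoff, and normalisation are all assumptions):

```python
import numpy as np

def band_intensities(samples, sample_rate, band_hz=20, max_hz=2000):
    """Mean FFT magnitude per `band_hz`-wide frequency band,
    normalised to [0, 1] over the chunk."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    edges = np.arange(0, max_hz + band_hz, band_hz)
    out = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        out.append(spectrum[mask].mean() if mask.any() else 0.0)
    m = max(out) or 1.0
    return [v / m for v in out]

# A pure 440 Hz tone should light up the band containing 440 Hz:
sr = 8000
t = np.arange(sr) / sr
intens = band_intensities(np.sin(2 * np.pi * 440 * t), sr)
```

Each band's intensity would then feed the widening encoder from the list example, one band per "20hz range".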
It would also be interesting to compare results with and without topology.
I should add that this also encodes intensity, think of it as "volume", and it gives HTM additional attack vectors: one of them is an orientation along which it can build new patterns while still remaining within the same frequency region.
I think changing the encoding from what they have already built to what I've described is maybe 10-15 minutes of work.
One of the things I wanted to do was to try to get the computer to build a model of the world and make predictions about what will happen. I was thinking of a simple 2D world with a ball that falls and interacts with its environment.
I don't really know whether it would work or what the results would be, but the idea is: once the agent has learned a model of the world, use it to drive the simulation, treating each HTM prediction as the next frame and feeding it back in recursively, so the ball falls and reacts in a realistic way.
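That closed loop could be sketched like this, with the trained HTM replaced by a placeholder (here the true physics, just so the loop runs); the grid size, encoding, and physics constants are all assumptions:

```python
# A minimal sketch of the proposed loop: encode a frame, predict the
# next one, feed the prediction back in as the new input.
W, H = 16, 12

def encode(x, y):
    """One frame = a W*H binary grid with a single active bit at the ball."""
    frame = [0] * (W * H)
    frame[int(y) * W + int(x)] = 1
    return frame

def step(x, y, vy, g=0.5):
    """Toy physics standing in for the learned model:
    gravity plus a damped bounce off the floor."""
    vy += g
    y += vy
    if y >= H - 1:          # hit the floor
        y, vy = H - 1, -vy * 0.8
    return x, y, vy

# Closed-loop simulation: each state is fed back in as the next input,
# exactly as proposed for the trained HTM's predictions.
x, y, vy = 3.0, 0.0, 0.0
frames = []
for _ in range(30):
    frames.append(encode(x, y))
    x, y, vy = step(x, y, vy)
```

With a real HTM, `step` would be replaced by "decode the predicted SDR back into a ball position", which is probably the hard part of the experiment.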
There is an OpenAI "gym" for some simple video games, like Super Mario. The API runs the game in an emulator and sends you a simplified 'tile' representation of the game, which might be easy to feed to an HTM.
I'd like to see something that predicts what will happen when one of the 'goombas' is seen, or that, if you move the player towards one of them, predicts the player will die.
The sensor data is available as a 2D array of 'tiles', where each tile represents the object at a location on a grid of roughly 40 x 20. I don't know the best way to encode that into HTM's sparse inputs, but it would be nice to separate it out into things like the X and Y velocities of objects, something that reasonably closely matches what a real animal's visual system might be putting out.
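One simple way to turn such a tile grid into a sparse binary input is to concatenate a one-hot vector per cell; the tile-type set below is an assumption, not the gym's actual vocabulary:

```python
# Hypothetical tile set; the real gym exposes its own tile codes.
TILE_TYPES = ["empty", "ground", "goomba", "player", "coin"]

def encode_tiles(grid):
    """grid: list of rows of tile-type strings -> flat list of 0/1 bits,
    one one-hot block of len(TILE_TYPES) bits per cell."""
    bits = []
    for row in grid:
        for tile in row:
            one_hot = [0] * len(TILE_TYPES)
            one_hot[TILE_TYPES.index(tile)] = 1
            bits.extend(one_hot)
    return bits

grid = [["empty", "goomba"],
        ["ground", "ground"]]
sdr = encode_tiles(grid)  # 4 cells x 5 types = 20 bits, 4 of them active
```

This gives a fixed-width, fixed-sparsity input (one active bit per cell), though it discards velocity; deriving per-object velocities would need a separate differencing step across frames.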
I had a few ideas about how primitive sensorimotor inference could be applied, without any neuroscience to back them up. In practice it's going to look very shoehorned in. But my problem is representing displacement cells: any ideas how that could be done?
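No neuroscience claimed here, but one crude stand-in for displacement cells is to encode the (dx, dy) between two locations with a scalar encoder per axis and concatenate the results; every parameter below is an assumption:

```python
def encode_scalar(value, lo, hi, n_bits=21, width=5):
    """Classic bucketed scalar encoding: a run of `width` 1s whose
    position tracks `value` clamped to [lo, hi]."""
    value = max(lo, min(hi, value))
    start = int((value - lo) / (hi - lo) * (n_bits - width))
    return [1 if start <= i < start + width else 0 for i in range(n_bits)]

def encode_displacement(a, b, max_d=10):
    """Encode the displacement from point a to point b, one axis at a time."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    return encode_scalar(dx, -max_d, max_d) + encode_scalar(dy, -max_d, max_d)

# The same displacement yields the same code regardless of absolute
# position, which is the property displacement cells are meant to capture:
same = encode_displacement((0, 0), (3, 2)) == encode_displacement((5, 5), (8, 7))
```

This ignores the grid-cell-module machinery the theory actually proposes; it only captures the translation-invariance property, which might be enough for a first shoehorned experiment.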