I realized that I never actually introduced myself on the forum; I just rudely joined to post a question about hierarchy.
I don’t have any background in neuroscience; I’m a computer programmer. I taught myself from a young age, and prior to my current job I really only did it as a hobby. I do have experience in a lot of programming subjects, though: I have designed video games, built interactive web pages, designed databases and interfaces, experimented with AI (mainly from the perspective of video games), and worked with both 2D and 3D graphics in numerous languages and libraries.
One of my more notable accomplishments was writing the 3D Sound System library that was used in Minecraft. I even used to be in the credits, before the Microsoft acquisition. https://www.youtube.com/watch?v=6VWsq1JXVWY&start=537
Another notable accomplishment was porting Mupen64Plus (a Nintendo 64 emulator) to Android. All N64 emulators currently on Google Play are forks of that original port.
My interest in HTM came about rather randomly, after a conversation with a colleague about the theoretical “Technological Singularity”. My colleague’s argument, which I disagreed with, was that the singularity would follow almost immediately after human-level artificial general intelligence was created (i.e. as soon as a machine became self-aware). I argued that there is nothing magical about human-level intelligence that would automatically lead to the singularity (there are already 7 billion human-level intelligences in the world now, right?).
My own argument about there being nothing magical about human-level intelligence got me thinking, though. What level of intelligence would be necessary for a recursive, trans-mutative routine to function? Would cockroach-level intelligence be enough? What aspects of the “Technological Singularity” theory might be used to generate an AI with a higher level of intelligence than it started out with? Could such a system be used to produce robust utility AIs that could later be plugged into other systems (a toy robot, for example)?
Can AI be applied to the problem of improving its own code? This question led me to research AI strategies to see which would apply best. I wanted a technology that could be applied to a diverse range of problems (not specialized to a single problem), since there are many aspects to the concept of “intelligence”. During my research, I stumbled across one of Jeff’s videos on the Principles of HTM, and from there found several other videos about HTM. I was immediately hooked: the general-purpose concepts of HTM, like SDRs, semantics, and context, were exactly the types of capabilities I was looking for.
I originally joined the forum to ask questions and fill gaps in my understanding of the core concepts of HTM. I have since gained a fair understanding, and have even been able to use HTM concepts in my job (so it’s not just a hobby anymore).