Hello everyone, I’m a software engineer at a small DSP company. I have always been interested in the mind and how it learns, and luckily I ran into On Intelligence about 6 months ago. I’ll be slowly going through the tutorials and trying to run my own data, as I have a year left in my MSEE in DSP. I look forward to the advancements in this field.
What’s up players?! I’m Roberto. I found On Intelligence around 2004 and it blew my mind. An unsuccessful Masters and a successful sales career later, I’m diving back into the fray. Very excited.
Hi,
Just as enthusiastic as everyone here, I am in love with AI and the concept of HTM. Hope to have fun along the way with everyone. Cheers!
Hi all,
My research is in the area of Music Information Retrieval and Machine Learning on symbolic music data. I am really interested in applying NuPIC algorithms to music prediction and anomaly detection in music. I am also finishing my PhD at the University of Technology Sydney, and I have created the interactive music search engine stelupa.com, where I am trying to implement some NuPIC stuff.
If anyone else is into music data analytics / machine learning stuff, please let me know!
Jamie
biodigitaljazz.com
My name is Jos Theelen. Using some of the nupic.core files, I built a 32-bit C++ library so I could run (parts of) NuPIC on my old Linux computers. I only use the Spatial Pooler and the Temporal Memory from nupic.core; other things, like encoders, I wrote myself.
@jgab3103: One of my HTM projects is an attempt to let the computer pick chords for a given melody. I took the chorales of Bach’s Weihnachts-Oratorium and simplified each of them to a melody with chords. I fed those note/chord combinations into an SP and TM, so it learned all those combinations. Then I took a new melody, looked at which note/chord combinations could be used, and picked those with the lowest anomaly value. The results weren’t very good: the chords fitted the notes of the melody, but the sequence of chords didn’t sound good. My next idea is to program an extra SP and TM which uses the output of the first SP and TM as input. It should somehow learn little pieces of melody (sequences of note/chord combinations) and remember them; with a given melody, I could then try to find a sequence of those little pieces of melody.
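Roughly, the chord-selection step looks like this. This is a minimal sketch with a toy first-order transition-count model standing in for the real SP/TM pair; all class, function, and data names here are illustrative, not NuPIC API:

```python
from collections import defaultdict

class ToySequenceModel:
    """Toy stand-in for the SP+TM pair: first-order transition counts
    over (note, chord) symbols. Rarer transitions score higher anomaly."""

    def __init__(self):
        self.counts = defaultdict(int)   # (prev_symbol, symbol) -> count
        self.totals = defaultdict(int)   # prev_symbol -> total transitions

    def learn(self, sequence):
        for prev, cur in zip(sequence, sequence[1:]):
            self.counts[(prev, cur)] += 1
            self.totals[prev] += 1

    def anomaly(self, prev, cur):
        # 1.0 for a never-seen transition, lower the more often it was seen
        if self.totals[prev] == 0:
            return 1.0
        return 1.0 - self.counts[(prev, cur)] / self.totals[prev]

CHORDS = ["C", "F", "G", "Am"]  # candidate chords (illustrative)

def harmonize(melody, model):
    """For each note, pick the chord whose (note, chord) symbol has the
    lowest anomaly given the previous (note, chord) symbol."""
    out, prev = [], None
    for note in melody:
        best = min(CHORDS, key=lambda ch: model.anomaly(prev, (note, ch)))
        out.append(best)
        prev = (note, best)
    return out

# Train on a simplified chorale, then harmonize a new melody:
model = ToySequenceModel()
model.learn([("C4", "C"), ("D4", "G"), ("E4", "C"), ("F4", "F"), ("E4", "C")])
print(harmonize(["C4", "D4", "E4"], model))   # -> ['C', 'G', 'C']
```

The greedy choice at each step is exactly why the chord-to-chord sequence can sound bad even when each chord fits its note, which is what the second, higher-level SP/TM is meant to fix.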
I got introduced to NuPIC early this summer, and it has been a summer of fascinating learning. To learn the theory more deeply, I decided to implement the algorithm in MATLAB. I now have a version of the code that produces anomaly scores fairly close to Numenta’s version on the NAB benchmark. Some fine-tuning still needs to be done.
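For anyone else porting the algorithms: the raw anomaly score is simply the fraction of currently active columns that were not predicted at the previous timestep, which makes a handy cross-check between implementations. A minimal reference version in plain Python (the names are mine):

```python
def raw_anomaly(active_columns, predicted_columns):
    """Raw anomaly score: the fraction of currently active columns that
    were NOT among the columns predicted at the previous timestep."""
    active = set(active_columns)
    if not active:
        return 0.0
    return len(active - set(predicted_columns)) / len(active)

# 4 active columns, 3 of them predicted -> anomaly = 1/4
print(raw_anomaly({2, 7, 11, 40}, {2, 7, 11, 99}))  # 0.25
```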
I have benefited greatly from the openness of this community: the willingness to share code and ideas, as well as to document failures and frustrations.
I would love to hear from others who have tried a MATLAB implementation.
I am a computer vision researcher with more than 25 years of algorithm development experience. My next goal is to use HTM ideas for video analysis.
Sudeep
Hi all,
My name is Rikkert Koppes. Just signed up. I am a software engineer with a background in physics and cognitive ergonomics. I started actively studying everything around AI some 1.5 years ago: I followed the MIT AI course by Patrick Winston, the Oxford Deep Learning course by Nando de Freitas, and Hinton’s course on Coursera, and read quite a few books, from the very early ML topics (SVM, kNN, etc.) to some more modern views. Around last year I read “How to Create a Mind” by Kurzweil.
Somehow, I had completely missed HTM.
I have been playing around with languages like Clojure and Prolog; I like how they seem closer to what I feel is the way forward. Some thoughts I have:
- I feel AI should at least be able to handle the representational categories from CRUM: rules, logic, concepts, analogies, images, networks (from Thagard, Mind: Introduction to Cognitive Science).
- I feel AI should have some method of introspection: capture the pattern in a pattern, so to speak. I don’t see how the ideas put forward so far could do that. What appeals to me in a language like Clojure is that the language itself is a data structure. I feel a mind should be similar.
- I like Hinton’s thought vectors, which may not be that dissimilar in concept from SDRs.
- I like the hallucination efforts from DNNs.
- I like how natural HTM theory sounds.
I haven’t looked around the forum too much yet, but I will. As an HTM newbie, I still have a lot to read. I might play around with some concepts in Node.js (my preferred platform).
Cheers!
Hi,
I’m Mo, holding a BSc in Electrical Engineering from UofK (Sudan). I started with neural networks at school, then got into OCR and speech recognition. About 15 years ago I started digging into biology, mathematics, and AI topics as much as I could; I found On Intelligence and the Redwood Institute at the same time. My first (and still valid) goal is to understand (and possibly model) the way a biological brain works. I’m tracing it in biology in the order it happens (no reverse engineering!), slowly moving forward; I have learned a lot along the way and recently got some interesting results.
Hope my work will meet HTM in the middle one day.
Mo Daboara
Hi all.
I am an embedded systems designer, both hardware and software.
Many years ago I wanted to learn about the soul and did a great deal of self-study in neurology and the brain. At some point I noticed that I had accounted for everything I could think of that makes us human: memory, emotion, intentionality and volition, all of the senses. It was a letdown that nothing was left over to be a soul.
When I started to study electronics in the 1970s (and got an associate’s degree) I was drawn to the sexy new field of microprocessors. I work with those critters to this day and love tinkering with them; ARM processors are amazing!
Between reading science fiction and some interesting articles in Scientific American, I drifted into neural networks and AI in general. My take on AI has always been viewed through the lens of my earlier studies in neurology. I have read a few dozen books on neural networks and AI and understand the technology pretty well. I have been forming a general model of the brain over these years, and I always thought that the various neural network models were missing at least two key elements: hierarchical organization, and what I have been calling “song memory”, i.e. sequential processing.
I read the On Intelligence book shortly after it came out and was impressed, but the dependence on lower brain structures for sequential memory did not match up with what I knew and put me off a bit.
Time passed.
With the big splash of deep learning and the easy availability of TensorFlow and the Microsoft Cognitive Toolkit, I was excited to see that the technology was producing interesting results and moving into alignment with what I had been thinking about how the various maps in the cortex work together.
I remembered that On Intelligence had been one of the first places to really evangelize a hierarchical organization of the cortical maps, so I went back and read it again, this time prepared to receive it with an open mind. Digging into what Jeff has been doing since he wrote the book: BAM! SDR models match up with real dendrites better than anything else I have seen. Sequential memory is built in and biologically plausible; feed-forward, feed-back, pattern and sequence memory all in one package. What’s not to love!
I have been struggling with unlearning what I know about using traditional neural networks to build AIs and starting over with SDRs - it’s been tough sledding but I think it is totally worth the effort.
In the first month of reading, several things have jumped out at me:
1: The topographical organization of the synapses is important. As the dendrites snake between the columns and pick up connections, they sample a part of the pattern that is shaped by the classic, well-known Mexican-hat profile. Perhaps more importantly, as a dendrite stretches in one direction from a cell body, cell bodies in that direction may have dendrites extending back towards the original cell body. This reciprocal connection has the interesting property that the two cells can reinforce a pattern they share while each is also influenced by the patterns on its dendrites extending in other directions, away from the shared pattern. This leads to some interesting possibilities in pattern landscapes. To support this idea I am proposing a modification of the SDR dendrite model: add a moderately sized table of canned dendrite patterns. These can be very large patterns without much computation or storage cost. In the storage structure of an individual dendrite, a pointer into this pattern table gives that dendrite’s connections as delta-position addresses relative to the parent cell body’s location, assigned once during map initialization and never changed after that. The dendrite’s table of synaptic connections adds one low-cost step of indirection through the pattern table to learn which cell body a given synapse is connected to during processing. It gives a permanent list of cell bodies to examine for activity when learning new connections, without the huge memory cost of recording unused connections. A minimal sketch of this indirection scheme is below.
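Here is a rough sketch of the data structure I have in mind (Python for brevity; all sizes and names are placeholders, not working HTM code). The point is the single level of indirection: each dendrite stores only a parent location and an index into a shared table of delta-offset patterns.

```python
import random

GRID = 64                     # cells per side of a 2D map (placeholder)
PATTERN_TABLE_SIZE = 256      # number of canned dendrite patterns
SYNAPSES_PER_DENDRITE = 32

# Shared table of canned dendrite patterns. Each pattern is a list of
# (dx, dy) deltas relative to the parent cell body; built once at map
# initialization and never changed afterwards.
pattern_table = [
    [(random.randint(-8, 8), random.randint(-8, 8))
     for _ in range(SYNAPSES_PER_DENDRITE)]
    for _ in range(PATTERN_TABLE_SIZE)
]

class Dendrite:
    """Stores only its parent location, a pattern index, and learned
    permanences; its potential targets are recovered via the table."""
    def __init__(self, parent_xy):
        self.parent = parent_xy
        self.pattern = random.randrange(PATTERN_TABLE_SIZE)
        self.permanence = [0.0] * SYNAPSES_PER_DENDRITE

    def targets(self):
        """Resolve delta addresses to absolute cell coordinates (wrapped)."""
        px, py = self.parent
        return [((px + dx) % GRID, (py + dy) % GRID)
                for dx, dy in pattern_table[self.pattern]]

d = Dendrite((10, 20))
print(d.targets()[:4])  # the fixed set of cells this dendrite can ever reach
```

Only the permanences are per-dendrite state; the potential-connection lists are shared, which is where the memory saving comes from.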
2: Learning - it looks like the standard HTM model uses straight Hebbian learning. We know that patient HM learned that way without a hippocampus, yet most of us have good one-shot learning. What is it that the hippocampus brings to the party, and how do we bring that to the HTM model? I am spending a fair amount of time thinking about this. Good one-shot learning would go a long way towards silencing HTM naysayers.
Also, Jeff describes how the brain smoothly resonates with things it recognizes and somehow signals when it is having (neuro)cognitive dissonance. I propose that the reciprocal projections to the RAS (Reticular Activating System) are in an ideal place to gate on more of whatever is causing the fuss in the first place, in essence “to sip from the firehose” and amp up learning and attention; see the toy gating sketch below.
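In model terms, the simplest version of that gating might be to scale the Hebbian permanence increment by the anomaly score, so surprising input is learned much faster than familiar input. A toy illustration of the idea (the constants are arbitrary, and this is my speculation about the RAS, not established HTM practice):

```python
BASE_INCREMENT = 0.02   # ordinary Hebbian permanence increment (arbitrary)
MAX_BOOST = 5.0         # how much "sipping from the firehose" can amplify it

def gated_increment(anomaly_score):
    """Scale learning by surprise: familiar input (anomaly near 0) is
    consolidated slowly; dissonant input (anomaly near 1) is learned fast."""
    return BASE_INCREMENT * (1.0 + (MAX_BOOST - 1.0) * anomaly_score)

print(gated_increment(0.0))  # 0.02 -> routine input, slow consolidation
print(gated_increment(1.0))  # 0.10 -> novel input, approaching one-shot
```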
3: The Executive Function - We talk about the visual, auditory, tactile, and other senses that project to various areas around the edge of the dinner napkin. I propose that the old brain projects to the forebrain in much the same way as the senses do. The old brain worked fine for lizards; these older structures were good decision makers and pattern drivers. The older brain has directed activity through much of the evolutionary path, and I don’t see any reason why it ever would have stopped. It senses the body’s needs and can project them as a sort of goal-directed sensory stream to the front edge of the napkin, the forebrain. A point to support this assertion: I go back to the proposal that the cortex is the same everywhere; I don’t see anything that suggests the cortex does anything but remember and, through sequence memory, predict.
4: Local dendrite control of sparsity and synapse maintenance - There is no need to do this through a global function. In a dendrite maintenance phase, metabolism and chemical signaling should be enough to establish spacing, density of connections, and pruning.
5: The H in hierarchy reconsidered - The perceptron was shown to have serious limitations in the book “Perceptrons.” Making it part of a larger system dramatically enhanced its function.
Layers have been the breakthrough that has given deep learning some of its spectacular successes.
In much the same way, when I read through some of the details of the current implementations, it seems to me that there is some tweaking to make things work that would not be needed if more consideration were given to layers of interacting maps. The “filling in” / auto-completion function that figures so largely in the work of authors like Calvin and Marr is a natural consequence of a functioning hierarchy.
I have some more ideas but I am curious to see what people think of the items I have put out here.
Hello,
Born in 1972, with a degree in digital electronics, but I have always had a passion for software and indeed currently work in software.
I have been fascinated with the notion of AI from a very young age; AI is a kind of hobby for me, something I spend spare time on. I wrote my own fast C++ library for ANN experiments, a fairly simple multilayer perceptron builder. I have also experimented with genetic algorithms for evolving perceptron weights and topology (layer/neuron/dendrite growing and pruning).
I am thrilled to have found HTM. I feel it opens up many paths of research and will pave the way for more conscious machines, or at least machines that will be recognized as conscious by the majority of humans conversing with them long-term in natural language.
Michael Graziano’s Attention Schema theory sounds viable and probable to me:
Consciousness is somewhat of an illusion, the sense of self helping greatly with survival when modelling future possibilities. And that does not make consciousness any less awesome to me; it makes it all the more fascinating.
I have not read all the available info on HTM (or even watched all the available videos) yet, but I am proceeding to do so in my spare time. From what I have seen already, the inherent adaptive nature is great.
Thank you very much Numenta for making this open source. It’s very exciting to be able to play with it, even as a hobbyist.
I’m a Behavior Specialist at Youth Escape Arena, Inc. and also Chief Instructor/Mentor at Machine Learning Mentor dot com, where I research the act of learning. It is said that people will forget what you said and what you did, but they will never forget how you made them feel. It turns out that machines are the opposite, but the gap can be bridged.
Hi everyone. My name is Jake Bruce, and I’m a graduate student in robotics and computational neuroscience interested in producing robust goal-directed behavior in machines. I’ve followed this community for a long time, and I used to contribute frequently to the mailing list but stopped posting when the forum was set up. Lately my interest has been rekindled by some of the excellent work being done here. I’d like to thank everyone for their contributions, and I look forward to taking part again.
Hi all,
I’m Steve Schremp, and I have 2017 set aside for an MA in HTM. I got an AA in multimedia and internet marketing, thinking it might replace the manufacturing-software sales telecommuting job I’d enjoyed for a couple of decades. I couldn’t find the right program to continue with, so I settled on a business management program with my internship on the school’s Drupal CMS. I finished that in 2015 but still couldn’t find the right project to work on. A rather tumultuous 2016 closed with a good outcome to my first move in 17 years. In the last couple of months I’ve been feeling much better about what to do next and have cleared the decks to work on a project that started in September 1977, when, walking between Chinese 1 and logic design classes, it struck me that Chinese is a hexadecimal language. It’s not that simple, of course, but the deconstruction of the Chinese language in terms of computers led me to the same conclusion that Jeff came to by deconstructing the neocortex. This is not the place to go into details, but I intend to fill them in soon. Just wanted to start the year off right.
Happy New Year!
I’m Jim Bowery, originator of the idea for the Hutter Prize for Lossless Compression of Human Knowledge. Here’s my LinkedIn profile.
My interest in HTM is avocational. Mainly, I’m interested in seeing HTM compete in the Hutter Prize or the Large Text Compression Benchmark.
See the Vimeo video of Shane Legg’s talk at the 2010 Singularity Summit for why lossless compression is the gold-standard benchmark for universal intelligence. Shane Legg, one of Marcus Hutter’s PhD students, went on to co-found DeepMind.
What led me to become interested in HTM:
For a while I’ve been intrigued by Robert Hecht-Nielsen’s confabulation theory of the neocortex, because it shows natural-language grammar as an emergent phenomenon of a simple formula that degenerates to Aristotelian logic when there is certainty. However, as with my interest in the imaginary logic states (self-negating feedback logic) of George Spencer-Brown’s Laws of Form for modeling time in digital systems, it bothered me that this simple mathematics didn’t start with time. Since the confabulation learning rule was Hebbian co-occurrence counting, relying heavily on interconnection (one count per connection), I looked around for anyone who had pursued recurrent Hebbian learning for the most primitive information-processing structure that might be the unit replicated across the neocortex.
I found “Unsupervised Hebbian learning by recurrent multilayer neural networks for temporal hierarchical pattern recognition” by James Ting-Ho Lo. When I looked for papers citing it, there was almost nothing but patents by a company called Numenta.
So here I am.
Hello everyone. My name is Kyle, and I work as a Software Engineer on an IT-for-IT application. My passion, though, is equity derivatives, and I am absolutely hypnotized by the philosophical journey that the rise of AI has put me on. I don’t just want to apply this technology to outperform humans via intelligent automation; I want to help this technology scale with ease, and moreover, I can’t wait to finally say to someone, “My computer told me about this idea it had. What do you think?”
I am Steve Wald. I’ve been designing ICs since the 1980s. I am currently a doctoral candidate at Boise State University, researching hardware advancements for AI, such as the use of memristors as a type of synapse and the use of reconfigurable devices (advanced FPGAs) for algorithm plasticity. I started a project to try to shoehorn some part of the HTM core C++ implementation into a Xilinx development kit using the Vivado HLS (high-level synthesis) tool. So far I’ve been blocked by HLS errors claiming incompatibility with the Clang compiler.
I’d dearly love to hear from anyone working with HLS on anything similar!
I’m not new here, but I thought I’d introduce myself.
I’m a college freshman, and I’ve been interested in HTM since 9th grade, so I’ve been learning slowly for a while. I haven’t taken many college classes relevant to HTM (just some Java and intro biology). Since I want to contribute to HTM theory, I try to read neuroscience articles for at least a couple of hours each day. There’s a link to a Google Doc with my notes somewhere on this forum, but I’m not sure whether I’ll add more notes.
People should be warned that, whenever I mention neuroscience, I could be wrong for many reasons.
Hello all
I’m Stephen, a software engineer and, in my spare time, a roboticist and student of machine learning. I partially read On Intelligence a few years ago, having followed Jeff’s work in this area for some time. I’m going to read the book again, this time in full! I’m a beginner when it comes to HTM and NuPIC, and I’m really interested in learning more and tapping into the potential of NuPIC for my own projects. I’m currently in the NY area, but I plan on heading back toward the Pacific Northwest starting this summer. I’m really glad to finally be here!