I am going to interview @subutai this Friday for the Numenta On Intelligence Podcast. I’ll be talking to him about deep learning, sparsity, continuous learning, and How Can We Be So Dense? (The Benefits of Using Highly Sparse Representations).
I thought maybe the community had some topics or questions about Numenta, its research direction, or our application of biologically-inspired ideas to current deep networks. Please do ask them here! I’ll bring up some of your topics in my conversation with Subutai.
Lateral connections and interactions with “local” inhibition.
Relate this to the Thousand Brains Theory; we say that a single column learns sequences, so what is the computation performed by this lateral voting?
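To make the question concrete, here is a toy sketch (my own simplification, not Numenta's actual model) of what cross-column voting could compute: each column maintains its own set of candidate objects consistent with what it has sensed, and lateral connections narrow the union of hypotheses down to the intersection.

```python
# Each cortical column keeps a set of objects consistent with its own
# sensory evidence; "voting" over lateral connections intersects them.
# Object names here are purely illustrative.
columns = [
    {"cup", "bowl", "can"},   # column 1's candidate objects
    {"cup", "can"},           # column 2's candidate objects
    {"cup", "bowl"},          # column 3's candidate objects
]

# Consensus = objects every column still considers possible.
consensus = set.intersection(*columns)
print(consensus)  # {'cup'}
```

Obviously the cortex doesn't intersect Python sets, but this captures the flavor of the question: a single column is ambiguous, while voting across columns disambiguates quickly.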
It looks like the applied part of Numenta’s efforts is focused on adding some insights from its research to DL. Does this mean there is a belief that DL can be fixed/improved to gain cortical-like properties, or is it just a marketing move intended to attract attention from the most crowded part of the AI community?
On the topic of sparsity in deep learning, I’m interested in what this means for the data preparation process, often the most time-consuming part of deep learning.
Feature engineering typically involves applying domain knowledge and transforming the data to improve the performance of the model. What impact could sparsity (and any other HTM-based improvements) have on this process?
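For anyone wondering what "sparsity" means mechanically here: the How Can We Be So Dense? paper enforces sparse activations with a k-winners-take-all step, keeping only the k largest units in a layer. A minimal NumPy sketch of that idea (my own simplification, not Numenta's actual implementation):

```python
import numpy as np

def k_winners(x, k):
    """Keep the k largest activations, zero out the rest (k-WTA)."""
    out = np.zeros_like(x)
    top = np.argsort(x)[-k:]   # indices of the k largest values
    out[top] = x[top]
    return out

dense = np.array([0.1, 0.9, 0.3, 0.7, 0.05, 0.6])
sparse = k_winners(dense, k=2)
print(sparse)  # only the two largest activations (0.9 and 0.7) survive
```

The interesting part for feature engineering is that the sparsification happens inside the network, after the raw inputs, rather than as a hand-crafted transformation of the data beforehand.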
IMO the biggest missing piece in getting DL people excited about HTM is the lack of image/video processing applications of HTM thus far.
I think that computer vision tasks are such a mainstay in benchmarking ML algorithms because the visual perception of surrounding environments is intuitively at the core of animal intelligence.
What do you think is the most direct path to encoding the visual environment into HTM networks? Maybe full-on video could be short-cut with Lidar or something else that generates somewhat simpler data?
So? How did the podcast go? Any estimation when it’ll be published?
I am recording it today. I had to postpone last week because of a timing conflict.
Oh man, I made a big mistake here: I gathered all your questions for Subutai and then forgot to include them in the interview. I am really sorry, folks; that was not my intention. I have too many things going on.
But here is the interview:
Two vids in one day? You’re spoiling us, Matt. Great interview. Thanks @subutai.
I didn’t realise how important sparsity is to learning. It should have been obvious, but it wasn’t for me until now.