Thanks for watching you guys. I’ll be online again tomorrow morning for the research meeting. I’m still figuring out my process and schedule. I hope once I’ve set up all my tools my streams will be less chaotic and more process-oriented and productive.
I enjoy interacting with the chatters, especially those of you actually helping with the code. I have several PRs I’ll be reviewing during my Tuesday work session. I think these are happening as a direct result of my Twitch live streams, which is very encouraging.
Live in 10m talking about pooling.
Tuesday work session live now. Reviewing community PRs on object rec projects. Doing some JS for an accordion widget.
At around 10AM this morning PDT (that’s about 1.5 hours from now), I’ll be chatting on my Twitch channel with Jeff about the common cortical circuit in preparation for AI / Neuroscience Chat next week about cortical layers.
Streaming HTM Community work on Twitch now. Answering forum posts, then working on 2D Object recognition test descriptions.
I will stream a research meeting around 10:05 or so, but it may be very short. I’m not sure anyone has anything to report. Marcus is on PTO and Subutai just got back into the office.
Streaming a (likely short) research meeting in about 20 minutes.
Sorry, no meeting today. I will stream live from Numenta HQ on Wednesday.
Is there an AI / Neuroscience Chat stream later on?
Yes, you can see my schedule on my events page.
Going live in about 10 minutes to talk about cortical layers.
I missed it by an hour. I should kick myself! I literally kept my browser open the entire day in anticipation. So stupid. Anyway, I watched the rerun just now.
Great stream, Matt. Very helpful. I made sketches as you went along.
A few remarks:
Jeff didn’t talk about input to layers 6a and 6b because he said he (and Numenta) wasn’t too sure about how this works yet. (I think this input must come from other layers in the sensorimotor region, since the rat in the grid cell experiment produced grid cells even in the dark. So it must at least partly come from muscle feedback.)
If we’re building a 2D recognition project without controlling the motion of the agent, we don’t need that input for now. (Although I think it makes sense to keep a placeholder open for later.)
and one question (perhaps for Jeff):
- If layer 2/3 builds a stable model by compensating for radial motion (orientation), and layer 5 does the same for lateral motion (via grid cells), wouldn’t it make more sense to have these systems operate in parallel and combine their stable models in a final layer? In the current scheme, the inputs to layers 6a and 6b would arrive at the same time, while the input to layer 5 (from 2/3) would arrive only after 2/3 has received its input from 4. Wouldn’t that cause a delay problem?
@Falco I’ll see if we can talk a bit about this with Jeff after the research meeting tomorrow.
Thanks Matt for helping!
The outline only contains what was covered in that one video. This way there was no chance of including something that has since changed. I agree that we need Jeff’s help for the remaining details.
Ideally, Motor In and Out arrows could be shown controlling the (linear) speed and (rotational) direction of a bilateral motor system. The agent would then be in full control of where it goes and what it still needs to explore.
I will talk with Jeff about this tomorrow morning on Twitch after the research meeting.
I’ve been offline about a week now, and I have not been keeping up with forum posts. On Monday at around 8:30AM PDT, I’ll be live on Twitch catching up on the forum if you want to join and chat. I might try something where I respond in my live stream, then post the clip of the response on the forum thread (maybe eventually add a transcription for search engines).
I also plan on streaming the research meeting around 10:15AM or so.
There is a research meeting on Easter Monday? You guys are hardcore! :-).
A post was split to a new topic: Responsible AI License