Matt is live streaming regularly

Fun Friday today involves some twitch integration work, 2d Obj rec PR review, and whatever tech or brain chat comes up.

Playback of your last twitch stream started with a message that some of the sound would be muted due to copyright infringement on played music. Seems like twitch has an angry AI snooping in on you. :-7.

Maybe we can test this AI by introducing some noise in the music? ;-).

1 Like

It was Khruangbin! I’ll have to be more careful. Speaking of which… I am streaming the 2nd half of Fun Friday now… working on 2D Obj Rec.

I’ve never used twitch myself, but I imagine it feels like pair programming but the sense of exposure is multiplied?

I found with pair programming I had to get over the embarrassment of things like what I googled, and how often. Also they were more exhausting days because of the social engagement layered on top of the usual fatigue, basically like being in an all-day meeting.

3 Likes

Those were the negatives anyway, of course there were many positives too.

Exposure to other methods and habits, e.g. I’ve always been a “rich IDE” guy, but I paired with an Amish dev who only used vi and was militant about always writing test cases first.

On the quality side, if both devs are concentrating then you can definitely pick up bugs and produce better code, obviously at the cost of doubling up on resources.

I don’t pair anymore as I don’t code much at work these days, but perhaps I should give twitch a go for my own projects.

4 Likes

Twitch was certainly made for you Matt, and your mission. I was finally able to see how you use WebStorm and various applications in action.

I was at my day job while you were streaming, but spent most of my Friday night (into Saturday morning) watching you code. For one of the bugs you had I expected resizing of the window area to have been the problem, as it can be in VB6. I was thankful to see that you soon noticed that possibility.

You got me thinking about how feature memory plays back related touch, taste, smell, and motion data in a way that we essentially experience the stimuli all over again. Could that be the primary feature data?

In the moving shock-zone environment that I use, important features are things that are felt: bashing into a solid wall, moving freely at full speed on a comfortable surface (or one that hurts the feet and should be avoided), and the confidence boost from making it to an attracting location where a food reward helps eliminate hunger.

Solid boundaries are mapped out (by becoming inactive) relative to the rest of the environment, in a way that would represent a water balloon as a roundish solid membrane with water motion swirling around inside. Water may squirt or burst out when some of the places containing the area no longer contain the wave action of a freely propagating internal area (freely as in sporting-event stadium waves) behaving as water or gases would. The drawing area is a hexagonal grid where features are placed according to their physical properties, which depend on how each column passes, reflects, blocks, or generates wave signals to its neighbors.
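
To make that concrete, here is a minimal sketch (the property names and the update rule are my own assumptions for illustration, not the ID Lab implementation) of a hexagonal grid where each column passes, reflects, blocks, or generates wave signals exchanged with its six neighbors:

```python
# Sketch of a hexagonal "drawing area" where each column's physical property
# controls how it handles wave signals from its six neighbors.
# Property names (PASS, REFLECT, BLOCK, GENERATE) and the update rule are
# illustrative assumptions only.

# Axial-coordinate offsets of the six hex neighbors.
HEX_NEIGHBORS = [(+1, 0), (+1, -1), (0, -1), (-1, 0), (-1, +1), (0, +1)]

PASS, REFLECT, BLOCK, GENERATE = "pass", "reflect", "block", "generate"


def step(props, wave):
    """One synchronous update of wave activity over the hex grid.

    props: {(q, r): property} for every column in the grid
    wave:  {(q, r): float} current wave amplitude per column
    Returns the next wave field.
    """
    nxt = {}
    for cell, prop in props.items():
        if prop == BLOCK:
            nxt[cell] = 0.0                      # solid boundary: carries no wave activity
        elif prop == GENERATE:
            nxt[cell] = 1.0                      # constant wave source
        else:
            incoming = []
            for dq, dr in HEX_NEIGHBORS:
                n = (cell[0] + dq, cell[1] + dr)
                if n in props and props[n] != BLOCK:
                    incoming.append(wave.get(n, 0.0))
            mean_in = sum(incoming) / len(incoming) if incoming else 0.0
            if prop == REFLECT:
                # reflecting columns hold on to activity rather than passing it along
                nxt[cell] = wave.get(cell, 0.0) * 0.5 + mean_in * 0.5
            else:                                # PASS: freely propagate with slight decay
                nxt[cell] = mean_in * 0.95
    return nxt


if __name__ == "__main__":
    # A tiny patch: one generator surrounded by passing columns, plus a blocking column.
    props = {(q, r): PASS for q in range(-3, 4) for r in range(-3, 4)}
    props[(0, 0)] = GENERATE
    props[(2, 0)] = BLOCK
    wave = {cell: 0.0 for cell in props}
    for _ in range(10):
        wave = step(props, wave)
    # activity next to the source vs. on the far side of the blocking column
    print(round(wave[(1, 0)], 3), round(wave[(3, 0)], 3))
```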

I focus on generating a theater of the mind, where signals act out the properties of what is being modeled. The river area is then contained by solid-like boundaries and has directional flow, with waves on top that vary in height depending on other conditions. Water may contain something that attracts us, which we look into while also avoiding the edge, or at least we only get our feet wet. When what attracts us changes, traveling waves point out new routes around obstacles and favor the shortest path. There is still no knowing whether this is true for biology, but the method certainly works very well for getting around in an environment that is otherwise extremely hard to navigate pleasantly.
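
The “traveling waves point out new routes” idea can be illustrated with a plain wavefront (breadth-first) expansion from the attractor: the wave flows around obstacles, and stepping toward the earliest-arriving neighbor follows the shortest open path. This is only a generic sketch of the technique, with hypothetical names and a square grid for simplicity, not the actual model:

```python
from collections import deque

# Sketch of wavefront navigation: a wave expands outward from the attractor
# (goal), flowing around solid cells; an agent that always steps to the
# neighbor with the smallest wave arrival time follows the shortest open path.

def wavefront(grid, goal):
    """grid: list of strings, '#' = solid boundary, '.' = open.
    Returns {(row, col): steps from goal} for every reachable open cell."""
    dist = {goal: 0}
    queue = deque([goal])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] != '#' and (nr, nc) not in dist):
                dist[(nr, nc)] = dist[(r, c)] + 1
                queue.append((nr, nc))
    return dist


def route(grid, start, goal):
    """Greedily descend the wavefront field from start to goal."""
    dist = wavefront(grid, goal)
    path, cell = [start], start
    while cell != goal:
        neighbors = [(cell[0] + dr, cell[1] + dc)
                     for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))]
        cell = min((n for n in neighbors if n in dist), key=dist.get)
        path.append(cell)
    return path


if __name__ == "__main__":
    world = ["..........",
             "..######..",
             ".......#..",
             "..######..",
             ".........."]
    print(route(world, start=(0, 0), goal=(4, 9)))
```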

A nice thing for HTM theory is the need to predict ahead by at least 1/10 of a second. Interesting new article:

Without some method of predicting the outcome of its own motion, a fast-moving ID Lab critter will race past its food instead of slowing to a stop ahead of time. For my purposes I coded a distance-dependent circuit that decreases the confidence level of motor actions that lead to exceeding the required speed limit for landing. There is no direct control of the motor, just a memory bit that only becomes active just before something bad like that happens. This is enough for motor memory to self-organize its actions accordingly. Since HTM makes predictions, there should be an easy way to use that instead, but unfortunately I’m not sure what is most biologically plausible.
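
A minimal sketch of that kind of circuit (the speeds, thresholds, and the overshoot test are illustrative assumptions, not the actual ID Lab circuit): a memory bit becomes active only when the current speed could not be shed in the distance remaining, and that bit lowers the confidence of the motor action that produced it:

```python
# Sketch of a distance-dependent circuit that lowers the confidence of motor
# actions which leave the critter moving too fast to land on its food.
# Speeds, thresholds and the learning rate are illustrative assumptions.

LANDING_SPEED = 1.0          # max speed at which landing on the pellet succeeds
BRAKING_RATE = 1.0           # how much speed can be shed per time step

# Confidence per discrete motor action (e.g. four forward speeds); starts neutral.
confidence = {"speed_1": 0.5, "speed_2": 0.5, "speed_4": 0.5, "speed_8": 0.5}


def overshoot_bit(distance_to_food, speed):
    """The 'memory bit': active only when the current speed cannot be shed in
    the distance remaining, i.e. just before the critter races past its food."""
    steps_to_brake = max(0.0, (speed - LANDING_SPEED) / BRAKING_RATE)
    distance_while_braking = speed * steps_to_brake
    return distance_while_braking > distance_to_food


def update_confidence(action, distance_to_food, speed, rate=0.1):
    """No direct motor control: when the bit fires, the action that produced
    this speed simply becomes less confident, so motor memory self-organizes."""
    if overshoot_bit(distance_to_food, speed):
        confidence[action] = max(0.0, confidence[action] - rate)


if __name__ == "__main__":
    # Approaching the pellet at full speed repeatedly triggers the bit.
    for _ in range(3):
        update_confidence("speed_8", distance_to_food=2.0, speed=8.0)
    update_confidence("speed_1", distance_to_food=2.0, speed=1.0)
    print(confidence)   # speed_8 has lost confidence; speed_1 is untouched
```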

If the agent in an HTM system similarly has a motor system with four or more forward speeds that roughly double as force stays applied, then the first test would be to not fly off the map (and maybe crash the whole program), or forever mindlessly bash into the containment walls.
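
As a sketch of that first test (all numbers assumed for illustration): a motor whose forward speed roughly doubles while force stays applied, clamped so the agent stays on the map. Note that the clamp only prevents flying off; learned slowing, like the confidence circuit above, is still needed to stop the wall bashing.

```python
# Sketch of a motor with four forward speeds that roughly double while force
# stays applied, plus the "first test": stay on the map instead of flying off.
# All numbers are assumptions.

SPEEDS = [1, 2, 4, 8]          # discrete forward speeds, roughly doubling
MAP_SIZE = 100


def step(position, gear, force_applied):
    """Advance one time step; returns (new_position, new_gear)."""
    if force_applied and gear < len(SPEEDS) - 1:
        gear += 1                                   # keep accelerating while force is on
    elif not force_applied and gear > 0:
        gear -= 1                                   # coast back down otherwise
    new_position = position + SPEEDS[gear]
    if not (0 <= new_position < MAP_SIZE):          # containment wall
        return position, 0                          # blocked at the wall instead of flying off
    return new_position, gear


if __name__ == "__main__":
    pos, gear = 0, 0
    for _ in range(30):
        pos, gear = step(pos, gear, force_applied=True)
    print(pos, gear)   # held inside the map even under constant force
```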

Live rat data for the environment I use suggests that in this task each place is roughly the size of the animal’s personal space needed to freely maneuver, or approximately one body length. A +1/-1 integer movement through the environment at each 10+ Hz time step would be traveling at a high rate of speed in an environment where ~0.01 displacement precision is required to position its mouth over a virtual food pellet.
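
Worked out with illustrative numbers, that mismatch looks like this:

```python
# Rough numbers implied above (all values illustrative): at 10 Hz a +/-1 move
# per time step is one body length per step, i.e. a very fast 10 body lengths
# per second, while landing needs ~0.01 of a place in positional precision.
time_step_hz = 10
move_per_step = 1.0            # places (~1 body length) per time step
precision_needed = 0.01        # places required to center the mouth on a pellet

speed = move_per_step * time_step_hz                 # 10 places per second
coarseness = move_per_step / precision_needed        # each step is 100x the needed precision
print(speed, coarseness)
```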

If a coffee cup were mapped in with its top edges and handle sparsely plotted using attractors, then one or more articulated fingertips (in a map view of what is within reach) could, in one fast motion, all get there in an instant, and afterwards have fine speed control over the surface.

An external environment could be drawn in using PyGame or Canvas, but of course what is being modeled in the brain has a hexagonally arranged geometry, where objects have features most easily defined by how cortical columns represent objects in space. There is then a generic border/boundary for something impassable, and depending on temperature the edges of a cup are an attract or an avoid that fingers could try to navigate through in order to grab something cool (a new attractor) that just fell inside.
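
A tiny sketch of how such features might be labeled (the feature names, temperature threshold, and labels are all assumptions for illustration):

```python
# Sketch of mapping a cup's features onto attract/avoid/boundary labels.

BOUNDARY, ATTRACT, AVOID = "boundary", "attract", "avoid"


def label_cup_features(cup_temperature_c, something_fell_inside=False):
    """Return {feature: label} for a cup, given its surface temperature."""
    rim_and_handle = ATTRACT if cup_temperature_c < 45 else AVOID
    features = {
        "outer_wall": BOUNDARY,          # generic impassable boundary
        "rim": rim_and_handle,           # hot edges repel fingers, cool edges invite them
        "handle": rim_and_handle,
    }
    if something_fell_inside:
        features["interior"] = ATTRACT   # new attractor: grab the thing that fell in
    return features


print(label_cup_features(cup_temperature_c=20, something_fell_inside=True))
print(label_cup_features(cup_temperature_c=80))
```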

That’s at least my best guess for how the spatially relevant information gets mapped out, then acted upon. The rest of the data would be a replay of past sensory experience, including touching surfaces of a given temperature, which in turn causes a cup to be mapped in as an attract or avoid.

2 Likes

Thanks for watching, you guys. I’ll be online again tomorrow morning for the research meeting. I’m still figuring out my process and schedule. I hope once I’ve set up all my tools, my streams will be less chaotic and more process-oriented and productive.

I enjoy interacting with the chatters, especially those of you actually helping with the code. I have several PRs I’ll be reviewing during my Tuesday work session. I think these are happening as a direct result of my Twitch live streams, which is very encouraging.

Live in 10m talking about pooling.

Tuesday work session live now. Reviewing community PRs on object rec projects. Doing some JS for an accordion widget.

At around 10AM this morning PDT (that’s about 1.5 hours from now), I’ll be chatting on my Twitch channel with Jeff about the common cortical circuit in preparation for AI / Neuroscience Chat next week about cortical layers.

1 Like

Live in 5-10 with Jeff…

Streaming HTM Community work on Twitch now. Answering forum posts, then working on 2D Object recognition test descriptions.

I will stream a research meeting around 10:05 or so, but it may be very short. I’m not sure anyone has anything to report. Marcus is on PTO and Subutai just got back into the office.

Streaming a (likely short) research meeting in about 20 minutes.

Sorry, no meeting today. I will stream live from Numenta HQ on Wednesday.

Is there an AI / Neuroscience Chat stream later on?

1 Like

Yes, you can see my schedule on my events page.

1 Like

Streaming now, starting in about 10 minutes about cortical layers.

1 Like

I missed it by an hour. I should kick myself! I literally kept my browser open the entire day in anticipation. So stupid. Anyway, I watched the rerun just now.

Great stream, Matt. Very helpful. I made sketches as you went along.

A few remarks:

  • Jeff didn’t talk about input to layers 6a and 6b because he said he (and Numenta) wasn’t too sure about how this works yet. (I think this input must come from other layers in the sensorimotor region, since the rat in the grid cell experiment produced grid cells even in the dark. So it must at least partly come from muscle feedback).

  • If we’re building a 2D recognition project without controlling the motion of the agent, we don’t need that input for now. (Although I think it makes sense to keep a placeholder open for later).

and one question (perhaps for Jeff):

  • If layer 2/3 builds a stable model by compensating for radial motion (orientation), and layer 5 does the same for lateral motion (via grid cells), wouldn’t it make more sense to have these systems operate in parallel and combine their stable models in a final layer? In other words, the inputs to layers 6a and 6b would come in at the same time, while the input to 5 (from 2/3) would come after the input to 2/3 from 4. Wouldn’t that cause a delay problem?

2 Likes

@Falco I’ll see if we can talk a bit about this with Jeff after the research meeting tomorrow.

1 Like

Thanks Matt for helping!

The outline only contains what was covered in that one video. This way there was no chance of including something that has since changed. I agree that we need Jeff’s help for the remaining details.

Ideally, Motor In and Out arrows could be shown controlling the (linear) speed and (rotational) direction of a bilateral motor system. It’s then in full control of where it goes and what it needs to further explore.
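
As a purely illustrative example of what those Motor In and Out arrows could drive (the function name, unit wheel base, and drive values are assumptions, not part of the outline): a minimal bilateral, differential-drive sketch where two drive signals set the linear speed and rotational direction, and the resulting pose would be fed back.

```python
import math

# Minimal sketch of a bilateral (differential-drive) motor system: "Motor Out"
# carries left/right drive signals, which together set linear speed and turning
# direction; "Motor In" would be the resulting pose fed back for exploration.


def motor_step(x, y, heading, left_drive, right_drive, dt=0.1):
    """Advance the agent one time step from the two drive signals."""
    speed = (left_drive + right_drive) / 2.0            # linear speed
    turn_rate = (right_drive - left_drive)              # rotational direction (unit wheel base)
    heading += turn_rate * dt
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    return x, y, heading


if __name__ == "__main__":
    pose = (0.0, 0.0, 0.0)
    for _ in range(50):
        pose = motor_step(*pose, left_drive=1.0, right_drive=1.2)   # gentle left arc
    print(tuple(round(v, 2) for v in pose))
```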

1 Like