Matt is live streaming regularly

@rhyolight, there was a network connection problem at some point. I don’t know where it came from, but apparently other users noticed it too.

I waited a while for the connection to come back, but it turned out I had to refresh my browser for that to happen. I missed about ten minutes because of that, so here’s a tip for next time.

Also, I googled how to use Python in Unity. There are some third-party tools that seem to allow it. But there is also a free 3D editor / game dev environment called Blender that uses Python natively, I think.

Blender also has a large, extremely enthusiastic user base. Maybe some of those people would be interested in looking into HTM through Blender. (Just a thought; I don’t know or use Blender.)

1 Like

I dabble in Blender.
Like Unity, it has a steep learning curve.
After all this time using it, I keep finding new things it can do.

I don’t think it is as good a gaming or multi-user platform as Unity.
Deployment of your finished package across a wide variety of platforms seems easier in Unity.

Yes, your models can be tied to code.

If Python is your thing, Blender has a definite edge.
https://docs.blender.org/manual/en/latest/advanced/scripting/introduction.html
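For anyone curious what that scripting looks like in practice, here is a minimal sketch that runs inside Blender’s Scripting workspace. I haven’t tried wiring it to HTM; the frame handler below is just a stand-in for wherever external data would come from.

```python
# Minimal Blender scripting sketch; the bpy module only exists inside Blender.
import bpy

# Add a cube we can drive from Python, e.g. with values from an HTM process.
bpy.ops.mesh.primitive_cube_add(location=(0.0, 0.0, 0.0))
cube = bpy.context.object

def on_frame(scene, *args):
    # Placeholder motion; real data could come from a file or a socket.
    cube.location.x = scene.frame_current * 0.1

# Run on_frame before every frame change during playback.
bpy.app.handlers.frame_change_pre.append(on_frame)
```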

1 Like

Thanks. (I don’t know either, so I’ll try to learn whichever you guys decide to use.)

On a bit of a side note, how is this interface supposed to run? Is the complete HTM part going to be executed from within Unity (or Blender), or is there a way to interface between the Python interpreter running HTM and the 3D modeler to display the results?

Or can the interface (developed in Unity or Blender) be run as a standalone program and take inputs from a separate program like the Python interpreter?

I guess it really comes down to figuring out what it is you want to do and setting up the best architecture to support that model.
BTW: both Blender and Unity support object and spatial modeling, reporting the position of moving objects, reporting object collisions, and real-world physics (inertia/momentum, gravity, kinematics). Both are capable of being a model of, and an interface to, a virtual world.
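To the earlier question about running the interface as a standalone program: one common pattern (just a sketch, with a made-up port and message format) is to keep HTM in its own Python process and stream results over a local socket, so the 3D frontend only has to listen:

```python
# Sketch of the separate-process architecture: the HTM side runs in its own
# Python interpreter and streams results over a local UDP socket; the 3D
# frontend (Unity, Blender, or anything else) just listens for updates.
import json
import socket
import time

HOST, PORT = "127.0.0.1", 9001  # arbitrary choices for this sketch

def htm_sender():
    """Stands in for the HTM process; sends one position update per step."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for step in range(100):
        msg = {"step": step, "x": step * 0.1, "y": 0.0}  # fake HTM output
        sock.sendto(json.dumps(msg).encode("utf-8"), (HOST, PORT))
        time.sleep(0.1)

def frontend_receiver():
    """Stands in for the renderer; Unity/Blender would do the same in C#/bpy."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((HOST, PORT))
    while True:
        data, _ = sock.recvfrom(4096)
        update = json.loads(data.decode("utf-8"))
        print("move object to", update["x"], update["y"])
```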

1 Like

I am live-streaming now, while I’m catching up on HTM Forum conversations. Join if you feel like chatting.

I just came across this open-source engine called Godot. It’s very small and apparently very capable. This video (by a very enthusiastic vlogger) shows off some demos. Apparently it has a built-in scripting language based on Python.

Ok, I am going to stop posting here whenever I stream on Twitch. All the important videos will be archived on this YouTube playlist, so you don’t even have to use Twitch to see the videos.

1 Like

I changed my mind, and I will be posting on this thread every time I live stream on Twitch, which will be often. You know how to mute this thread if you need to.

Oh yes, I am streaming now while I respond to forum comments.

1 Like

AI Chat Episode 2 live in a few minutes.

Today I will be live-streaming my morning work session. Here are the things I might work on. Please help me decide what to do. Votes will count for a limited time, so vote RIGHT NOW. I’ll start streaming in 2.5 hours.


Ok live streaming soon, thanks.

Andrew Hoefling’s experience coding live - https://www.andrewhoefling.com/Home/post/live-coding-on-twitch-tv-my-first-week

Including an interesting setup using NDI streams with OBS on a second PC.

1 Like

Thanks, Richard… also, per your suggestion, I got this in the mail yesterday…

Speaking of Twitch, I’m live. Today I’ll be working on a JavaScript project to support an upcoming blog post about Numenta’s research map.

Fun Friday today involves some Twitch integration work, 2D Obj Rec PR review, and whatever tech or brain chat comes up.

Playback of your last Twitch stream started with a message that some of the sound would be muted due to copyright claims on the music you played. Seems like Twitch has an angry AI snooping in on you. :-7.

Maybe we can test this AI by introducing some noise into the music? ;-).

1 Like

It was Khruangbin! I’ll have to be more careful. Speaking of which… I am streaming the 2nd half of Fun Friday now… working on 2D Obj Rec.

I’ve never used Twitch myself, but I imagine it feels like pair programming with the sense of exposure multiplied?

I found with pair programming that I had to get over the embarrassment of things like what I googled, and how often. Also, those were more exhausting days because of the social engagement layered on top of the usual fatigue, basically like being in an all-day meeting.

3 Likes

Those were the negatives anyway; of course there were many positives too.

There’s exposure to other methods and habits, e.g. I’ve always been a “rich IDE” guy, but I once paired with an “Amish” dev who only used vi and was militant about always writing test cases first.

On the quality side, if both devs are concentrating then you can definitely pick up bugs and produce better code, obviously at the cost of doubling up on resources.

I don’t pair anymore since I don’t code much at work these days, but perhaps I should give Twitch a go for my own projects.

4 Likes

Twitch was certainly made for you, Matt, and your mission. I was finally able to see how you use WebStorm and various applications in action.

I was at my day job while you were streaming, but spent most of my Friday night (into Saturday morning) watching you code. For one of the bugs you had, I expected resizing of the window area to be the problem, as it can be in VB6. I was thankful to see that you soon noticed that possibility.

You got me thinking about how feature memory plays back related touch, taste, smell, and motion data in a way that we essentially experience the stimuli all over again. Could that be the primary feature data?

In the moving shock-zone environment that I use, important features are things that are felt: bashing into a solid wall, moving freely at full speed on a comfortable surface (or on one that hurts the critter’s feet and should be avoided), and the confidence boost from reaching an attracting location where a food reward helps eliminate hunger.

Solid boundaries are mapped out (by their places becoming inactive) relative to the rest of the environment, in a way that would render a water balloon as a roundish solid membrane with water motion swirling around inside. Water may squirt or burst out when some of the places containing the area no longer contain the wave action of a freely propagating internal area (propagating as in sporting-event stadium waves) behaving as water or gases would. The drawing area is a hexagonal grid where features are placed according to their physical properties, which depend on how each column passes, reflects, blocks, or generates wave signals to its neighbors.

I focus on generating a theater of the mind, where signals act out the properties of what is being modeled. The river area is then contained by solid-like boundaries and has directional flow, with waves on top that vary in height depending on other conditions. The water may contain something that attracts us and that we look into, while we also avoid the edge, or at least only get our feet wet. When what attracts us changes, traveling waves point out new routes around obstacles, favoring the shortest path. There is still no knowing whether this is true for biology, but the method certainly works very well for getting around in an environment that is otherwise extremely hard to navigate pleasantly.
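My reading of that traveling-wave routing, as a rough Python sketch: flood a “wave” outward from the attractor across a hexagonal grid, let walls block it, and each cell’s wave arrival time then points out the shortest way around obstacles. The grid layout and function names are my own assumptions, not the actual ID Lab code:

```python
# Wavefront routing sketch on an axial-coordinate hex grid.
from collections import deque

# The six neighbor offsets of a pointy-top hexagonal grid.
HEX_DIRS = [(+1, 0), (-1, 0), (0, +1), (0, -1), (+1, -1), (-1, +1)]

def wavefront(goal, walls, radius=20):
    """BFS 'wave' from the goal; wall cells stay inactive and block it."""
    arrival = {goal: 0}
    frontier = deque([goal])
    while frontier:
        q, r = frontier.popleft()
        for dq, dr in HEX_DIRS:
            nxt = (q + dq, r + dr)
            if nxt in arrival or nxt in walls:
                continue
            if abs(nxt[0]) > radius or abs(nxt[1]) > radius:
                continue
            arrival[nxt] = arrival[(q, r)] + 1
            frontier.append(nxt)
    return arrival

def step_toward(goal_wave, pos):
    """Move to the neighbor the wave reached earliest: the shortest route."""
    options = [(pos[0] + dq, pos[1] + dr) for dq, dr in HEX_DIRS]
    reachable = [p for p in options if p in goal_wave]
    return min(reachable, key=goal_wave.get) if reachable else pos
```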

A nice thing for HTM theory is the need to predict ahead by at least 1/10 of a second. Interesting new article:

Without some method of predicting the outcome of its own motion, a fast-moving ID Lab critter will race past its food instead of slowing to a stop ahead of time. For my purposes I coded a distance-dependent circuit that decreases the confidence level of motor actions that lead to exceeding the required speed limit for landing. There is no direct control of the motor, just a memory bit that becomes active just before something bad like that happens; this is enough for motor memory to self-organize its actions accordingly. Since HTM makes predictions, there should be an easy way to use those instead, but unfortunately I’m not sure what is most biologically plausible.
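Here is my guess at that distance-dependent circuit as a Python sketch, with made-up thresholds: a single warning bit that activates when the current speed can no longer be shed before reaching the food, and that only lowers the confidence of accelerating motor actions rather than driving the motor directly:

```python
# Sketch of a distance-dependent speed-limit circuit; all values are guesses.
BRAKING_PER_STEP = 1.0  # max speed the critter can shed each time step

def warning_bit(distance_to_food, speed):
    """Active when the current speed cannot be shed before reaching the food."""
    steps_to_food = distance_to_food / max(speed, 1e-9)
    return speed > steps_to_food * BRAKING_PER_STEP

def adjust_confidence(action_confidence, accelerating, warning):
    """Punish only accelerating actions, and only while the bit is active."""
    if warning and accelerating:
        return action_confidence * 0.5  # motor memory self-organizes around this
    return action_confidence
```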

If the agent in an HTM system similarly has a motor system with four or more forward speeds that roughly double while force stays applied, then the first test would be to not fly off the map (and maybe crash the whole program), or forever mindlessly bash into the containment walls.

Live rat data for the environment I use suggests that in this task each place is roughly the size of the animal’s personal space needed to maneuver freely, or approximately one body length. A +1/-1 integer movement through the environment at each 10+ Hz time step would be traveling at a high rate of speed in an environment where ~0.01 displacement precision is required to position the mouth over a virtual food pellet.
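A back-of-the-envelope check of that mismatch, assuming one place is roughly one body length (my reading of the paragraph):

```python
# Each integer step moves a full place, while landing on a pellet needs
# ~0.01 of a place: every step is about 100x coarser than the precision
# required, so some form of fine speed control is unavoidable.
step_size = 1.0      # +1/-1 integer movement per time step, in places
precision = 0.01     # displacement precision needed to reach the pellet
print(step_size / precision)  # -> 100.0
```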

If a coffee cup were mapped in with its top edges and handle sparsely plotted using attractors, then one or more articulated fingertips (in a map view of what is within reach) could all get there in one fast motion, and afterwards have fine speed control over the surface.

An external environment could be drawn in using PyGame or Canvas, but of course what is being modeled in the brain has a hexagonally arranged geometry, where objects have features most easily defined by how cortical columns represent objects in space. There is then a generic border/boundary for something impassable, and, depending on temperature, the edges of a cup are an attractor or something to avoid that fingers could try to navigate past to grab (a new attractor) something cool that just fell inside.
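For the PyGame option, a minimal sketch of the hexagonal drawing area (the axial-coordinate layout is a common convention I’m assuming, not anything from the model above):

```python
# Draw a small hexagonal grid with PyGame.
import math
import pygame

SIZE = 20  # hex radius in pixels

def hex_corners(cx, cy):
    """Six corners of a pointy-top hexagon centered at (cx, cy)."""
    return [(cx + SIZE * math.cos(math.radians(60 * i - 30)),
             cy + SIZE * math.sin(math.radians(60 * i - 30)))
            for i in range(6)]

def axial_to_pixel(q, r):
    """Map axial hex coordinates onto screen pixels."""
    x = SIZE * math.sqrt(3) * (q + r / 2) + 200
    y = SIZE * 1.5 * r + 200
    return x, y

pygame.init()
screen = pygame.display.set_mode((400, 400))
screen.fill((0, 0, 0))
for q in range(-4, 5):
    for r in range(-4, 5):
        pygame.draw.polygon(screen, (0, 120, 0),
                            hex_corners(*axial_to_pixel(q, r)), 1)
pygame.display.flip()
pygame.time.wait(3000)  # keep the window up briefly before exiting
```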

That’s at least my best guess for how the spatially relevant information gets mapped out and then acted upon. The rest of the data would be a replay of past sensory experience, including touching surfaces of a given temperature, which in turn causes a cup to be mapped in as attract or avoid.

2 Likes