ML and Deep Learning to automatically create AGI?

Fair warning - this is rather abstract:

I have been reading through this thread and see concepts described as unitary things.
If I understand the sensory hierarchy correctly, each tranche of maps (layer? level?) compares and contrasts the stream and adds its output to the representation building up in the association region. A side effect is that the object is parsed as it traverses the maps. The features abstracted at a given map are clusters of whatever that map specializes in.
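To make that parsing idea concrete, here is a minimal toy sketch in Python. Everything here is illustrative and hypothetical (the names `make_map`, `parse_stream`, and the toy "visual" features are mine, not from any real model): each "map" abstracts the one feature it specializes in, contributes it to a shared association representation, and passes the stream along the hierarchy.

```python
# Hypothetical sketch: a stream traverses a sequence of "maps"; each map
# abstracts its specialty and contributes to an association representation.

def make_map(name, extract):
    """A 'map' specializes in one feature type."""
    def apply(stream, association):
        association[name] = extract(stream)  # contribute to the representation
        return stream                        # stream continues up the hierarchy
    return apply

def parse_stream(stream, maps):
    """Traverse the hierarchy; the object is parsed as a side effect."""
    association = {}
    for m in maps:
        stream = m(stream, association)
    return association

# Toy 'visual' hierarchy: a color channel early on, texture a little higher up.
maps = [
    make_map("color",   lambda s: s["dominant_color"]),
    make_map("texture", lambda s: s["texture"]),
]

percept = parse_stream({"dominant_color": "red", "texture": "rough"}, maps)
print(percept)  # {'color': 'red', 'texture': 'rough'}
```

The point of the sketch is only the data flow: the finished `percept` is a distributed collection of features, each contributed by the map that specializes in it.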

I see the end effect as a multiply connected manifold, with each map-to-map connection being a possible portal, each pointing in a direction dictated by the content being parsed. I know this will sound bizarre, but I think of it as a big PVC pipe structure where the pipes are the fiber tracts and the joints act like channel tuners, each playing some program to the next joint. The finished product is a high-level concept that represents the contents of perception. Early in one end of the visual pipe complex, for example, is a channel that signals red.

Somewhere in the same plumbing (a little higher up) is a texture property map. The contents of one of those nodes have been probed and visualized in this paper:


I suspect that this same basic technique can be used to probe some of the other map contents in the processing stream.
This process continues until it all terminates in the hub of the parietal lobe.

Similar things are going on in the temporal lobe and frontal lobe, with the lingua franca being hex-grid coding. The difference in the frontal lobe is that the high-level coding is the starting point, and it is unfolded until it reaches the motor drivers along the central sulcus.

Summarizing: the contents of perception are the current collection of these parsed feature streams, assembled in the association region as a stable hex-grid. The features that make up that representation are parsed into a distributed representation, both within a map and along the hierarchy of maps. Part of what makes this work is the learned connection patterns between maps. I see the connections in this parse tree as bidirectional, so the features are reactivated by thinking of the object. The contents of the parse tree are built by experience and are not preloaded.
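The bidirectional, experience-built parse tree can be sketched as a toy associative structure. This is purely illustrative (the class `ParseTree` and its methods are my invented names, not anyone's actual model): connections are learned, features assemble bottom-up into a concept, and activating the concept top-down reactivates its features.

```python
# Hypothetical sketch of a bidirectional parse tree: learned, not preloaded.

class ParseTree:
    def __init__(self):
        self.up = {}    # feature -> concept (bottom-up connections)
        self.down = {}  # concept -> set of features (top-down connections)

    def learn(self, concept, features):
        """Connections are built by experience, not preloaded."""
        for f in features:
            self.up[f] = concept
        self.down.setdefault(concept, set()).update(features)

    def perceive(self, features):
        """Bottom-up: a collection of parsed features assembles into a concept."""
        concepts = {self.up[f] for f in features if f in self.up}
        return concepts.pop() if len(concepts) == 1 else None

    def imagine(self, concept):
        """Top-down: thinking of the object reactivates its features."""
        return self.down.get(concept, set())

tree = ParseTree()
tree.learn("apple", ["red", "round", "smooth"])
print(tree.perceive(["red", "round"]))  # apple
print(sorted(tree.imagine("apple")))    # ['red', 'round', 'smooth']
```

The same connections serve both directions, which is the property the paragraph above leans on: recognition and imagination run over one learned structure.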
