Why do members here think DL-based methods can't achieve AGI?

No video, etc. I'm not an academic type (no offence meant to anyone here), just trying my own thing, so somewhere between the local village idiot and a caveman, lol.

I went down the POS (part-of-speech) rabbit hole a while back and realised that the furious green sheep (Chomsky's "colorless green ideas sleep furiously") actually had a real message, lol.

I don't have anything working at a level I'd consider any sort of proof; this post is conjecture-stage research. The example is learned feed-forward, in a single pass.
The resulting structure (using the split in a particular way) is what I think is critical: the hierarchical structure that emerges appears to merge episodic-type memory, learned feed-forward, with a predictive pattern-memory capability. That in turn allows continuous learning, which in my mind is an absolute must for any system. Still a looooong way to go though, and I still know nothing.
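To make the conjecture a bit more concrete, here is a toy sketch (entirely my own illustrative names and structure, not the actual system) of what "episodic memory merged with predictive pattern memory, learned feed-forward in a single pass" could look like: each sequence is inserted once into a hierarchy whose nodes remember both the exact sub-sequences seen (episodic) and counts of what followed them (predictive), so new data can keep arriving without retraining.

```python
from collections import defaultdict

class SeqNode:
    """One node in the hierarchy: exact continuations seen so far
    (episodic) plus counts of the tokens that followed (predictive)."""
    def __init__(self):
        self.children = {}                   # episodic: exact sub-sequences
        self.next_counts = defaultdict(int)  # predictive: what followed this path

class SeqMemory:
    """Single-pass, feed-forward learner: each sequence is inserted once
    and never revisited, so learning is naturally continuous."""
    def __init__(self, depth=3):
        self.root = SeqNode()
        self.depth = depth  # maximum context length stored

    def learn(self, tokens):
        # one feed-forward pass over the sequence; no epochs, no backprop
        for i in range(len(tokens)):
            node = self.root
            for tok in tokens[i:i + self.depth]:
                node.next_counts[tok] += 1   # tok followed the path to `node`
                node = node.children.setdefault(tok, SeqNode())

    def predict(self, context):
        # walk the hierarchy with the recent context, backing off to
        # shorter contexts when the full one has not been seen
        for start in range(len(context)):
            node, ok = self.root, True
            for tok in context[start:]:
                if tok not in node.children:
                    ok = False
                    break
                node = node.children[tok]
            if ok and node.next_counts:
                return max(node.next_counts, key=node.next_counts.get)
        return None

m = SeqMemory(depth=3)
m.learn("the cat sat on the mat".split())
m.learn("the cat ate the fish".split())
print(m.predict("the".split()))  # "cat" followed "the" most often
```

This is obviously miles from DL, but it shows the flavour: the hierarchy doubles as a verbatim record of what was experienced and as a predictor, and both are updated in the same single forward pass.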