The book “Perceptrons” (Minsky and Papert, 1969) showed convincingly that any number of layers of linear units can be replaced by a suitably constructed single layer. Such networks therefore inherit the single layer’s limits, such as being unable to compute the XOR function no matter how many layers you stack.
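The collapse argument is just matrix multiplication: composing two linear maps gives another linear map. A minimal sketch (the weight shapes here are arbitrary illustration, not from the book):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two purely linear layers: y = W2 @ (W1 @ x), no activation between them.
W1 = rng.standard_normal((3, 2))
W2 = rng.standard_normal((1, 3))

# They collapse into the single equivalent layer W = W2 @ W1.
W = W2 @ W1

x = rng.standard_normal(2)
assert np.allclose(W2 @ (W1 @ x), W @ x)
```

So depth alone buys nothing: whatever the two-layer linear network computes, the single layer W computes too, and a single linear layer cannot separate the XOR points.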
This book effectively killed research in the area for a long time.
True believers kept at it and eventually worked out how to surpass this basic limit.
Adding a nonlinear (squashing) activation to the mix transcends the limits of the classic perceptron: the layers can no longer be collapsed into one, so it becomes possible to form islands of meaning between the layers.
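To make that concrete, here is a hand-built two-layer network that solves XOR using a hard-threshold nonlinearity between the layers. The particular weights are my own illustrative choice; the point is only that the nonlinearity is what stops the layers collapsing:

```python
import numpy as np

def step(z):
    # Hard-threshold nonlinearity: fires (1) when input is positive.
    return (z > 0).astype(int)

# Hidden units: h0 ~ (x0 OR x1), h1 ~ (x0 AND x1).
W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])
b1 = np.array([-0.5, -1.5])
# Output: fires when h0 is on and h1 is off, i.e. XOR.
W2 = np.array([1.0, -2.0])
b2 = -0.5

def xor_net(x):
    h = step(W1 @ x + b1)      # without this nonlinearity the two
    return step(W2 @ h + b2)   # layers would collapse into one

for a in (0, 1):
    for b in (0, 1):
        assert xor_net(np.array([a, b])) == (a ^ b)
```

The hidden layer carves out the "islands": one unit detects "at least one input on", the other "both inputs on", and the output layer combines those intermediate meanings in a way no single linear layer could.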
Layers of HTM modules gain much the same benefit. Adding the H (hierarchy) to HTM radically enhances the representational and computational possibilities. It is no longer the same thing: it does more. The SDRs are now able to pool, both spatially and temporally, and these pools can be sampled to form conjunctions of their semantic meanings.
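As a toy illustration of the pooling idea (not the actual HTM algorithms: I am assuming SDRs as fixed-size sparse binary vectors, and modelling temporal pooling as a simple union over a sequence), membership in a pool can be read off from bit overlap:

```python
import numpy as np

rng = np.random.default_rng(1)
N, ACTIVE = 2048, 40   # HTM-style sizing: 2048 bits, ~2% active

def random_sdr():
    # A sparse distributed representation as a binary vector.
    sdr = np.zeros(N, dtype=np.uint8)
    sdr[rng.choice(N, ACTIVE, replace=False)] = 1
    return sdr

# A "temporal pool": the union of the SDRs seen across a sequence,
# a single stable representation standing for the whole sequence.
sequence = [random_sdr() for _ in range(5)]
pool = np.bitwise_or.reduce(sequence)

# Members of the sequence overlap the pool on every active bit;
# an unrelated SDR overlaps only by chance.
member_overlap = int(sequence[0] @ pool)
stranger_overlap = int(random_sdr() @ pool)
assert member_overlap == ACTIVE
assert stranger_overlap < member_overlap
```

Sampling bits from several such pools in a higher layer then yields a representation that is active only when a particular conjunction of lower-level meanings is present, which is the "islands of meaning" effect again, one level up.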