Yes, it’s hard to say with certainty what will work better or worse, especially since we’re not working from a precise mathematical theory; thorough testing is kind of our only hope. I know biological synapses in the CNS can grow and decay in strength very rapidly, especially at dendritic spines. Continuous, fine-grained adjustment of synapse strengths could be an important ingredient of learning in itself, beyond static representational capacity.
Some other thoughts:
Even with 4.7 bits of information per synapse, the activation map at each iteration would still exist as a binary SDR. Biological neurons either fire or they don’t because of their chemistry; there’s no such thing as a partial action potential. That much is known. What is not binary in neuroscience is the strength of synaptic transmission, and that is one place where HTM is inconsistent with biology: all HTM synaptic transmissions are functionally equal regardless of the corresponding synapse strength. Using floating point synapse strengths would give each neuron a richer calculation for deciding whether the input from its dendritic branches should or should not cause an action potential (talking about columns, of course, in the context of the SP). Instead of saying “a neuron (column) will fire if any 10 of its synapses are excited at the same time,” which ignores per-synapse strength, you could set a realistic “voltage disturbance threshold” for each neuron (column) that is either met or not met by the sum of the voltage disturbances caused by the excited synapses, with each synapse contributing a magnitude of voltage change directly proportional to its strength. That is consistent with the biology.
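Here’s a toy sketch of the difference between the two rules (all names and thresholds are made up for illustration, not NuPIC’s actual API):

```python
CONNECTED_THRESHOLD = 0.5   # HTM-style: a synapse is "connected" if strength >= this
ACTIVE_SYNAPSE_COUNT = 10   # binary rule: fire if >= 10 connected synapses are active
VOLTAGE_THRESHOLD = 6.0     # hypothetical weighted rule: fire if summed "voltage" reaches this

def fires_binary(strengths, active_inputs):
    """Standard HTM-style rule: every connected synapse counts equally."""
    n_active_connected = sum(
        1 for s, a in zip(strengths, active_inputs)
        if a and s >= CONNECTED_THRESHOLD
    )
    return n_active_connected >= ACTIVE_SYNAPSE_COUNT

def fires_weighted(strengths, active_inputs):
    """Weighted rule: each excited synapse disturbs the 'voltage' in
    proportion to its strength; the column fires if the sum crosses a threshold."""
    disturbance = sum(s for s, a in zip(strengths, active_inputs) if a)
    return disturbance >= VOLTAGE_THRESHOLD

# Where the two rules diverge: 9 very strong active synapses.
# Binary rule: 9 connected < 10, so the column stays silent.
# Weighted rule: 9 * 0.9 = 8.1 >= 6.0, so the column fires.
strengths = [0.9] * 9 + [0.0] * 31
inputs = [True] * 9 + [False] * 31
print(fires_binary(strengths, inputs))    # False
print(fires_weighted(strengths, inputs))  # True
```

Either way, the column’s output is still a single bit in the SDR; only the decision to fire becomes strength-aware.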
This all still leads to a binary SDR of active and inactive neurons (columns) at each timestep, though. HTM networks (and brains) are based on large distributed patterns of independently acting neurons, and the contribution of any single neuron (column) to an activation pattern in an HTM network is negligible; remember the experiments in the Numenta papers where a percentage of randomly chosen neurons were turned off in a learned network and the network still recovered and performed well. Thus, at large enough network sizes, the precise activity of a single neuron (column) is functionally irrelevant to an activation pattern; we only need to know whether it fired. Fine-grained information about the “contribution” of each fired neuron (column) to a given pattern not only has no biological parallel, it is also functionally irrelevant according to the theory. And from a pure representational standpoint, binary presents no limitation on the amount of theoretical information that can be encoded compared to any other base.
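The robustness argument is easy to demonstrate on paper. A quick sketch (sizes and thresholds are typical HTM-style choices, not taken from any specific Numenta experiment): silence a fraction of a stored SDR’s active columns and check that its overlap with the original still clears a recognition threshold, while an unrelated random SDR does not come close.

```python
import random

random.seed(1)

SDR_SIZE = 2048
N_ACTIVE = 40         # ~2% sparsity, a common HTM sizing
MATCH_THRESHOLD = 20  # recognize a stored pattern if >= 20 active bits overlap

def random_sdr():
    """A random binary SDR, represented as the set of active column indices."""
    return set(random.sample(range(SDR_SIZE), N_ACTIVE))

def knock_out(sdr, fraction):
    """Silence a random fraction of the active columns (simulated cell death)."""
    keep = int(len(sdr) * (1 - fraction))
    return set(random.sample(sorted(sdr), keep))

stored = random_sdr()
damaged = knock_out(stored, 0.3)  # 30% of the active columns turned off

# 28 of 40 active bits survive: still comfortably above the match threshold.
print(len(damaged & stored))
# An unrelated random SDR overlaps the stored one almost not at all
# (expected overlap ~ 40 * 40 / 2048, i.e. less than one bit).
print(len(random_sdr() & stored))
```

No individual column matters here; only the large-scale overlap does, which is the point.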
I don’t know enough about the neuroscience/neurochemistry of NMDA spikes yet to comment on the relevance of floating point synapse strengths to forming the predictive states of individual neurons in temporal memory. If it’s consistent with biology for the predicted states of neurons to be non-binary (i.e., one predicted neuron could be more depolarized than another), which I suspect it is, it’d be similarly interesting to see whether a floating point voltage disturbance value, instead of a binary “predicted or not predicted,” leads to better performance or insight.
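If graded predictive states did turn out to be plausible, the change is small to sketch. This is purely hypothetical, not HTM’s actual temporal memory algorithm: a cell’s “depolarization” becomes the summed strength of its active distal synapses, so two cells predicted by the same context can be ranked rather than treated identically.

```python
def depolarization(distal_strengths, presynaptic_active):
    """Hypothetical graded predictive state: sum the strengths of the
    distal synapses whose presynaptic cells are currently active."""
    return sum(s for s, a in zip(distal_strengths, presynaptic_active) if a)

# Two cells predicted by the same context, but with different synapse strengths:
cell_a = depolarization([0.9, 0.8, 0.7], [True, True, False])  # 0.9 + 0.8 = 1.7
cell_b = depolarization([0.3, 0.4, 0.5], [True, True, False])  # 0.3 + 0.4 = 0.7

# Binary temporal memory would mark both simply as "predicted"; a graded
# scheme could, e.g., break ties at activation time by depolarization.
print(cell_a > cell_b)  # True
```

Whether that extra ordering buys anything in practice is exactly the kind of thing that would need the thorough testing mentioned above.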