This is the paper that tells me I can use WTA in place of inhibition (which over-complicates the implementation).
I can even use inhibition and WTA together without a problem.
Right?
Second question: I could not figure out how soft-WTA works. If somebody gets it, can you explain it? An example would help.
On the computational power of winner-take-all
http://dx.doi.org/10.1162/089976600300014827
Wolfgang Maass is a beast. Thanks for sharing.
Reminds me of “divisive normalization”. I read a review article on it a while back:
@mraptor
I believe soft-WTA means that each neuron is given a “vote” or “probability” indicating how strongly it qualifies as the winner, whereas in hard-WTA a single neuron is selected as the winner and no analog metric is needed.
For instance, for three neurons a, b, and c, the results for soft-WTA and hard-WTA respectively would look like this:
soft-WTA (probability):
a: 0.7, b: 0.2, c: 0.1
soft-WTA (rank):
a: 1, b: 2, c: 3
hard-WTA:
a: 1, b: 0, c: 0
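If it helps to see the distinction in code, here is a minimal sketch (my own interpretation, not from the paper): it assumes soft-WTA by probability is just a softmax over raw activations, soft-WTA by rank is a sort, and hard-WTA is a one-hot argmax. The function names and the activation values are made up for illustration.

```python
import numpy as np

def soft_wta_probability(activations, temperature=1.0):
    """Softmax over activations: each neuron gets a 'win probability'."""
    z = np.asarray(activations, dtype=float) / temperature
    z -= z.max()                 # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def soft_wta_rank(activations):
    """Rank neurons by activation: 1 = strongest, N = weakest."""
    order = np.argsort(-np.asarray(activations))   # indices, strongest first
    ranks = np.empty_like(order)
    ranks[order] = np.arange(1, len(order) + 1)
    return ranks

def hard_wta(activations):
    """One-hot vector: 1 for the single winner, 0 for everyone else."""
    out = np.zeros(len(activations), dtype=int)
    out[int(np.argmax(activations))] = 1
    return out

# Three neurons a, b, c with hypothetical raw activations
acts = [2.0, 0.7, 0.1]
print(soft_wta_probability(acts))  # ~[0.70, 0.19, 0.11] -> a: 0.7, b: 0.2, c: 0.1
print(soft_wta_rank(acts))         # [1 2 3]             -> a: 1, b: 2, c: 3
print(hard_wta(acts))              # [1 0 0]             -> a: 1, b: 0, c: 0
```

In other words, hard-WTA throws away everything except the identity of the winner, while soft-WTA keeps a graded signal (probability or rank) for every neuron, which can then be used downstream, e.g. for weighting or sampling.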