# "⊤" Terminology used in "Avoiding Catastrophe" and "Going Beyond the Point Neuron" papers

Please explain how to interpret or understand the “⊤” notation used within equations in some of Numenta’s research papers. It’s a symbol that looks like “T” — a capital T set as a superscript.

For example: In “Avoiding Catastrophe: Active Dendrites Enable Multi-Task Learning in Dynamic Environments” section 3.1 there is the equation (1):
t̂ = w⊤x + b

What is the best way to interpret this, especially the w⊤x part?

Sincerely,
MRA


matrix transpose


Thanks for the response neel_g. I know of matrix transpose. However, when the transpose appears between two matrices (say matrix w and matrix x) it doesn’t seem to make sense. Any thoughts, suggestions?


I guess they mean w transposed multiplied with x, but you haven’t provided a link to the mentioned document.

It’s standard neural network notation: x is the input vector, w holds the weight parameters, and b is the bias parameter.
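A minimal sketch of that interpretation in NumPy (the vector values and shapes here are illustrative assumptions, not taken from the paper):

```python
import numpy as np

# Feedforward activation t_hat = w^T x + b
# Assumed shapes: w and x are length-3 vectors, b is a scalar.
w = np.array([0.5, -1.0, 2.0])   # weight vector
x = np.array([1.0, 2.0, 3.0])    # input vector
b = 0.1                          # bias

# For 1-D arrays, w.T is the same array as w, and @ computes the dot product.
t_hat = w.T @ x + b
print(t_hat)   # 0.5*1 - 1.0*2 + 2.0*3 + 0.1 = 4.6
```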


Thanks for responding cezar_t. It makes tons of sense to interpret it that way. So, to confirm, another way of writing t̂ = w⊤x + b would be:

t̂ = (w⊤)(x) + b

Feedforward activation t̂ equals: w transpose, multiplied by x, plus b.

The paper is on Numenta.com under research publications, and the link is: https://www.frontiersin.org/articles/10.3389/fnbot.2022.846219/full
My apologies for not including it in the original post.


**w**⊤ indeed means the transpose of **w**, which is likely a vector.
So **w**⊤**x** results in the dot product between **w** and **x**.
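To make the matrix-transpose and dot-product views concrete, here is a small hedged illustration (the vector values are arbitrary examples): treating w and x as column vectors, w⊤x is a (1×3)·(3×1) product whose single entry equals the dot product of the two vectors.

```python
import numpy as np

# Column vectors of shape (3, 1); values are arbitrary examples.
w = np.array([[1.0], [2.0], [3.0]])
x = np.array([[4.0], [5.0], [6.0]])

# Matrix view: (1x3) @ (3x1) -> a (1x1) matrix; .item() extracts the scalar.
matrix_form = (w.T @ x).item()

# Dot-product view on the flattened 1-D vectors.
dot_form = np.dot(w.ravel(), x.ravel())

print(matrix_form, dot_form)   # both are 32.0 (1*4 + 2*5 + 3*6)
```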


Thanks Hyunsung. I’m just getting back into this after a long hiatus… so happy to clear up the annoying little things I don’t immediately ‘get’.

MRA
