It seemed like there was a bit of confusion early on trying to define eigenvectors and eigenvalues. These concepts are not really that difficult to understand. So the professor in me feels compelled to try and explain things a bit more clearly.
One could interpret the action of a matrix times a vector as a mapping from one linear vector space into another; for eigenvectors and eigenvalues to be defined, the matrix must be square, so the input and output spaces coincide. The eigenvectors are by definition the directions that are invariant under this mapping. In other words, any vector aligned with an eigenvector will not change its direction (only its magnitude) under the action of this matrix-vector multiplication. The factor by which the magnitude of such a vector is scaled is exactly the eigenvalue associated with that eigenvector.
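As a concrete sketch of that invariance (using NumPy and a small symmetric matrix I chose purely for illustration):

```python
import numpy as np

# A simple symmetric matrix chosen for illustration.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Its eigenvectors lie along (1, 1) and (1, -1), with eigenvalues 3 and 1.
v = np.array([1.0, 1.0])

Av = A @ v
# A @ v = 3 * v: the direction is unchanged, only the magnitude scales.
print(Av)       # [3. 3.]
print(Av / v)   # [3. 3.]  -- each component scaled by the eigenvalue 3
```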
I think this property is what Marcus was trying to explain when he was describing the action of repeatedly applying the matrix to the vector. The misunderstanding probably arose from the specific example Marcus chose to try and explain the concept (further confounded by the presence of the separate examples that immediately followed it).
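The repeated-application idea can be sketched as a simple power iteration; the matrix below is my own illustrative example, not the one from the discussion. A generic starting vector swings toward the eigenvector with the largest eigenvalue, because that component gets scaled up fastest:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])   # dominant eigenvalue 3, eigenvector along (1, 1)

x = np.array([1.0, 0.0])     # a generic starting vector
for _ in range(50):
    x = A @ x
    x = x / np.linalg.norm(x)  # renormalize so the numbers stay manageable

print(x)   # approaches [0.7071..., 0.7071...], i.e. the (1, 1) direction
```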
I will just add here that linear combinations of eigenvectors are not in general themselves eigenvectors. (One clarification: eigenvectors are not mutually orthogonal by necessity; orthogonality is guaranteed only for symmetric, or more generally normal, matrices. What is true in general is that eigenvectors belonging to distinct eigenvalues are linearly independent.) For a general vector written as a linear combination of eigenvectors, the matrix-vector multiplication scales each eigenvector component by the eigenvalue associated with that eigenvector. Unless all of the eigenvalues involved are identical (uniform scaling), the resulting mapped vector will not be aligned with the original vector, and hence will not meet the requirement for being an eigenvector.
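A quick numerical illustration of this point, again with a small symmetric matrix of my own choosing:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

v1 = np.array([1.0, 1.0])    # eigenvector with eigenvalue 3
v2 = np.array([1.0, -1.0])   # eigenvector with eigenvalue 1

w = v1 + v2                  # = [2, 0], a mix of the two eigen-directions
Aw = A @ w                   # = 3*v1 + 1*v2 = [4, 2]

# Aw is not a scalar multiple of w: the two components were scaled
# by different eigenvalues, so the direction changed.
print(w, Aw)   # [2. 0.] [4. 2.]
```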
Repeated eigenvalues indicate that the associated eigenspace can have dimension greater than one, so the mapping acts as a uniform scaling on that whole subspace. In that case the choice of eigenvectors is somewhat flexible: any set of linearly independent directions that span the subspace is a valid set of eigenvectors (and for a symmetric matrix they can always be chosen mutually orthogonal).
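A minimal sketch of the repeated-eigenvalue case (the diagonal matrix is purely illustrative):

```python
import numpy as np

# Eigenvalue 2 is repeated; its eigenspace is the entire xy-plane.
A = np.diag([2.0, 2.0, 5.0])

# Any vector in that plane is an eigenvector -- the basis choice is free.
u = np.array([1.0, 1.0, 0.0])
print(A @ u)   # [2. 2. 0.] = 2 * u
```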
A word of caution here: a non-zero determinant alone does not guarantee a full set of eigenvectors. A symmetric (more generally, normal) matrix always has a set of mutually orthogonal eigenvectors that completely span the input space, but a general square matrix can be defective (a shear is the classic example) and have too few independent eigenvectors even when its determinant is non-zero. Degeneracy (a zero determinant) is a separate issue: it implies some redundancy in the matrix (e.g. some rows are linear combinations of other rows), which means zero is an eigenvalue. In that case, the mapping operation can be interpreted as projecting vectors from the input space onto a lower-dimensional subspace of the target space.
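A small sketch of the degenerate case, with a made-up matrix whose second row duplicates the first:

```python
import numpy as np

# Second row is a copy of the first, so the determinant is zero.
A = np.array([[1.0, 2.0],
              [1.0, 2.0]])

print(np.linalg.det(A))   # 0.0 (up to floating point)

# Every output lands on the line spanned by (1, 1): the whole 2-D input
# space gets collapsed onto a 1-D subspace of the target space.
for x in [np.array([1.0, 0.0]),
          np.array([0.0, 1.0]),
          np.array([3.0, -1.0])]:
    print(A @ x)
```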
Things get a little fuzzier when you start talking about eigenfunctions of operators on function spaces. It’s much more difficult to visualize these entities as vectors, since they are typically functions distributed over space or time, and their action is less about preserving specific directions than about preserving functional relationships between local coordinates. To me it’s usually easier to switch terms and start talking about orthogonal basis functions. But before I dig myself too deep, I’ll end this post with a traditional academic dodge: discussion of this topic is beyond the scope of this report. If there is sufficient interest, I can continue, possibly in a new thread or sub-forum.