Humans and dogs cannot effectively communicate anything useful about their own internal states… why should an AI?
At best, you should expect that an AI has managed to do what humans do, which is to say: associate some lossily compressed noises with some notion of observed model generalities, then use those noises to light up similarly generalized model states in other intelligences. When you say “cat”, there are millions of possible “cat” generalizations that might light up in my head, formed from previous experiences I’ve had with cats, things I’ve read about cats, and whatever else happens to be impinging on my consciousness at the time. It’s not like there’s A neuron, or even a consistent set of neurons, that lights up every time I think of a cat… there’s some cluster of a whole bunch of ideas about cats (almost all of which are also true of dogs) that may or may not light up depending on circumstances.
I have to assume an AI system would be just as ambiguous in its internal representation of the world.
As for the labeling thing… when the AI needs to communicate something about what its current representation of “cat” might be, it should use the word “cat”, but its “cat” should be as ambiguous as yours, and it should change from one instance of saying the word to the next.
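You can actually poke at this claim with any contextual language model: in a BERT-style encoder, the hidden vector for “cat” isn’t fixed, it shifts with the surrounding sentence, and it sits close to “dog” besides. Here’s a minimal sketch, assuming the Hugging Face transformers library; the model choice (bert-base-uncased) and the example sentences are arbitrary, not anything canonical:

```python
# Probe how a contextual encoder represents "cat": the vector for the
# same word differs across contexts, and overlaps heavily with "dog".
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def word_vector(sentence: str, word: str) -> torch.Tensor:
    """Return the contextual hidden state for `word` within `sentence`."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # (seq_len, dim)
    word_id = tokenizer.convert_tokens_to_ids(word)
    position = (inputs["input_ids"][0] == word_id).nonzero()[0].item()
    return hidden[position]

cat_a = word_vector("the cat slept on the warm windowsill", "cat")
cat_b = word_vector("the feral cat hissed at the vet", "cat")
dog   = word_vector("the dog slept on the warm windowsill", "dog")

cos = torch.nn.functional.cosine_similarity
print(f"cat vs cat (different contexts): {cos(cat_a, cat_b, dim=0):.3f}")
print(f"cat vs dog (same context):       {cos(cat_a, dog, dim=0):.3f}")
```

If the picture above is right, the two “cat” vectors won’t be identical, and the “cat”/“dog” similarity will be uncomfortably high: the label is stable, the representation behind it isn’t.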