I’ve been trying to learn quantum mechanics from a formal point of view, so I picked up Dirac’s book. In the fourth edition, on page 33, he starts from this:

ξ|ξ′⟩ = ξ′|ξ′⟩

(where ξ is a linear operator, ξ′ is an eigenvalue, and |ξ′⟩ is the corresponding eigenket), and this:

ϕ(ξ) = a₁ξⁿ + a₂ξⁿ⁻¹ + ⋯ + aₙ = 0

(where ϕ is an algebraic expression, i.e. a polynomial), and from these he deduces

ϕ(ξ)|ξ′⟩ = ϕ(ξ′)|ξ′⟩
I understand that the LHS is a linear operator acting on a ket, while the RHS is a ket multiplied by a number.
What I don’t get is how the step is justified. He seems to have applied ϕ to both sides. But shouldn’t that give ϕ(ξ|ξ′⟩)=ϕ(ξ′|ξ′⟩)?
That expression makes no sense as written, since I doubt you can apply an algebraic expression to a ket. (I’m not sure of this, but to me |A⟩², etc., seems meaningless, as I don’t think you can multiply a ket by a ket and get another ket.)
So how did he deduce the expression?
The context: Dirac is proving that an eigenvalue ξ′ of ξ must satisfy ϕ(ξ′)=0 if ϕ(ξ)=0.
Oh, and (no need to answer this if you don’t want): is there any reason Dirac introduces the confusing notation of denoting the eigenvalues and eigenkets of an operator by the same symbol as the operator itself? Usually, different kinds of objects (e.g. matrices, vectors, and numbers) get different classes of symbols (capital letters, letters with overbars, and lowercase letters, respectively).
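(Not from Dirac’s text, just a quick numerical illustration of the statement being proved.) If an operator satisfies an algebraic equation ϕ(ξ) = 0, each of its eigenvalues must satisfy ϕ(ξ′) = 0. A projector P obeys P² − P = 0, so every eigenvalue must obey λ² − λ = 0, i.e. be 0 or 1. A minimal sketch with NumPy:

```python
import numpy as np

# Build a rank-1 projector P = |v><v|; it satisfies phi(P) = P^2 - P = 0.
v = np.array([1.0, 2.0, 2.0])
v /= np.linalg.norm(v)
P = np.outer(v, v)

assert np.allclose(P @ P - P, 0)   # phi(xi) = 0 holds as an operator identity

# Every eigenvalue xi' must then satisfy phi(xi') = xi'^2 - xi' = 0.
eigenvalues = np.linalg.eigvalsh(P)
print(all(abs(lam**2 - lam) < 1e-12 for lam in eigenvalues))  # True
```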
The key is that ϕ is a polynomial, so ϕ(ξ) is built from powers of ξ, and each power acts on the ket one factor of ξ at a time:

ξ²|ξ′⟩ = ξ(ξ|ξ′⟩) = ξ(ξ′|ξ′⟩) = ξ′(ξ|ξ′⟩) = ξ′²|ξ′⟩

Continuing like this, you see that applying any power of ξ to |ξ′⟩ just multiplies |ξ′⟩ by ξ′ to that power:

ξᵏ|ξ′⟩ = ξ′ᵏ|ξ′⟩

So any sum of powers of ξ applied to |ξ′⟩ ends up multiplying |ξ′⟩ by that same polynomial evaluated at the number ξ′, which is exactly ϕ(ξ)|ξ′⟩ = ϕ(ξ′)|ξ′⟩. Nobody ever applies ϕ to a ket; ϕ(ξ) is itself a linear operator, and that operator is applied to the ket.
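To see this concretely, here is a small numerical check (my own sketch with NumPy, not from the book): take a real symmetric matrix as ξ, a polynomial ϕ, and verify that ϕ(ξ)|ξ′⟩ = ϕ(ξ′)|ξ′⟩ for every eigenpair.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
xi = (A + A.T) / 2                       # Hermitian (real symmetric) operator

coeffs = [1.0, -3.0, 2.0, 5.0]           # phi(x) = x^3 - 3x^2 + 2x + 5

def phi_of_operator(M, coeffs):
    """Evaluate the polynomial at a matrix via Horner's scheme."""
    result = np.zeros_like(M)
    for c in coeffs:
        result = result @ M + c * np.eye(len(M))
    return result

phi_xi = phi_of_operator(xi, coeffs)     # phi(xi): a linear operator
eigvals, eigvecs = np.linalg.eigh(xi)    # eigenvalues and column eigenvectors

ok = True
for lam, ket in zip(eigvals, eigvecs.T):
    lhs = phi_xi @ ket                   # operator phi(xi) acting on the ket
    rhs = np.polyval(coeffs, lam) * ket  # number phi(xi') times the ket
    ok = ok and np.allclose(lhs, rhs)
print(ok)  # True
```

The point the code makes explicit: ϕ(ξ) is computed once as a matrix (an operator), and only then applied to each eigenvector; at no point is ϕ applied to a vector.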