I had been using the term “Heywood case” somewhat informally to refer to situations where an online, finite-window, iteratively updated variance estimate becomes negative due to numerical precision issues. (I am using a variant of Welford’s method to add new data and to remove older data.) I was under the impression that the term applied to any situation where a variance estimate becomes negative, whether from numerical error or from modeling error, but a colleague was confused by my usage. A Google search doesn’t turn up much, other than that the term is used in factor analysis and seems to refer to the consequences of a negative variance estimate. What is the precise definition? And who was the original Heywood?
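For context, the setup described above can be sketched as follows. This is a minimal illustration of a Welford-style running mean/variance with sample removal, not the asker’s actual code; the class and method names are hypothetical. Removing a sample reverses the update step, and the floating-point cancellation involved is what can push the running sum of squared deviations slightly below zero.

```python
class RollingMoments:
    """Welford-style running mean/variance supporting removal of old samples.

    Hypothetical sketch: removal is the algebraic inverse of addition, which
    is where cancellation error can make the variance estimate negative.
    """

    def __init__(self):
        self.n = 0        # number of samples currently in the window
        self.mean = 0.0   # running mean
        self.m2 = 0.0     # running sum of squared deviations from the mean

    def add(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def remove(self, x):
        # Inverse of add(); only valid if x was previously added.
        delta = x - self.mean                          # deviation from post-add mean
        self.mean = (self.n * self.mean - x) / (self.n - 1)
        self.m2 -= delta * (x - self.mean)             # undo the m2 contribution
        self.n -= 1

    def variance(self):
        # m2 can dip below zero through floating-point cancellation after
        # many add/remove cycles; clamping at zero is one common guard.
        if self.n < 2:
            return float("nan")
        return max(self.m2, 0.0) / (self.n - 1)
```

In exact arithmetic `m2` is always nonnegative, so any negative value is purely a rounding artifact; the clamp (or a periodic recomputation from the raw window) is the usual defense.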
Answer
Googling “Heywood negative variance” quickly answers these questions. A recent (2008) paper by Kolenikov & Bollen, for example, indicates that:

“ ‘Heywood cases’ [are] negative estimates of variances or correlation estimates greater than one in absolute value…”

“The original paper (Heywood 1931) considers specific parameterizations of factor analytic models, in which some parameters necessary to describe the correlation matrices were greater than 1.”
Reference
Heywood, H. B. (1931), ‘On finite sequences of real numbers’, Proceedings of the Royal Society of London, Series A, Containing Papers of a Mathematical and Physical Character 134(824), 486–501.
Attribution
Source: Link, Question Author: shabbychef, Answer Author: whuber