Identification
Traditionally, the theory of the identification of models requires a formal mathematical analysis, although most structural equation modeling computer programs can determine whether a given model is identified. It is still helpful to have rules of thumb, because researchers need to know the identification status of their models before those models are estimated. Also, models that are identified in principle may not be identified in practice when they are actually estimated, so the researcher needs to know whether the model is empirically identified.
The following is a set of rules that can be used to check whether a given model is identified. What follows should be taken as a guide, not as gospel. The rules, by no means exhaustive, are helpful in determining identification.
If both the structural and the measurement models are identified, then the entire model is identified. For the entire model to be identified, the structural model must be identified. Some underidentified measurement models can be identified when the structural model is overidentified (see Condition B3b in the next section).
These rules do not consider issues of identification in models with means (see the discussion of the identification of factor means) or in models with multiple groups.
Measurement Model Identification
These rules primarily concern models in which each measure loads on only one construct. Fortunately, most estimated models are of this type. If a variable loads on more than one construct, that variable is set aside and is discussed under Condition E. For the measurement model to be identified, five conditions must hold. Conditions A and B must be satisfied by each construct, Condition C refers to each pair of constructs, and Condition D to each measure or indicator. Condition E refers to indicators that load on two or more constructs.
Condition A: Scaling the Latent Variable
Because a latent variable is unmeasured, its units of measurement must be fixed by the researcher. This condition concerns how the units of measurement of each latent variable are fixed. Each construct must have either
1. one fixed nonzero loading (usually 1.0),
2. for causal factors, a fixed factor variance (usually 1.0), or for factors that are caused, a fixed factor disturbance variance (usually 1.0), or
3. in some rare cases (see an example), a fixed causal path (usually 1.0) leading into or out of the latent variable.
Some computer programs require that only strategy 1 be used, but both of the other strategies are perfectly legitimate. For pure measurement models or confirmatory factor analysis (no causation between latent variables), strategy 2 is often used, and the model reduces to the standard factor analysis model.
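To see why one of these constraints is needed, note that fixing a loading and fixing the factor variance are interchangeable parameterizations of the same implied covariance matrix. The brief sketch below, with made-up loadings and variances, illustrates the equivalence numerically; it is only an illustration, not part of any particular program's syntax.

```python
# Two ways of scaling the same one-factor model (values are invented for illustration):
# fixing the first loading to 1 with a free factor variance, or fixing the factor
# variance to 1 with all loadings free, imply exactly the same covariance matrix.
import numpy as np

loadings = np.array([1.0, 0.8, 0.6])   # strategy 1: first loading fixed to 1.0
factor_variance = 1.44
error_variances = np.diag([0.5, 0.4, 0.3])

sigma_strategy1 = factor_variance * np.outer(loadings, loadings) + error_variances

# strategy 2: factor variance fixed to 1.0, loadings rescaled by sqrt(factor variance)
rescaled = loadings * np.sqrt(factor_variance)
sigma_strategy2 = 1.0 * np.outer(rescaled, rescaled) + error_variances

print(np.allclose(sigma_strategy1, sigma_strategy2))   # True: the scalings are equivalent
```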
Condition B: Sufficient Number of Indicators per Construct
For each of the constructs in the model, at least one of the following three conditions must hold:
1. The construct has at least three indicators whose errors are uncorrelated with each other.
2. The construct has at least two indicators whose errors are uncorrelated with each other, and either
a. both indicators of the construct correlate with a third indicator of another construct, but neither of the two indicators' errors is correlated with the error of that third indicator, or
b. the two indicators' loadings are set equal to each other.
3. The construct has one indicator that meets either of the following conditions:
a. its error variance is fixed to zero or some other a priori value (e.g., the quantity one minus the reliability times the indicator's variance), or
b. there is a variable that can serve as an instrumental variable (see Rule C under Identification of the Structural Model) in the structural model, and the error in the indicator is not correlated with that instrumental variable.
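The disjunctive logic of Condition B can be summarized as a rough checklist. The sketch below only renders the branching of the rules above in Python; the argument names are invented for illustration, and the fuller statements of B2a and B3 above still govern.

```python
# Rough checklist for Condition B on a single construct (illustrative only).
def condition_b_holds(n_indicators,
                      errors_uncorrelated=True,               # B1/B2: errors uncorrelated with each other
                      correlates_with_other_construct=False,  # B2a
                      loadings_set_equal=False,               # B2b
                      error_variance_fixed=False,             # B3a
                      has_instrument=False):                  # B3b
    if n_indicators >= 3 and errors_uncorrelated:
        return True   # Condition B1
    if n_indicators >= 2 and errors_uncorrelated and (
            correlates_with_other_construct or loadings_set_equal):
        return True   # Condition B2a or B2b
    if n_indicators == 1 and (error_variance_fixed or has_instrument):
        return True   # Condition B3a or B3b
    return False

# Example: a construct with two indicators whose loadings are constrained to be equal.
print(condition_b_holds(2, errors_uncorrelated=True, loadings_set_equal=True))  # True
```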
Condition C: Construct Correlations
For every pair of constructs, either
1. there is at least one pair of indicators, one loading on one construct and one loading on the other, whose measurement errors are not correlated with each other, or
2. the correlation between the pair of constructs is specified to be zero (or some other a priori value).
Condition D: Loading Estimation
For every indicator, there must be at least one other indicator (not necessarily of the same construct) with which it does not share correlated measurement error. If the three conditions above hold, the model would still be identified if all indicators that do not meet this condition were dropped from the model.
Condition E: Estimation of Double Loadings
One important model in which all variables have double loadings is the classic model for the multitrait-multimethod matrix (Campbell & Fiske, 1959), in which each measure loads on a trait factor and a method factor. Kenny and Kashy (1992) have shown that there are serious empirical identification difficulties with this model, and all but the most adventurous researchers are well advised to avoid estimating it. However, a subset of indicators may load on two or more factors as long as Conditions A, B, and C are met for those constructs using indicators that load on only one construct. Consider an indicator X1 that loads on more than one construct. The errors of X1 may be correlated with the errors of other indicators, but for each construct on which X1 loads, there must be at least one singly loading indicator with which X1 does not share correlated error.
Summary
For most measurement models, Condition E is not relevant, and it is usually very easy to verify that Conditions C and D are satisfied. Condition A can always be satisfied, so ordinarily the key condition to scrutinize carefully is Condition B. Single-indicator constructs are best handled by fixing the loading to one, fixing the error variance to zero, and leaving the construct's variance free to be estimated. Of course, the assumption of zero error variance must be justified theoretically.
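When the error variance of a single indicator is instead fixed to a nonzero value (Condition B3a), that value is typically computed from an assumed reliability, as in the brief sketch below; the reliability and variance used here are illustrative assumptions, not values from the text.

```python
# Fixing a single indicator's error variance from an assumed reliability (Condition B3a):
# error variance = (1 - reliability) * variance of the indicator.
reliability = 0.80           # assumed value, to be justified theoretically
indicator_variance = 2.25    # assumed observed variance of the indicator
fixed_error_variance = (1 - reliability) * indicator_variance
print(fixed_error_variance)  # 0.45, the value to fix in the model specification
```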
Empirical Identification of the Measurement Model
Condition B is critical to the empirical identification of each construct. Condition B1 requires three indicators. Each of the three correlations among those indicators should be statistically significant, and the product of the three correlations must be positive.
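As a concrete illustration of this check, the sketch below simulates three indicators of a single construct (the data and sample size are invented) and verifies that the product of the three pairwise correlations is positive; in practice one would also test each correlation for significance.

```python
# Empirical check for Condition B1 on simulated data: the three pairwise correlations
# among the indicators should be clearly nonzero and their product should be positive.
import numpy as np

rng = np.random.default_rng(0)
factor = rng.normal(size=500)
x1, x2, x3 = (factor + rng.normal(size=500) for _ in range(3))

r12 = np.corrcoef(x1, x2)[0, 1]
r13 = np.corrcoef(x1, x3)[0, 1]
r23 = np.corrcoef(x2, x3)[0, 1]
print(r12, r13, r23, "product positive:", r12 * r13 * r23 > 0)
```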
Condition B2a involves three indicators: two load on the latent variable and the third loads on another factor. As with B1, the three indicators must correlate with one another, and the product of their correlations must be positive. If the two indicators of one construct correlate with the one indicator of the other construct, then those two constructs must be correlated; thus Condition B2a requires that the construct with two indicators be correlated with at least one other construct in the model.
For two indicators whose loadings are set equal (Condition B2b), the correlation between the two must be significantly positive. If the correlation is large but negative, the loadings can, given theoretical justification, be forced to be equal but of opposite sign.
If there is a single indicator and instrumental variable estimation is used (Condition B3b), the indicator must share unique variance with the instrument (see Rule C below).
If the latent variable is scaled by fixing a loading to one (Condition A1), the indicator with a loading of one must correlate with other indicators of the latent variable. If all of the loadings are free and the disturbance or residual variance is fixed, empirical identification problems can occur if the multiple correlation for that latent variable is very large. If a variable loads on two constructs (Condition E), those two constructs cannot correlate too highly. That is, there must be discriminant validity.
Identification of the Structural Model
In the structural model, there is a set of structural equations. The causal variables are called exogenous variables and the effect variable is called the endogenous variable. Unexplained variation is referred to as disturbance.
Rule A: Minimum Condition of Identifiability
Let k be the number of constructs in the structural model, and let q = k(k - 1)/2. The minimum condition of identifiability is that q must be greater than or equal to p, where p equals the sum of:
a. the number of paths,
b. the number of correlations between exogenous variables that are not caused by any other variable,
c. the number of correlations between a disturbance and an exogenous variable, and
d. the number of correlations between disturbances.
In virtually all models, c is zero, and in many models d is also zero. Theory places restrictions on a. Generally, b should be set at its maximum value; that is, all uncaused exogenous variables should be correlated. If a structural model satisfies this condition, then the model may be identified. If it does not, the model is not identified; however, some but not all of the parameters of the model may be identified.
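As a sketch of this counting argument (the function and its arguments are hypothetical, written only to mirror the rule), q is computed from the number of constructs and p from the counts a through d:

```python
# Rule A as a counting check: q = k(k - 1)/2 knowns versus p free structural parameters.
def minimum_condition(k, n_paths, n_exogenous_correlations=0,
                      n_disturbance_exogenous_correlations=0,
                      n_disturbance_correlations=0):
    q = k * (k - 1) // 2
    p = (n_paths + n_exogenous_correlations
         + n_disturbance_exogenous_correlations + n_disturbance_correlations)
    return q, p, q >= p

# Example: three constructs with paths X1 -> Y and X2 -> Y, and X1 correlated with X2.
# q = 3 and p = 2 + 1 = 3, so the minimum condition is met (the model may be identified).
print(minimum_condition(k=3, n_paths=2, n_exogenous_correlations=1))
```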
Rule B: An Apparently Sufficient Condition
All models that satisfy the following condition appear to be identified: between any pair of constructs, X and Y, no more than one of the following is true: X directly causes Y, Y directly causes X, or X and Y have correlated disturbances.
Although there is no known proof of this condition, there is no known exception. It seems likely that the rule holds. As with any identification rule, the model may still not be empirically identified.
Rule C: Instrumental Variable Estimation
This rule considers models that fail to meet the previous rule, but may still be identified. There are three conditions to be considered:
1. spuriousness: an unmeasured variable causes both the endogenous variable and an exogenous variable; given spuriousness, the exogenous variable is a cause of the endogenous variable and is correlated with its disturbance,
2. reverse causation: the endogenous variable causes the exogenous variable, and
3. measurement error: there is measurement error in an exogenous variable that has only a single indicator.
These models can be identified through the use of instrumental variables (see the discussion of instrumental variables). Denote X as an exogenous variable that needs an instrumental variable because one of the three conditions above applies, Y as the endogenous variable, U as its disturbance, and I as an instrumental variable. There may also be variables that cause Y but do not need an instrumental variable. The defining feature of an instrumental variable is that I is assumed not to directly cause Y: the path from I to Y is zero. Moreover, it must be the case that I is uncorrelated with the disturbance U and that I causes X. The zero path and the zero correlation are given by theory, not by statistical analysis.
More formally, the following are conditions necessary for instrumental variable estimation:
1. The variable I must not directly cause Y or be correlated with U, but I must cause X.
2. For a given structural equation, there must be as many or more I variables as there are X variables.
3. In a feedback loop, the same variable cannot serve as the instrument for both variables in that loop. Also, one of the variables need not have an instrument if the disturbances of the variables in the loop are not correlated.
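The sketch below illustrates the instrumental-variable logic with simulated data (all of the numbers are assumptions made for the example, not values from the text): because I causes X, does not directly cause Y, and is uncorrelated with the disturbance, the instrumental-variable estimate recovers the causal path even though an ordinary regression of Y on X does not.

```python
# Illustrative two-stage (instrumental-variable) estimate with simulated spuriousness.
import numpy as np

rng = np.random.default_rng(1)
n = 5000
i = rng.normal(size=n)                    # instrument I: causes X, not Y directly
omitted = rng.normal(size=n)              # unmeasured common cause (spuriousness)
x = i + omitted + rng.normal(size=n)      # exogenous variable X needing an instrument
u = omitted + rng.normal(size=n)          # disturbance U, correlated with X
y = 0.5 * x + u                           # true causal path from X to Y is 0.5

def cov(a, b):
    return np.cov(a, b)[0, 1]

b_ols = cov(x, y) / cov(x, x)             # biased, because X is correlated with U

x_hat = (cov(i, x) / cov(i, i)) * i       # stage 1: X predicted from I
b_iv = cov(x_hat, y) / cov(x_hat, x_hat)  # stage 2: equals cov(I, Y) / cov(I, X)

print(round(b_ols, 2), round(b_iv, 2))    # b_iv should be close to 0.5; b_ols is not
```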
Empirical Identification of the Structural Model
Causal variables in an equation cannot be too highly correlated (multicollinearity). When there is a perfect correlation between a pair of variables, the model cannot be estimated. Note that the assumption is that the theoretical variables must not have a perfect correlation; so just because the indicators of two constructs are not highly correlated does not mean that the constructs have discriminant validity.
For models with instrumental variables, the conditions for empirical identification are relatively complicated. Recall that X needs an instrumental variable, I is an instrumental variable, and Z is a causal variable that does not need an instrument. After partialling out variance due to Z, the set of I variables must significantly correlate (i.e., have a large multiple correlation) with each X variable. For there to be a correlation between I and X (and Rule C1 is not violated), the following must hold:
1. When there is spuriousness, X cannot cause variable I and I cannot be correlated with the omitted variable.
2. When X is measured with error, variable I cannot be correlated with the measurement error in X; however, I itself may have measurement error.
3. When there is a feedback relationship, X cannot cause I.
If there is more than one X variable in the same equation, then when each X is regressed on I and Z, the correlation between the predicted X variables should not be too large.
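A rough numerical rendering of the first-stage check described above follows (the variable names and simulated data are illustrative assumptions): Z is partialled out of X and out of each instrument, and the remaining squared multiple correlation of X with the instrument set is examined.

```python
# Sketch of the first-stage strength check: after partialling out Z, the instruments I
# should still account for a nontrivial share of the variance in X.
import numpy as np

def partialled_r2(x, I, Z):
    """Squared multiple correlation of x with the columns of I after removing Z."""
    def residual(v, preds):
        design = np.column_stack([np.ones(len(v)), preds])
        beta, *_ = np.linalg.lstsq(design, v, rcond=None)
        return v - design @ beta
    x_res = residual(x, Z)
    I_res = np.column_stack([residual(I[:, j], Z) for j in range(I.shape[1])])
    fitted = I_res @ np.linalg.lstsq(I_res, x_res, rcond=None)[0]
    return 1 - np.var(x_res - fitted) / np.var(x_res)

rng = np.random.default_rng(2)
Z = rng.normal(size=(500, 1))                              # causal variable needing no instrument
I = 0.5 * Z + rng.normal(size=(500, 2))                    # two instruments
x = I @ np.array([0.7, 0.4]) + Z[:, 0] + rng.normal(size=500)
print(round(partialled_r2(x, I, Z), 2))                    # should be well above zero
```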