**David A. Kenny
January 26, 2018**

Power analysis app MedPower.

Learn how to do a mediation analysis and output a text description of your results: go to mediational analysis using DataToText in SPSS or R.

View my mediation webinars (small charge is requested).

**MEDIATION**

Introduction

The Four Steps

Indirect Effect

Power

Specification Error

Additional Variables

Extensions

Causal Inference Approach

Links to Other Sites

References

** Consider a variable X that is
assumed to cause another variable Y. The variable X is called the causal
variable, and the variable that it causes, Y, is called the outcome.
In diagrammatic form, the unmediated model is**

**Path c in the above model is called the total
effect. The effect of X on Y may be
mediated by a process or mediating variable M, and the variable X may still
affect Y. The mediated model is**

**(These two
diagrams are essential to the understanding of this page. Please study
them carefully!) Path c' is
called the direct effect. The mediator has been called an intervening or process variable. Complete mediation is the case in which
variable X no longer affects Y after M has been controlled, making path c' zero. Partial mediation is the
case in which the path from X to Y is reduced in absolute size but is still
different from zero when the mediator is introduced.**

** Note that a mediational model is a
causal model. For example, the mediator is presumed to cause the outcome
and not vice versa. If the presumed causal model is not correct, the results
from the mediational analysis are likely of little value. Mediation is not
defined statistically; rather statistics can be used to evaluate a presumed
mediational model. The specific causal assumptions are detailed below in
the section on Specification Error.**

** There is a long history in the study of
mediation (Hyman, 1955; MacCorquodale & Meehl, 1948; Wright, 1934). Mediation is a very popular topic. (This page averages over 200 visitors a day, Baron and Kenny (1986) has over 60,000 citations according to Google Scholar, and there are four books on the topic: Hayes, 2013; Jose, 2012; MacKinnon, 2008; VanderWeele, 2015.) There are
several reasons for the intense interest in this topic: One reason for
testing mediation is trying to understand the mechanism through which the
causal variable affects the outcome. Mediation and moderation analyses
are a key part of what has been called process analysis, but mediation analyses tend to be more powerful than moderation analyses. Moreover,
when most causal or structural models are examined, the mediational part of the
model is often the most interesting part of that model.**

** If the mediational model (see
above) is correctly specified, the paths c, a, b,
and c' can be estimated by a series of regression equations, as described in the steps below.**

**The Steps**

** Baron and Kenny (1986), Judd and
Kenny (1981), and James and Brett (1984) discussed four steps in establishing mediation: **

**Step 1: Show that the causal variable is correlated with the outcome. Use Y as the criterion variable in a regression equation and X as a predictor (estimate and test path c in the above figure). This step establishes that there is an effect that may be mediated.**

**Step 2: Show that the causal variable is correlated with the mediator. Use M as the criterion variable in the regression equation and X as a predictor (estimate and test path a). This step essentially involves treating the mediator as if it were an outcome variable.**

**Step 3: Show that the mediator affects the outcome variable. Use Y as the criterion variable in a regression equation and X and M as predictors (estimate and test path b). It is not sufficient just to correlate the mediator with the outcome, because the mediator and the outcome may be correlated simply because both are caused by the causal variable X. Thus, the causal variable must be controlled in establishing the effect of the mediator on the outcome.**

**Step 4: To establish that M completely mediates the X-Y relationship, the effect of X on Y controlling for M (path c') should be zero (see the discussion below on significance testing). The effects in both Steps 3 and 4 are estimated in the same equation.**

** If all four of these
steps are met, then the data are consistent with the hypothesis that variable M completely mediates the X-Y relationship; if the first three steps
are met but Step 4 is not, then partial mediation is
indicated. Meeting these steps does not, however, conclusively establish
that mediation has occurred because there are other (perhaps less plausible)
models that are consistent with the data. Some of these models are
considered later in the Specification Error section. **
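As a concrete sketch of the four steps, consider estimating them with ordinary least squares on simulated data. The variable names and true path values below are illustrative assumptions, not from the text:

```python
import random

# Hypothetical data: X causes M (a = .5); M and X cause Y (b = .4, c' = .2).
random.seed(1)
n = 500
X = [random.gauss(0, 1) for _ in range(n)]
M = [0.5 * x + random.gauss(0, 1) for x in X]
Y = [0.4 * m + 0.2 * x + random.gauss(0, 1) for x, m in zip(X, M)]

def csum(u, v):  # centered cross-product sum
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    return sum((p - mu) * (q - mv) for p, q in zip(u, v))

Sxx, Sxm, Smm = csum(X, X), csum(X, M), csum(M, M)
Sxy, Smy = csum(X, Y), csum(M, Y)

c = Sxy / Sxx                              # Step 1: regress Y on X (total effect)
a = Sxm / Sxx                              # Step 2: regress M on X
det = Sxx * Smm - Sxm ** 2                 # Steps 3 and 4: regress Y on X and M
b = (Sxx * Smy - Sxm * Sxy) / det          # path b
c_prime = (Smm * Sxy - Sxm * Smy) / det    # path c'

print(f"c = {c:.3f}, a = {a:.3f}, b = {b:.3f}, c' = {c_prime:.3f}")
# In OLS the decomposition c = c' + ab holds exactly:
print(f"c - (c' + ab) = {c - (c_prime + a * b):.2e}")
```

Steps 3 and 4 come from the same equation (Y regressed on both X and M), which is why b and c' share a denominator here.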

** James and Brett (1984) have argued that
Step 3 should be modified by not controlling for the causal variable.
Their rationale is that if there were complete mediation, there would be no
need to control for the causal variable. However, because complete
mediation does not always occur, it would seem sensible to control for X in Step
3.**

** Note that the steps are stated in terms
of zero and nonzero coefficients, not in terms of statistical significance, as
they were in Baron and Kenny (1986). Because trivially small coefficients
can be statistically significant with large sample sizes and very large
coefficients can be nonsignificant with small sample sizes, the steps should
not be defined in terms of statistical significance. Statistical
significance is informative, but other information should be part of
statistical decision making. For instance, consider the case in which
path a is large and b is zero. In this case, c = c' (the reason for this is shown later).
It is very possible that the statistical test of c' is not significant (due to the collinearity between X and M),
whereas c is statistically significant.
Using just significance testing would make it appear that there is complete mediation when in fact there is no
mediation at all. **

** Following Kenny, Kashy, and Bolger
(1998), one might ask whether all of the steps have to be met for there to be
mediation. Most contemporary analysts believe that the essential steps in
establishing mediation are Steps 2 and 3. Certainly, Step 4 does not have to be
met unless the expectation is for complete mediation. In the opinion of most, though not all, analysts, Step 1 is not required. (See the Power section below for why the test of c can have low power, even if paths a and b are non-trivial.)**

** If c' were
opposite in sign to ab, something that MacKinnon, Fairchild, and Fritz (2007)
refer to as inconsistent mediation,
then it could be the case that Step 1 would not be met, but there is still
mediation. In this case the mediator acts like a suppressor
variable. One example of inconsistent
mediation is the relationship between stress and mood as mediated by
coping. Presumably, the direct effect is negative: the more stress, the worse the mood. However, the effect of stress on coping is likely positive (more stress,
more coping) and the effect of coping on mood is positive (more coping, better
mood), making the indirect effect positive. The total effect of stress on mood then is likely to be very small
because the direct and indirect effects will tend to cancel each other out. Note too that with inconsistent mediation
that typically the direct effect is even larger than the total effect. **

** The amount of mediation is called
the indirect effect. Note that
the **

**total effect = direct effect + indirect effect**

**or using
symbols**

**c = c' + ab**

**Note also
that the indirect effect equals the reduction of the effect of the causal
variable on the outcome or ab = c - c'. In contemporary mediational analyses, the
indirect effect or ab is the measure
of the amount of mediation.**

** The equation c = c' + ab holds exactly when a) multiple regression (or structural equation modeling without latent variables) is used, b) the same cases are used in all the analyses, and c) the same covariates are in all the equations. However, the equality is only approximate for multilevel models, logistic analysis, and structural equation modeling with latent variables. For such models, it is probably inadvisable to compute c from Step 1; rather, the total effect c should be inferred to be c' + ab and not directly computed.**

** Note also that the amount of reduction in the effect
of X on Y due to M is not equivalent to either the change in variance explained
or the change in an inferential statistic such as F or a p value. It is possible for the F from the causal variable to the
outcome to decrease dramatically even when the mediator has no effect on the
outcome! It is also not equivalent to a change in partial
correlations. The way to measure mediation
is the indirect effect.**

** Another measure of mediation is
the proportion of the effect that is mediated, or the indirect effect divided
by the total effect or ab/c or equivalently 1 - c'/c.
Such a measure though theoretically informative is very unstable and should not
be computed if c is small. Note
that this measure can be greater than one or even negative when there is inconsistent mediation.**

** Most often the indirect effect is computed directly as the product of a and b. Below, four different ways to test the product of the two coefficients are discussed. Imai, Keele, and Tingley (2010) have re-proposed the use of c - c' as the measure of the indirect effect. They claim that the difference in coefficients is more robust to certain forms of specification error. It is unclear at this point whether the difference-in-coefficients approach will replace the product-of-coefficients approach. It is also noted here that the Causal Inference Approach (Pearl, 2011) has developed a very general approach to measuring the indirect effect.**

** Below are described four tests of the indirect effect or ab. Read carefully, as some of the tests have key drawbacks. One key issue concerns whether paths a and b are correlated: if path a is over-estimated, is path b also over-estimated? Paths a and b are uncorrelated when multiple regression is used to estimate them, but not for most other methods. The different tests make different assumptions about this correlation.**

**Joint Significance of Paths a and b**

** If Step 2 (the test of a) and Step 3 (the test of b) are met, it follows that the indirect effect is likely nonzero. Thus, one way to test the null hypothesis that ab = 0 is to test that both path a and path b are nonzero (Steps 2 and 3). This simple approach, called the joint test of significance, appears to work rather well (Fritz & MacKinnon, 2007), but it is rarely used as the definitive test of the indirect effect. (Joint significance presumes that a and b are uncorrelated.) However, Fritz, Taylor, and MacKinnon (2012) have strongly urged that researchers use this test in conjunction with other tests. Also, recent simulation results by Hayes and Scharkow (2013) have shown that this test performs about as well as a bootstrap test. Moreover, this test provides a relatively straightforward way to determine the power of the test of the indirect effect (see the PowMedR program). The major drawback of this approach is that it does not provide a confidence interval for the indirect effect.**

**Sobel Test**

** A test first proposed by Sobel (1982) was initially often used. Some sources refer to this test as the delta method. It requires the standard error of a, or s_{a} (which equals a/t_{a}, where t_{a} is the t test of coefficient a), and the standard error of b, or s_{b}. The Sobel test provides an approximate estimate of the standard error of ab, which equals the square root of**

b^{2}s_{a}^{2}+a^{2}s_{b}^{2}

**Other approximate estimates of the standard error of ab have been proposed, but the Sobel test is by far the most commonly used. (As discussed below, bootstrapping has largely replaced the more conservative Sobel test.) The test of the indirect effect is given by dividing ab by the square root of the above variance and treating the ratio as a Z test (i.e., larger than 1.96 in absolute value is significant at the .05 level). Kristopher J. Preacher and Geoffrey J. Leonardelli have an excellent webpage that can help you calculate this test.**

** The derivation of the Sobel standard
error presumes that the estimates of paths a and b are independent,
something that is true when the tests are from multiple regression but not true
when other tests are used (e.g., logistic regression, structural equation
modeling, and multilevel modeling). In such cases, the researcher ideally
provides evidence for approximate independence. The Sobel
test can be conducted using the standardized or unstandardized
coefficients. Care must be taken to use the appropriate standard errors
if standardized coefficients are used.**

** The Sobel test
is very conservative (MacKinnon, Warsi, & Dwyer, 1995), and so it has very
low power. The main reason for the test
being conservative is that the sampling distribution of ab is highly skewed. If ab is positive, there is positive skew
with many small estimates of ab and
few very large ones. Because the Sobel test uses a normal approximation, which falsely presumes a symmetric distribution, the test is conservative.**

**Bootstrapping**

** Recently, Fritz, Taylor, and MacKinnon (2012) have raised concerns that the bias-corrected bootstrap test is too liberal, with alpha being around .07. Not doing the bias correction actually seems to improve the Type I error rate. Hayes and Scharkow (2013) recommended using the bias-corrected bootstrap if power is the major concern, but if the Type I error rate is the major concern, then the percentile bootstrap should be used.**

** Hayes and Preacher have written SPSS and SAS macros that can be downloaded for tests of indirect effects (click here to get the Hayes and Preacher macro). Also, Mplus and Amos can be used to bootstrap (click here for an Amos tutorial). If one has more than one mediator and is using Amos, one should consult Macho and Ledermann (2011) for details on how to compute separate confidence intervals for each indirect effect.**
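A bare-bones percentile bootstrap of ab, shown as a sketch on simulated data rather than via the macros above; all data-generating values are illustrative assumptions:

```python
import random

random.seed(2)
n = 200
X = [random.gauss(0, 1) for _ in range(n)]
M = [0.5 * x + random.gauss(0, 1) for x in X]   # true a = 0.5
Y = [0.4 * m + random.gauss(0, 1) for m in M]   # true b = 0.4, so ab = 0.2

def indirect(X, M, Y):
    # a from the M-on-X regression; b from the Y-on-(X, M) regression
    def s(u, v):
        mu, mv = sum(u) / len(u), sum(v) / len(v)
        return sum((p - mu) * (q - mv) for p, q in zip(u, v))
    Sxx, Sxm, Smm, Sxy, Smy = s(X, X), s(X, M), s(M, M), s(X, Y), s(M, Y)
    a = Sxm / Sxx
    b = (Sxx * Smy - Sxm * Sxy) / (Sxx * Smm - Sxm ** 2)
    return a * b

reps = 2000
boot = []
for _ in range(reps):
    idx = [random.randrange(n) for _ in range(n)]   # resample cases
    boot.append(indirect([X[i] for i in idx],
                         [M[i] for i in idx],
                         [Y[i] for i in idx]))
boot.sort()
lo, hi = boot[int(0.025 * reps)], boot[int(0.975 * reps) - 1]
print(f"ab = {indirect(X, M, Y):.3f}, 95% percentile CI [{lo:.3f}, {hi:.3f}]")
```

The percentile interval is read straight off the sorted bootstrap estimates, with no bias correction.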

**Monte Carlo Method**
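A sketch of the Monte Carlo test of ab: draw many values of a and b from normal distributions centered at their estimates, with standard deviations equal to their standard errors, and take percentiles of the products as a confidence interval. The estimates below are illustrative assumptions:

```python
import random

random.seed(3)
a_hat, se_a = 0.40, 0.10    # estimate of path a and its standard error
b_hat, se_b = 0.30, 0.12    # estimate of path b and its standard error

reps = 20000
draws = sorted(random.gauss(a_hat, se_a) * random.gauss(b_hat, se_b)
               for _ in range(reps))
lo, hi = draws[int(0.025 * reps)], draws[int(0.975 * reps) - 1]
print(f"ab = {a_hat * b_hat:.3f}, 95% Monte Carlo CI [{lo:.3f}, {hi:.3f}]")
```

Unlike the Sobel test, the interval reflects the skew of the product's sampling distribution; unlike bootstrapping, it needs only the estimates and standard errors, not the raw data.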

**Effect Size of the Indirect Effect and the Computation of Power**

** The indirect effect is the product of two effects.
One simple way, but not the only way, to determine the effect size is to measure the
product of the two effects, each turned into an effect size. The standard effect size for paths a and b is a partial correlation; that is, for path a,
it is the correlation between X and M, controlling for the covariates and any
other Xs and for path b, it is the correlation between M and Y, controlling for
covariates and other Ms and Xs.
One possible effect size for the indirect effect would be the product of the two partial correlations.
(Preacher and Kelley (2011) discuss a similar measure of effect size which they refer to
as the completely standardized indirect effect, which uses betas, not partial correlations.)**

** There are two different strategies for determining small, medium, and large effect sizes. (Any designation of small, medium, or large is fundamentally arbitrary and depends on the particular application.) First, following Shrout and Bolger (2002), the usual Cohen (1988) standards of .1 for small, .3 for medium, and .5 for large could be used. Alternatively, and I think more appropriately because an indirect effect is a product of two effects, these values should be squared (r times r). Thus, a small effect size would be .01, medium would be .09, and large would be .25. Note that if X is a dichotomy, it makes sense to replace the correlation for path a with Cohen's d. In this case the effect size would be d times r, and a small effect size would be .02, medium would be .15, and large would be .40.**

** One strategy to compute the power of the test of the indirect effect is to use the joint test of significance. Thus, one computes the power of the tests of paths a and b and then multiplies the two powers to obtain the power of the test of the indirect effect. One can use the app that I have written, called MedPower.**

**Distal and Proximal Mediation**

** To demonstrate mediation both
paths a and b need to be present. Generally, the maximum size
of the product ab equals a value near c, and so as
path a increases, path b must decrease and vice versa. Hoyle and
Kenny (1999) define a proximal mediator as a being greater than b (all variables standardized) and a distal mediator as b being greater than a.**

** A mediator can be too close in time or in the process to the causal variable, in which case path a would be relatively large and path b relatively small. An example of a proximal mediator is a manipulation check. The use of a very proximal mediator creates strong multicollinearity, which is discussed below.**

** Alternatively, the mediator can be
chosen too close to the outcome and with a distal mediator path b is large and path a is small. Ideally in terms of power, standardized a and b should be comparable in size. However, work by Hoyle and
Kenny (1999) shows that the power of the test of ab is maximal when b is
somewhat larger than a in absolute
value. So slightly distal mediators result in somewhat greater power than
proximal mediators. Note that if there is proximal mediation (a > b), sometimes power actually declines as a (and so ab) increases.**

**Multicollinearity**

** If M is a successful mediator, it is
necessarily correlated with X due to path a. This correlation, called
collinearity, affects the precision of the estimates of the last
regression equation. If X were to explain all of the variance in M, then
there would be no unique variance in M to explain Y. Given that path a is nonzero, the power of the tests of
the coefficients b and c’ is lowered. The effective
sample size for the tests of coefficients b and c’ is approximately N(1 - r^{2}) where N is
the total sample size and r is the
correlation between the causal variable and the mediator, which is equal to
standardized a. So if M is a strong mediator (path a is large), to achieve equivalent
power, the sample size to test coefficients b and c' would have to be larger
than what it would be if M were a weak
mediator. Multicollinearity is to be expected in a mediational analysis and it cannot be avoided. **

**Low Power
for Steps 1 and 4**

** As described by Kenny and Judd (2014), as well as others, the tests of c and c' have relatively low power, especially in comparison to the test of the indirect effect. It can easily happen that ab is statistically significant but c is not. For instance, if a = b = .4 and c' = 0, making c = .16, and N = 100, the power of the test of path a is .99 and the power of the test of path b is .97, which makes the power of the test of ab about .96, but the power of the test of c is only .36. Surprisingly,
it is very easy to have complete mediation, a statistically significant
indirect effect, but no statistical evidence that X causes Y.**

** Because of the low power of the test of c', one needs to be very careful about any claim of complete mediation based on the non-significance of c'. In fact, several sources (e.g., Hayes, 2013) have argued that one should never make any claim of complete or partial mediation. It seems more sensible simply to be careful about claims of complete mediation. One idea is to first establish that there is sufficient power to test for partial mediation, that is, adequate power for the test of c'. Also, if the sample size is very large, then finding a significant value for c' and so "partial" mediation is not very informative. More informative, in the case of large N, is the proportion of the total effect that is mediated, or ab/c.**

** Mediation is a hypothesis about a causal
network. (See Kraemer, Wilson, Fairburn, and Agras (2002) who attempt to
define mediation without making causal assumptions.) The conclusions from
a mediation analysis are valid only if the causal assumptions are valid (Judd
& Kenny, 2010). In this section, the three major assumptions of
mediation are discussed. Mediation analysis also makes all of the
standard assumptions of the general linear model (i.e., linearity, normality,
homogeneity of error variance, and independence of errors). It is strongly advised to check these assumptions
before conducting a mediational analysis. Clustering effects are discussed in the Extensions section. What follows are sufficient conditions. That is, if the assumptions are met, the mediational model is identified. However, there are sometimes special cases in which an assumption can be violated, yet the mediation effects are identified (Pearl, 2014).**

**Reverse
Causal Effects**

** The mediator may be caused by the outcome
variable (Y would cause M in the above diagram), what is commonly called a feedback model. When the causal
variable is a manipulated variable, it cannot be caused by either the mediator
or the outcome. But because both the mediator and the outcome variables
are not manipulated variables, they may cause each other. **

** Often it is advisable to interchange the
mediator and the outcome variable and have the outcome "cause" the
mediator. If the results look similar to the specified mediational pattern
(i.e., c' and b are about the same in the two models), one would be less
confident in the specified model. However, it should be realized that the direction of causation between M
and Y cannot be determined by statistical analyses. **

** Sometimes reverse causal effects can be
ruled out theoretically. That is, a causal effect in one direction does
not make sense. Design considerations may also weaken
the plausibility of reverse causation. Ideally, the mediator should be measured
temporally before the outcome variable. **

** If it can be assumed that c' is zero, then reverse causal effects
can be estimated. That is, if it can be assumed that there is complete
mediation (X does not directly cause Y and so c’ is zero), the mediator may cause the outcome and the outcome
may cause the mediator and the model can be estimated using instrumental
variable estimation. **

** Smith (1982) has developed another
method for the estimation of reverse causal effects. Both the mediator
and the outcome variables are treated as outcome variables, and they each may
mediate the effect of the other. To be able to employ the Smith approach,
for both the mediator and the outcome, there must be a different variable that
is known to cause each of them but not the other. So a variable must be
found that is known to cause the mediator but not the outcome and another
variable that is known to cause the outcome but not the mediator. These
variables are called instrumental variables. For
such a model, mediation can be estimated and tested with feedback. **

**Measurement
Error in the Mediator**

** If the mediator is measured with less than perfect reliability, then the effects b and c' are likely biased. The effect of the mediator on the outcome (path b) is likely under-estimated, and the effect of the causal variable on the outcome (path c') is likely over-estimated if ab is positive (which is typical). The over-estimation of c' is exacerbated to the extent that path a is large. In a parallel fashion, if X is measured with less than perfect reliability, then the effects b and c' are likely biased: the effect of the mediator on the outcome (path b) is likely over-estimated, and the effect of the causal variable on the outcome (path c') is likely under-estimated. Moreover, measurement error in X attenuates the estimates of paths a and c. Measurement error in Y does not bias unstandardized estimates, but it does bias standardized estimates, attenuating them.**
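These biases can be seen in a small simulation, assuming measurement error is added to M so that its reliability is about .80; all data-generating values are illustrative assumptions:

```python
import random

random.seed(4)
n = 20000
X = [random.gauss(0, 1) for _ in range(n)]
M = [0.6 * x + random.gauss(0, 0.8) for x in X]    # var(M) = 1.0
Y = [0.5 * m + 0.2 * x + random.gauss(0, 1) for x, m in zip(X, M)]
M_obs = [m + random.gauss(0, 0.5) for m in M]      # reliability = 1.0/1.25 = .80

def b_and_c_prime(X, M, Y):
    # OLS coefficients from regressing Y on X and M
    def s(u, v):
        mu, mv = sum(u) / len(u), sum(v) / len(v)
        return sum((p - mu) * (q - mv) for p, q in zip(u, v))
    Sxx, Sxm, Smm, Sxy, Smy = s(X, X), s(X, M), s(M, M), s(X, Y), s(M, Y)
    det = Sxx * Smm - Sxm ** 2
    return (Sxx * Smy - Sxm * Sxy) / det, (Smm * Sxy - Sxm * Smy) / det

b_true, cp_true = b_and_c_prime(X, M, Y)      # true b = .5, c' = .2
b_obs, cp_obs = b_and_c_prime(X, M_obs, Y)    # b attenuated, c' inflated
print(f"perfect M : b = {b_true:.2f}, c' = {cp_true:.2f}")
print(f"fallible M: b = {b_obs:.2f}, c' = {cp_obs:.2f}")
```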

** To remove the biasing effect of
measurement error, multiple indicators of the variable can be used to tap a
latent variable. Alternatively, for M, instrumental variable estimation can be used, but as before, it must be assumed that c' is zero. Another possibility is to fix the error variance of M at the value of one minus the reliability, times the variance of the measure. If none of these approaches is used, the
researcher needs to demonstrate that the reliability of the mediator is very
high so that the bias is fairly minimal. **

**Omitted Variables**

** In this case, there is a variable that causes both variables in an equation. (These variables are called confounders in some literatures, and the assumption can be stated more formally and generally.)**

** Note that if the
causal variable, X, is randomized, then omitted variables do not bias the
estimates of a and c. However, in this case, paths b and c' might be biased if there is an omitted
variable that causes both M and Y. Assuming
that this omitted variable has paths in the same direction on M and Y and that
ab is positive, then path b is over-estimated and path c' is underestimated. In
this case, if the true c' were zero,
then it would appear that there was inconsistent mediation when in fact there is complete mediation. **

** Sometimes the source of correlation
between the mediator and the outcome is a common method effect. For instance,
the measuring scale of the two variables is the same. Ideally, efforts
should be made to ensure that the two variables do not share method effects
(e.g., both are self-reports from the same person). A latent
variable analysis might be used to remove the effects of correlated
measurement error.**

** Alternatively, an instrumental
variable estimation can be used to remove the effects of confounding variables. One possibility is that c' is zero, which makes X the instrumental variable. Alternatively, c' is estimated and another variable or variables are used as instrument(s). The instrument must cause M but not Y. Note that M serves as a perfect mediator of the instrument to Y relationship. Instruments must be chosen on the basis of theory not empirical relationships. Alternative strategies for dealing with omitted variables are being developed within the Causal Inference Approach.**

** As stated above, the absence of omitted variables is not a necessary condition (Pearl, 2014). Consider the case in which there is an unmeasured confounding variable, C, that causes M and Y. The first situation (called the backdoor condition) is to find a measured variable Z which is a complete mediator of either the C to M or the C to Y relationship: either M ← Z ← C → Y or M ← C → Z → Y. By controlling for Z, one can remove the confounding effect of C.**

** Brewer, Campbell, and Crano (1970) argued that in some cases when X is not manipulated, it might be that single unmeasured variable can explain the covariation between all the variables. In fact, unless there is inconsistent mediation, a single latent variable can always explain the covariation between X, M, and Y (i.e., a solution that converges with no Heywood cases).**

**The Combined Effects of Measurement Error and an Omitted Variable**

**The
Mediator as also a Moderator**

** Baron and Kenny (1986) and Kraemer et al. (2002) discuss the possibility that M might
interact with X to cause Y. Baron and
Kenny (1986) refer to this as M being both a mediator and a moderator and
Kraemer et al. (2002) as a form of mediation. The X by M interaction should be estimated and tested and added to the model if present. Judd and Kenny (1981) discuss how the meaning of path b changes when this interaction is present. Also, the Causal Inference Approach begins with the assumption that X and M interact.
**

** One of the best ways to increase the internal validity of a mediational analysis is through the design of the study. Key considerations are randomizing X (i.e., randomly assigning units to levels of X), the timing of
measurement of M and Y, and obtaining prior values of M and Y. By randomizing X, it is known that both M and
Y do not cause X. By measuring M after X,
and Y after M, it is known that M does not cause X and that Y does not cause X
or M. Finally, by obtaining prior measures of M and Y and controlling for them, we can reduce and perhaps eliminate the effects of omitted variables. The reader should consult Cole and Maxwell (2003) about the difficulties of estimating mediational effects using a cross-sectional design. Also, as mentioned earlier, it is possible to
randomize X, M, and Y (Smith, 1982).
**

** As discussed above, it is usually assumed for mediation that there is perfect reliability for X and M, no omitted variables for the X to M, X to Y, and M to Y relationships, and no causal effects from Y to X and M or from M to X. It is possible to determine what would happen to the mediational paths if one or more of these assumptions were violated by conducting sensitivity analyses. For instance, one might find that, allowing for measurement error in M (reliability of .8), path b would be larger by 0.15 and c' would be smaller by 0.10. Alternatively, one might determine the value of reliability that would make c' equal zero.**

** One way to conduct a sensitivity analysis is to estimate the mediational model using structural equation modeling, fixing the additional parameter to the value of interest. For example, an omitted variable is added that has a moderate effect on M and Y. One can then use the estimates from this analysis in the sensitivity analysis. See the Sensitivity Analyses webinar (small charge) that I have created, which has more details. A convincing mediation analysis should be accompanied by a sensitivity analysis.**

** Rarely in
mediation are there just the three variables of X, M, and Y. Discussed in this section is how to handle
additional variables in a mediational model.**

**Multiple
Mediators **

** If there are
multiple mediators, they can be tested simultaneously or separately. The
advantage of doing them simultaneously is that one learns if the mediation is
independent of the effect of the other mediators. One should make sure
that the different mediators are conceptually distinct and not too highly
correlated. [Kenny et al. (1998) consider an example with two mediators.]
**

** There is an interesting case of two mediators (see below) in which the indirect effects are opposite in sign. The sum of the indirect effects for M1 and M2 would then be zero. It might then be possible that the total effect or c is near zero, because the two indirect effects work in opposite directions. In this case "no effect" would be mediated. I suggest that the case in which there are two indirect effects of the same effect that are approximately equal in size but opposite in sign be called opposing mediation.**

**The Hayes and Preacher bootstrapping macro can be used to test hypotheses about linear combinations of indirect effects: for example, whether they are equal or whether they sum to zero.**

**Multiple
Outcomes**

** If there are multiple outcomes, they can
be tested simultaneously or separately. If tested simultaneously, the
entire model can be estimated by structural equation modeling. One might want to consider combining the
multiple outcomes into one or more latent variables.**

**Multiple
Causal Variables**

** In this case there are multiple X
variables and each has an indirect effect on Y. The Hayes and Preacher bootstrapping macro can be used to test
hypotheses about the linear combinations of indirect effects: For example, are
they equal? Do they sum to zero? One can alternatively treat the multiple X variables as a formative variable, to see whether a single "super variable" can summarize the indirect effect. As seen below, the formative variable "mediates" the effect of the X variables on M and Y. The model can be tested, and it has k - 1 degrees of freedom, where k is the number of X variables. Thus, the degrees of freedom for the example would be 1.**

**Covariates**

** There are often variables that do not change that can cause or be correlated with the causal variable, mediator, and outcome (e.g., age, gender, and ethnicity); these variables are commonly called covariates. They would generally be included in the M and Y equations. A covariate would not be trimmed from one equation unless it is dropped from all of the other equations. If a covariate interacts with X or M, it would be called a moderator variable.**

**Mediated
Moderation and Moderated Mediation**

** Moderation means that the effect of a
variable on an outcome is altered (i.e., moderated) by a covariate. (To read
about moderation ****click here.****) Moderation
is usually captured by an interaction between the causal variable and the
covariate. If this moderation is mediated, then we have the usual pattern
of mediation but the X variable is an interaction and the pattern would be
referred to as mediated moderation. All the Baron and Kenny
steps would be repeated with the causal variable or X being an interaction, and
the two main effects would be treated as "covariates." We could compute the total effect or the original moderation effect, the direct effect or how much moderation remains after introducing the mediator, and the indirect effect or how much of the total moderation effect is due to the mediator. **

** Sometimes, mediation can be stronger for
one group (e.g., males) than for another (e.g., females), something called moderated mediation. There are two major
different forms of moderated mediation. The effect of the causal
variable on the mediator may differ as a function of the moderator (i.e., path a varies) or the mediator may interact
with the moderator to cause the outcome (i.e., path b varies). It is also
possible that the direct effect or c’ might
change as a function of the moderator.**

** Papers by Muller, Judd, and Yzerbyt
(2005) and Edwards and Lambert (2007) discuss mediated moderation and moderated
mediation and examples of each. Also Preacher, Rucker, and Hayes have
developed a macro for estimating moderated mediation (click
here). **

** Some or all of the mediational variables
might be latent variables. Estimation
would be accomplished using a structural equation modeling (SEM) program (e.g.,
LISREL, Amos, EQS, or Mplus). Some programs provide measures and tests of
indirect effects. Also such programs are quite flexible in handling
multiple mediators and outcomes. The one complication is how to handle
Step 1. That is, if two models are estimated, one with the mediator and
one without, the paths c and c’ are not directly comparable because the
factor loadings would be different. It is then inadvisable to test the
relative fit of two structural models, one with the mediator and one
without. Rather c, the total
effect, can be estimated using the formula of c' + ab. Most SEM programs give this estimate.**

** If
there are multiple mediators, Amos does not compute indirect effects for each
mediator. The reader should consult Macho
and Ledermann (2011) for a method that does decompose the total indirect effect
into separate effects.**

** One
advantage of a latent variable model is that correlated measurement error in X,
M, and Y might be modeled. For instance,
if some of the measures are self-report, their errors might be correlated. **

**Dichotomous
Variables**

** If either the mediator or the outcome is a dichotomy, standard methods of estimation should not be used. (Having the causal variable or X be a dichotomy is not problematic.) If either the mediator or the outcome is a dichotomy, the analysis would likely be conducted using logistic regression. One can still use the Baron and Kenny steps. The Sobel test is problematic in that it assumes that a and b are independent, which may not be the case. The one complication is the computation of the indirect effect and the degree of mediation, because the coefficients need to be transformed. (To read about the computation of indirect effects using logistic or probit regression click here.) With dichotomous outcomes, it is advisable to use a program like Mplus that can handle such variables.**

**Clustered
Data**

** Traditional mediation analyses presume
that the data are at just one level. However, sometimes the data are clustered in that persons are in
classrooms or groups, or the same person is measured over time. With clustered data, multilevel modeling
should be used. Estimation of mediation
within multilevel models can be very complicated, especially when the mediation
occurs at level one and when that mediation is allowed to be random, i.e., vary
across level two units. The reader is referred to Krull and MacKinnon
(1999), Kenny, Korchmaros, and Bolger (2003), and Bauer and Preacher (2006) for
a discussion of this topic. Recently, Preacher, Zyphur, and Zhang (2010) have proposed that multilevel structural equation methods or MSEM can be used to estimate these models. Ledermann, Macho, and Kenny (2011) discuss
mediational models for dyadic data.**

**Over-Time
Data**

** Over-time data can be treated as clustered data, but there are further complications due to the temporal nature of the data. Among such issues are correlated errors, lagged effects, and the outcome causing the mediator. One might consult Bolger and Laurenceau (2013) for guidance. Also, Judd, Kenny, and McClelland (2001) discuss a generalization of the repeated measures analysis of variance test of mediation.**

**Using an SEM Program Instead of Multiple Regression**

** Traditionally the mediation model is estimated by estimating a series of multiple regression equations. However, there are considerable advantages to estimating the model using a Structural Equation Modeling (SEM) program, such as Amos or Mplus. First, all the coefficients are estimated in a single run. Second, most SEM programs provide estimates of indirect effects and bootstrapping. Third, SEM with FIML estimation can allow for a more complex model of missing data. Fourth, it is relatively easy to conduct sensitivity analyses with an SEM program.**
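When an SEM program is not at hand, the bootstrap of the indirect effect can also be done by hand with ordinary regression. Below is a minimal sketch with simulated data; the coefficients, sample size, and seed are all arbitrary, and this is not the Hayes and Preacher macro, only an illustration of a percentile bootstrap of ab:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data for a single-mediator model (coefficients invented):
# a = 0.5, b = 0.4, c' = 0.2, so the true indirect effect ab = 0.20.
n = 500
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)
y = 0.4 * m + 0.2 * x + rng.normal(size=n)

def indirect(x, m, y):
    # a: slope from regressing M on X; b: partial slope of M from
    # regressing Y on both X and M.
    a = np.polyfit(x, m, 1)[0]
    design = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(design, y, rcond=None)[0][2]
    return a * b

# Percentile bootstrap: resample cases, recompute ab each time.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, size=n)
    boot.append(indirect(x[idx], m[idx], y[idx]))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"ab = {indirect(x, m, y):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```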

** Normally with SEM, one computes a measure of fit. However, the mediational model is saturated, and so no usual measure of fit is possible. One can instead adopt the following strategy: Use an "information" measure such as the AIC or BIC as the fit statistic. I actually prefer the SABIC, which, at least in my experience, performs better than the other two. With these measures a fit value can be obtained for a saturated model. Then one can compute the measure for each of the following models: no direct effect, no effect from the causal variable to the mediator, and no effect from the mediator to the outcome. The best fitting model is the one with the lowest value.**
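These criteria are simple functions of a model's log-likelihood. A minimal sketch follows; the SABIC form shown is the common sample-size-adjusted BIC, and the log-likelihood values at the end are made up purely for illustration:

```python
import math

# Information criteria from a model's log-likelihood (lnL), number of
# free parameters (k), and sample size (n). The SABIC replaces ln(n)
# in the BIC with ln((n + 2) / 24).
def aic(lnL, k, n):
    return -2 * lnL + 2 * k

def bic(lnL, k, n):
    return -2 * lnL + k * math.log(n)

def sabic(lnL, k, n):
    return -2 * lnL + k * math.log((n + 2) / 24)

# Hypothetical comparison: saturated model vs. no-direct-effect model.
# Lower values indicate the better-fitting model.
print(bic(-512.3, k=5, n=200), bic(-513.1, k=4, n=200))
```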

**Causal Inference Approach to Mediation**

** ****(I want to thank Tom Loeys and Haeike Josephy who reviewed an early version of this section, Pierre-Olivier Bédard who spotted a typographical error, and especially Judea Pearl who made several very helpful suggestions. **** Go to Pearl's blog discussion of this section.)**

** A group of researchers have developed an approach that has several emphases that are different from the traditional SEM approach, the approach that is emphasized on this page. The approach is commonly called the Causal Inference approach, and I provide here a brief and relatively non-technical summary in which I attempt to explain the approach to those more familiar with Structural Equation Modeling. Robins and Greenland (1992) conceptualized the approach, and more recent papers within this tradition are Pearl (2001, 2011) and Imai et al. (2010). Somewhat more accessible is the paper by Valeri and VanderWeele (2013). Unfortunately, SEMers know relatively little about this approach, and I believe Causal Inference researchers fail to appreciate the insights of SEM.**

** The Causal Inference Approach uses the same basic causal structure (see diagram) as the SEM approach, albeit usually with different symbols for variables and paths. The two key differences are that the relationships between variables need not be linear and the variables need not be interval. In fact, typically the variables X, Y, and M are presumed to be binary, and X and M are presumed to interact to cause Y.**

** Similar to SEM, the Causal Inference approach attempts to develop a formal basis for causal inference in general and mediation in particular. Typically counterfactuals or potential outcomes are used. The potential outcome for person i on Y for whom X = 1 would be denoted as Y_{i}(1). The potential outcome of Y_{i}(0) can be defined even though person i did not score 0 on X. Thus, it is a potential outcome or a counterfactual. The averages of these potential outcomes across persons are denoted as E[Y(0)] and E[Y(1)]. To an SEM modeler, potential outcomes can be viewed as predicted values of a structural equation. Consider the "Step 1" structural equation:**

**Y_{i} = d + cX_{i} + e_{i}**

**For individual i for whom X_{i} equals 1, Y_{i}(1) = d + c + e_{i}, which equals his or her score on Y. We can determine what the score of person i would have been had his or her score on X_{i} been equal to 0, i.e., the potential outcome for person i, by taking the structural equation and setting X_{i} to zero to yield d + e_{i}. Although the term is new, potential outcomes are not really new to SEMers. They simply equal the predicted value for an endogenous variable, once we fix the values of its causal variables. **
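The point can be made concrete with a toy calculation (all numbers below are invented): the counterfactual Y_{i}(0) is just the predicted value of the structural equation with X_{i} set to 0, keeping person i's disturbance:

```python
# Coefficients of the "Step 1" equation Y = d + c*X + e (invented values).
d, c = 1.0, 0.5
e_i = 0.3   # person i's disturbance
x_i = 1     # person i's observed score on X

y_obs = d + c * x_i + e_i           # observed Y_i, here Y_i(1)
y_counterfactual = d + c * 0 + e_i  # Y_i(0): set X_i to 0, keep e_i

print(y_obs, y_counterfactual)
```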

** The Causal Inference approach also employs directed acyclic graphs or DAGs, which are similar to, though not identical to, path diagrams. DAGs typically do not include disturbances but they are implicit. The curved lines of path diagrams between exogenous variables are also not drawn but are implicit.**

**Assumptions**

** Earlier, the assumptions necessary for mediation were stated using structural equation modeling terms. Within the Causal Inference approach, there are essentially the same assumptions, but they are stated somewhat differently. Note that the term confounder is used where earlier the term omitted variable was used.**

** Condition 1: No unmeasured confounding of the XY relationship; that is, any variable that causes both X and Y must be included in the model.**

** Condition 2: No unmeasured confounding of the MY relationship.**

** Condition 3: No unmeasured confounding of the XM relationship. **

** Condition 4: Variable X must not cause any confounder of the MY relationship. **

**Note that if Condition 2 is met, then Condition 4 must be met. However, this fourth condition is added because certain effects can be estimated without making this assumption and other effects require this assumption. Note also that these assumptions are sufficient but not necessary. That is, if these conditions are met the mediational paths are identified, but there are some special cases where mediational paths are identified even if the assumptions are violated (Pearl, 2013). For instance, consider the case that M ← Z_{1} ← Z_{2} → Y but Z_{1} and not Z_{2} is measured and included in the model. Note that Z_{2} is a MY confounder and thus violates Condition 2, but it is sufficient to control for only Z_{1}.**

** The Causal Inference approach also places a strong emphasis on sensitivity analyses: These are analyses that ask a question such as, "What would happen to the results if there were a MY confounder that had a moderate effect on both M and Y?" SEMers would benefit by considering these analyses more often.**

**Definitions of the Direct, Indirect, and Total Effects**

** Because effects involve variables not necessarily at the interval level and because interactions are allowed, the direct, indirect, and total effects need to be redefined. These effects are defined using counterfactuals, not using structural equations. Recall from above that for person i, it can be asked: What would i's score on Y be if i had scored 0 on X? That value, called the potential outcome, is denoted Y_{i}(0). The population average of these potential outcomes across persons is denoted as E[Y(0)]. The total effect can then be defined as**

**E[Y(1)] - E[Y(0)]**

**This looks strange to an SEMer, but it is useful to remember that an effect can be viewed as the difference between what the outcome would be when the causal variable differs by one unit. Consider path c in mediation. We can view c as the difference between the expected value of Y when X equals 1 and when X equals 0, that is, the difference between the two potential outcomes, E[Y(1)] - E[Y(0)]. **

** In the Causal Inference approach, there is the Controlled Direct Effect or CDE for the mediator equal to a particular value, denoted as M (not to be confused with the variable M):**

**CDE(M) = E[Y(1,M)] - E[Y(0,M)]**

**where M is a particular value of the mediator. Note that E[Y(1,M)] sets M to that value; it is not the expected value of Y given that X equals 1 "controlling for M." If X and M interact, the CDE(M) changes for different values of M. To obtain a single measure of the direct effect, several different suggestions have been made. Although the suggestions are different, all of these measures are called "Natural." One idea is to determine the Natural Direct Effect as follows**

** NDE = E[Y(1,M_{0})] - E[Y(0,M_{0})]**

where M_{0} is M(0) which is the expected value on the mediator if X were to equal 0 (i.e., the potential outcome of M given X = 0). Thus, within this approach, there needs to be a meaningful "baseline" value for X which becomes its zero value. For instance, if X is the variable experimental group versus control group, then the control group would have a score of 0. However, if X is level of self-esteem, it might be more arbitrary to define the zero value. The parallel Natural Indirect Effect is defined as

** NIE = E[Y(1,M_{1})] - E[Y(1,M_{0})]**

where M_{1} is M(1) or the potential outcome for M when X equals 1. The Total Effect becomes the sum of the two:

**TE = NIE + NDE = E[Y(1,M_{1})] - E[Y(0,M_{0})] = E[Y(1)] - E[Y(0)]**
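Under a linear model with an XM interaction, these definitions can be checked numerically. The sketch below uses invented coefficients for M = i_{M} + aX and Y = i_{Y} + c'X + bM + dXM, with disturbances averaging to zero:

```python
# Invented coefficients for an illustrative mediation model with an
# XM interaction.
iY, cp, b, d = 0.0, 0.3, 0.4, 0.2   # Y equation: intercept, c', b, d
iM, a = 0.5, 0.6                    # M equation: intercept, a

def EY(x, m):
    """E[Y(x, m)]: expected Y with X set to x and M set to m."""
    return iY + cp * x + b * m + d * x * m

M0 = iM + a * 0   # E[M(0)]
M1 = iM + a * 1   # E[M(1)]

NDE = EY(1, M0) - EY(0, M0)   # natural direct effect
NIE = EY(1, M1) - EY(1, M0)   # natural indirect effect
TE = EY(1, M1) - EY(0, M0)    # total effect

# The decomposition TE = NDE + NIE holds by construction.
print(NDE, NIE, TE)
```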

**Some might benefit from Muthén (2011).
**

**Note that both the CDE and the NDE would equal the regression slope or what was earlier called path *c'* if the model is linear, the assumptions are met, and there is no XM interaction affecting Y; the NIE would equal *ab*, and the TE would equal *ab* + *c'*. In the case in which the specifications made by the traditional mediation approach hold (e.g., linearity, no omitted variables, no XM interaction), the estimates would be the same. **

** Here I give the general formulas for the NDE and NIE when X is intervally measured, based on Valeri and VanderWeele (2013). If the XM effect is added to the Y equation, that equation can be stated as**

**Y = i _{Y} + c'X + bM + dXM + E_{Y}**

**and the intercept in the M equation can be denoted as i_{M}. The NDE is**

**NDE = [c' + d(i_{M} + aX_{0})](X_{1} - X_{0})**

**and the NIE**

**NIE = a(b + dX_{1})(X_{1} - X_{0})**

** where X_{0} is a theoretical baseline score on X or a "zero" score and X_{1} is a theoretical "improvement" score on X or a "1" score. **

**When X is a dichotomy, it is fairly obvious what values to use for X_{0} and X_{1}. However, when X is measured at the interval level of measurement, there is no consensus as to what to use for the two values. Perhaps one idea is to use one standard deviation below the mean of X for X_{0} and one standard deviation above the mean for X_{1}.**
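Applying these formulas is simple arithmetic once the coefficients are in hand. The sketch below follows the Valeri and VanderWeele (2013) formulas for a change from X_{0} to X_{1}; all coefficient values are invented:

```python
# Invented coefficients: M = iM + a*X and Y = iY + c'X + b*M + d*X*M.
iM, a = 0.5, 0.6
cp, b, d = 0.3, 0.4, 0.2

x0, x1 = 0.0, 1.0   # chosen "baseline" and "improvement" values of X

NDE = (cp + d * (iM + a * x0)) * (x1 - x0)   # natural direct effect
NIE = a * (b + d * x1) * (x1 - x0)           # natural indirect effect

print(NDE, NIE)   # the total effect is NDE + NIE
```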

**A PowerPoint presentation that summarizes much of this webpage.**

**Doing a mediation analysis and outputting a text description of the results using SPSS.**

**A description of an R mediation program by Tingley, Yamamoto, and Imai that is especially useful for non-normal variables.**

**A conference on mediation with links to talks.
**

**Doing a mediation analysis and outputting a text description of the results using R.**

Please suggest new links!

**Baron, R.
M., & Kenny, D. A. (1986). The moderator-mediator variable distinction
in social psychological research: Conceptual, strategic and statistical
considerations. Journal of Personality and Social Psychology, 51,
1173-1182. **

**Bauer, D.
J., Preacher, K. J., & Gil, K. M. (2006). Conceptualizing and testing
random indirect effects and moderated mediation in multilevel models: New
procedures and recommendations. Psychological Methods, 11, 142-163.**

**Bolger, N., & Laurenceau, J.-P. (2013). Intensive longitudinal methods: An introduction to diary and experience sampling research. New York: Guilford Press.**

**Bollen, K. A., & Stine, R. (1990). Direct and indirect effects: Classical and bootstrap estimates of variability. Sociological Methodology, 20, 115-140.**

**Brewer, M., Campbell, D. T., & Crano, W. (1970) Testing a single-factor model as an alternative to the misuse of partial correlations in hypothesis-testing. Sociometry, 33, 1-11.**

**Cohen, J. (1988). Statistical power
analysis for the behavioral sciences (rev. ed.).
**

**
Cole,
D. A., & Maxwell, S. E. (2003). Testing mediational models with
longitudinal data: Questions and tips in the use of structural equation
modeling. Journal of Abnormal Psychology,
112, 558-577.**

**Edwards,
J. R., & Lambert L. S. (2007). Methods for integrating moderation and
mediation: A general analytical framework using moderated path analysis. Psychological
Methods, 12, 1-22. **

**Frazier,
P. A., Tix, A. P., & Barron, K. E. (2004). Testing moderator and mediator
effects in counseling psychology research. Journal of Counseling Psychology,
51, 115-134. **

**Fritz, M. S., Kenny, D. A., & MacKinnon, D. P. (2017). The opposing effects of simultaneously ignoring measurement error and omitting confounders in a single-mediator model. Multivariate Behavioral Research, in press.**

**Fritz, M. S., & MacKinnon, D. P. (2007). Required sample size to detect the mediated effect. Psychological Science, 18, 233-239.**

**Fritz, M. S., Taylor, A. B., & MacKinnon, D. P. (2012). Explanation of two anomalous
results in statistical mediation analysis. Multivariate Behavioral Research, 47, 61-87.**

**
Hayes, A. F. (2013). Introduction to mediation, moderation, and conditional process analysis: A regression-based approach. New York: Guilford Press.**

**Hayes, A. F., & Scharkow, M. (2013). The relative trustworthiness of inferential tests of the indirect effect in statistical mediation analysis: Does method really matter? Psychological Science, 24, 1918-1927.**

**Hoyle, R.
H., & Kenny, D. A. (1999). Statistical power and tests of
mediation. In R. H. Hoyle (Ed.), Statistical strategies for small
sample research.
**

**Hyman, H.
H. (1955). Survey design and analysis.
New
York:
**

**Imai,
K., Keele, L., & Tingley, D. (2010). A general approach to causal mediation
analysis. Psychological Methods, 15,
309-334. **

**James, L.
R., & Brett, J. M. (1984). Mediators, moderators and tests for
mediation. Journal of Applied Psychology, 69, 307-321. **

**Jose, P. E. (2013). Doing statistical mediation and moderation. New York: Guilford Press.**

**Judd, C.
M., & Kenny, D. A. (1981). Process analysis: Estimating mediation in
treatment evaluations. Evaluation Review, 5, 602-619. **

**Judd, C. M., & Kenny, D. A. (2010). Data analysis.
In D. Gilbert, S. T. Fiske, & G. Lindzey
(Eds.), The handbook of social psychology (5th ed., Vol. 1, pp. 115-139),
**

**Judd, C. M., Kenny, D. A., & McClelland, G. H. (2001). Estimating and testing mediation and moderation in within-subject designs. Psychological Methods, 6, 115-134. **

**Kenny, D. A., & Judd, C. M. (2014). Power anomalies in testing mediation. Psychological Science,
25, 334-339.**

**Kenny, D. A., Kashy, D. A., & Bolger, N. (1998). Data
analysis in social psychology. In D. Gilbert, S. Fiske, & G. Lindzey
(Eds.), The handbook of social psychology (Vol. 1, 4th ed., pp.
233-265).
**

**Kenny, D.
A., Korchmaros, J. D., & Bolger, N. (2003). Lower level
mediation in multilevel models. Psychological Methods, 8, 115-128. **

**Kraemer H.
C., Wilson G. T., Fairburn C. G., & Agras W. S. (2002).
Mediators and moderators of treatment effects in randomized clinical trials. Archives
of General Psychiatry, 59, 877-883. **

**Krull, J.
L. & MacKinnon, D. P. (1999). Multilevel mediation modeling in
group-based intervention studies. Evaluation Review, 23, 418-444. **

**Ledermann, T.,
Macho, S., & Kenny, D. A. (2011). Assessing mediation in dyadic data using
the Actor-Partner Interdependence Model. Structural
Equation Modeling, 18, 595-612.**

**Macho, S., & Ledermann, T. (2011). Estimating,
testing, and comparing specific effects in structural equation models: The
phantom model approach. Psychological
Methods, 16, 34-43.**

**MacCorquodale,
K., & Meehl, P. E. (1948). On a distinction between hypothetical constructs
and intervening variables. Psychological Review, 55, 95-107. **

**MacKinnon, D. P. (2008). Introduction to statistical mediation analysis. New York: Erlbaum.**

**MacKinnon,
D. P., Fairchild, A. J., & Fritz, M. S. (2007). Mediation analysis. Annual
Review of Psychology, 58, 593-614. **

**MacKinnon, D. P., Lockwood, C. M., & Williams, J. (2004). Confidence limits for the indirect effect: Distribution of the product and resampling methods. Multivariate Behavioral Research, 39, 99-128.**

**MacKinnon,
D. P., Warsi, G., & Dwyer, J. H. (1995). A simulation study of
mediated effect measures. Multivariate Behavioral Research, 30, 41-62. **

**Muller,
D., Judd, C. M., & Yzerbyt, V. Y. (2005). When moderation is mediated and
mediation is moderated. Journal of Personality and Social Psychology, 89, 852-863.**

**Muthén, B. (2011). Applications of causally defined direct and indirect
effects in mediation analysis using SEM in Mplus. Download at www.statmodel.com/download/causalmediation.pdf.**

**Pearl, J. (2001). Direct and indirect effects. In Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence (pp. 411-420). San Francisco, CA: Morgan Kaufmann.**

**Pearl, J. (2011). The causal mediation formula -- A guide to the assessment of pathways and mechanisms. Prevention Science, 13, 426-436.**

**Pearl, J. (2014). Interpretation and identification of causal mediation. Psychological Methods, 19, 459-481.**

**Preacher, K. J., & Kelley, K. (2011). Effect size measures for mediation models: Quantitative strategies for communicating indirect effects. Psychological Methods, 16, 93-115. **

**Preacher, K. J., Zyphur, M. J., & Zhang, Z.
(2010). A general multilevel SEM framework for assessing multilevel mediation. Psychological
Methods, 15, 209-233. **

**Robins, J. M., & Greenland, S. (1992). Identifiability and exchangeability for direct and indirect effects. Epidemiology, 3, 143-155.**

**Shrout, P.
E., & Bolger, N. (2002). Mediation in experimental and
nonexperimental studies: New procedures and recommendations. Psychological
Methods, 7, 422-445. **

**Smith, E.
(1982). Beliefs, attributions, and evaluations: Nonhierarchical models of
mediation in social cognition. Journal of Personality and Social Psychology,
43, 248-259.**

**Sobel, M.
E. (1982). Asymptotic confidence intervals for indirect effects in
structural equation models. In S. Leinhardt (Ed.), Sociological Methodology
1982 (pp. 290-312).
**

**Valeri, L., & VanderWeele, T.J. (2013). Mediation analysis allowing for exposure-mediator interactions and causal interpretation: theoretical assumptions and implementation with SAS and SPSS macros. Psychological Methods, 18, 137-150**.

**VanderWeele, T.J. (2015). Explanation in causal inference: Methods for mediation and interaction. New York: Oxford University Press.**

**Wright, S. (1934). The method of path coefficients. Annals of Mathematical Statistics, 5, 161-215**.

*Go to the top of this page.*

*Go to the next SEM page.
*