Conflated random slopes in multilevel analysis
Thesis   Open access

Bladimir Padilla
University of Iowa
Master of Arts (MA), University of Iowa
Autumn 2025
DOI: 10.25820/etd.008200
PDF (1.59 MB)

Abstract

Multilevel analysis is widely used for hierarchical data (e.g., individuals within clusters), allowing researchers to examine how individual- and cluster-level (i.e., level 1 and level 2) attributes relate to individual outcomes. A well-known issue concerns conflated fixed slopes, which arise when a level 1 predictor with systematic variance across clusters is modeled with a single fixed effect at the individual level, conflating its within- and between-cluster slopes. Less recognized, however, is the problem of conflated random slopes, which occurs when the sources of heterogeneity introduced by the within- and between-cluster components of a level 1 predictor are not separated (i.e., the fixed effects of the predictor are separated across levels, but their random effects are not). A conflated random slope is a blend of slope heterogeneity and intercept heteroscedasticity, producing misleading conclusions about between-cluster differences. Proper parameterization of the random slopes is necessary to avoid this issue, using either cluster-mean-centering (CMC) or constant-centering (CC). Despite its importance, the practical consequences of conflated random slopes are rarely discussed in psychology and educational research. This study addresses that gap using a Monte Carlo simulation comparing three models that estimate unconflated random slopes and one widely used model that estimates conflated random slopes. The models differed in their parameterization of the random slope for a level 1 predictor: The within-only model had a random slope for the CMC level 1 predictor only; the within–between model had random slopes for both the CMC level 1 predictor and the CC level 2 predictor; the within–contextual model had random slopes for the CC level 1 predictor and the CC level 2 predictor; and the random-conflated model had a random slope for the CC level 1 predictor only.
The simulation factors were level 1 sample size (5 and 30), level 2 sample size (25 and 50), random slope reliability (0.20 and 0.60), and the ratio of intercept heteroscedasticity to slope heterogeneity (0.5, 1.0, 1.5). After examining model convergence, model performance was assessed in terms of bias in fixed and random effects, power to detect random slopes, and accuracy of standard errors. Results showed clear trade-offs. The random-conflated model had very high convergence (> .97) even in small samples, but it consistently underestimated level 1 slope variance when within- and between-cluster variances differed, yielding downwardly biased variance estimates. The within-only model correctly specified the random slope but imposed homogeneous intercept variance across cluster means, leading to slightly biased SEs for level 2 fixed effects. By contrast, the within–between and within–contextual models—theoretically correct specifications—had substantially lower convergence rates (.38–.73) in small samples but generally outperformed the simpler alternatives in parameter estimation and SE accuracy with larger samples. Regarding power, all four models showed high power to detect non-zero level 1 random slope variance. However, power to detect level 2 random slope variance in the within–between and within–contextual models was modest (.39 and .58, respectively), especially when sample sizes at level 1 and level 2 were small. This highlights the tension in practice between model complexity and the information the data can supply. Overall, our results show that estimating conflated random slopes distorts both fixed and random effect estimates, threatening valid inferences about between-cluster differences. When sample sizes permit, researchers may specify the within–between model, particularly if intercept heteroscedasticity matters.
When data are limited or intercept heteroscedasticity is not of interest, the within-only model is a reasonable alternative to avoid random conflation, though with the trade-off of slightly biased SEs for level 2 fixed effects.
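The two centering schemes the abstract contrasts can be illustrated with a minimal sketch (hypothetical toy data; variable names are for illustration only). CMC decomposes a level 1 predictor into within-cluster deviations and cluster means, whereas CC subtracts a single constant (here the grand mean) and leaves the within- and between-cluster components blended in one variable:

```python
import numpy as np
import pandas as pd

# Toy clustered data: 4 clusters with 5 observations each (hypothetical).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "cluster": np.repeat(np.arange(4), 5),
    "x": rng.normal(size=20),
})

# Cluster means of the level 1 predictor: the between-cluster (level 2) component.
df["x_mean"] = df.groupby("cluster")["x"].transform("mean")

# Cluster-mean-centering (CMC): within-cluster deviations only.
df["x_cmc"] = df["x"] - df["x_mean"]

# Constant-centering (CC): subtracting the grand mean shifts the scale but
# keeps within- and between-cluster variation mixed in a single variable.
df["x_cc"] = df["x"] - df["x"].mean()

# CMC deviations sum to zero within each cluster by construction.
assert np.allclose(df.groupby("cluster")["x_cmc"].sum(), 0.0)
```

In this decomposition, a random slope on `x_cmc` captures only within-cluster slope heterogeneity, while a random slope on `x_cc` mixes that heterogeneity with intercept heteroscedasticity across cluster means — the conflation the thesis examines.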
Keywords: Quantitative Psychology; Centering Techniques; Clustered Data; Conflated Effects; Monte Carlo Simulation; Multilevel Modeling; Variance Components
