Title: Model Selection and Model Averaging for Mixed-Effects Models with Crossed Random Effects for Subjects and Items
Authors: Olmos, Ricardo; Ferrer, Emilio; Martínez Huertas, José Ángel
Type: journal article
Date issued: 2021-02-26
Date available: 2024-05-20
ISSN: 0027-3171; eISSN: 1532-7906
DOI: https://doi.org/10.1080/00273171.2021.1889946
Handle: https://hdl.handle.net/20.500.14468/12613
Language: en
Access: info:eu-repo/semantics/openAccess
Keywords: mixed-effects models; crossed random effects; random slopes; model selection; model averaging; ML; REML

Abstract:
A good deal of experimental research is characterized by the presence of random effects on subjects and items. A standard modeling approach that includes such sources of variability is mixed-effects models (MEMs) with crossed random effects. However, under-parameterizing or over-parameterizing the random structure of MEMs biases the estimates of the standard errors (SEs) of fixed effects. In this simulation study, we examined two different but complementary perspectives: model selection with likelihood-ratio tests, AIC, and BIC; and model averaging with Akaike weights. Results showed that true model selection was constant across the different strategies examined (including ML and REML estimators). However, sample size and variance of random slopes were found to explain true model selection and SE bias of fixed effects. No relevant differences in SE bias were found between model selection and model averaging. Sample size and variance of random slopes interacted with the estimator to explain SE bias. Only the within-subjects effect showed significant underestimation of SEs with smaller numbers of items and larger item random slopes. SE bias was higher for ML than for REML, but the variability of SE bias showed the opposite pattern. Such variability translates into high rates of unacceptable bias across many replications.
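The abstract refers to model averaging with Akaike weights. As a minimal sketch (with hypothetical AIC values, not figures from the study), the standard Akaike-weight formula assigns each candidate model a weight proportional to exp(-Δi/2), where Δi is the model's AIC minus the smallest AIC among the candidates:

```python
import math

def akaike_weights(aics):
    """Compute Akaike weights from a list of AIC values.

    w_i = exp(-delta_i / 2) / sum_j exp(-delta_j / 2),
    where delta_i = AIC_i - min(AIC).
    """
    best = min(aics)
    rel = [math.exp(-0.5 * (a - best)) for a in aics]
    total = sum(rel)
    return [r / total for r in rel]

# Hypothetical AICs for three candidate random-effects structures
weights = akaike_weights([100.0, 102.0, 110.0])
print([round(w, 3) for w in weights])  # → [0.727, 0.268, 0.005]
```

Fixed-effect estimates (and their SEs) from the candidate models can then be combined using these weights, which is the model-averaging strategy the abstract contrasts with selecting a single best model.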