Browsing by Author "de la Torre, Jimmy"
Showing 1 - 2 of 2
Publication: Comparing Traditional and IRT Scoring of Forced-Choice Tests (SAGE Publications, 2015-05-19)
Authors: Hontangas, Pedro M.; de la Torre, Jimmy; Ponsoda, Vicente; Leenen, Iwin; Abad, Francisco J.; Morillo Cuadrado, Daniel Vicente
Abstract: This article explores how traditional scores obtained from different forced-choice (FC) formats relate to their true scores and item response theory (IRT) estimates. Three FC formats are considered: within a block of items, respondents are asked to (a) pick the item that describes them most (PICK), (b) choose the items that describe them the most and the least (MOLE), or (c) rank all the items in order of how well they describe them (RANK). The multi-unidimensional pairwise-preference (MUPP) model, extended to more than two items per block and to the different FC formats, is applied to generate the responses to each item block. Traditional and IRT (i.e., expected a posteriori) scores are computed from each data set and compared. The aim is to clarify the conditions under which simpler traditional scoring procedures for FC formats may be used in place of the more appropriate IRT estimates for the purpose of inter-individual comparisons. Six independent variables are considered: response format, number of items per block, correlation between the dimensions, item discrimination level, and sign-heterogeneity and variability of item difficulty parameters. Results show that the RANK response format outperforms the other formats for both the IRT estimates and the traditional scores, although it is only slightly better than the MOLE format. The highest correlations between true and traditional scores are found when the test has a large number of blocks, the dimensions assessed are independent, the items have high discrimination and highly dispersed location parameters, and the test contains blocks formed by positive and negative items.

Publication: Traditional scores versus IRT estimates on forced-choice tests based on a dominance model (Colegio Oficial de Psicólogos del Principado de Asturias, 2016)
Authors: Hontangas, Pedro M.; Leenen, Iwin; de la Torre, Jimmy; Ponsoda, Vicente; Abad, Francisco J.; Morillo Cuadrado, Daniel Vicente
Abstract:
Background: Forced-choice tests (FCTs) were proposed to minimize the response biases associated with Likert-format items. It remains unclear whether scores based on traditional methods for scoring FCTs are appropriate for between-subjects comparisons. Recently, Hontangas et al. (2015) explored the extent to which traditional scoring of FCTs relates to true scores and IRT estimates. The authors identified conditions under which traditional scores (TS) can be used with FCTs when the underlying IRT model is an unfolding model. In this study, we examine to what extent those results are preserved when the underlying process is a dominance model.
Method: The independent variables analyzed in a simulation study are: forced-choice format, number of blocks, item discrimination, item polarity, variability of intra-block difficulty, range of difficulty, and correlation between dimensions.
Results: A similar pattern of results was observed for both models; however, the correlations between TS and true thetas are higher, and the differences between TS and IRT estimates smaller, when a dominance model is involved.
Conclusions: A dominance model produces a linear relationship between TS and true scores, and subjects with extreme thetas are better measured.
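The sketch below is a minimal, illustrative simulation of the kind of comparison both abstracts describe: forced-choice responses are generated under a toy dominance model and then scored with a traditional RANK-format rule, so the traditional scores can be correlated with the true latent traits. It is not the authors' MUPP or Thurstonian implementation, and it omits the IRT (expected a posteriori) estimation they compare against; the number of dimensions, block structure, item-parameter ranges, and scoring rule are all assumptions made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (assumed, not taken from the articles): 3 dimensions,
# one item per dimension in each forced-choice block.
n_persons, n_dims, n_blocks = 500, 3, 24

# True latent traits; independent dimensions for simplicity.
theta = rng.normal(size=(n_persons, n_dims))

# Dominance-model item parameters: discriminations a > 0, locations b.
a = rng.uniform(0.8, 2.0, size=(n_blocks, n_dims))
b = rng.normal(0.0, 1.0, size=(n_blocks, n_dims))

# Traditional RANK-format scoring: within each block, respondents order the
# items by how well they describe them; each dimension is awarded the
# within-block rank of its item, summed over blocks.
traditional = np.zeros((n_persons, n_dims))
for blk in range(n_blocks):
    # Latent "utility" of each item under a dominance model: higher theta on
    # the item's dimension means higher utility, plus random noise.
    util = a[blk] * (theta - b[blk]) + rng.gumbel(size=(n_persons, n_dims))
    ranks = util.argsort(axis=1).argsort(axis=1)  # 0 = least, n_dims-1 = most
    traditional += ranks

# Compare traditional scores with the true traits, dimension by dimension.
for d in range(n_dims):
    r = np.corrcoef(traditional[:, d], theta[:, d])[0, 1]
    print(f"dimension {d}: corr(traditional score, true theta) = {r:.3f}")
```

The printed correlations only give a rough sense of how well a simple rank-based traditional score recovers the true traits under these assumed conditions; the published studies manipulate the format, block length, discrimination, difficulty, and inter-dimension correlations systematically and benchmark against EAP estimates.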