A functional use of response time data in cognitive assessment

Title: A functional use of response time data in cognitive assessment
Publication Type: Thesis
Year of Publication: 2012
Authors: Prindle, JJ
Advisor: McArdle, JJ
University: University of Southern California
City: Los Angeles, CA
Accession Number: 1027770963
Keywords: Health Conditions and Status, Healthcare, Methodology, Other

The stimulus for the current body of work comes from researchers' desire to include concise and accurate cognitive tasks in their surveys. The main purpose of this work was to introduce collateral information as a way of compensating for information lost or forgone when adaptive frameworks are adopted. The nature of ongoing surveys and the ubiquity of computers provide ample collateral, or nonintrusive, information that can help improve score accuracy. Information such as how long a respondent spends on certain items, their age, education, and other characteristics can improve score prediction beyond simple item responses. This work contributes methods that effectively decrease the number of items given to participants while keeping accuracy high despite the loss of information. In the current study, the Woodcock-Johnson III (WJ-III) Number Series (NS) task was presented with 30 previously unpublished items as stimuli. First, two scoring models were implemented to test for model fit and to compare the implications of the fit values. Then the methods outlined below systematically adjusted patterns of missingness to mimic reduced and adapted subsets. Once the smaller NS item sets were delineated, several methods of adding predictive accuracy were tested and compared. To score respondents, a traditional Item Response Theory (IRT) model as proposed by Rasch (1960) was first used to provide evidence for a unidimensional scale and to obtain baseline statistics for item difficulties and person abilities. The next model was a Conditionally Independent Response Time (CIRT) model, which includes a response model as well as a joint response time model for scoring. It was shown that with the full item set these two models provide identical ability estimates and item parameters. The response time model of the CIRT framework provides ability scores and speededness scores based on response time patterns.
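The Rasch scoring step described above can be sketched in a few lines. This is an illustrative sketch only, not the thesis's estimation code: the item difficulties, the Newton-Raphson routine, and the function names are assumptions for demonstration.

```python
import math

def rasch_prob(theta, b):
    """Probability of a correct response under the Rasch (1960) model,
    given person ability theta and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def estimate_ability(responses, difficulties, iters=25):
    """Maximum-likelihood ability estimate via Newton-Raphson, given
    scored responses (0/1) and known item difficulties. Assumes a
    mixed response pattern (an all-correct or all-incorrect pattern
    has no finite maximum-likelihood estimate)."""
    theta = 0.0
    for _ in range(iters):
        probs = [rasch_prob(theta, b) for b in difficulties]
        grad = sum(x - p for x, p in zip(responses, probs))   # score function
        hess = -sum(p * (1.0 - p) for p in probs)             # observed information (negated)
        theta -= grad / hess                                  # Newton step
    return theta
```

A respondent who answers more items correctly on the same item set receives a higher ability estimate, which is the baseline behavior the CIRT model was shown to reproduce with the full item set.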
Next, focus was placed on effectively decreasing the number of items used to score each respondent. Methods included item reduction, test forms in which the same item sets were used to score every respondent, and adaptive tests, in which each respondent could receive a different item set. Reduced item sets fared better when item difficulties more closely matched sample ability levels (r = 0.72-0.90). The adaptive item sets (half-adaptive, block-adaptive, and fully adaptive) were more consistent in measuring ability, with accuracy best for the fully adaptive method (r = 0.79-0.91). The last steps of the analysis introduced response times and demographic variables as additional predictors of the 30-item scores. Item responses, response times, and response-by-response-time interactions provided small improvements in explained variance when used as predictors (1-8%). When CIRT ability and speededness scores were used as predictors, speededness provided limited improvements (<1%) to prediction. Adding age, education, and gender to the response models improved explained variance to a moderate degree (1-5%). In conclusion, we note that the sample had a higher than average ability level for the NS task, which should color the interpretation of our findings for the methods outlined. Item sets that matched respondent abilities less well were improved more by response time and demographic data. If one can correctly identify the ability range of a sample before administration, then a more focused reduced item set would be advantageous. Adaptive item sets seem advantageous in more general testing situations where ability levels are more variable. The advantage of using collateral information to predict cognitive scores is the time saved by omitting items, potentially lowering costs and allowing researchers to move on to more tasks if desired.
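The fully adaptive approach described above selects each respondent's next item based on their current ability estimate. A minimal sketch of that selection rule follows; it is an assumed illustration using the standard criterion for the Rasch model (the item whose difficulty is nearest the provisional ability estimate carries maximum Fisher information), not code from the thesis.

```python
def select_next_item(theta, difficulties, administered):
    """Fully adaptive item selection sketch: among items not yet
    administered, choose the one whose Rasch difficulty is closest to
    the current ability estimate theta. Under the Rasch model this is
    the maximum-information item."""
    candidates = [i for i in range(len(difficulties)) if i not in administered]
    return min(candidates, key=lambda i: abs(difficulties[i] - theta))
```

Because each respondent's provisional ability drives the selection, two respondents can receive entirely different item sets, which is what distinguishes the adaptive designs from the fixed reduced forms.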
While the improvement due to response time in these methods was limited for NS, the approach provides a good foundation for other cognitive tasks administered in computer-assisted designs.

Endnote Keywords: Cognitive psychology

Citation Key: 6059