**BACKGROUND AND OBJECTIVE:** Most methods for developing clinical prognostic models focus on identifying parsimonious and accurate models to predict a single outcome; however, patients and providers often want to predict multiple outcomes simultaneously. As an example, for older adults one is often interested in predicting nursing home admission as well as mortality. We propose and evaluate a novel predictor-selection computing method for multiple outcomes and provide the code for its implementation.

**METHODS:** Our proposed algorithm selected the best subset of common predictors based on the minimum average normalized Bayesian Information Criterion (BIC) across outcomes: the Best Average BIC (baBIC) method. We compared the predictive accuracy (Harrell's C-statistic) and parsimony (number of predictors) of the model obtained using the baBIC method with: 1) a subset of common predictors obtained from the union of optimal models for each outcome (Union method), 2) a subset obtained from the intersection of optimal models for each outcome (Intersection method), and 3) a model with no variable selection (Full method). We used case-study data from the Health and Retirement Study (HRS) to demonstrate our method and conducted a simulation study to investigate performance.
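The core of the baBIC selection step can be sketched as follows. This is a minimal illustration, not the authors' implementation: the predictor names, BIC values, and the min-max normalization of each outcome's BIC to [0, 1] are all assumptions made for the example; the paper defines the exact normalization used.

```python
from itertools import combinations

# Hypothetical BIC values for two outcomes over all nonempty subsets of
# three candidate predictors (numbers are made up for illustration).
BIC = {
    "mortality": {
        ("age",): 520.0, ("frailty",): 540.0, ("income",): 560.0,
        ("age", "frailty"): 505.0, ("age", "income"): 518.0,
        ("frailty", "income"): 538.0, ("age", "frailty", "income"): 507.0,
    },
    "nursing_home": {
        ("age",): 430.0, ("frailty",): 415.0, ("income",): 450.0,
        ("age", "frailty"): 408.0, ("age", "income"): 428.0,
        ("frailty", "income"): 413.0, ("age", "frailty", "income"): 410.0,
    },
}

def normalized_bic(outcome, subset):
    """Scale an outcome's BIC to [0, 1] between its best and worst subset
    (an assumed normalization so outcomes are comparable when averaged)."""
    values = BIC[outcome].values()
    lo, hi = min(values), max(values)
    return (BIC[outcome][subset] - lo) / (hi - lo)

def babic(predictors, outcomes):
    """Return the predictor subset minimizing the average normalized BIC
    across all outcomes (the baBIC selection rule)."""
    subsets = [c for r in range(1, len(predictors) + 1)
               for c in combinations(predictors, r)]
    return min(subsets,
               key=lambda s: sum(normalized_bic(o, s) for o in outcomes)
                             / len(outcomes))

best = babic(("age", "frailty", "income"), ("mortality", "nursing_home"))
print(best)  # the subset with the lowest average normalized BIC
```

With these toy numbers, `("age", "frailty")` attains the best BIC for both outcomes, so it is selected; in practice the BIC values would come from fitting each candidate model (e.g., a Cox model per outcome), and exhaustive enumeration would be replaced by a search over candidate subsets.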

**RESULTS:** In the case-study data and simulations, the average Harrell's C-statistics across outcomes of the models obtained with the baBIC and Union methods were comparable. Despite the similar discrimination, the baBIC method produced more parsimonious models than the Union method. In contrast, the models selected with the Intersection method were the most parsimonious but had the worst predictive accuracy; the opposite was true of the Full method. In the simulations, the baBIC method performed well, identifying many of the predictors selected in the baBIC model of the case-study data most of the time and excluding those not selected in the majority of the simulations.

**CONCLUSIONS:** Our method identified a common subset of variables to predict multiple clinical outcomes with a better balance between parsimony and predictive accuracy than current methods.

**BACKGROUND:** Guidelines recommend that clinicians use clinical prediction models to estimate future risk to guide decisions. For example, predicted fracture risk is a major factor in the decision to initiate bisphosphonate medications. However, current methods for developing prediction models often lead to models that are accurate but difficult to use in clinical settings.

**OBJECTIVE:** The objective of this study was to develop a new metric that explicitly balances model accuracy with clinical usability and to test whether it leads to accurate, easier-to-use prediction models.

**METHODS:** We propose a new metric called the Time-cost Information Criterion (TCIC), which penalizes potential predictor variables that take a long time to obtain in clinical settings. To demonstrate how the TCIC can be used to develop models that are easier to use in clinical settings, we use data from the 2000 wave of the Health and Retirement Study (n=6311) to develop and compare time-to-mortality prediction models using a traditional metric (Bayesian Information Criterion, or BIC) and the TCIC.
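The idea behind the TCIC can be sketched as a time-cost penalty added to the BIC. Note the additive form, the unit weight, and all numeric values below are assumptions for illustration only; the paper specifies the actual form of the penalty.

```python
def tcic(bic, time_cost, weight=1.0):
    """Time-cost Information Criterion: model BIC plus a scaled penalty
    for the seconds needed to collect its predictors in clinic.
    The additive form and the weight are illustrative assumptions."""
    return bic + weight * time_cost

# Two hypothetical models with similar fit but different time-costs
# (BIC values are made up; time-costs echo the scale reported above).
model_fast = {"bic": 9120.0, "time_cost": 44}   # e.g., self-reported items
model_slow = {"bic": 9105.0, "time_cost": 119}  # e.g., performance measures

# Under BIC alone the slower model wins (9105 < 9120), but once the
# time-cost penalty is added the faster model is preferred.
print(tcic(**model_fast) < tcic(**model_slow))  # True
```

In a full selection procedure, this criterion would replace the BIC inside whatever search is used (e.g., best-subset or stepwise selection), steering the search toward predictors that are quick to obtain at the bedside.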

**RESULTS:** We found that the TCIC models used predictors that could be obtained more quickly than those of the BIC models while achieving similar discrimination. For example, the TCIC identified a 7-predictor model with a total time-cost of 44 seconds, while the BIC identified a 7-predictor model with a time-cost of 119 seconds. The Harrell's C-statistics of the TCIC and BIC 7-predictor models did not differ (0.7065 vs. 0.7088, P=0.11).

**CONCLUSION:** Accounting for the time-costs of potential predictor variables through the use of the TCIC led to the development of an easier-to-use mortality prediction model with similar discrimination.