
Quantitative Psychology

Quantitative Faculty

Sarah Depaoli
Keke Lai
Jack Vevea
Haiyan Liu 
Ren Liu 
Emeritus and Founding Faculty: William Shadish

Quantitative Program

Quantitative psychologists create the methods used to gather data and the statistics used to analyze them. 

Quantitative psychology is central to all aspects of psychology: science, education, public interest and practice. This central role is reflected in the fact that Division 5 (Evaluation, Measurement, and Statistics) is one of the Charter Divisions of the APA.

Quantitative psychology includes research and development in a number of broad areas: measurement, research design and statistical analysis (see Aiken, West, Sechrest & Reno, 1990), as well as mathematical and statistical modeling of psychological processes.

Within each area, quantitative psychologists develop new methodologies and evaluate existing methodologies to examine their behavior under conditions that exist in psychological data (e.g., with small samples). This work supports the substantive research of all areas within psychology.
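Methods-evaluation work of this kind typically proceeds by Monte Carlo simulation: generate many samples under conditions that mimic real psychological data, apply a standard procedure, and record how it behaves. The sketch below is a minimal, hypothetical illustration (not any faculty member's method; the exponential population, the sample size of 10, and the replication count are arbitrary choices) that checks how often a nominal 95% t-interval for a mean actually covers the true mean when small samples come from a skewed population:

```python
import math
import random
import statistics

def t_interval_coverage(n=10, reps=5000, seed=1):
    """Monte Carlo check: empirical coverage of a nominal 95% t-interval
    for the mean when small samples are drawn from a skewed
    (exponential) population."""
    rng = random.Random(seed)
    t_crit = 2.262    # t(0.975, df = 9), the critical value for n = 10
    true_mean = 1.0   # mean of an Exponential(rate = 1) population
    hits = 0
    for _ in range(reps):
        sample = [rng.expovariate(1.0) for _ in range(n)]
        m = statistics.mean(sample)
        se = statistics.stdev(sample) / math.sqrt(n)
        # Does the interval m +/- t * se contain the true mean?
        if m - t_crit * se <= true_mean <= m + t_crit * se:
            hits += 1
    return hits / reps

print(t_interval_coverage())
```

With skewed data and small n, the empirical coverage tends to fall below the nominal 95%; exposing and quantifying gaps like this is precisely what such evaluations are designed to do.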

At UC Merced, Psychological Sciences faculty members with interests in quantitative psychology have strengths in a wide array of topics, including Bayesian statistics, experimental and quasi-experimental design, meta-analysis, propensity score analysis, psychometric theory, structural equation modeling, hierarchical linear modeling, item response theory, longitudinal statistical modeling, sample size planning and statistics that are robust to violations of assumptions.

Courses in these and related areas are available. Faculty interests range from applied statistics to basic mathematical statistics.


In addition to the core coursework, students interested in quantitative psychology are encouraged to take the following courses:

  • PSY 202c: Multivariate Statistics
  • PSY 203: Multilevel Modeling
  • PSY 205: Measurement Theory and Psychometrics
  • PSY 206: Quantitative Methods for Reviewing Research (Research Synthesis/Meta-Analysis)
  • PSY 207: Structural Equation Modeling
  • PSY 209: Longitudinal Data Analysis and Bayesian Extensions
  • PSY 210: Item Response Theory
  • PSY 212: Essential Math for Quantitative Psychologists
  • PSY 213: Mathematical Toolbox for Quantitative Psychology
  • PSY 290: Statistical Computing
  • PSY 290: Advanced Meta-Analysis
  • PSY 290: Bayesian Statistics

Additional specialized courses will be offered within this area. Students should work with their faculty mentors to select the courses that best support their research. This may include courses in other specialties within Psychological Sciences, courses offered by other programs, and quantitative methods courses offered by other UC campuses.

Students who are interested in quantitative psychology can also take substantive courses in another area of psychology (e.g., developmental, health). This serves two purposes. First, it ensures a baseline level of engagement with the field of psychology, commensurate with earning a doctorate in psychology. Second, it can increase the marketability of quantitative psychologists by demonstrating their ability to communicate with colleagues in substantive areas such as developmental psychology or health psychology.

Selected Publications for the Quantitative Group (2015-present)

Bold font indicates a Quantitative Psychology faculty member.
Underlined names indicate current or former PhD students at UC Merced.

  1. Buzhardt, J., Greenwood, C., Hackworth, N. J., Jia, F., Shannon, B. K., Walker, D., & Matthews, J. M. (2019). Cross-cultural exploration of growth in expressive communication of English-speaking infants and toddlers. Early Childhood Research Quarterly. doi: 10.1016/j.ecresq.2019.04.002

  2. Buzhardt, J., Greenwood, C., Walker, D., Jia, F., Higgins S., Montagna, D., & Muehe, C. (2018). Web-based support for data-based decision making: Effect of intervention implementation on children’s communication. Journal of Early Intervention. 40(3), 246-267. doi:10.1177/1053815118788059

  3. Chen, P., Wu, W., Garnier-Villarreal, M., Kite, B. A., & Jia, F. (2019). Testing measurement invariance with ordinal missing data: a comparison of commonly used methods. Multivariate Behavioral Research. doi: 10.1080/00273171.2019.1608799

  4. Cheng, Y., & Liu, H. (2016). A short note on the maximal point-biserial correlation under non-normality. British Journal of Mathematical and Statistical Psychology, 69(3), 344-351. doi: 10.1111/bmsp.12075

  5. Citkowicz, M., & Vevea, J. L. (2017). A parsimonious weight function for modeling publication bias. Psychological Methods, 22, 28-41.

  6. Cobb, P., & Shadish, W. (2015). Abstract: Assessing trend in single case designs using generalized additive models. Multivariate Behavioral Research, 50, 131.

  7. Coburn, K. M., & Vevea, J. L. (2015). Publication bias as a function of study characteristics. Psychological Methods, 20, 310-330.

  8. da Silva, M. A., Liu, R., Huggins-Manley, A. C., & Bazán, J. L. (2018). Incorporating the Q-matrix into multidimensional item response theory models. Educational and Psychological Measurement. doi: 10.1177/0013164418814898

  9. Depaoli, S., Agtarap, S., Choi, A. Y., Coburn, K. M., & Yu, J. (2018). Advances in quantitative research within the psychological sciences. Translational Issues in Psychological Science, 4, 335-339. Special issue on Quantitative Methods in Psychology.

  10. Depaoli, S., & Clifton, J. P. (2015). A Bayesian approach to multilevel structural equation modeling with continuous and dichotomous outcomes. Structural Equation Modeling, 22, 327-351.

  11. Depaoli, S., Clifton, J. P., & Cobb, P. R. (2016). Just Another Gibbs Sampler (JAGS): Flexible software for MCMC implementation. Journal of Educational and Behavioral Statistics, 41, 628-649.

  12. Depaoli, S., & Liu, Y. (2018). Review: Bayesian Psychometric Modeling. Psychometrika, 83, 511-514.

  13. Depaoli, S., Rus, H., Clifton, J., van de Schoot, R., & Tiemensma, J. (2017). An introduction to Bayesian statistics in Health Psychology. Health Psychology Review, 11, 248-264.

  14. Depaoli, S., Tiemensma, J., & Felt, J. (2018). Assessment of health surveys: Fitting a multidimensional graded response model. Psychology, Health, and Medicine (Methodology special issue), 23, 13-31.

  15. Depaoli, S., & van de Schoot, R. (2017). Improving transparency and replication in Bayesian statistics: The WAMBS-checklist. Psychological Methods, 22, 240-261.

  16. Depaoli, S., van de Schoot, R., van Loey, N., & Sijbrandij, M. (2015). Using Bayesian statistics for modeling PTSD through latent growth mixture modeling: Implementation and discussion. European Journal of Psychotraumatology, 6, 27516.

  17. Depaoli, S., Winter, S. D., Lai, K., & Guerra-Peña, K. (in press). Implementing continuous non-normal skewed distributions in latent growth mixture modeling: An assessment of specification errors and class enumeration. Multivariate Behavioral Research.

  18. Depaoli, S., Yang, Y., & Felt, J. (2017). Using Bayesian statistics to model uncertainty in mixture models: A sensitivity analysis of priors. Structural Equation Modeling, 24, 198-215.

  19. DePue, M. K., Lambie G. W., Liu, R., & Gonzalez, J. (2016). Investigating supervisory relationships and therapeutic alliances using structural equation modeling. Counselor Education and Supervision, 55(4), 263-277. doi: 10.1002/ceas.12053

  20. Epperson, A., Depaoli, S., Song, A. V., Wallander, J. L., Elliott, M., Cuccaro, P., Tortolero, S., & Schuster, M. (2017). Perceived physical appearance: Assessing measurement equivalence in Black, Latino, and White adolescents. Journal of Pediatric Psychology, 42, 142-152.

  21. Epperson, A., Wallander, J., Song., A. V., Depaoli, S., Peskin, M. F., Elliot, M. N., & Schuster, M. A. (in press). Gender and racial/ethnic differences in adolescent intentions and willingness to smoke cigarettes: Evaluation of a structural equation model. Journal of Health Psychology.

  22. Felt, J. M., Castaneda, R., Tiemensma, J., & Depaoli, S. (2017). Identifying "atypical" responses in the CushingQoL questionnaire: Using person fit statistics to detect outliers. Frontiers in Psychology, 8, 1-9.

  23. Felt, J. M., Depaoli, S., Andela, C. D., Pereira, A. M., Biermasz, N. R., Kaptein, A. A., & Tiemensma, J. (2016). Using the Common Sense Model of illness perceptions to better understand the impaired quality of life of patients treated for neuroendocrine diseases. In Neuroendocrinology (pp. SAT-502). Endocrine Society.

  24. Felt, J. M., Depaoli, S., Pereira, A. M., Biermasz, N. R., & Tiemensma, J. (2015). Total score or subscales in scoring the Acromegaly Quality of Life Questionnaire: Using novel confirmatory methods to compare scoring options. European Journal of Endocrinology, 173, 37-42.

  25. Felt, J. M., Depaoli, S., Pereira, A. M., Biermasz, N. R., & Tiemensma, J. (2015). Using novel confirmatory statistical methods to compare scoring options of the Acromegaly Quality of Life (AcroQoL) Questionnaire. In Acromegaly (pp. PP09-3). Endocrine Society.

  26. Felt, J., Depaoli, S., & Tiemensma, J. (2017). An overview of latent growth curve models for biological markers of stress. Frontiers in Neuroscience, 11, 1-17.

  27. Gao, M., Miller, D., & Liu, R. (2017). The impact of Q-matrix misspecification and model misuse on classification accuracy in the generalized DINA model. Journal of Measurement and Evaluation in Education and Psychology, 8(4), 391-403. doi:10.21031/epod.332712

  28. Greenwood, C., Irvin, D., Walker, D., Buzhardt, J., & Jia, F. (2019). Criterion validity of the Early Communication Indicator (ECI) for infants and toddlers. Assessment for Effective Intervention. doi:10.1177/1534508418824154

  29. Greenwood, C., Walker, D., Buzhardt, J., Irvin, D., & Jia, F. (2018). Update on the Early Movement Indicator (EMI) for infants and toddlers. Topics in Early Childhood Special Education, 38(2), 105-117. doi:10.1177/0271121418777290

  30. Hansford, T. G., Depaoli, S., & Canelo, K. S. (in press). Locating U.S. Solicitors General in the Supreme Court’s policy space. Presidential Studies Quarterly.

  31. Jia, F. & Wu, W. (2019). Evaluating methods for handling missing ordinal data in structural equation modeling. Behavior Research Methods. doi: 10.3758/s13428-018-1187-4

  32. Kelley, K., & Lai, K. (2018). Sample size planning for confirmatory factor models: Power and accuracy for effects of interest. In P. Irwing, T. Booth, & D. Hughes (Eds.), The Wiley handbook of psychometric testing (pp. 113-138). London: Wiley.

  33. Konijn, E. A., van de Schoot, R., Winter, S. D., & Ferguson, C. J. (2015). Possible solution to publication bias through Bayesian statistics, including proper null hypothesis testing. Communication Methods and Measures, 9, 280-302.

  34. Lai, K. (in press). Better confidence intervals for RMSEA in growth models given nonnormal data. Structural Equation Modeling.

  35. Lai, K. (2019). A simple analytic confidence interval for CFI given nonnormal data. Structural Equation Modeling. Advance online publication. doi: 10.1080/10705511.2018.1562351

  36. Lai, K. (2019) Confidence interval for RMSEA or CFI difference between nonnested models. Structural Equation Modeling. Advance online publication. doi: 10.1080/10705511.2019.1631704

  37. Lai, K. (2019). Creating misspecified models in moment structure analysis. Psychometrika, 84, 781-801. doi: 10.1007/s11336-018-09655-0

  38. Lai, K. (2019). More robust standard error and confidence interval for SEM parameters given incorrect model and nonnormal data. Structural Equation Modeling, 26, 260-279. doi: 10.1080/10705511.2018.1505522

  39. Lai, K. (2018). Estimating standardized SEM parameters given nonnormal data and incorrect model: Methods and comparison. Structural Equation Modeling, 25, 600-620.

  40. Lai, K., & Green, S. B. (2016). The problem of having two watches: Assessment of model fit when RMSEA and CFI disagree. Multivariate Behavioral Research, 51, 220-239.

  41. Lai, K., Green, S. B., & Levy, R. (2017). Graphical displays for understanding SEM model similarity. Structural Equation Modeling, 24, 803-818. doi: 10.1080/10705511.2017.1334206

  42. Lai, K., Green, S. B., Levy, R., Xu, Y., Yel, N., Thompson, M. S., Eggum-Wilkens, N. D., Kunze, K., Iida, M., Reichenberg, R., & Zhang, L. (2016). Assessing model similarity in structural equation modeling. Structural Equation Modeling, 23, 491-506.

  43. Lai, K., & Zhang, X. (2017). Standardized parameters in misspecified structural equation models: Empirical performance in point estimates, standard errors, and confidence intervals. Structural Equation Modeling, 24, 571-584.

  44. Liu, H. (2015). Review of The BUGS Book: A Practical Introduction to Bayesian Analysis, by David Lunn, Christopher Jackson, Nicky Best, Andrew Thomas, and David Spiegelhalter. Structural Equation Modeling: A Multidisciplinary Journal, 22 (2), 323-325. doi: 10.1080/10705511.2014.958046

  45. Liu, H., Jin, I. H., & Zhang, Z. (2018). Structural Equation Modeling of Social Networks: Specification, Estimation, and Application. Multivariate Behavioral Research. doi: 10.1080/00273171.2018.1479629

  46. Liu, H., & Zhang, Z. (2017). Logistic regression with misclassification in binary outcome variables: a method and software. Behaviormetrika, 44(2), 447-476. doi: 10.1007/s41237-017-0031-y

  47. Liu, H., Zhang, Z, & Grimm, K. J. (2016). Comparison of Inverse Wishart and Separation-Strategy Priors for Bayesian Estimation of Covariance Parameter Matrix in Growth Curve Analysis. Structural Equation Modeling: A Multidisciplinary Journal, 23 (3), 354-367. doi: 10.1080/10705511.2015.1057285

  48. Liu, R., & Jiang, Z. (2019). A general diagnostic classification model for rating scales. Behavior Research Methods. doi: 10.3758/s13428-019-01239-9

  49. Liu, R. (2018). Misspecification of attribute structure in diagnostic measurement. Educational and Psychological Measurement, 78(4), 605-634. doi: 10.1177/0013164417702458

  50. Liu, R., & Huggins-Manley, A. C. (2016). The specification of attribute structures and its effects on classification accuracy in diagnostic test design. In L. A. van der Ark, D. M. Bolt, W. -C. Wang, J. A. Douglas, & M. Wiberg (Eds.), Quantitative psychology research (pp. 243-254). New York, NY: Springer. doi: 10.1007/978-3-319-38759-8_18

  51. Liu, R., & Huggins-Manley, A. C. (2016). Review of uneducated guesses: using evidence to uncover misguided education policies. Psychometrika, 81(1), 246-248. doi: 10.1007/s11336-015-9490-9

  52. Liu, R., Huggins-Manley, A. C., & Bradshaw, L. (2017). The impact of Q-matrix designs on diagnostic classification accuracy in the presence of attribute hierarchies. Educational and Psychological Measurement, 77(2), 220–240. doi: 10.1177/0013164416645636

  53. Liu, R., Huggins-Manley, A. C., & Bulut, O. (2018). Retrofitting diagnostic classification models to responses from IRT-based assessment forms. Educational and Psychological Measurement, 78(3), 357-383. doi: 10.1177/0013164416685599

  54. Liu, R., & Jiang, Z. (2018). Diagnostic classification models for ordinal item responses. Frontiers in Psychology - Quantitative Psychology and Measurement.

  55. Liu, R., Qian, H., Luo, X., & Woo, A. (2018). Relative diagnostic profile: a subscore reporting framework. Educational and Psychological Measurement,78(6), 1072–1088. doi: 10.1177/0013164417740170

  56. Merluzzi, T. V., Philip, E. J., Heitzmann, C. A., Liu, H., Yang, M., & Conley, C. (2018). Self-Efficacy for Coping with Cancer: Revision of the Cancer Behavior Inventory (Version 3.0). Psychological Assessment, 30(4): 486-499. doi: 10.1037/pas0000483

  57. Montoya, M., Horton, R., Vevea, J. L., & Citkowicz, M. (2017). A re-examination of the mere exposure effect: The influence of repeated exposure on recognition, familiarity, and liking. Psychological Bulletin, 143, 459-498.

  58. Moore, T. M., Reise, S. P., Depaoli, S., & Haviland, M. G. (2015). Iteration of partially specified target matrices in exploratory and Bayesian confirmatory factor analysis. Multivariate Behavioral Research, 50, 149-161.

  59. Pustejovsky, J., Hedges, L. V., & Shadish, W. R. (2015). Design-comparable effect sizes in multiple baseline designs: A general modeling framework. Quality Control and Applied Statistics, 60, 367-370.

  60. Rhemtulla, M., Jia, F., Wu, W., & Little, T. D. (2014). Planned missing designs to optimize the efficiency of latent growth parameter estimates. International Journal of Behavioral Development, 38(5), 423-434. doi: 10.1177/0165025413514324

  61. Richardson, E., DePue, M. K., Therriault, D., Alli, S., & Liu, R. (2019). The influence of substance use patterns on non-suicidal self-injury (NSI) in adults. Substance Use & Misuse.

  62. Scott, S., Wallander, J., Depaoli, S., Grunbaum, J., Tortolero, S. R., Cuccaro, P. M., Elliott, M. N., & Schuster, M. A. (2015). Gender role orientation and health-related quality of life among African American, Hispanic, and White youth. Quality of Life Research, 24, 2139-2149.

  63. Shadish, W. R. (2015). Introduction to the special issue on the origins of modern meta-analysis. Research Synthesis Methods, 6, 219-220.

  64. Shadish, W. R. & Lecy, J. D. (2015). The meta-analytic big bang. Research Synthesis Methods, 6, 246-264.

  65. Shamseer, L., Sampson, M., Bukutu, C., Nikles, J., Tate, R., Johnson, B. C., Zucker, D. R., Shadish, W., Kravitz, R., Guyatt, G., Altman, D. G., Moher, D., Vohra, S., & the CENT Group. (2015). CONSORT extension for N-of-1 Trials (CENT) 2015: Explanation and elaboration. British Medical Journal, 350, h1783.

  66. Shadish, W. R., Zelinsky, N. A. M., Vevea, J. L., & Kratochwill, T. R. (2016). A survey of publication preferences of single-case design researchers when treatments have small or large effects. Journal of Applied Behavior Analysis.

  67. Smid, S., Depaoli, S., & van de Schoot, R. (in press). Predicting a distal outcome variable from a latent growth model: ML versus Bayesian estimation. Structural Equation Modeling: A Multidisciplinary Journal.

  68. Sullivan, K. J., Shadish, W. R., & Steiner, P. M. (2015). An introduction to modeling longitudinal data with generalized additive models: Applications to single-case designs. Psychological Methods, 20, 26-42.

  69. Swank, J. M., Limberg, D., & Liu, R. (in press). Development of the altruism scale for children: an assessment of caring behaviors among children. Measurement and Evaluation in Counseling and Development.

  70. Tiemensma, J., Depaoli, S., & Felt, J. M. (2016). Using subscales when scoring the Cushing's Quality of Life Questionnaire. European Journal of Endocrinology, 174, 33-40.

  71. Tiemensma, J., Depaoli, S., Winter, S. D., Felt, J. M., Rus, H., & Arroyo, A. (2018). The Performance of the IES-R for Latinos and non-Latinos: Assessing Measurement Invariance. PLOS ONE.

  72. van de Schoot, R., Sijbrandij, M., Depaoli, S., Winter, S., Olff, M., & van Loey, N. (2018). Bayesian PTSD-trajectory analysis with informed priors based on a systematic literature search and expert elicitation. Multivariate Behavioral Research, 53, 267-291.

  73. van de Schoot, R., Sijbrandij, M., Winter, S. D., Depaoli, S., & Vermunt, J. K. (2017). The development of the GRoLTS-Checklist: A tool for assessing the quality of reporting on latent trajectory studies. Structural Equation Modeling: A Multidisciplinary Journal, 24, 451-467.

  74. van de Schoot, R., Veen, D., Smeets, L., Winter, S., & Depaoli, S. (in press). A tutorial on using the WAMBS-checklist to avoid the misuse of Bayesian statistics. [book chapter]

  75. van de Schoot, R., Winter, S., Zondervan-Zwijnenburg, M., Ryan, O., & Depaoli, S. (2017). A systematic review of Bayesian applications in psychology: The last 25 years. Psychological Methods, 22, 217-239.

  76. Varni, J. W., Thissen, D., Stucky, B. D., Liu, Y., Magnus, B., He, J., DeWitt, E. M., Irwin, D. E., Lai, J.-S., Amtmann, D., & DeWalt, D. A. (2015). Item-level information discrepancies between children and their parents on the PROMIS pediatric scales. Quality of Life Research, 24, 1921-1937.

  77. Vevea, J. L., & Coburn, K. M. (2019). Publication Bias. In Valentine, J., Cooper, H., & Hedges, L. V. (Eds.), The Handbook of Research Synthesis and Meta-Analysis (3rd Edition). New York: Russell Sage Foundation.

  78. Vevea, J. L., & Coburn, K. M. (2015). Maximum-likelihood methods for meta-analysis: A tutorial using R. Group Processes and Intergroup Relations, 18, 329-347. (Invited Paper.)

  79. Vevea, J. L., & Zelinsky, N. A. M. (2019). Evaluating coding decisions. In Valentine, J., Cooper, H., & Hedges, L. V. (Eds.), The Handbook of Research Synthesis and Meta-Analysis (3rd Edition). New York: Russell Sage Foundation.

  80. Winter, S. D., Depaoli, S., & Tiemensma, J. (2018). Assessing differences in how the CushingQoL is interpreted across countries: Comparing patients from the U.S. and the Netherlands. Frontiers in Endocrinology.

  81. Wu, W., & Jia, F. (2013). A new procedure to test mediation with missing data through nonparametric bootstrapping and multiple imputation. Multivariate Behavioral Research, 48(5), 663-691. doi:10.1080/00273171.2013.816235

  82. Wu, W., Jia, F., & Enders, C. K. (2015). A comparison of imputation strategies to ordinal missing data for Likert scale variables. Multivariate Behavioral Research. doi:10.1080/00273171.2015.1022644

  83. Wu, W., Jia, F., Kinai, R., & Little, T. D. (2017). Optimal number and allocation of repeated measures for linear spline growth modeling: A search for efficient designs. International Journal of Behavioral Development. doi: 10.1177/0165025416644076

  84. Wu, W., Jia, F., Rhemtulla, M., & Little, T. D. (2016). Search for efficient complete and planned missing data designs for analysis of change. Behavior Research Methods. doi: 10.3758/s13428-015-0629-5

  85. Zelinsky, N. A. M., & Shadish, W. R. (2016). A demonstration of how to do a meta-analysis that combines single-case designs with between-groups experiments: The effects of choice making on challenging behaviors performed by people with disabilities. Developmental Neurorehabilitation.

  86. Zhang, Z., Jiang, K., Liu, H., & Oh, I.-S. (2017). Bayesian meta-analysis of correlation coefficients through power prior. Communications in Statistics-Theory and Methods, 46, 11988-12007. doi: 10.1080/03610926.2017.1288251

  87. Zondervan-Zwijnenburg, M. A. J., Depaoli, S., Peeters, M., & van de Schoot, R. (2018). Pushing the limits: The performance of ML and Bayesian estimation with small and unbalanced samples in a latent growth model. Methodology.

  88. Zondervan-Zwijnenburg, M. A. J., Peeters, M., Depaoli, S., & van de Schoot, R. (2017). Where do priors come from? Applying guidelines to construct informative priors in small sample research. Research in Human Development, 14, 305-320.