Biography

Neal Kingston, Ph.D., is a University Distinguished Professor in the Department of Educational Psychology at the University of Kansas, where he also serves as the Director of Graduate Studies and Director of the Achievement and Assessment Institute (AAI). His research focuses on large-scale assessment, with particular emphasis on how it can better support student learning through the use of learning maps and diagnostic classification models. Current interests include game-based assessment, personalizing assessments to improve student engagement, and the creation of more agile test development approaches. Dr. Kingston has served as principal investigator or co-principal investigator for over 250 research grants. Of particular note was the Dynamic Learning Maps Alternate Assessment grant from the US Department of Education, which was at that time the largest grant in KU history; the resulting assessment system currently serves 23 state departments of education. Other important testing projects include the Kansas Assessment Program, Project Lead The Way, and Adaptive Reading Motivation Measures.

Dr. Kingston is known internationally for his work on large-scale assessment, formative assessment, and learning maps. He has served as a consultant or advisor for organizations such as AT&T, the College Board, the Department of Defense Advisory Committee on Military Personnel Testing, Edvantia, the General Equivalency Diploma (GED), Kaplan, King Fahd University of Petroleum and Minerals, Merrill Lynch, the National Council on Disability, Qeyas (the Saudi Arabian National Center for Assessment in Higher Education), the state of New Hampshire, the state of Utah, the U.S. Department of Education, and Western Governors University.

As Director of AAI, Dr. Kingston is responsible for the support of multiple research centers with about 400 year-round staff members and about 150 temporary employees.

Education

Ph.D. Educational Measurement, Teachers College, Columbia University, New York, NY, 1983
M.Phil. Educational Measurement, Teachers College, Columbia University, New York, NY, 1983
M.Ed. Educational Measurement, Teachers College, Columbia University, New York, NY, 1978
M.A. Psychology in Education, Teachers College, Columbia University, New York, NY, 1977
B.A. Liberal Studies (concentrations in Biology & Education), State University of New York, Stony Brook, NY, 1974

Teaching

Dr. Kingston currently teaches a proseminar in research, evaluation, measurement, and statistics every semester and a course in meta-analysis in the spring of even-numbered years.

EPSY 812 – Meta-analysis, Broad Course Learning Goals

  • Students will be able to conduct a comprehensive review of literature using electronic databases and backward and forward search techniques.
  • Students will be able to calculate different kinds of effect sizes and transform among different effect size measures.
  • Students will be able to code their data in ways that will illuminate variables not addressed explicitly in their primary research sources.
  • Students will be able to compute statistics necessary for a fixed effect meta-analysis.
  • Students will be able to compute statistics necessary for a random effects meta-analysis.
  • Students will know under what circumstances it is appropriate to conduct a fixed effect versus a random effects meta-analysis (both computations are sketched after this list).
  • Students will be able to make inferences about a field of research based on the heterogeneity of the set of effect sizes.
  • Students will understand how statistical artifacts impact meta-analytic results, be able to determine whether such artifacts are likely problematic in their data, and be able to adjust for or otherwise address these issues.
  • Students will be able to present meta-analytic results consistent with professional standards.
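
To make the fixed effect and random effects goals concrete, the following minimal Python sketch (illustrative only, not course material; the effect sizes are made up) pools a small set of hypothetical standardized mean differences under both models and computes the standard heterogeneity statistics, using the DerSimonian-Laird estimator of the between-study variance:

    import math

    # Hypothetical effect sizes (standardized mean differences) and their
    # sampling variances from five independent studies -- illustrative only.
    effects = [0.60, 0.05, 0.80, 0.15, 0.40]
    variances = [0.020, 0.015, 0.040, 0.010, 0.025]

    # Fixed effect model: weight each study by its inverse sampling variance.
    w = [1.0 / v for v in variances]
    fixed_mean = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    fixed_se = math.sqrt(1.0 / sum(w))

    # Heterogeneity: Cochran's Q, I^2, and the DerSimonian-Laird tau^2.
    q = sum(wi * (yi - fixed_mean) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)      # between-study variance estimate
    i2 = max(0.0, (q - df) / q) * 100  # % of variation beyond sampling error

    # Random effects model: add tau^2 to each study's variance before weighting.
    w_star = [1.0 / (v + tau2) for v in variances]
    random_mean = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    random_se = math.sqrt(1.0 / sum(w_star))

    print(f"Fixed effect:   {fixed_mean:.3f} (SE {fixed_se:.3f})")
    print(f"Q = {q:.2f} on {df} df, I^2 = {i2:.1f}%, tau^2 = {tau2:.4f}")
    print(f"Random effects: {random_mean:.3f} (SE {random_se:.3f})")

When tau^2 is large the random effects weights become more nearly equal across studies, which is why the choice between the two models matters for the pooled estimate.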

Research

The field of education moves slowly. Theoretical improvements often take 30 to 50 years before widespread implementation. Research-based practices are often crowded out by the fad of the day, making the challenge of improving education even greater. Sub-disciplines within education far too often work in isolation. Simple solutions that do not address the complexity of individual students or the dynamics of a classroom at best have little impact and too often have a negative one. Such has been the case in large-scale assessment, where the use of assessment to drive curriculum and instruction has had numerous negative consequences.

Coming to the University of Kansas gave me the opportunity to consider the fundamental issues of education from a broader perspective. Like many others before me, I had long realized that thinking about curriculum, instruction, and assessment needed to be integrated. However, few researchers had attempted to develop models or theories to do this. I was impressed by the efforts of some, particularly the research trajectories of Susan Embretson and Kikumi Tatsuoka, but I remained frustrated by how incomplete this work was and how little impact it was having on federally mandated state assessment programs. This led me to develop three conference presentations in 2009 that served to focus my thinking. The first, presented at the National Council on Measurement in Education annual meeting, was entitled, "What Have We Learned about the Structure of Learning from 30 Years of Research on Integrated Cognitive-Psychometric Models? Not Much." The second, presented at the American Educational Research Association conference, was entitled, "The Efficacy of Formative Assessment: A Meta-Analysis." The third, presented at the National Conference on Student Assessment, was entitled, "Large-Scale Formative Assessment: Panacea, Transitional Tool, or Oxymoron."

In 2010 an opportunity presented itself that allowed me to solidify my thinking. The US Department of Education issued a request for proposals to develop a large-scale assessment system for students with significant cognitive disabilities – the approximately one percent of students with the greatest learning challenges. It was clear to me that such an assessment system needed to do far more than measure learning – it needed to facilitate learning. I identified six features that needed to be present to do this. They are as follows:

  1. Comprehensive fine-grained learning maps that guide instruction and assessment (a minimal sketch of such a map as a data structure follows this list)
  2. A subset of particularly important nodes that serve as content standards to provide an organizational structure for teachers
  3. Instructionally embedded assessments that reinforce the primacy of instruction
  4. Instructionally relevant testlets that model good instruction and reinforce learning
  5. Accessibility by design
  6. Status and growth reporting that is readily actionable
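
As context for the first feature, the following minimal Python sketch shows one way a fine-grained learning map can be represented: a directed graph in which each node is a skill and edges run from prerequisite skills to the skills they support. The node names here are hypothetical and far coarser than the nodes of an operational map:

    # Each skill maps to the list of skills that must be mastered first.
    # Node names are hypothetical, for illustration only.
    learning_map = {
        "count_objects": [],
        "compare_quantities": ["count_objects"],
        "add_single_digit": ["count_objects"],
        "subtract_single_digit": ["add_single_digit"],
        "add_multi_digit": ["add_single_digit", "compare_quantities"],
    }

    def ready_to_learn(mastered):
        """Skills not yet mastered whose prerequisites are all mastered."""
        return [skill for skill, prereqs in learning_map.items()
                if skill not in mastered and all(p in mastered for p in prereqs)]

    # Given evidence that a student has mastered counting, the map suggests
    # the next instructional (and assessment) targets:
    print(ready_to_learn({"count_objects"}))
    # -> ['compare_quantities', 'add_single_digit']

A structure like this is what lets assessment and instruction share one organizing frame: the same graph that tells a teacher what to teach next tells the assessment system which nodes to test.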

No one had ever tried to develop a learning environment in this way. Comprehensive fine-grained learning maps did not exist. The concept of instructionally relevant assessment was previously unnamed and in its infancy. Clearly much research – both basic and applied – was necessary, and this has become the focus of my research. A closely related second area of research is assessment that supports the needs of learners who face educational or assessment challenges. This includes issues of test development and universal design, which have close ties to features 3-5 in the list above. I treat it as a separate research focus because it is also applicable to traditional testing programs. My research areas include:

  • Large-scale assessment
  • Computer-based testing
  • Diagnostic classification modeling
  • Learning maps
  • Test development
  • Score reporting
  • Assessment of students with significant cognitive disabilities
  • Assessment in higher education

Publications

The following are a few categories of recent publications (publications may appear in more than one list):

Assessments that support learning

  1. Kingston, N.M., Alonzo, A., Long, H., & Swinburne Romine, R. (2022). Editorial: The use of organized learning models in assessment. Frontiers in Education, 7, 1009446. doi: 10.3389/feduc.2022.1009446
  2. Kingston, N.M., Hess, J., Cope, D., & Romine, R.S. (2022). On Determining the Efficacy of Using Learning Maps as an Organizing Structure for Formative Assessment: Some Lessons Learned. In H. Jiao & R. Lissitz (Eds.), Enhancing Effective Instruction and Learning Using Assessment Data. Charlotte, NC: Information Age Publishing.
  3. Heritage, M. & Kingston, N.M. (2019). Classroom assessment and large-scale psychometrics: shall the twain meet? (a conversation with Margaret Heritage and Neal Kingston). Journal of Educational Measurement, 56(4), 670-685.
  4. Clark, A., Nash, B. Karvonen, M., & Kingston, N.M. (2017). Condensed Mastery Profile Method for Setting Standards for Diagnostic Assessment Systems. Educational Measurement: Issues and Practice. 36(4), 5–15.
  5. Kingston, N.M., Karvonen, M., Thompson, J.R., Wehmeyer, M.L., & Shogren, K.A. (2017). Fostering Inclusion of Students with Significant Cognitive Disabilities through the use of Learning Maps and Learning Map Based Assessments. Inclusion, 5(2), 110-120.
  6. Kingston, N.M. & Broaddus, A. (2017). The Use of Learning Map Systems to Support Formative Assessment in Mathematics. Education Sciences, 7 (41); doi:10.3390/educsci7010041.
  7. Kingston, N.M., Karvonen, M., Bechard, S., & Erickson, K. (2016). The Philosophical Underpinnings and Key Features of the Dynamic Learning Maps Alternate Assessment. Teachers College Record (Yearbook), 118(14). Retrieved September 1, 2016, from http://www.tcrecord.org ID Number: 140311.
  8. Popham, W. J., Berliner, D.C., Kingston, N., Fuhrman, S.H., Ladd, S.M., Charbonneau, J. & Chatterji, M. (2014). Can today's standardized tests yield instructionally useful data? Challenges, promises and the state of the art, Quality Assurance in Education, 22(4), 300-316.
  9. Bechard, S., Clark, A. K., Swinburne Romine, R., Karvonen, M., Kingston, N.M., & Erickson, K. (2019). Use of evidence-centered design to develop learning maps-based assessments. International Journal of Testing, 19:2, 188-205.

Students who face educational or assessment challenges

  1. Karvonen, M., Kingston, N.M., Wehmeyer, M. & Thompson, W.J. (2020). New approaches to designing and administering inclusive assessments. Oxford Encyclopedia of Inclusive and Special Education.
  2. Wang, W., Kingston, N.M., Tiemann, G.C., Davis, M.H., Tonks, S., Hock, M. (2021). Applying evidence-centered design in the development of a multidimensional adaptive reading motivation measure. Educational Measurement: Issues and Practice, 40(4), 91-100. http://doi.org/10.1111/emip.12468
  3. Davis, M. H., Wang, W., Kingston, N., Hock, M., Tonks, S. M., & Tiemann, G. (2020). Computer Adaptive Measure of Reading Motivation. Research in Reading, 43(4), 434-453.
  4. Kingston, N.M., Karvonen, M., Thompson, J.R., Wehmeyer, M.L., & Shogren, K.A. (2017). Fostering Inclusion of Students with Significant Cognitive Disabilities through the use of Learning Maps and Learning Map Based Assessments. Inclusion, 5(2), 110-120.
  5. Kingston, N.M., Karvonen, M., Bechard, S., & Erickson, K. (2016). The Philosophical Underpinnings and Key Features of the Dynamic Learning Maps Alternate Assessment. Teachers College Record (Yearbook), 118(14). Retrieved September 1, 2016, from http://www.tcrecord.org ID Number: 140311.
  6. Cho, H. & Kingston, N.M. (2013). Why IEP Teams Assign Low Performers with Mild Disabilities to the Alternate Assessment Based on Alternate Achievement Standards. Journal of Special Education, 47, 162-174.
  7. Cho, H., Wehmeyer, M. & Kingston, N.M. (2013). Factors that Predict Elementary Educators’ Perceptions and Practice in Teaching Self-Determination. Psychology in the Schools, 50: 770-780.

Psychometric methods

  1. Wang, W., Chen, J., & Kingston, N. (2020). How well do simulation studies inform decisions about multistage testing? Journal of Applied Measurement, 21(3), 1-11.
  2. Pan, Q., Qin, L., & Kingston, N. (2020). Growth Modeling in a Diagnostic Classification Model (DCM) Framework – A Multivariate Longitudinal Diagnostic Classification Model.
  3. Wang, W. & Kingston, N.M. (2020). Using Bayesian Nonparametric Item Response Functions to Check Parametric Model Fit. Applied Psychological Measurement.
  4. Wang, W, & Kingston, N.M. (2019). Adaptive testing with the Hierarchical Item Response Theory Model. Applied Psychological Measurement, 43(1), 51-67.
  5. Embretson, S.E. & Kingston, N.M. (2018). Automatic Item Generation: A More Efficient Process for Developing Mathematics Achievement Items? Journal of Educational Measurement. 55(1), 112-131.
  6. Adjei, S., Selent, D., Heffernan, N., Pardos, Z., Broaddus, A., Kingston, N. (2014). Refining Learning Maps with Data Fitting Techniques: Searching for Better Fitting Learning Maps. In Pardos & Stamper (Eds.) The 2014 Proceedings of International Educational Data Mining Society.
  7. Gu, F., Little, T., & Kingston, N.M. (2013). Misestimation of Reliability Using Coefficient Alpha and Structural Equation Modeling when Assumptions of Tau-Equivalence and Uncorrelated Errors are Violated. Methodology, 9, 30-40.

Students

The following are Neal Kingston's current and former students.

Current Students

  • Merve Akin Tas

    After receiving my BA in Mathematics Education in Eskisehir, Turkey, I started my career as a math teacher in 2011. I received my master's degree in Educational Psychology at Texas Tech University in 2017, where I also gained more than two years of experience working on federally supported projects.

  • Haoyang Yu

    Fascinated by the liberal arts curriculum in the United States, I first came to Pittsburgh from China for undergraduate studies in 2011. A believer in the power of education, I went on to receive a master's degree from Teachers College, Columbia University.

  • Taylor Wilson

    I am a doctoral student in the Educational Psychology department (REMS track). My academic journey has been relatively unconventional; I hopped around several schools before finally settling in Lawrence, where I received a BS in Marketing and an MS in Journalism from KU.

  • Derick Reid

    Derick Reid, Ed.S., NCSP, is a doctoral student in Educational Psychology, REMS concentration, focusing on educational measurement, statistics, and policy studies. He obtained B.S., M.S., and Ed.S. degrees from Mississippi State University.

Former Students

  • Jessica Hess

    I am a doctoral candidate in the Educational Psychology and Research program with an emphasis on Research, Evaluation, Measurement, and Statistics. Prior to coming to KU, I received a Bachelor of Arts degree in Psychology from Bethel University in Arden Hills, MN.

  • Erkan Hasan ATALMIS

    I am an associate professor in the Measurement and Evaluation in Education program at Kahramanmaras Sutcu Imam University (KSU) in Turkey. Prior to coming to KSU, I earned my PhD in the Research, Evaluation, Measurement, and Statistics program at the University of Kansas, USA.

  • Jie Chen

    I am a psychometrician with the Center for Accessible Teaching, Learning, and Assessment Systems at the University of Kansas. Prior to my current position, I was a program development associate at ACT, Inc., supporting content teams in item development.

  • Amy Clark

    I started my career as a classroom teacher and became interested in the use of assessment data to inform instruction, which led me to KU. I completed my MS and PhD in the Research, Evaluation, Measurement, and Statistics (REMS) track while working as a GRA on the Kansas Assessment Program and Dynamic Learning Maps projects.

  • Fei Gu

    After graduating from KU in 2013, my first position was as an assistant professor in quantitative psychology at McGill University. Currently, I am an assistant professor in the Educational Research and Evaluation (EDRE) program in the School of Education at Virginia Tech.

  • Hongling Lao

    After getting a bachelor’s degree in psychology, I continued my MS and PhD study in the field of Research, Evaluation, Measurement, and Statistics (REMS) at the University of Kansas.

  • Qianqian Pan

    I am currently a postdoctoral research fellow at the University of Hong Kong. I received training in Research, Evaluation, Measurement, and Statistics under the direction of Dr. Neal Kingston at the University of Kansas and received my Ph.D. in 2018.

  • Jake Thompson

    I completed my PhD in the Research, Evaluation, Measurement, and Statistics (REMS) program at KU in 2018. During my time in the program, I worked as a graduate research assistant on the Dynamic Learning Maps Alternate Assessment.

  • Gail Tiemann

    Currently, I am a research project manager at Accessible Teaching, Learning, and Assessment Systems (ATLAS) at KU. I primarily work with research and development projects in computer-based testing, including serving as co-principal investigator of our Innovations in Science Map, Assessment, and Report Technologies (I-SMART) project.

  • Wenhao Wang

    I am currently a senior psychometrician with the Center for Accessible Teaching, Learning, and Assessment Systems (ATLAS) at the University of Kansas. After graduating in 2012, I remained at KU and worked as an operational psychometrician.

  • Chunmei (Rose) Zheng

    I was a graduate student in the Research, Evaluation, Measurement, and Statistics (REMS) program at KU between 2008 and 2013. After graduation, I worked at Pearson for six years, and I am currently a state psychometrician at the New York State Department of Education.

  • Jessica Loughran

  • Fei Zhao