Expert Prediction Versus Difficulty Index Measured by Psychometric Analysis: A Mixed Methods Study Interpreted through the Diagnostic Judgment by Cognitive Modeling Framework
DOI: https://doi.org/10.51846/jucmd.v3i2.3047

Keywords: Assessment, Expert Prediction, Dia Com Framework, Item Difficulty, Test Psychometrics

Abstract
Objective: Item difficulty can be determined in two ways: one relies on expert judgment, the other on psychometric analysis. This study compared item developers' perceptions of item difficulty with psychometric analysis results and explored their thought processes in categorizing items.
Methodology: This explanatory sequential mixed methods study was conducted from October to December 2022 in three phases (quantitative, qualitative, and mixed methods strands). Difficulty rankings of items by 20 subject experts for all preclinical-year end-of-module exams were compared with rankings obtained by psychometric analysis of Optical Mark Reader (OMR) data. Cohen's kappa was used to assess agreement, and Pearson's correlation to measure the association between the two measures (item writers' perception of item difficulty and Rightmark analysis). All 20 item developers were interviewed using an open-ended two-item questionnaire. Interviews were recorded and transcribed, and themes and subthemes were identified from the interview data through manual coding. Participant anonymity was maintained.
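The study's two quantitative measures can be illustrated with a brief sketch. The code below is a minimal example, not the study's actual data: the item counts, category coding (1 = easy, 2 = moderate, 3 = difficult), and values are hypothetical, and `scipy`/`scikit-learn` are assumed merely as convenient implementations of Pearson's correlation and Cohen's kappa.

```python
# Hypothetical comparison of expert difficulty ratings with
# psychometric difficulty categories for ten items.
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

# Difficulty categories: 1 = easy, 2 = moderate, 3 = difficult (illustrative data)
expert_rating = [1, 2, 3, 2, 1, 3, 2, 2, 1, 3]
psychometric  = [2, 2, 1, 3, 1, 2, 3, 2, 2, 1]

# Pearson's r: strength of linear association between the two measures
r, p = pearsonr(expert_rating, psychometric)

# Cohen's kappa: chance-corrected agreement between the two categorizations
kappa = cohen_kappa_score(expert_rating, psychometric)

print(f"Pearson r = {r:.2f} (p = {p:.3f})")
print(f"Cohen's kappa = {kappa:.2f}")
```

Note that, as in the study's findings, correlation and agreement can diverge: two raters may order items similarly (nonzero r) while assigning them to different categories (kappa near zero).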
Results: A total of 1150 items from Anatomy, Physiology, Biochemistry, Pharmacology, and Pathology & Forensic Medicine, developed by 20 content experts, were compared. There was a weak positive but significant correlation (r=0.11, p=0.00) between faculty perception and Rightmark analysis of item difficulty. However, there was no meaningful agreement between the two measurements (Cohen's kappa k=0.042, p=0.027). Interviews with item developers identified four major themes: academic performance, learning habits, the content targeted, and the item's construction.
Conclusion: Experts consider contextual factors, including content and student background, when ranking items, whereas psychometric analysis is based on item performance data. These contextual nuances may explain the differences in judgment.
License
Copyright (c) 2024 Memoona Mansoor, Shazia Imran, Ali Tayyab, Rehmah Sarfraz
This work is licensed under a Creative Commons Attribution 4.0 International License.
Authors retain copyright and grant the journal right of first publication with the work simultaneously licensed under a Creative Commons Attribution 4.0 International License that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this journal.
Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository, in a journal or publish it in a book), with an acknowledgment of its initial publication in this journal.
Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process.