Evaluation of multiple choice questions using item analysis tool: a study from a medical institute of Ahmedabad, Gujarat
DOI: https://doi.org/10.18203/2394-6040.ijcmph20172004

Keywords: Difficulty index, Discrimination index, Functioning distractors, Item analysis, MCQs

Abstract
Background: Multiple choice question (MCQ) assessments have become a popular means of assessing knowledge in screening examinations across several fields, including medicine. Single-best-answer MCQs can also test higher-order thinking skills; hence, MCQs remain a useful assessment tool. Objectives: 1) To evaluate multiple choice questions to test their quality. 2) To explore the association of the difficulty index (p-value) and discrimination index (DI) with distractor efficiency (DE). 3) To study the occurrence of functioning distractors in MCQs.
Methods: A total of five MCQ test sessions were conducted among interns of a medical institute of Ahmedabad city, Gujarat, between April 2016 and March 2017, as part of their compulsory rotating postings in the department. On average, 17 interns participated in each session, for a total of 85 interns enrolled. For each test session, the questionnaire consisted of forty MCQs, each with four options including a single best answer. The MCQs were analyzed for difficulty index (DIF-I, p-value), discrimination index (DI), and distractor efficiency (DE).
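The study does not publish its analysis code; the following is a minimal sketch of the standard formulas for the three indices named above (DIF-I as the proportion answering correctly, DI as the upper-minus-lower group difference using the conventional 27% score groups, and DE based on distractors chosen by at least 5% of examinees). All response data below are invented for illustration.

```python
# Sketch of standard item-analysis formulas; toy data only.

def difficulty_index(correct_flags):
    """DIF-I (p-value): fraction of examinees answering the item correctly."""
    return sum(correct_flags) / len(correct_flags)

def discrimination_index(correct_flags, total_scores, group_frac=0.27):
    """DI: item success in the top score group minus the bottom score group.

    Examinees are ranked by total test score; the conventional upper and
    lower groups are the top and bottom 27%.
    """
    n = len(total_scores)
    k = max(1, int(n * group_frac))
    order = sorted(range(n), key=lambda i: total_scores[i], reverse=True)
    upper, lower = order[:k], order[-k:]
    return (sum(correct_flags[i] for i in upper)
            - sum(correct_flags[i] for i in lower)) / k

def distractor_efficiency(option_counts, key, threshold=0.05):
    """DE: share of distractors that are functioning (chosen by >= 5%)."""
    total = sum(option_counts.values())
    distractors = [opt for opt in option_counts if opt != key]
    functioning = [opt for opt in distractors
                   if option_counts[opt] / total >= threshold]
    return len(functioning) / len(distractors)

# Invented toy data: 10 examinees, one 4-option item with key "A".
correct = [1, 1, 0, 1, 0, 0, 1, 1, 0, 0]   # 1 = answered this item correctly
totals  = [38, 35, 12, 30, 10, 8, 33, 28, 15, 9]  # total test scores
counts  = {"A": 5, "B": 3, "C": 2, "D": 0}        # option choice counts

print(difficulty_index(correct))              # 0.5
print(discrimination_index(correct, totals))  # 1.0
print(distractor_efficiency(counts, "A"))     # 2 of 3 distractors functioning
```

Cut-offs such as DI <0.15 (poor) and DI ≥0.35 (excellent), as used in the Results, are then simple comparisons against these computed values.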
Results: A total of 85 interns attended the tests, which together comprised 200 MCQ items (questions) from four major medical disciplines: Medicine, Surgery, Obstetrics and Gynecology, and Community Medicine. Mean test scores ranged from 36.0% to 45.8%. Test reliability, measured by Kuder-Richardson 20 (KR-20), ranged from 0.29 to 0.52, and the standard error of measurement ranged from 2.59 to 2.79. Of the 200 MCQs, 79 had a discrimination index (DI) <0.15 (poor) and 61 had a DI ≥0.35 (excellent). The average DE of easy items across all tests was 20.1%.
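The KR-20 reliability and standard error of measurement (SEM) reported above follow standard formulas for dichotomously scored items: KR-20 = (k/(k−1))·(1 − Σp·q/σ²), and SEM = SD·√(1 − reliability). A minimal sketch, with an invented score matrix purely for illustration:

```python
# Sketch of KR-20 reliability and SEM for 0/1-scored items; toy data only.
import math

def kr20(item_matrix):
    """Kuder-Richardson 20; item_matrix[examinee][item] is 1 if correct."""
    n = len(item_matrix)               # examinees
    k = len(item_matrix[0])            # items
    totals = [sum(row) for row in item_matrix]
    mean_total = sum(totals) / n
    var_total = sum((t - mean_total) ** 2 for t in totals) / n
    pq_sum = 0.0
    for j in range(k):
        p = sum(row[j] for row in item_matrix) / n   # item difficulty
        pq_sum += p * (1 - p)
    return (k / (k - 1)) * (1 - pq_sum / var_total)

def sem(item_matrix):
    """Standard error of measurement: SD_total * sqrt(1 - reliability)."""
    n = len(item_matrix)
    totals = [sum(row) for row in item_matrix]
    mean_total = sum(totals) / n
    sd = math.sqrt(sum((t - mean_total) ** 2 for t in totals) / n)
    return sd * math.sqrt(1 - kr20(item_matrix))

# Invented toy matrix: 4 examinees x 3 items.
scores = [
    [1, 1, 1],
    [1, 1, 0],
    [1, 0, 0],
    [0, 0, 0],
]
print(round(kr20(scores), 3))  # 0.75
print(round(sem(scores), 3))   # 0.559
```

A low KR-20 (such as the 0.29–0.52 range reported) indicates weak internal consistency, which is consistent with the large share of poorly discriminating items found.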
Conclusions: Items with average difficulty and high discrimination, supported by functioning distractors, should be incorporated into tests to improve the validity of the assessment.