Evaluation of the Psychometric Characteristics of the Differential Aptitude Test with Classical Theory

Farida Agus Setiawati, Rita Eka Izzaty, Veny Hidayat

Abstract


This research aimed to analyze the psychometric characteristics of a Differential Aptitude Test (DAT) battery consisting of five subtests, including numerical ability, abstract reasoning, space relations, and mechanical reasoning. The research used a quantitative method. Data were collected from the documentation of psychological testing at Universitas Negeri Yogyakarta (UNY) and comprised 2,118 adolescent students in Yogyakarta. The psychometric characteristics of the instrument were analyzed with the classical test theory approach, covering the difficulty index, the discrimination index, distractor effectiveness, and the reliability coefficient; the data were processed with the MicroCAT ITEMAN 3.0 program. The analysis yielded detailed information about item characteristics: item difficulty indices varied; the majority of items showed good discrimination, although a number of items were poor or needed revision; the abstract reasoning subtest had the most distractor options that did not function properly; and all subtests of the DAT instrument were reliable.
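The four classical item statistics the abstract reports can be computed directly from a scored response matrix. The sketch below illustrates the conventional formulas; the toy data, the corrected_point_biserial helper, and the 5% distractor rule of thumb are illustrative assumptions, not taken from the study, and ITEMAN 3.0 reports the analogous quantities from real answer files.

```python
import numpy as np

# Toy response matrix: rows = examinees, columns = items (1 = correct, 0 = wrong).
X = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 1, 0, 0],
    [1, 0, 1, 1],
    [0, 0, 0, 1],
])
n_persons, n_items = X.shape
total = X.sum(axis=1)

# Difficulty index p: proportion of examinees answering the item correctly.
# Conventional CTT reading: p < .30 hard, .30-.70 moderate, p > .70 easy.
p = X.mean(axis=0)

# Discrimination index: corrected point-biserial correlation between the item
# score and the total score with the item's own contribution removed.
def corrected_point_biserial(item: np.ndarray, total_scores: np.ndarray) -> float:
    rest = total_scores - item
    return float(np.corrcoef(item, rest)[0, 1])

r_pbis = np.array([corrected_point_biserial(X[:, j], total) for j in range(n_items)])

# KR-20 reliability for dichotomous items (the special case of Cronbach's
# alpha with 0/1 scoring).
q = 1.0 - p
kr20 = (n_items / (n_items - 1)) * (1.0 - (p * q).sum() / total.var(ddof=1))

for j in range(n_items):
    print(f"item {j + 1}: p = {p[j]:.2f}, r_pbis = {r_pbis[j]:.2f}")
print(f"KR-20 reliability = {kr20:.2f}")

# Distractor effectiveness needs the raw option choices rather than 0/1 scores.
# A common rule of thumb (conventions vary) treats a distractor as
# non-functioning when fewer than 5% of examinees select it.
choices = np.array(list("ABDCAB"))  # one item's raw responses; key assumed 'A'
KEY = "A"
for option in "ABCD":
    share = float(np.mean(choices == option))
    role = "key" if option == KEY else ("functioning" if share >= 0.05 else "non-functioning")
    print(f"option {option}: chosen by {share:.0%} ({role})")
```

Applied to a real response file, these are the same per-item and per-scale quantities (proportion correct, point-biserial, option proportions, alpha) that an ITEMAN 3.0 run summarizes.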


Keywords


classical theory, differential aptitude test (DAT), psychometric characteristics






DOI: http://dx.doi.org/10.26555/humanitas.v15i1.7249




HUMANITAS: Indonesian Psychological Journal
ISSN 1693-7236 (print), 2598-6368 (online)
Email: humanitas@psy.uad.ac.id

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
