Exploring a Conceptual Framework for the Undergraduate Assessed Curriculum: A Qualitative Study

Article Type: Research Article

Authors

1 PhD in Curriculum Studies, Institute for Educational Studies, Educational Research and Planning Organization, Tehran, Iran

2 PhD in Curriculum Studies, Farhangian University, Ayatollah Taleghani Campus, Qom, Qom, Iran

Abstract

Goal: The present study aimed to propose a conceptual framework for the assessed curriculum based on the perceptions and experiences of undergraduate curriculum stakeholders.
Method: Using a qualitative research approach and purposive sampling, the experiences and perceptions of 31 participants (17 faculty members, 12 undergraduate students, and 2 experts from the Office of Higher Education Planning) from Allameh Tabatabaei, Tehran, and Kharazmi Universities, AJA University of Medical Sciences, and the Office of Higher Education Planning were explored through semi-structured interviews. Data collection and analysis continued concurrently, through a process of constant comparison, until theoretical saturation was reached.
Results: Data analysis using open, axial, and selective coding led to the identification of two main categories, "assessment design processes" and "assessment judgement processes", and six sub-categories: "determining the assessment map", "selecting the assessment method", "determining the participants in assessment", "cultural grounding", "assessing learning", and "monitoring assessment". Overall, the assessed curriculum is fluid in nature and, through an interactive perspective (looking back and attending to the context in which assessment is implemented), seeks to foster appropriate changes in students and improve the quality of their learning.

Keywords


Title [English]

Exploring a Conceptual Framework for the Undergraduate Assessed Curriculum: A Qualitative Study

Authors [English]

  • Ali Khaleghinezhad 1
  • Akbar Hedayati 2
1 PhD in Curriculum Studies, Institute for Educational Studies, Educational Research and Planning Organization, Tehran, Iran
2 PhD in Curriculum Studies, Farhangian University, Ayatollah Taleghani Campus, Qom, Qom, Iran
Abstract [English]

Goal: This study aimed to propose a conceptual framework for the assessed curriculum based on the experiences and perspectives of undergraduate curriculum stakeholders.
Method: A qualitative research approach was used, with participants selected purposively for semi-structured interviews. In total, 31 informants took part: 17 faculty members and 12 undergraduate students from Allameh Tabatabaei, Tehran, and Kharazmi Universities and AJA University of Medical Sciences, as well as two experts from the Office of Higher Education Planning in Tehran. Data collection and analysis continued through a process of constant comparison until theoretical saturation was reached.
Results: Data analysis using open, axial, and selective coding led to the identification of two main categories, "assessment design processes" and "assessment judgement processes", and six sub-categories: "determination of the assessment map", "selection of the assessment method", "determination of participants in assessment", "cultural grounding", "learning assessment", and "assessment monitoring". Overall, the assessed curriculum is fluid in nature and, through an interactive viewpoint (looking back at the context in which assessment is carried out), seeks to foster appropriate changes in students and improve the quality of learning.

Keywords [English]

  • Assessment
  • Curriculum
  • Assessed Curriculum
  • Undergraduate Education
  • Qualitative Research