A Prompt for Generating Script Concordance Test Using ChatGPT, Claude, and Llama Large Language Model Chatbots

Authors
Y. S. Kıyak, E. Emekli

DOI: https://doi.org/10.6018/edumed.612381
Keywords: script concordance test, clinical reasoning, medical education, artificial intelligence, ChatGPT, automatic item generation

Abstract

Medical education continually evolves to incorporate new tools for assessing clinical reasoning skills. Among these tools, the Script Concordance Test (SCT) is particularly valuable because it assesses decision-making in uncertain clinical situations. However, developing SCT items is labor-intensive. Artificial intelligence tools such as large language models can ease this burden: they are already used to generate multiple-choice questions, and their use in generating SCTs is promising, provided that well-designed prompts are available. This article proposes a generic prompt for the ChatGPT-4, ChatGPT-4o, Claude 3, and Llama 3 large language model chatbots to generate SCTs, which can be tailored to various fields of medicine and different stages of medical education and can help streamline the SCT development process. Initial findings are promising, and further research is needed to generate SCTs with large language models and to assess the quality of the resulting items.
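The article's actual prompt appears in the full text and is not reproduced on this page. As a purely illustrative sketch, the Python snippet below shows how a generic, tailorable SCT-generation prompt of this kind might be sent to one of the named models programmatically. It assumes the OpenAI Python SDK; SCT_PROMPT, generate_sct, and the model name gpt-4o are hypothetical stand-ins for illustration, not the authors' published prompt. The item structure it requests (vignette, hypothesis, new information, five-point Likert scale) follows the standard SCT format described in the guidelines the article builds on.

# A minimal sketch, assuming the OpenAI Python SDK (pip install openai) and an
# API key in the OPENAI_API_KEY environment variable. SCT_PROMPT is a
# hypothetical stand-in for the article's published prompt, not the real one.
from openai import OpenAI

# Generic SCT-generation prompt template; {field} and {stage} are the
# tailoring parameters (medical specialty and stage of training).
SCT_PROMPT = (
    "You are a medical educator writing a Script Concordance Test (SCT) item "
    "in {field} for {stage}. Produce: (1) a short clinical vignette with "
    "deliberate uncertainty; (2) an 'If you were thinking of...' hypothesis; "
    "(3) an 'and then you find...' piece of new information; and (4) a "
    "five-point Likert scale from -2 (ruled out) to +2 (strongly supported)."
)

def generate_sct(field: str, stage: str, model: str = "gpt-4o") -> str:
    """Ask the chatbot model to draft one SCT item for the given field/stage."""
    client = OpenAI()
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user",
                   "content": SCT_PROMPT.format(field=field, stage=stage)}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Example tailoring: an internal medicine item for final-year students.
    print(generate_sct("internal medicine", "final-year medical students"))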


Citas

ten Cate O. Introduction. In: ten Cate O, Custers EJFM, Durning SJ (eds.) Principles and Practice of Case-based Clinical Reasoning Education : A Method for Preclinical Students. Cham: Springer International Publishing; 2018. p. 3–19. https://doi.org/10.1007/978-3-319-64828-6_1.

Daniel M, Rencic J, Durning SJ, Holmboe E, Santen SA, Lang V, et al. Clinical Reasoning Assessment Methods: A Scoping Review and Practical Guidance. Academic Medicine. 2019;94(6): 902–912. https://doi.org/10.1097/ACM.0000000000002618.

Charlin B, Van Der Vleuten C. Standardized Assessment of Reasoning in Contexts of Uncertainty: The Script Concordance Approach. Evaluation & the Health Professions. 2004;27(3): 304–319. https://doi.org/10.1177/0163278704267043.

Gheihman G, Johnson M, Simpkin AL. Twelve tips for thriving in the face of clinical uncertainty. Medical Teacher. 2020;42(5): 493–499. https://doi.org/10.1080/0142159X.2019.1579308.

Moulder G, Harris E, Santhosh L. Teaching the science of uncertainty. Diagnosis. 2023;10(1): 13–18. https://doi.org/10.1515/dx-2022-0045.

Fournier JP, Demeester A, Charlin B. Script Concordance Tests: Guidelines for Construction. BMC Medical Informatics and Decision Making. 2008;8(1): 18. https://doi.org/10.1186/1472-6947-8-18.

Lubarsky S, Dory V, Duggan P, Gagnon R, Charlin B. Script concordance testing: From theory to practice: AMEE Guide No. 75. Medical Teacher. 2013;35(3): 184–193. https://doi.org/10.3109/0142159X.2013.760036.

Mathieu S, Couderc M, Glace B, Tournadre A, Malochet-Guinamand S, Pereira B, et al. Construction and utilization of a script concordance test as an assessment tool for dcem3 (5th year) medical students in rheumatology. BMC Medical Education. 2013;13(1): 166. https://doi.org/10.1186/1472-6920-13-166.

Kün-Darbois JD, Annweiler C, Lerolle N, Lebdai S. Script concordance test acceptability and utility for assessing medical students’ clinical reasoning: a user’s survey and an institutional prospective evaluation of students’ scores. BMC Medical Education. 2022;22(1): 277. https://doi.org/10.1186/s12909-022-03339-1.

Masters K. Ethical use of artificial intelligence in health professions education: AMEE Guide No.158. Medical Teacher. 2023;45(6): 574–584. https://doi.org/10.1080/0142159X.2023.2186203.

Kıyak YS. A ChatGPT Prompt for Writing Case-Based Multiple-Choice Questions. Revista Española de Educación Médica. 2023;4(3): 98–103.

Zuckerman M, Flood R, Tan RJB, Kelp N, Ecker DJ, Menke J, et al. ChatGPT for assessment writing. Medical Teacher. 2023;45(11): 1224–1227. https://doi.org/10.1080/0142159X.2023.2249239.

Cheung BHH, Lau GKK, Wong GTC, Lee EYP, Kulkarni D, Seow CS, et al. ChatGPT versus human in generating medical graduate exam multiple choice questions—A multinational prospective study (Hong Kong S.A.R., Singapore, Ireland, and the United Kingdom). Wang J (ed.) PLOS ONE. 2023;18(8): e0290691. https://doi.org/10.1371/journal.pone.0290691.

Coşkun Ö, Kıyak YS, Budakoğlu Iİ. ChatGPT to generate clinical vignettes for teaching and multiple-choice questions for assessment: A randomized controlled experiment. Medical Teacher. 2024; 1–7. https://doi.org/10.1080/0142159X.2024.2327477.

Kıyak YS, Coşkun Ö, Budakoğlu Iİ, Uluoğlu C. ChatGPT for generating multiple-choice questions: Evidence on the use of artificial intelligence in automatic item generation for a rational pharmacotherapy exam. European journal of clinical pharmacology. 2024;80: 729–735. https://doi.org/10.1007/s00228-024-03649-x.

Laupichler MC, Rother JF, Grunwald Kadow IC, Ahmadi S, Raupach T. Large Language Models in Medical Education: Comparing ChatGPT- to Human-Generated Exam Questions. Academic Medicine. 2023; https://doi.org/10.1097/ACM.0000000000005626.

Kıyak YS, Emekli E. ChatGPT Prompts for Generating Multiple-Choice Questions in Medical Education and Evidence on Their Validity: A Literature Review. Postgraduate Medical Journal. 2024. [In-press]

Hudon A, Kiepura B, Pelletier M, Phan V. Using ChatGPT in Psychiatry to Design Script Concordance Tests in Undergraduate Medical Education: Mixed Methods Study. JMIR Medical Education. 2024;10: e54067–e54067. https://doi.org/10.2196/54067.

Indran IR, Paramanathan P, Gupta N, Mustafa N. Twelve tips to leverage AI for efficient and effective medical question generation: A guide for educators using Chat GPT. Medical Teacher. 2023; 1–6. https://doi.org/10.1080/0142159X.2023.2294703.

Masters K, Benjamin J, Agrawal A, MacNeill H, Pillow MT, Mehta N. Twelve tips on creating and using custom GPTs to enhance health professions education. Medical Teacher. 2024; 1–5. https://doi.org/10.1080/0142159X.2024.2305365.

Kıyak YS, Kononowicz AA. Case-based MCQ generator: A custom ChatGPT based on published prompts in the literature for automatic item generation. Medical Teacher. 2024; 1–3. https://doi.org/10.1080/0142159X.2024.2314723.

Published
15-05-2024
How to cite
Kıyak, Y. S., & Emekli, E. (2024). A Prompt for Generating Script Concordance Test Using ChatGPT, Claude, and Llama Large Language Model Chatbots. Revista Española de Educación Médica, 5(3). https://doi.org/10.6018/edumed.612381
