The attitude, perceived usefulness, perceived ease of use and acceptance of artificial intelligence among medical students in Iran: An application of the technology acceptance model.
Abstract
Introduction: This study evaluated medical students' attitudes, perceived usefulness (PU), perceived ease of use (PEOU), and intention to accept artificial intelligence (AI) technology in Iran in 2024, using the Technology Acceptance Model (TAM). Methodology: In this cross-sectional study, 246 medical students were selected by stratified sampling. Data were collected with a TAM-based questionnaire on AI and analyzed in SPSS 24; Pearson correlation, linear regression, and descriptive statistics were used to assess relationships and predictors. Results: Attitude toward use (β = 0.41, p < 0.001), PEOU (β = 0.50, p < 0.001), PU (β = 0.43, p < 0.001), and intention to use (β = 0.58, p < 0.001) were each significantly associated with actual AI use. In a multivariable regression, PU, PEOU, and attitude together explained 78% of the variance in actual AI use (R² = 0.78, adjusted R² = 0.76, F(4, 241) = 60.75, p < 0.001). Conclusion: PU, PEOU, and a positive attitude are strong predictors of AI acceptance and actual use among medical students. Educational institutions should address these factors to facilitate the effective integration of AI into medical education.
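
For readers who want to reproduce this kind of TAM analysis outside SPSS, the minimal sketch below shows how the reported statistics (Pearson correlations, a multivariable regression with R², adjusted R², and an F test) map onto code. It is illustrative only: the file name `tam_ai_scores.csv`, the column names, and the exact predictor set of the multivariable model are assumptions rather than details taken from the study, and the authors' own analysis was run in SPSS 24.

```python
# Illustrative sketch of the analysis described in the abstract.
# Assumptions (not from the study): hypothetical CSV file and column names
# pu, peou, attitude, intention, actual_use; one row per student.
import pandas as pd
import statsmodels.api as sm
from scipy.stats import pearsonr

# Load the (hypothetical) questionnaire scores.
df = pd.read_csv("tam_ai_scores.csv")

# Bivariate associations between each TAM construct and actual AI use.
for construct in ["attitude", "peou", "pu", "intention"]:
    r, p = pearsonr(df[construct], df["actual_use"])
    print(f"{construct}: r = {r:.2f}, p = {p:.4f}")

# Multivariable linear regression of actual AI use on the TAM constructs.
# Note: coefficients comparable to the reported β values would require
# z-scoring the predictors and the outcome before fitting.
X = sm.add_constant(df[["pu", "peou", "attitude", "intention"]])
model = sm.OLS(df["actual_use"], X).fit()
print(model.summary())
print(f"R2 = {model.rsquared:.2f}, adjusted R2 = {model.rsquared_adj:.2f}, "
      f"F = {model.fvalue:.2f} (p = {model.f_pvalue:.4g})")
```

The same quantities SPSS reports in its model-summary and ANOVA tables (R², adjusted R², F, and the overall p value) are available here as attributes of the fitted model, so the sketch can serve as a quick cross-check of the figures quoted above.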
Copyright (c) 2025 Publications Service of the University of Murcia

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Works published in this journal are subject to the following terms:
1. The Publications Service of the University of Murcia (the publisher) retains the economic rights (copyright) to published works and encourages and permits their reuse under the licence stated in point 2.
2. Works are published under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 licence.
3. Self-archiving conditions. Authors are permitted and encouraged to distribute electronically the preprint (the version before peer review and submission to the journal) and/or postprint (the version reviewed and accepted for publication) of their work before publication, as this favours its earlier circulation and dissemination, with a possible increase in its citations and reach within the academic community.