Abstract
- The integration of artificial intelligence (AI) technologies into medical research introduces significant ethical challenges that necessitate the strengthening of ethical frameworks. This review highlights the issues of privacy, bias, accountability, informed consent, and regulatory compliance as central concerns. AI systems, particularly in medical research, may compromise patient data privacy, perpetuate biases if they are trained on nondiverse datasets, and obscure accountability owing to their “black box” nature. Furthermore, the complexity of the role of AI may affect patients’ informed consent, as they may not fully grasp the extent of AI involvement in their care. Compliance with regulations such as the Health Insurance Portability and Accountability Act and the General Data Protection Regulation is essential, as is clarifying liability in cases of AI error. This review advocates a balanced approach to AI autonomy in clinical decisions, the rigorous validation of AI systems, ongoing monitoring, and robust data governance. Engaging diverse stakeholders is crucial for aligning AI development with ethical norms and addressing practical clinical needs. Ultimately, the proactive management of AI’s ethical implications is vital to ensure that its integration into healthcare improves patient outcomes without compromising ethical integrity.
Keywords: Artificial intelligence; ChatGPT; Compliance; Ethics; Social responsibility
Introduction
- As insights into the relationship between research and technology rapidly evolve, new nuances in ethical concerns have emerged [1]. Historically, ethics in medical research has primarily focused on the security and protection of human subjects. However, with the increasing use of advanced technologies in contemporary research, expanding our ethical considerations has become necessary [2]. This shift requires addressing additional dimensions to ensure enhanced ethical safeguards in human research.
- Currently, the forefront of technology, particularly artificial intelligence (AI) and its specific applications such as ChatGPT (generative pre-trained transformer, OpenAI), is becoming increasingly relevant in discussions of ethical issues related to medical research [3,4]. These technologies present unique challenges that necessitate a re-evaluation of ethical frameworks to ensure they are adequately addressed. Furthermore, the perception of ethical issues in medical research varies across cultural backgrounds and generations of researchers [5-8]. This cultural and generational divide has shaped perspectives on medical ethics, leading to varied treatment of ethical issues in research practice.
- However, researchers continue to face challenges in evaluating the performance of studies incorporating AI [9-11]. In this review, we examine articles that discuss the ethical issues arising in medical research involving such technologies. We aimed to reflect on these debates and propose potential resolutions (Fig. 1).
Methods
- This review examines recent studies addressing ethical issues in medical research. Given the significant impact of these ethical considerations on the direction of medical research, we thoroughly analyzed the norms and strategies related to ethical accountability. Our methodology employed a holistic approach, encompassing all academic disciplines and levels, with a specific focus on sourcing evidence from medical institutions and hospitals. To identify relevant articles, we searched medical databases, including PubMed and Google Scholar, for keywords and themes including AI, ChatGPT, autonomy, privacy, confidentiality, accountability, fairness, regulatory compliance, informed consent, and liability. Our objective was to identify emerging trends, burgeoning areas of interest such as AI, and the evolving focus on ethics in medical research over time.
Ethical challenges in medical AI
- 1. Privacy and confidentiality issues
- The use of AI and representative applications such as ChatGPT in medical research raises several ethical concerns that need to be addressed [12]. Key among these is the privacy and confidentiality of patient data [13-16]. Ensuring that the patient information used to train and operate AI models is handled securely and responsibly is crucial, especially when it involves sensitive health data [17]. This includes paying careful attention to how data are collected, stored, and shared.
- 2. Accountability issues
- The opacity of AI models, particularly deep-learning systems, is also a concern [18]. These “black box” systems often do not provide easy insights into their decision-making processes, which can complicate clinical decision-making and accountability [19]. For its effective use in medicine, AI processes must be transparent and explainable [20-22].
- 3. Bias and fairness issues
- Another critical issue is bias and fairness [23-25]. AI systems can unintentionally perpetuate or even worsen existing biases if trained on nondiverse or nonrepresentative datasets. This could lead to biased treatment recommendations or diagnostic outcomes, disproportionately affecting marginalized groups.
- 4. Informed consent and patient autonomy issues
- Informed consent and patient autonomy are further areas of concern [26,27]. Patients may not fully comprehend the extent of AI’s role in their diagnosis or treatment, potentially affecting their ability to make informed health-related decisions [28]. Additionally, as AI increasingly handles tasks traditionally performed by humans, medical professionals could become overly reliant on AI, potentially leading to the deskilling of the workforce [29,30]. This could diminish the healthcare workers’ ability to make nuanced decisions without AI assistance [31,32].
- 5. Liability issues
- Determining liability when AI systems err remains a complex challenge [33]. The question of who is responsible for AI-related misdiagnosis or treatment failure continues to spark debate [34,35]. The use of AI could require redefining standards of care and adjusting the legal definitions of negligence and malpractice [36]. Large language models (LLMs) might produce “hallucinations” or unreliable outputs, which could mislead clinicians and lead to potential malpractice [37]. LLM-generated advice may be treated similarly to other forms of third-party medical advice, which have historically had mixed acceptance in establishing the legal standard of care.
- 6. Regulatory compliance issues
- Finally, regulatory compliance is another significant issue [38,39]. AI tools must adhere to medical laws and regulations such as the Health Insurance Portability and Accountability Act (HIPAA) in the United States [40], which protects patient privacy, and the General Data Protection Regulation (GDPR) in Europe, which governs data protection [41,42]. However, research environments should be adapted to each nation’s unique medical infrastructure and level of industrial support [43].
- Addressing these ethical issues is crucial for the responsible development and application of AI in healthcare to ensure that it enhances patient outcomes without compromising ethical standards or patient trust.
- 7. Cultural differences and perceptions of ethics in AI and medical research
- Cultural differences profoundly influence how doctors, patients, and their families handle ethical dilemmas in medicine [44-46]. A comparative study of medical professionalism in China and the United States highlighted notable cultural distinctions and parallels between the two nations, examining Chinese practices such as family consent and familism, alongside the contentious issue of patient autonomy [47]. Furthermore, Western and Asian cultures differ significantly in the emphasis they place on patient rights [48]. Navigating these cultural variances is essential for international collaboration in AI and medical research, ensuring that technologies are developed and utilized in a manner that respects diverse ethical standards and values.
Strategies for resolving ethical issues in AI use in medical research
- Initially, determining the level of autonomy that AI should have in completing tasks is crucial, particularly in relation to its impact on patient outcomes [49,50]. The extent of AI autonomy varies significantly depending on the severity and importance of these tasks. Thus, a consensus is needed on the extent of autonomy granted to AI in clinical decision-making processes [51]. Ultimately, the role of AI—whether as an assistive aid or autonomous agent—must first be clearly defined [52]. For example, simpler and well-defined tasks such as administrative duties and data management are more suitable for AI automation [53]. By contrast, complex decision-making tasks that require human empathy and understanding should remain under human control [54,55].
- To ensure the accuracy of the decisions made by AI, its performance must be rigorously validated [56]. Clinical trials to evaluate these processes should be conducted before AI systems are used in practice [57]. Once these systems are commercially available and implemented in clinical practice, a continuous monitoring process is crucial to ensure that the AI systems operate as intended [58,59]. Several rigorous methods have been employed to validate AI systems for healthcare applications and to ensure their efficacy and safety [60]. Trial validation involves deploying systems in their intended clinical environments to monitor real-time performance and user interactions. Simulation testing evaluates systems in virtual environments that mimic complex medical scenarios, providing a safe platform for assessing potential risks without endangering patients. Model-centered validation focuses on the AI model itself, using data-driven techniques to verify its predictive accuracy and reliability across diverse clinical datasets. Additionally, expert opinion plays a crucial role, as healthcare professionals assess the system’s practicality and adherence to medical standards. These multifaceted validation approaches are critical for integrating reliable AI tools into medical practice and ultimately enhancing patient care and clinical outcomes.
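The model-centered validation described above can be sketched in code. The following is a minimal illustration, not any specific system's validation suite: all site names, thresholds, and data are hypothetical, and real validation would use far larger hold-out sets and prespecified statistical criteria.

```python
# Sketch of model-centered validation: check that a model's predictive
# performance holds across hold-out datasets from different clinical sites.
# Site names, thresholds, and data below are hypothetical illustrations.

def sensitivity_specificity(y_true, y_pred):
    """Compute sensitivity and specificity from binary labels and predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    sens = tp / (tp + fn) if (tp + fn) else 0.0
    spec = tn / (tn + fp) if (tn + fp) else 0.0
    return sens, spec

def validate_across_sites(site_results, min_sens=0.8, min_spec=0.8):
    """Return the sites where performance falls below the agreed thresholds."""
    failures = {}
    for site, (y_true, y_pred) in site_results.items():
        sens, spec = sensitivity_specificity(y_true, y_pred)
        if sens < min_sens or spec < min_spec:
            failures[site] = (round(sens, 2), round(spec, 2))
    return failures

# Hypothetical hold-out labels and model outputs from two hospitals.
results = {
    "hospital_a": ([1, 1, 1, 1, 0, 0, 0, 0], [1, 1, 1, 1, 0, 0, 0, 0]),
    "hospital_b": ([1, 1, 1, 1, 0, 0, 0, 0], [1, 0, 0, 1, 0, 0, 0, 1]),
}
print(validate_across_sites(results))  # hospital_b falls below both thresholds
```

A model that passes at one site but fails at another, as hospital_b does here, is exactly the case that mandates retraining or restricted deployment before clinical use.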
- As mentioned previously, regulations and ethical guidelines are essential to ensure that AI tools comply with health privacy laws and meticulously protect patient data. The implementation of AI auditing processes should be guided by ethical standards that prioritize patient welfare and equity [61,62]. Healthcare providers must understand the decision-making processes of AI tools to make informed judgments regarding how AI-influenced outcomes affect patient care [63]. This includes ensuring that informed consent is obtained from patients regarding AI involvement in their care, the information it processes, and its influence on their treatment.
- To implement these processes in practice, it is vital to revise current consent processes to explicitly inform patients about how AI is used in research and treatment, detailing both the benefits and risks. Consent must be informed and voluntary, with clear options for opting out [64]. Aligning these guidelines with international standards promotes global consistency [65]. Transparency and explainability should be emphasized to enable both clinicians and patients to understand AI decisions, thus building trust and facilitating informed clinical decision-making [21,66,67]. Openly publishing details on AI training processes, datasets, and performance metrics is essential.
- Training healthcare professionals is necessary to ensure safe AI applications in patient care [68]. They must learn to use AI tools safely and accurately [69]. As AI is expected to take over many tasks currently performed by healthcare providers, the risk that human skills may diminish remains high. Consequently, health professionals should focus more on areas in which AI systems cannot make accurate final decisions because of the irreplaceable nature of certain tasks [70]. This necessitates clear protocols to define the depth and range of AI involvement in patient care. The entire process involves risks that can affect patient outcomes. Therefore, establishing a backup system for AI tools is crucial so that the decision-making process can be traced when problems arise [71,72]. A trial-and-error period seems inevitable as AI continues to develop and improve. The role of AI must be adjusted based on real-world requirements and evolving ethical standards. It is important for AI systems to enhance healthcare services without compromising patient safety and prognosis.
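One concrete form such traceability can take is an audit trail that ties each AI output to a model version, an input snapshot, and any clinician override. The sketch below is purely illustrative; the field names and model identifier are assumptions, not a description of any deployed system.

```python
# Illustrative audit trail for AI-assisted decisions: each entry records
# enough context to trace how a recommendation was produced and whether a
# clinician overrode it. All field names and values are hypothetical.
import datetime
import json

audit_log = []

def record_decision(model_version, input_summary, output, clinician_override=None):
    """Append one traceable entry for an AI-assisted decision."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,        # exact model used
        "input_summary": input_summary,        # snapshot of what the model saw
        "ai_output": output,                   # what the AI recommended
        "clinician_override": clinician_override,  # final human decision, if different
    }
    audit_log.append(entry)
    return entry

# Hypothetical triage recommendations, one accepted and one overridden.
record_decision("triage-model-v2.3", {"age": 70, "symptom": "chest pain"},
                "high priority")
record_decision("triage-model-v2.3", {"age": 25, "symptom": "cough"},
                "low priority", clinician_override="medium priority")
print(json.dumps(audit_log[-1], indent=2))
```

Recording the override alongside the AI output also gives monitoring teams a direct signal of where clinicians distrust the system, which feeds back into the trial-and-error adjustment described above.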
- Compliance with data protection regulations such as the GDPR and HIPAA is essential. Establishing comprehensive data governance frameworks is crucial for dictating the collection, storage, and use of patient data, thereby ensuring anonymity and safeguarding against breaches. Robust anonymization methods are vital for protecting patient information prior to its use in machine learning training. Furthermore, securing patient consent, maintaining data integrity, and implementing secure data management and storage protocols is imperative to adhere to relevant legal standards [73]. Various methods have been introduced to ensure that images remain useful for medical research and diagnosis while removing personally identifiable information to protect patient privacy, such as de-identifying facial features in magnetic resonance images, encrypting patient identifications, and modifying personal data fields [74,75].
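The de-identification methods mentioned above can be illustrated with a small sketch. The field names, the identifier list, and the salt below are hypothetical; a real pipeline must satisfy the HIPAA Safe Harbor identifier list or an equivalent GDPR anonymization standard, not this simplified subset.

```python
# Illustrative de-identification of a patient record before research use.
# The identifier set and salt are assumptions for this sketch only; real
# pipelines must follow HIPAA Safe Harbor / GDPR requirements in full.
import hashlib

DIRECT_IDENTIFIERS = {"name", "address", "phone", "email", "mrn"}

def deidentify(record, salt="study-specific-secret"):
    """Drop direct identifiers and replace the patient ID with a salted hash."""
    clean = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    # A stable pseudonym lets records be linked within the study without
    # exposing the medical record number; it is not reversible without the salt.
    token = hashlib.sha256((salt + record["mrn"]).encode()).hexdigest()[:12]
    clean["subject_token"] = token
    return clean

record = {"mrn": "12345", "name": "Jane Doe", "phone": "555-0100",
          "age": 62, "diagnosis": "T2DM", "hba1c": 7.9}
print(deidentify(record))  # clinical fields plus a pseudonymous token remain
```

Note that stripping direct identifiers alone does not guarantee anonymity; quasi-identifiers such as age combined with rare diagnoses can still permit re-identification, which is why the governance frameworks described above remain necessary.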
- To reduce bias, AI models should be trained using diverse datasets representing various demographics, and regular audits of AI systems for biases and discrepancies in performance across different groups are necessary to ensure broad applicability and fairness [76]. Addressing bias is also necessary to ensure that AI systems deliver trustworthy and fair results, which requires a continuous effort to identify and mitigate potential biases so that these systems remain accurate and equitable [77]. To evaluate fairness, metrics such as a “fairness score” and “bias index” have been introduced, and fairness certification has been suggested as crucial for the broader acceptance and trustworthiness of AI systems [78].
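A bias audit of the kind described above can be as simple as comparing a model's positive-prediction rate across demographic groups. The sketch below uses the demographic parity gap, one common fairness metric, as a stand-in; the group names, data, and the 0.1 audit threshold are hypothetical, and the cited fairness score and bias index are more elaborate constructions than this.

```python
# Minimal bias-audit sketch: compare a model's positive-prediction rate
# across demographic groups (demographic parity gap). All data, group
# names, and the audit threshold are hypothetical illustrations.

def positive_rate(preds):
    """Fraction of positive predictions in a group."""
    return sum(preds) / len(preds)

def demographic_parity_gap(preds_by_group):
    """Largest gap in positive-prediction rates across groups, plus the rates."""
    rates = {g: positive_rate(p) for g, p in preds_by_group.items()}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical binary predictions for two patient groups.
groups = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0, 1, 1],  # 70% positive
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0, 1, 0],  # 30% positive
}
gap, rates = demographic_parity_gap(groups)
if gap > 0.1:  # audit threshold chosen for this sketch
    print(f"Potential bias: parity gap {gap:.2f}, rates {rates}")
```

A large gap is a prompt for investigation rather than proof of unfairness: base rates can legitimately differ between groups, which is why audits must be paired with clinical review of the flagged disparities.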
- Clearly defining accountability for AI decisions in a clinical setting is essential. This encompasses AI developers, healthcare providers, and the organizations that deploy AI solutions. AI tools must embody transparency to facilitate proper regulation and meet societal demands for accountability, particularly when unexpected outcomes arise [79]. Legislative initiatives, such as the Algorithmic Accountability Act of 2022 in the United States and the European Union AI Act, propose comprehensive measures. These frameworks mandate that developers extensively evaluate the impact of their AI systems, including potential biases and discriminatory outcomes [80]. There is a pressing need for precise definitions of risk and accountability in the AI domain, together with standardized risk governance and management strategies that are applicable across the sector. Five characteristics are essential for AI risk-management methodologies: balance, extendibility, representation, transparency, and long-term orientation. These attributes ensure that AI systems are accountable, sustainable, and ethically aligned with clinical needs and societal expectations [81].
- Encouraging collaboration between technologists, ethicists, healthcare providers, and patients is crucial for a holistic approach to AI development and implementation [16,67,82]. Such collaboration leads to better-designed AI systems that respect ethical norms and are more effective in clinical settings. Although no singular global legal framework specifically governing the use of AI in healthcare exists [83], adopting these strategies can help address the ethical risks associated with deploying AI technologies, such as ChatGPT, in medical research. By implementing these measures, we can enhance the benefits of AI while reducing its potential harm.
- Cross-cultural training provides researchers and professionals with workshops, seminars, and programs that promote cultural competence and ethical sensitivity [84,85]. Global ethical standards should be developed to honor local values while upholding international norms. Collaborative international research encourages partnerships to enhance mutual understanding and integrate diverse ethical perspectives [86]. Community engagement involves public consultations and advisory panels to gain local insights, helping researchers grasp cultural nuances [87,88]. Transparent communication ensures information about AI and medical research is clear and culturally appropriate. Ethics reviews should include cultural experts to tackle potential cultural and ethical issues. Finally, adaptive technology design enables the customization of AI systems and research protocols to various cultural settings, supporting multiple languages and flexible interfaces [89].
- Engaging a diverse array of stakeholders in the development of policies and guidelines for AI and medical research is crucial. Including ethicists, technologists, patients, and practitioners from varied backgrounds ensures that these guidelines are comprehensive and that AI systems are capable of nuanced decision-making. This inclusive approach is vital for crafting policies that address the multifaceted impacts of AI technologies across different segments of society [90,91]. Moreover, the implementation of robust feedback mechanisms is essential for the ongoing refinement of AI systems. Such mechanisms enable stakeholders to report on how AI applications affect their lives, providing critical insights that can lead to enhancements in both functionality and ethical alignment [92]. Participatory design plays a pivotal role in AI development by involving end users and patients directly in the design and testing phases. This strategy results in innovations that are not only user-friendly but also deeply align with the diverse needs and values of various user groups [93].
Conclusion
- The integration of AI technologies like ChatGPT into medical research offers substantial transformative potential but also poses significant ethical challenges. These include concerns related to privacy, bias, accountability, informed consent, regulatory compliance, and liability. To responsibly deploy AI in healthcare, it is crucial to establish a clear ethical framework that defines AI’s role in clinical decision-making, ensuring it enhances transparency and patient trust.
- This entails rigorous validation through clinical trials, ongoing post-implementation monitoring, and the creation of comprehensive data governance frameworks that prioritize privacy and security. Moreover, developing diverse datasets is essential to reduce bias and promote equitable healthcare outcomes. Engaging a wide range of stakeholders, including technologists, ethicists, healthcare providers, and patients, will ensure that AI systems are ethically aligned and meet actual clinical needs.
- Ultimately, by maintaining high ethical standards and fostering a collaborative development environment, AI can be leveraged to make significant advances in healthcare that are both innovative and ethically responsible.
- The future of AI in healthcare is poised to significantly enhance clinical decision-making, patient care, and operational efficiency, but it requires careful management to address potential ethical, regulatory, and practical challenges. Overall, the prospective view is one of cautious optimism, advocating a balance between leveraging the potential benefits of AI in medical research and diligently addressing the ethical and practical challenges that accompany such technology.
Article information
Conflicts of interest
Hyunyong Hwang is an editorial board member of the journal but was not involved in the peer reviewer selection, evaluation, or decision process of this article. No other potential conflicts of interest relevant to this article were reported.
Funding
None.
Author contributions
Conceptualization: SY, SSL, HH. Data curation: SY, SSL, HH. Formal analysis: SY, SSL, HH. Methodology: SY, SSL, HH. Project administration: HH. Supervision: HH. Validation: HH. Visualization: SY, SSL, HH. Writing - original draft: SY, SSL. Writing - review & editing: HH.
Fig. 1. Balancing challenges and strategies for artificial intelligence (AI) integration in healthcare.
References
- 1. Goirand M, Austin E, Clay-Williams R. Implementing ethics in healthcare AI-based applications: a scoping review. Sci Eng Ethics 2021;27:61.
- 2. Peterson M. The ethics of technology: response to critics. Sci Eng Ethics 2018;24:1645–52.
- 3. Rahimi F, Talebi Bezmin Abadi A. ChatGPT and publication ethics. Arch Med Res 2023;54:272–4.
- 4. Dave T, Athaluri SA, Singh S. ChatGPT in medicine: an overview of its applications, advantages, limitations, future prospects, and ethical considerations. Front Artif Intell 2023;6:1169595.
- 5. Kunstadter P. Medical ethics in cross-cultural and multi-cultural perspectives. Soc Sci Med Med Anthropol 1980;14B:289–96.
- 6. DeWane M, Grant-Kels JM. The ethics of volunteerism: whose cultural and ethical norms take precedence? J Am Acad Dermatol 2018;78:426–8.
- 7. Martinsons MG, Ma D. Sub-cultural differences in information ethics across China: focus on Chinese management generation gaps. J Assoc Inf Syst 2009;10:816–33.
- 8. Ess C. Ethical pluralism and global information ethics. Ethics Inf Technol 2006;8:215–26.
- 9. Yuan S, Li F, Browning MH, Bardhan M, Zhang K, McAnirlin O, et al. Leveraging and exercising caution with ChatGPT and other generative artificial intelligence tools in environmental psychology research. Front Psychol 2024;15:1295275.
- 10. Stenseke J. Interdisciplinary confusion and resolution in the context of moral machines. Sci Eng Ethics 2022;28:24.
- 11. Kazim E, Koshiyama AS. A high-level overview of AI ethics. Patterns (N Y) 2021;2:100314.
- 12. Elendu C, Amaechi DC, Elendu TC, Jingwa KA, Okoye OK, John Okah M, et al. Ethical implications of AI and robotics in healthcare: a review. Medicine (Baltimore) 2023;102:e36671.
- 13. Alhammad N, Alajlani M, Abd-Alrazaq A, Epiphaniou G, Arvanitis T. Patients’ perspectives on the data confidentiality, privacy, and security of mHealth apps: systematic review. J Med Internet Res 2024;26:e50715.
- 14. Stanfill MH, Marc DT. Health information management: implications of artificial intelligence on healthcare data and information management. Yearb Med Inform 2019;28:56–64.
- 15. Wang C, Liu S, Yang H, Guo J, Wu Y, Liu J. Ethical considerations of using ChatGPT in health care. J Med Internet Res 2023;25:e48009.
- 16. Jeyaraman M, Balaji S, Jeyaraman N, Yadav S. Unraveling the ethical enigma: artificial intelligence in healthcare. Cureus 2023;15:e43262.
- 17. Diaz-Rodriguez N, Del Ser J, Coeckelbergh M, de Prado ML, Herrera-Viedma E, Herrera F. Connecting the dots in trustworthy artificial intelligence: from AI principles, ethics, and key requirements to responsible AI systems and regulation. Inf Fusion 2023;99:101896.
- 18. Chan B. Black-box assisted medical decisions: AI power vs. ethical physician care. Med Health Care Philos 2023;26:285–92.
- 19. Marcus E, Teuwen J. Artificial intelligence and explanation: how, why, and when to explain black boxes. Eur J Radiol 2024;173:111393.
- 20. Amann J, Blasimme A, Vayena E, Frey D, Madai VI; Precise4Q consortium. Explainability for artificial intelligence in healthcare: a multidisciplinary perspective. BMC Med Inform Decis Mak 2020;20:310.
- 21. Karim MR, Islam T, Shajalal M, Beyan O, Lange C, Cochez M, et al. Explainable AI for bioinformatics: methods, tools and applications. Brief Bioinform 2023;24:bbad236.
- 22. Plass M, Kargl M, Kiehl TR, Regitnig P, Geibler C, Evans T, et al. Explainability and causability in digital pathology. J Pathol Clin Res 2023;9:251–60.
- 23. Allareddy V, Oubaidin M, Rampa S, Venugopalan SR, Elnagar MH, Yadav S, et al. Call for algorithmic fairness to mitigate amplification of racial biases in artificial intelligence models used in orthodontics and craniofacial health. Orthod Craniofac Res 2023;26 Suppl 1:124–30.
- 24. Saint James Aquino Y. Making decisions: bias in artificial intelligence and data-driven diagnostic tools. Aust J Gen Pract 2023;52:439–42.
- 25. Vicente L, Matute H. Humans inherit artificial intelligence biases. Sci Rep 2023;13:15737.
- 26. Sprung CL, Winick BJ. Informed consent in theory and practice: legal and medical perspectives on the informed consent doctrine and a proposed reconceptualization. Crit Care Med 1989;17:1346–54.
- 27. Wang Y, Ma Z. Ethical and legal challenges of medical AI on informed consent: China as an example. Dev World Bioeth 2024;Jan 19 [Epub]. https://doi.org/10.1111/dewb.12442
- 28. Neri E, Coppola F, Miele V, Bibbolino C, Grassi R. Artificial intelligence: who is responsible for the diagnosis? Radiol Med 2020;125:517–21.
- 29. Parchmann N, Hansen D, Orzechowski M, Steger F. An ethical assessment of professional opinions on concerns, chances, and limitations of the implementation of an artificial intelligence-based technology into the geriatric patient treatment and continuity of care. Geroscience 2024;46:6269–82.
- 30. Panesar SS, Kliot M, Parrish R, Fernandez-Miranda J, Cagle Y, Britz GW. Promises and perils of artificial intelligence in neurosurgery. Neurosurgery 2020;87:33–44.
- 31. Bi WL, Hosny A, Schabath MB, Giger ML, Birkbak NJ, Mehrtash A, et al. Artificial intelligence in cancer imaging: clinical challenges and applications. CA Cancer J Clin 2019;69:127–57.
- 32. Choudhury A, Chaudhry Z. Large language models and user trust: consequence of self-referential learning loop and the deskilling of health care professionals. J Med Internet Res 2024;26:e56764.
- 33. Bottomley D, Thaldar D. Liability for harm caused by AI in healthcare: an overview of the core legal concepts. Front Pharmacol 2023;14:1297353.
- 34. Nolan P, Matulionyte R. Artificial intelligence in medicine: issues when determining negligence. J Law Med 2023;30:593–615.
- 35. Haftenberger A, Dierks C. Legal integration of artificial intelligence into internal medicine: data protection, regulatory, reimbursement and liability questions. Inn Med (Heidelb) 2023;64:1044–50.
- 36. Terranova C, Cestonaro C, Fava L, Cinquetti A. AI and professional liability assessment in healthcare: a revolution in legal medicine? Front Med (Lausanne) 2024;10:1337335.
- 37. Shumway DO, Hartman HJ. Medical malpractice liability in large language model artificial intelligence: legal review and policy recommendations. J Osteopath Med 2024;124:287–90.
- 38. Mu’azzam K, Santos da Silva FV, Murtagh J, Sousa Gallagher MJ. A roadmap for model-based bioprocess development. Biotechnol Adv 2024;73:108378.
- 39. Ross J, Hammouche S, Chen Y, Rockall AG; Royal College of Radiologists AI Working Group. Beyond regulatory compliance: evaluating radiology artificial intelligence applications in deployment. Clin Radiol 2024;79:338–45.
- 40. Rezaeikhonakdar D. AI chatbots and challenges of HIPAA compliance for AI developers and vendors. J Law Med Ethics 2023;51:988–95.
- 41. Siebelmann B, Grass G, Matthaei M, Cursiefen C, Gerhardt T, Koeberlein-Neu J, et al. Implementation and execution of big data-based studies in ophthalmology within the framework of the GDPR. Klin Monbl Augenheilkd 2024;241:758–67.
- 42. van Kolfschooten HB. A health-conformant reading of the GDPR’s right not to be subject to automated decision-making. Med Law Rev 2024;32:373–91.
- 43. Kumar K, Kumar P, Deb D, Unguresan ML, Muresan V. Artificial intelligence and machine learning based intervention in medical infrastructure: a review and future trends. Healthcare (Basel) 2023;11:207.
- 44. Sharif T, Bugo J. The anthropological approach challenges the conventional approach to bioethical dilemmas: a Kenyan Maasai perspective. Afr Health Sci 2015;15:628–33.
- 45. Orfali K. Parental role in medical decision-making: fact or fiction? A comparative study of ethical dilemmas in French and American neonatal intensive care units. Soc Sci Med 2004;58:2009–22.
- 46. Orfali K, Gordon EJ. Autonomy gone awry: a cross-cultural study of parents’ experiences in neonatal intensive care units. Theor Med Bioeth 2004;25:329–65.
- 47. Nie JB, Smith KL, Cong Y, Hu L, Tucker JD. Medical professionalism in China and the United States: a transcultural interpretation. J Clin Ethics 2015;26:48–60.
- 48. Yasin L, Stapleton GR, Sandlow LJ. Medical professionalism across cultures: a literature review. MedEdPublish (2016) 2019;8:191.
- 49. Rieder TN, Hutler B, Mathews DJ. Artificial intelligence in service of human needs: pragmatic first steps toward an ethics for semi-autonomous agents. AJOB Neurosci 2020;11:120–7.
- 50. Laitinen A, Sahlgren O. AI systems and respect for human autonomy. Front Artif Intell 2021;4:705164.
- 51. Vasey B, Lippert KA, Khan DZ, Ibrahim M, Koh CH, Layard Horsfall H, et al. Intraoperative applications of artificial intelligence in robotic surgery: a scoping review of current development stages and levels of autonomy. Ann Surg 2023;278:896–903.
- 52. Youssef A, Abramoff M, Char D. Is the algorithm good in a bad world, or has it learned to be bad? The ethical challenges of “locked” versus “continuously learning” and “autonomous” versus “assistive” AI tools in healthcare. Am J Bioeth 2023;23:43–5.
- 53. Talyshinskii A, Naik N, Hameed BM, Juliebo-Jones P, Somani BK. Potential of AI-driven chatbots in urology: revolutionizing patient care through artificial intelligence. Curr Urol Rep 2024;25:9–18.
- 54. Kerasidou A. Artificial intelligence and the ongoing need for empathy, compassion and trust in healthcare. Bull World Health Organ 2020;98:245–50.
- 55. Perry A. AI will never convey the essence of human empathy. Nat Hum Behav 2023;7:1808–9.
- 56. Magrabi F, Ammenwerth E, McNair JB, De Keizer NF, Hypponen H, Nykanen P, et al. Artificial intelligence in clinical decision support: challenges for evaluating AI and practical implications. Yearb Med Inform 2019;28:128–34.
- 57. Reddy S, Rogers W, Makinen VP, Coiera E, Brown P, Wenzel M, et al. Evaluation framework to guide implementation of AI systems into healthcare settings. BMJ Health Care Inform 2021;28:e100444.
- 58. Allen B, Dreyer K, Stibolt R Jr, Agarwal S, Coombs L, Treml C, et al. Evaluation and real-world performance monitoring of artificial intelligence models in clinical practice: try it, buy it, check it. J Am Coll Radiol 2021;18:1489–96.
- 59. Feng J, Phillips RV, Malenica I, Bishara A, Hubbard AE, Celi LA, et al. Clinical artificial intelligence quality improvement: towards continual monitoring and updating of AI algorithms in healthcare. NPJ Digit Med 2022;5:66.
- 60. Myllyaho L, Raatikainen M, Mannisto T, Mikkonen T, Nurminen JK. Systematic literature review of validation methods for AI systems. J Syst Softw 2021;181:111050.
- 61. Mokander J. Auditing of AI: legal, ethical and technical approaches. Digit Soc 2023;2:49.
- 62. Reddy S. Generative AI in healthcare: an implementation science informed translational path on application, integration and governance. Implement Sci 2024;19:27.
- 63. Lysaght T, Lim HY, Xafis V, Ngiam KY. AI-assisted decision-making in healthcare: the application of an ethics framework for big data in health and research. Asian Bioeth Rev 2019;11:299–314.
- 64. Kotsenas AL, Balthazar P, Andrews D, Geis JR, Cook TS. Rethinking patient consent in the era of artificial intelligence and big data. J Am Coll Radiol 2021;18(1 Pt B):180–4.
- 65. Jobin A, Ienca M, Vayena E. The global landscape of AI ethics guidelines. Nat Mach Intell 2019;1:389–99.
- 66. Rueda J, Rodriguez JD, Jounou IP, Hortal-Carmona J, Ausin T, Rodriguez-Arias D. “Just” accuracy? Procedural fairness demands explainability in AI-based medical resource allocations. AI Soc 2024;39:1141–22.
- 67. Kiseleva A, Kotzinos D, De Hert P. Transparency of AI in healthcare as a multilayered system of accountabilities: between legal requirements and technical limitations. Front Artif Intell 2022;5:879603.
- 68. Lomis K, Jeffries P, Palatta A, Sage M, Sheikh J, Sheperis C, et al. Artificial intelligence for health professions educators. NAM Perspect 2021;2021:10.31478/202109a.
- 69. Majumder A, Sen D. Artificial intelligence in cancer diagnostics and therapy: current perspectives. Indian J Cancer 2021;58:481–92.
- 70. Altamimi I, Altamimi A, Alhumimidi AS, Altamimi A, Temsah MH. Artificial intelligence (AI) chatbots in medicine: a supplement, not a substitute. Cureus 2023;15:e40922.
- 71. Chen RY. A traceability chain algorithm for artificial neural networks using T-S fuzzy cognitive maps in blockchain. Future Gener Comput Syst 2018;80:198–210.
- 72. Narneg S, Dodde S, Adedoja A, Ayyalasomayajula MMT, Chintala S. AI-driven decision support systems in management: enhancing strategic planning and execution. IJRITCC 2024;12:268–76.PDF
- 73. Galbusera F, Cina A. Image annotation and curation in radiology: an overview for machine learning practitioners. Eur Radiol Exp 2024;8:11.ArticlePubMedPMCPDF
- 74. Jeong YU, Yoo S, Kim YH, Shim WH. De-identification of facial features in magnetic resonance images: software development using deep learning technology. J Med Internet Res 2020;22:e22739.ArticlePubMedPMC
- 75. Kondylakis H, Catalan R, Alabart SM, Barelle C, Bizopoulos P, Bobowicz M, et al. Documenting the de-identification process of clinical and imaging data for AI for health imaging projects. Insights Imaging 2024;15:130.ArticlePubMedPMCPDF
- 76. Ueda D, Kakinuma T, Fujita S, Kamagata K, Fushimi Y, Ito R, et al. Fairness of artificial intelligence in healthcare: review and recommendations. Jpn J Radiol 2024;42:3–15.ArticlePubMedPMCPDF
- 77. Kahn CE Jr. Hitting the mark: reducing bias in AI systems. Radiol Artif Intell 2022;4:e220171.ArticlePubMedPMC
- 78. Agarwal A, Agarwal H, Agarwal N. Fairness score and process standardization: framework for fairness certification in artificial intelligence systems. AI Ethics 2023;3:267–79.ArticlePDF
- 79. Fellander-Tsai L. AI ethics, accountability, and sustainability: revisiting the Hippocratic oath. Acta Orthop 2020;91:1–2.ArticlePMC
- 80. Oduro S, Moss E, Metcalf J. Obligations to assess: recent trends in AI accountability regulations. Patterns (N Y) 2022;3:100608.ArticlePubMedPMC
- 81. Hohma E, Boch A, Trauth R, Lutge C. Investigating accountability for artificial intelligence through risk governance: a workshop-based exploratory study. Front Psychol 2023;14:1073686.ArticlePubMedPMC
- 82. Vo V, Chen G, Aquino YS, Carter SM, Do QN, Woode ME. Multi-stakeholder preferences for the use of artificial intelligence in healthcare: a systematic review and thematic analysis. Soc Sci Med 2023;338:116357.ArticlePubMed
- 83. World Health Organization (WHO). Ethics and governance of artificial intelligence for health [Internet]. WHO; c2021 [cited 2024 Sep 1]. https://www.who.int/publications/i/item/9789240029200
- 84. Gradellini C, Gomez-Cantarino S, Dominguez-Isabel P, Molina-Gallego B, Mecugni D, Ugarte-Gurrutxaga MI. Cultural competence and cultural sensitivity education in university nursing courses: a scoping review. Front Psychol 2021;12:682920.
- 85. Cabler KA. Exploring the impact of diversity training on the development and application of cultural competence skills in higher education professionals [Dissertation]. Virginia Commonwealth University; 2019.
- 86. Alper J, Sloan SS. Data matters: ethics, data, and international research collaboration in a changing world. Proceedings of a workshop. 1st ed. National Academies Press; 2018.
- 87. Newman SD, Andrews JO, Magwood GS, Jenkins C, Cox MJ, Williamson DC. Community advisory boards in community-based participatory research: a synthesis of best processes. Prev Chronic Dis 2011;8:A70.
- 88. Davies A, Ormel I, Bernier A, Harriss E, Mumba N, Gobat N, et al. A rapid review of community engagement and informed consent processes for adaptive platform trials and alternative design trials for public health emergencies. Wellcome Open Res 2023;8:194.
- 89. Lee CP. Design, development, and deployment of context-adaptive AI systems for enhanced user adoption. In: CHI EA '24: Extended Abstracts of the CHI Conference on Human Factors in Computing Systems; 2024 May 11-16; Honolulu, USA. Association for Computing Machinery; 2024. p. 429.
- 90. World Health Organization (WHO). WHO issues first global report on artificial intelligence (AI) in health and six guiding principles for its design and use [Internet]. WHO; c2021 [cited 2024 Sep 1]. https://www.who.int/news/item/28-06-2021-who-issues-first-global-report-on-ai-in-health-and-six-guiding-principles-for-its-design-and-use
- 91. Nyariro M, Emami E, Caidor P, Abbasgholizadeh Rahimi S. Integrating equity, diversity and inclusion throughout the lifecycle of AI within healthcare: a scoping review protocol. BMJ Open 2023;13:e072069.
- 92. Izadi S, Forouzanfar M. Error correction and adaptation in conversational AI: a review of techniques and applications in chatbots. AI 2024;5:803–41.
- 93. Rambach T, Gleim P, Mandelartz S, Heizmann C, Kunze C, Kellmeyer P. Challenges and facilitation approaches for the participatory design of AI-based clinical decision support systems: protocol for a scoping review. JMIR Res Protoc 2024;13:e58185.