Introduction

Chat Generative Pre-trained Transformer (ChatGPT) is a natural language processing (NLP) tool based on a large-scale pre-trained language model developed by OpenAI1. It is built on the Transformer architecture, which enables it to generate, comprehend, translate, and interpret human language. With the advancement of AI technology, the potential of ChatGPT's applications in the medical field has gradually emerged, especially in health education, disease prevention, and patient management2,3,4,5. In clinical practice, ChatGPT can aid in medical history collection, electronic medical record documentation, clinical decision support, patient health management, medical education, clinical research, and scientific writing3,5.

While ChatGPT exhibits proficiency in data processing and information production, its acceptance by healthcare professionals and the public will significantly influence its widespread application in clinical practice6,7. Healthcare professionals may have concerns regarding the reliability of the technology in the decision-making process, particularly in intricate or urgent care situations where dependence on ChatGPT-generated recommendations may increase clinical risk8. Likewise, patient trust is crucial; patients' willingness to accept ChatGPT in the medical process will depend on their trust in the technology, the safeguarding of data privacy, and the satisfaction of emotional needs9. ChatGPT-generated advice may feel impersonal to many patients, particularly in emotionally sensitive healthcare situations.

Therefore, studying the attitudes of healthcare professionals and the public toward the clinical application of ChatGPT may elucidate potential barriers and facilitators. Comprehending these attitudes enables the formulation of effective methods for integrating ChatGPT into clinical practice, ultimately improving healthcare service efficiency and optimizing the medical experience.

Background

The successful application of ChatGPT in clinical practice depends not only on technical performance but, critically, on the attitudes and perceptions of healthcare professionals and non-healthcare individuals6.

The attitudes of non-healthcare individuals, as direct beneficiaries of medical services, will significantly impact the application of ChatGPT in clinical practice10. Researchers have concluded that non-healthcare individuals' willingness to use ChatGPT is associated with its expected benefits. Positive sentiments were reflected in younger patients' willingness to use new technology, particularly for medical information and self-health management; they saw ChatGPT as a source of immediate and convenient advice11. Nevertheless, additional research indicates unfavorable attitudes, with patients exhibiting heightened apprehension over the trustworthiness of information produced by AI technology, particularly concerning privacy protection and data security2,9. Moreover, the standardized recommendations produced by ChatGPT may lack a human touch, affecting non-healthcare stakeholders' participation in healthcare decision-making9. Therefore, understanding non-healthcare individuals' attitudes toward applying ChatGPT in clinical practice can help ensure the successful adoption of this technology.

At the same time, the attitude of healthcare professionals, as providers of medical services, determines whether ChatGPT can be effectively incorporated into daily clinical work. Positive sentiments are reflected in healthcare professionals' belief that ChatGPT can significantly improve efficiency in paperwork processing and reduce workload5,7,12,13. However, many healthcare professionals express skepticism over its role in intricate clinical decision-making7,8. Some physicians worry that relying on ChatGPT may weaken their professional judgment and even increase the risk of misdiagnosis. Moreover, ethical issues and the division of responsibility are essential concerns for healthcare professionals, especially in cases of medical error, where ChatGPT may obfuscate the assignment of accountability5,7,14. Therefore, studying healthcare professionals' acceptance, concerns, and ethical considerations regarding ChatGPT can help ensure its safe and practical application in clinical environments.

Internet platforms serve as the primary channel for rapidly accessing and disseminating information about ChatGPT, and the Internet community closely follows its development. ChatGPT, a hot topic on the Chinese Internet, has attracted extensive attention from both healthcare and non-healthcare users since its emergence15. The Internet is an open platform for discussion, enabling users to quickly access and share experiences and opinions about using ChatGPT; healthcare professionals and the public can discuss its advantages and disadvantages in clinical practice. However, current research has yet to explore the attitudes of the Internet community regarding the application of ChatGPT in clinical practice, particularly from both healthcare and non-healthcare perspectives. Therefore, this study analyzes the attitudes of healthcare and non-healthcare users toward the use of ChatGPT in clinical practice based on Internet data, to provide strategic recommendations for the clinical dissemination of ChatGPT.

Methods

Data collection

This study used keyword searches covering the period from November 30, 2022, to September 1, 2024. The keywords were ("CHATGPT" OR "AI") AND ("Doctor" OR "Nurse" OR "Health" OR "Medical" OR "Diagnosis" OR "Treatment") AND ("Intelligent medical" OR "AI consulting" OR "Medical technology innovation" OR "AI-assisted health management"). The social media platforms included Zhihu, Weibo, and Red Book; the medical forums included Ding Xiang Yuan and Ai Medicine. The information collected included time, user type, text content, and number of likes. User type comprised non-healthcare, health professional, and unknown; the non-healthcare category was further divided into self, family/friend, celebrity, and media organization. Finally, this study treated the texts of non-healthcare users as public opinion and the texts of health professionals as health professional opinion.
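To illustrate how such a Boolean keyword strategy can be applied to collected posts, the following Python sketch keeps only records that satisfy all three keyword groups; the field names and the sample post are hypothetical, and platform-specific crawling is omitted.

```python
# Boolean keyword filter mirroring the search strategy described above.
# Minimal illustrative sketch: the `posts` records and their fields are
# hypothetical assumptions, not the actual collection pipeline.
GROUP_A = ["chatgpt", "ai"]
GROUP_B = ["doctor", "nurse", "health", "medical", "diagnosis", "treatment"]
GROUP_C = ["intelligent medical", "ai consulting",
           "medical technology innovation", "ai-assisted health management"]

def matches_query(text: str) -> bool:
    """Return True if the text satisfies (A) AND (B) AND (C)."""
    t = text.lower()
    return (any(k in t for k in GROUP_A)
            and any(k in t for k in GROUP_B)
            and any(k in t for k in GROUP_C))

posts = [
    {"time": "2023-05-01", "user_type": "non-healthcare",
     "text": "ChatGPT gave me medical advice; AI-assisted health management felt convenient.",
     "likes": 12},
]
relevant = [p for p in posts if matches_query(p["text"])]
print(len(relevant))  # -> 1
```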

Data cleaning

The data cleaning stage removed irrelevant information and was divided into four steps: removing irrelevant content, text normalization, language detection and filtering, and handling private information. First, irrelevant information was removed and data redundancy reduced to avoid interfering with the analysis results; this included removing blatant advertisements or spam comments and deleting duplicate comment content. Second, text normalization ensured consistency and accuracy for data analysis; this included case conversion, removal of punctuation, special characters, and emoticons, correction of spelling errors in comments, handling of spaces and acronyms, and removal of URLs. Third, language detection and filtering identified and filtered out content in languages other than English and Chinese. Fourth, private information was handled: sensitive information (e.g., phone numbers, email addresses) was deleted or anonymized to protect users' privacy.
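A minimal sketch of these four cleaning steps, assuming Python with regular expressions, is shown below; the specific patterns (e.g., the phone-number format) and placeholder tokens are illustrative assumptions rather than the exact rules used in this study.

```python
import re

def anonymize(text: str) -> str:
    """Mask sensitive information before any other processing."""
    text = re.sub(r"\b\d{11}\b", "PHONE_REMOVED", text)   # mainland-style phone numbers (assumed format)
    return re.sub(r"\S+@\S+\.\S+", "EMAIL_REMOVED", text)  # email addresses

def normalize(text: str) -> str:
    """Case conversion, URL removal, punctuation/emoticon stripping, whitespace handling."""
    text = text.lower()
    text = re.sub(r"https?://\S+", "", text)
    text = re.sub(r"[^\w\s\u4e00-\u9fff]", " ", text)      # keep word chars and Chinese characters
    return re.sub(r"\s+", " ", text).strip()

def keep_language(text: str) -> bool:
    """Keep comments containing Chinese characters or plain-ASCII (English) text."""
    return bool(re.search(r"[\u4e00-\u9fff]", text)) or text.isascii()

def clean(comments: list[str]) -> list[str]:
    seen, cleaned = set(), []
    for c in comments:
        c = normalize(anonymize(c))
        if c and c not in seen and keep_language(c):        # drop empty and duplicate comments
            seen.add(c)
            cleaned.append(c)
    return cleaned
```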

Validation of relevance

When assessing the relevance of collected content to the use of ChatGPT in medical and non-medical topics, a two-researcher review effectively ensured rigor in data cleansing. Two researchers independently examined the data for relevance and usability, addressing common problems such as hidden advertising, misinformation, and irrelevant content. First, the two researchers sifted through the original dataset to identify irrelevant or low-quality data, including advertising content, exaggerated misinformation, and repetitive or meaningless content. The two researchers then reviewed each other's labeled data, recorded and discussed items on which they differed, and reached consensus to reduce subjective bias. New types of issues identified during the review were used to dynamically update the data cleansing rules, further enhancing the accuracy and credibility of the data analysis.
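One common way to quantify agreement between the two researchers is Cohen's kappa; the sketch below assumes binary relevant/irrelevant labels and hypothetical ratings, and the study does not specify which agreement statistic was applied at this stage.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical relevance labels from the two researchers (1 = relevant, 0 = irrelevant).
rater_a = [1, 1, 0, 1, 0, 0, 1, 1]
rater_b = [1, 1, 0, 0, 0, 0, 1, 1]

# Cohen's kappa is one common inter-rater agreement measure (an assumption here).
kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa: {kappa:.2f}")

# Items on which the raters differ (here, index 3) are discussed until consensus,
# and recurring problem types feed back into the data cleansing rules.
disagreements = [i for i, (a, b) in enumerate(zip(rater_a, rater_b)) if a != b]
print(disagreements)
```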

Sentimental analysis

Selection of the sentiment dictionary

The sentiment dictionary of the Dalian University of Technology was used. This dictionary contains information on lexical categories (parts of speech), emotion categories, intensity, and polarity, and it has been adapted and refined for Chinese under Ekman's emotion classification system16,17,18.

Text segmentation

This part was divided into three steps: Chinese word segmentation, handling near-synonyms, and creating a customized medical word list. Chinese word segmentation: Jieba, a reliable tool, was used to split the cleaned text into individual words, and the lexical information (e.g., nouns, verbs, adjectives) in the segmentation results was retained for matching emotion words. Handling near-synonyms: a thesaurus was used to replace or tag synonyms to improve vocabulary matching. Creating a customized medical word list improved the accuracy of sentiment analysis.
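A minimal segmentation sketch using Jieba's part-of-speech mode is shown below; the custom medical terms and the synonym map are illustrative assumptions.

```python
import jieba
import jieba.posseg as pseg

# Add assumed medical terms so Jieba keeps them as single tokens.
for term in ["人工智能", "临床决策", "电子病历"]:
    jieba.add_word(term)

# Map near-synonyms to a canonical form (illustrative entry only).
SYNONYMS = {"大夫": "医生"}

def segment(text: str) -> list[tuple[str, str]]:
    """Return (word, part-of-speech tag) pairs for later emotion-word matching."""
    tokens = []
    for pair in pseg.cut(text):
        word = SYNONYMS.get(pair.word, pair.word)
        tokens.append((word, pair.flag))
    return tokens

print(segment("医生认为人工智能可以辅助临床决策"))
```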

Vocabulary matching

This step includes matching sentiment vocabulary, sentiment annotation, and contextual information. First, the segmented text was matched against a predefined sentiment dictionary adapted from the Dalian University of Technology lexicon. Identified words were assigned an emotion category, polarity, and intensity. Second, sentiment words were annotated within the text, linking them to specific emotion categories to facilitate quantification and detailed analysis. Finally, contextual adjustments were made to refine sentiment polarity and intensity, accounting for factors such as negations or amplifiers. For instance, positive words preceded by negations were reassigned to negative polarity, ensuring the accurate representation of emotional nuances. This systematic approach ensured robust and precise sentiment detection, forming the basis for subsequent analysis.
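The following sketch illustrates dictionary matching with negation handling; the two lexicon entries stand in for the Dalian University of Technology dictionary, and the negation list and context window are assumptions.

```python
# Placeholder lexicon entries (category, polarity, intensity) standing in for
# the Dalian University of Technology dictionary.
LEXICON = {
    "满意": {"category": "Happy", "polarity": 1, "intensity": 5},
    "担心": {"category": "Fear",  "polarity": -1, "intensity": 3},
}
NEGATIONS = {"不", "没有", "并非"}  # assumed negation cues

def match_sentiment(tokens: list[str], window: int = 2) -> list[dict]:
    """Annotate matched emotion words, flipping polarity when preceded by a negation."""
    annotations = []
    for i, tok in enumerate(tokens):
        entry = LEXICON.get(tok)
        if entry is None:
            continue
        ann = dict(entry, word=tok)
        if any(t in NEGATIONS for t in tokens[max(0, i - window):i]):
            ann["polarity"] *= -1          # e.g., a negated positive word becomes negative
        annotations.append(ann)
    return annotations

print(match_sentiment(["患者", "不", "满意"]))  # negated positive word -> negative polarity
```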

Emotion scoring method

This study utilized a combined weighting and counting approach to compute the overall sentiment score of each comment. First, sentiment words identified in the text were assigned intensity values based on the sentiment dictionary; positive words contributed to a positive score, while negative words contributed to a negative score. Second, the overall sentiment score was calculated by subtracting the total negative score from the total positive score. Finally, the overall sentiment scores were standardized to a scale (−1 to 1) to ensure consistency and facilitate comparisons. This transformation helped align the results with practical application needs and allowed a more straightforward interpretation of sentiment polarity and intensity.
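A possible implementation of this scoring and standardization step is sketched below; dividing the difference by the total matched intensity is one plausible way to map scores to the −1 to 1 range, as the exact scaling formula is not specified in the text.

```python
def score_comment(annotations: list[dict]) -> float:
    """Intensity-weighted sentiment score for one comment, scaled to [-1, 1]."""
    positive = sum(a["intensity"] for a in annotations if a["polarity"] > 0)
    negative = sum(a["intensity"] for a in annotations if a["polarity"] < 0)
    total = positive + negative
    if total == 0:
        return 0.0                          # no sentiment words matched
    return (positive - negative) / total    # assumed normalization to [-1, 1]

print(score_comment([
    {"polarity": 1, "intensity": 5},        # positive word
    {"polarity": -1, "intensity": 3},       # negative word
]))  # -> 0.25
```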

Testing and optimization

A two-stage validation process was implemented to ensure the reliability and accuracy of the sentiment analysis. First, manual annotation: two trained annotators independently labeled a subset of the data with sentiment categories, and annotation consistency was calculated using inter-annotator agreement measures to assess the reliability of the manual labels. Second, comparison with automated analysis: the results of the automated sentiment analysis were compared against the manually annotated data, and performance was evaluated using precision, recall, and the F1 score, which balances precision and recall. The study achieved an F1 score of 76, indicating that the automated sentiment analysis met the required standard for accuracy. This iterative process ensured that the sentiment analysis was robust, reliable, and reflective of the original text data.
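The comparison between manual and automated labels can be computed as sketched below; the label lists are hypothetical, and the averaging scheme (macro averaging here) is an assumption about the evaluation protocol.

```python
from sklearn.metrics import precision_recall_fscore_support

# Hypothetical manual (gold) labels versus automated polarity labels.
manual    = ["pos", "neg", "neg", "pos", "neu", "pos", "neg", "neu"]
automated = ["pos", "neg", "pos", "pos", "neu", "neg", "neg", "neu"]

precision, recall, f1, _ = precision_recall_fscore_support(
    manual, automated, average="macro", zero_division=0
)
print(f"precision={precision:.2f}, recall={recall:.2f}, F1={f1:.2f}")
```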

Content analysis using K-means algorithm

We conducted content analysis using the K-Means algorithm to delve deeper into the underlying themes and patterns within the textual data. This method, a widely recognized unsupervised machine learning technique, is particularly effective in identifying clusters within large datasets, enabling the extraction of insightful patterns. To ensure the reliability and applicability of this method in our research, we followed a rigorous, step-by-step process validated through prior optimization efforts19. The structured application of the K-Means algorithm was divided into four steps, and an illustrative sketch of the pipeline follows the list below.

  • Step 1: Initialization: The initial stage involved establishing cluster centers to initiate the clustering process. Specifically, K data points were randomly selected from the dataset as initial centroids. The value of K, representing the number of clusters, was determined through a combination of preliminary analysis and ___domain-specific knowledge. Multiple random initializations were performed in certain instances to enhance robustness and reduce sensitivity to the initial centroid selection.

  • Step 2: Iterative optimization: The refinement of cluster centers was achieved through an iterative optimization process designed to minimize variance within clusters. This step unfolded in three phases. First, assigning data points to clusters: for each data point, distances to the K centroids were calculated (using Euclidean distance as the standard), and each data point was assigned to the cluster with the nearest centroid. Second, updating cluster centers: after the assignment, the mean of each cluster's data points was calculated to redefine its centroid, ensuring that cluster centers accurately reflected their data points. Third, reassignment and update: the process of assigning data points to clusters and recalculating centroids was repeated iteratively, with refinements continuing until the clusters stabilized.

  • Step 3: Termination criteria: The iterative process was concluded when the cluster centers displayed minimal or no significant changes between iterations, signaling convergence. A maximum iteration threshold was also applied to ensure computational efficiency, preventing excessive computations even in scenarios where centroids adjusted slightly between iterations.

  • Step 4: Evaluation and interpretation: The clusters were evaluated for quality and relevance using the elbow method, which determines the optimal number of clusters by assessing within-cluster variance against the number of clusters. Each cluster was then analyzed for prevalent topics, including frequently occurring terms and sentiment scores. The clusters were validated against known categories and expected patterns, providing a robust foundation for interpreting the results.
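A minimal sketch of this clustering pipeline, assuming TF-IDF features and scikit-learn's KMeans, is shown below; the toy comments, the candidate values of K, and all parameters are illustrative assumptions rather than the study's actual configuration.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy, pre-segmented comments (space-separated words) standing in for the cleaned corpus.
comments = [
    "chatgpt 辅助 诊断 很 高效",
    "担心 隐私 和 数据 安全",
    "希望 减轻 医生 工作 负担",
    "人工智能 提供 健康 咨询",
]
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(comments)          # TF-IDF feature matrix

# Elbow method: watch how inertia (within-cluster variance) drops as K grows.
for k in range(2, 4):
    km = KMeans(n_clusters=k, n_init=10, max_iter=300, random_state=42).fit(X)
    print(f"K={k}, inertia={km.inertia_:.3f}")

# Fit the chosen K and report the highest-weighted terms in each cluster.
final = KMeans(n_clusters=2, n_init=10, random_state=42).fit(X)
terms = vectorizer.get_feature_names_out()
for c in range(final.n_clusters):
    top = final.cluster_centers_[c].argsort()[::-1][:3]
    print(f"cluster {c}:", [terms[i] for i in top])
```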

Result

This study comprised 3,130 comments, amounting to 1,593,650 words. Non-healthcare professionals contributed 1,856 comments (92% from self or family/friend users), healthcare professionals contributed 1,120, and unknown users contributed 154. By year, there were 494 comments in 2022, 1,596 in 2023, and 1,040 in 2024. Tables 1 and 2 present the results of the sentiment analysis and content analysis.

Table 1 Sentiment analysis and content analysis of non-healthcare professionals.
Table 2 Sentiment analysis and content analysis of healthcare professionals.

Non-healthcare professionals expressed seven emotions regarding the use of ChatGPT in clinical practice. Good emotions included anticipations of tangible outcomes, optimization consulting, and trust in potential results. Happy emotions included emotional support, relief of anxiety, and positive acceptance of feelings. Surprise emotions included exceeding expected results, delivery of information, and curiosity about potential applications. Anger emotions included misapplication, therapeutic misinformation, and ethical issues. Disgust emotions included privacy and data risks, questioning accuracy, over-reliance, and complexity. Fear emotions included unknowns, skepticism about responsiveness, and negative medical experiences. Sad emotions included failure to meet therapeutic expectations, failure to alleviate pain, and loneliness.

Healthcare professionals expressed seven emotions regarding the use of ChatGPT in clinical practice. Good emotions included professionalism, innovative development, and efficiency and accuracy expectations. Happy emotions included application satisfaction and workload relief. Surprise emotions included cautiousness of popularization and innovative trends. Anger emotions included limitations, misinformation regarding therapy, and patient over-reliance. Disgust emotions included reliability in the professional field, ethical and legal risk, and job-loss risk. Fear emotions included a crisis of trust, losing control, and questioning processing capability. Sad emotions included self-blame and the powerlessness of technological innovations.

Discussion

This study is the first to explore the sentiments of healthcare and non-healthcare professionals toward implementing ChatGPT in clinical practice and to provide feasible recommendations for AI-assisted clinical practice. This study extracted Good, Happy, Surprise, Anger, Disgust, Fear, and Sad emotions. Healthcare and non-healthcare professionals share concerns about misinformation, over-reliance, trust, expectations of actual results, and possible risks. In particular, healthcare professionals were more concerned about the changes and potential risks that ChatGPT brings to medical work patterns. Non-healthcare professionals showed ambivalence between expectations and actual application, reflecting a gap between this group's expectations of ChatGPT and its current role in clinical practice.

Sentiment analysis of non-healthcare professionals

Good emotions included anticipations of tangible outcomes, optimization consulting, and trust in potential results. Firstly, the result of anticipation of tangible outcomes is consistent with Platt and Avishek's findings20,21. Non-healthcare professionals require ChatGPT to deliver accurate and credible medical diagnoses, health information, and further support. Consequently, ChatGPT must be refined to guarantee the precision of its medical information while elucidating intricate terminology, and its medical knowledge repository should be consistently updated to guarantee alignment with contemporary medical standards. Secondly, the result of optimization consulting is consistent with Cheng, Shang, Scherr, and Alanzi's findings22,23,24,25. Non-healthcare professionals seek prompt and comprehensive feedback from ChatGPT, particularly in urgent circumstances; this concept is closely related to their time cost and health status. Therefore, this study suggests that ChatGPT should have real-time response capability to shorten patients' waiting time and fulfill urgent health needs. Thirdly, the result of trust in potential results is consistent with Choudhury, Platt, Avishek, and Razdan's findings20,21,26,27. Non-healthcare professionals' intention to use ChatGPT enhances their faith in it, and conversely, good experiences such as successful medical treatment or health recovery increase trust in ChatGPT. However, the findings of this study are inconsistent with those of Ikeda, who reported that the selection of outcomes based on expert or AI recommendations is not associated with faith in AI. Consequently, ChatGPT must deliver an optimal user experience and implement a trust mechanism in actual applications.

Happy emotions included emotional support, relief of anxiety, and positive acceptance of feelings. Firstly, the results of emotional support are consistent with Zheng, Sharma, and Alanezi's findings28,29,30. Non-healthcare professionals anticipate ChatGPT's efficacy in medical treatment outcomes and health recuperation, expecting the technology to assist in alleviating their ailments or identifying the appropriate therapy pathway. Consequently, the ChatGPT therapy procedure should be supplemented with human support and psychological assistance for the patient. Secondly, the results of anxiety relief are consistent with Alanezi and Farhat's findings28,31. Non-healthcare professionals depend on ChatGPT to alleviate tension and promote relaxation when confronted with ambiguous medical concerns. This study proposes that ChatGPT should assist patients in stress management by suggesting resources that alleviate emotional distress. Thirdly, the result of positive acceptance is consistent with Alanezi and Morita's findings28,32. Non-healthcare professionals are very receptive to new technologies and demonstrate considerable motivation and enthusiasm for the ChatGPT application, indicating their aspiration to improve their medical experience through innovative solutions. Consequently, the promotion of ChatGPT should emphasize innovation and practicality while showcasing personalized and intelligent features to fulfill patients' expectations of emerging technologies.

Surprise emotions included exceeding expected results, delivery of information, and curiosity about potential applications. Firstly, exceeding expected results is consistent with Platt and Deiana's findings21,33. Non-healthcare professionals are frequently impressed by ChatGPT's performance in medical applications, particularly when the outcomes surpass their expectations. Consequently, we advocate for gradually familiarizing the public with the application domains and functionalities of ChatGPT. Secondly, information delivery is consistent with Alanezi's findings28. Non-healthcare professionals may view ChatGPT as markedly distinct from conventional methods of delivering medical information. Consequently, we recommend that ChatGPT implement a more participatory and visual methodology for information dissemination, thereby improving comprehension and involvement through diagrams and sequential instructions. Thirdly, curiosity about potential applications is consistent with Elisabetta, Choudhury, and Lee's findings26,34,35. Non-healthcare professionals are keenly interested in ChatGPT's functionalities within medical scenarios, especially its potential to solve complex medical problems. Therefore, the design of ChatGPT should emphasize its proficiency in addressing intricate medical issues and encourage patients to engage with the technology by presenting real-world instances of interdisciplinary collaboration, disease forecasting, and the formulation of personalized treatment plans.

Anger emotions included misapplication, therapeutic misinformation, and ethical issues. Firstly, misapplication originated from discontent or technical faults during the consultation process and the outcomes delivered by ChatGPT, particularly when expectations were high. This concept has not been reported in previous studies, possibly because the ChatGPT techniques employed had already undergone ethical audits or because researchers' subjective non-disclosure led to the absence of this issue. Consequently, we propose that a stringent trial design is necessary when using ChatGPT. Secondly, the misinformation results are consistent with Alanezi, Deiana, Kahambing, De Angelis, Park, Morita, and Li's findings14,28,32,33,36,37,38. Inaccurate diagnostic and treatment information provided by ChatGPT may negatively impact patient health; these concerns arise from patients' heightened health sensitivity and significant dependence on the precision of information. It is advisable to enhance information accuracy monitoring via a multi-tiered medical data validation system, updates of the medical knowledge repository, and the implementation of expert audits. Simultaneously, self-correction and disclaimer mechanisms can enhance patient vigilance. Thirdly, ethical issues are consistent with Platt and Alanezi's findings21,28. Non-healthcare professionals are apprehensive that the use of ChatGPT may intensify societal issues, particularly through potentially inequitable or discriminatory treatment. To mitigate ethical concerns, the design of ChatGPT must adhere to the ideals of equity and transparency: the decision-making process should be opened, ethical reviews and patient feedback systems incorporated, and ethical risks continuously identified and mitigated to achieve equity and inclusivity.

Disgust emotions included privacy and data risks, questioning accuracy, over-reliance, and complexity. Firstly, the complexity result has not been reported in previous studies. ChatGPT's usage restrictions in China and the discomfort encountered during interactions may lead the public to abandon it, particularly persons with inadequate information literacy. Consequently, this study indicates that participants in future programs should have their informational capabilities evaluated, and ChatGPT usage procedures can be customized by streamlining the interface, incorporating interactive coaching, and minimizing the learning curve. Secondly, the privacy and data risks results are consistent with Kahei Au, Platt, Alanezi, Kahambing, Morita, and Alanzi's findings21,22,28,32,37,39. Non-healthcare professionals express concerns about privacy and personal data vulnerabilities, disinformation, and the potential inaccuracies of AI in health advising. This study advocates for privacy protection and data security enhancement; frequent information audits and data security assessments must be performed to guarantee data protection and uniform utilization. Thirdly, the questioning-accuracy result is consistent with Platt, Alanezi, Morita, and Alanzi's findings21,22,28,32. Non-healthcare professionals are concerned about the distrust generated by ChatGPT, mainly the misinformation and potential inaccuracy of AI-generated health counseling. Consequently, the collaboration of interdisciplinary medical specialists should guarantee that the diagnostic and treatment information produced is reliable and has clinical value, and a user feedback mechanism should be implemented to rectify misinformation promptly. Fourthly, the over-reliance results are consistent with Kiyoshi, Hussain, and Chakraborty Samant's findings40,41,42. Non-healthcare professionals depend on ChatGPT for decision-making yet may struggle to comprehend its output. Consequently, the design of ChatGPT should prioritize the role of "assisting decision-making" over "leading decision-making," clearly delineating its supportive function as a medical instrument. Simultaneously, explicit reminders and guidance must motivate patients to consult their physicians when making significant medical decisions, mitigating excessive dependence on AI.

Fear emotions included unknowns, skepticism about responsiveness, and negative medical experiences. Firstly, unknowns are consistent with Ikeda's findings43. Non-healthcare professionals are concerned about ChatGPT's potential diagnosis or treatment inaccuracies, insufficient communication with their physicians, and care quality and safety. These emotions primarily reflect unfamiliarity with the technology and the anxiety that human-computer interaction may diminish emotional support. Therefore, it is advisable to enhance human-computer collaboration and improve verbal communication and psychological reassurance features in ChatGPT applications, while facilitating physicians' involvement in crucial medical decision-making. Secondly, the results of responsiveness skepticism are consistent with Saeidnia and Scherr's findings24,44. Non-healthcare professionals are concerned about the incapacity of ChatGPT technology to manage intricate medical scenarios, particularly when patients require urgent, high-stakes decision-making. Therefore, it is advisable to enhance ChatGPT's response and referral systems in intricate situations by collaborating with physicians to acquire prompt patient history and current health information. Thirdly, the fear of negative medical experiences may arise from worries that ChatGPT fails to encompass the entirety of an individual's medical history or intricate pathology, particularly for high-risk or complex conditions. This idea has not been mentioned by previous researchers, possibly because ChatGPT is still rarely used in medical practice. Therefore, this study advocates prioritizing the patient's healthcare experience in future practice; it is advisable to enhance the integration of historical data and personalized medication in future applications of ChatGPT by refining the patient record management system and condition monitoring mechanisms.

Sad emotions included failure to meet therapeutic expectations, failure to alleviate pain, and loneliness. Firstly, the result of failure to meet therapeutic expectations is consistent with Elshazly's findings45. Non-healthcare professionals' primary issue is that ChatGPT is ineffective in medical procedures, particularly regarding disease progression or inadequate treatment outcomes. This result contradicts the conclusions of Xue and Levkovich46,47, who consider ChatGPT a viable option for telemedicine consultations: healthcare professionals may use it for patient education, while patients might use it as a quick tool for health inquiries. We therefore recommend augmenting ChatGPT's function as a supplementary resource to assist patients in comprehending their illnesses and to offer prudent health management guidance. Secondly, the result of failure to alleviate pain is consistent with Elshazly's findings45. Failure to alleviate pain suggests that ChatGPT may not effectively mitigate patients' physical discomfort during medical operations. This result is inconsistent with Wen Peng's findings48, which suggest that ChatGPT excels in pain management and meets the established criteria. Enhancing ChatGPT's functionality in pain evaluation and management is advisable, by meticulously documenting and monitoring variations in patients' pain levels; simultaneously, collaboration with physicians and medication protocols should be enhanced to guarantee the efficacy of pain management measures. Thirdly, loneliness suggests an absence of interpersonal interaction and emotional engagement in ChatGPT. This idea has not been mentioned in previous studies. Therefore, this study suggests designing a humanized and interactive approach in practice: personalized conversational styles, emotionally supportive language, and interactive care features should be added to the design of ChatGPT, which can additionally be synergized with human caregivers to provide emotional support.

Sentiment analysis of healthcare professionals

Good emotions included professionalism, innovative development, and efficiency and accuracy expectations. Firstly, professionalism is consistent with Wang and Xie's findings49,50. Healthcare professionals prioritize ChatGPT's ability to conform to established standards of care and assist in clinical decision-making, emphasizing standardization and consistency. Therefore, this study suggests establishing a mechanism to standardize the evaluation and certification of ChatGPT by strictly adhering to medical standards in its design and application. This design will allow healthcare professionals to perceive it as compliant with industry standards during its utilization, hence augmenting their faith in it as an assistive tool. Secondly, innovative development is consistent with Jin, Platt, and Qi's findings21,51,52. Healthcare professionals value ChatGPT's potential for innovation, particularly in spearheading the advancement and utilization of medical technology, as they seek to enhance treatment outcomes through technological innovation. Therefore, this study suggests promoting the application of ChatGPT in technological innovation, including introducing the latest AI algorithms, data analytics techniques, and intelligent predictive models, to maintain its cutting-edge position in the medical field. Thirdly, efficiency and accuracy expectations are consistent with Xue and Xie's findings47,50. Healthcare professionals consider ChatGPT's efficacy and precision as crucial indicators for advancing healthcare and refining the diagnostic and therapeutic processes. Therefore, this study suggests optimizing processing efficiency and diagnostic accuracy in applying ChatGPT and developing efficient diagnosis and treatment support tools for specific conditions.

Happy emotions included application satisfaction and workload relief. Firstly, application satisfaction is consistent with Elisabetta, Choudhury, and Avishek’s findings20,35,53. Healthcare professionals prioritize the practical implementation of technology in the medical process, as well as safety and satisfaction. This study indicates that the design and implementation should prioritize optimizing the user experience to ensure a user-friendly interface and intuitive functionality while minimizing operational difficulties. Secondly, workload relief is consistent with Avishek’s findings20. Healthcare professionals anticipate ChatGPT will alleviate the burden of demanding everyday responsibilities and facilitate secure decision-making in intricate situations. Therefore, this study recommends the implementation of automated tasks, intelligent decision support features, and optimized workflow integration in the design and application of ChatGPT.

Surprise emotions included cautiousness of popularization and innovative trends. Firstly, the cautiousness of popularization is consistent with Wardah, Han, Felix, Yuan, and Tian's findings54,55,56,57,58. Healthcare professionals have been surprised by the performance of ChatGPT in clinical settings; however, they showed more caution in popularizing and applying this non-traditional technique. It is advisable to include comprehensive clinical trial data and empirical studies when advocating for ChatGPT, to bolster physicians' trust, and to employ a phased implementation plan, commencing with a small-scale pilot and progressively broadening the scope to mitigate risks and gather practical experience. Secondly, innovative trends are consistent with Santiago, Bryan, and Qi's findings52,59,60. Healthcare professionals see ChatGPT's capabilities, combined with traditional diagnostic and treatment methods, as a transformative instrument for healthcare, monitoring the advancement of the technologies while maintaining vigilance. Consequently, we encourage physicians to investigate the capabilities of ChatGPT in enhancing the efficiency of their practice and improving patient management.

Anger emotions included limitations, therapeutic misinformation, and patient over-reliance. Firstly, limitations are consistent with Matthew, Lukas J. Meier, and Eric J. Beltrami's findings61,62,63. Healthcare professionals' anger predominantly stems from perceived constraints of the technology; they may experience dissatisfaction and skepticism regarding the technology's failure to adequately resolve specific complex therapeutic issues. Therefore, this study recommends that detailed technical documentation and clinical trial data be provided to clarify the scope and limitations of ChatGPT. Secondly, therapeutic misinformation is consistent with Kahambing and Tian's findings37,57. Healthcare professionals' anger may arise from ChatGPT providing diagnostic and treatment suggestions that contradict their medical expertise, particularly in complex or rare cases; they may regard such advice as misleading, resulting in grievances and emotional protests. This study recommends robust quality control measures to evaluate, scrutinize, and validate ChatGPT's medical diagnostic suggestions. Concurrently, physicians should be motivated to provide feedback and rectify problems to consistently enhance the precision and dependability of the technology. Thirdly, patient over-reliance on ChatGPT signifies a tendency to doubt the physician's diagnosis in favor of ChatGPT's therapy suggestions; in this situation, physicians perceive a danger to their professional status, resulting in unhappiness. This idea has not been mentioned in previous studies, although a similar view holds that patients' reliance on search engines affects physicians' professional authority. This study suggests that instructions on using ChatGPT should be strengthened in patient education to clarify its role as a supportive tool and to enhance patients' understanding of and trust in doctors' advice.

Disgust emotions included reliability in the professional field, ethical and legal risk, and job-loss risk. Firstly, reliability in the professional field is consistent with Kahei Au, Tian, Liu, and Harriet's findings39,57,64,65. Healthcare professionals are skeptical about the use of ChatGPT in specialized medicine: because its medical data are non-specific and its update frequency is sluggish, ChatGPT cannot accurately understand complex medical problems. They believe that ChatGPT technology has not yet achieved the accuracy and reliability required for clinical application and cannot replace doctors' judgment. This is inconsistent with Wang and Meo's findings49,66, which indicate that ChatGPT can comprehend medical knowledge within the Chinese context and has attained the proficiency and standards necessary for entry into graduate programs in clinical medicine. Therefore, this study recommends updating the medical datasets and conducting clinical trials. Secondly, ethical and legal risk is consistent with Kahei Au, Wang, Liu, and Eric J. Beltrami's findings39,49,61,64. Healthcare professionals are concerned about the legal and ethical risks that ChatGPT may pose in medical practice, especially regarding privacy protection and data leakage. Therefore, this study recommends implementing strict data privacy protection measures, including encryption and anonymization of patient information, alongside creating an ethical review framework. Thirdly, the job-loss risk is consistent with Umar Ali Bukar's findings67. Healthcare professionals resist using ChatGPT, viewing it as a potential danger to their professional standing. This study advocates for physicians' professional training in the new technology, elucidating the features and limitations of ChatGPT to boost recognition of physicians' professionalism.

Fear emotions included a crisis of trust, losing control, and questioning processing capability. Firstly, the crisis of trust is consistent with Kahei Au and Liu's findings39,64. Healthcare professionals may worry that ChatGPT could undermine their professionalism and authority in the diagnostic process, particularly when its recommendations contradict their clinical judgment, diminishing patient trust. Therefore, this study suggests clarifying the role of ChatGPT as an auxiliary tool in its application to ensure that its recommendations are consistent with physicians' professional judgment. Secondly, losing control is consistent with Julien and Liu's findings64,68. Healthcare professionals may be concerned about the likelihood of ChatGPT failing in complex medical decisions or causing severe medical errors due to technical malfunctions; this sentiment is associated with physicians' long-term duty to focus on patient safety. Therefore, this study suggests implementing rigorous technology testing and quality control measures and developing contingency plans for possible technological failures. Thirdly, questioning processing capability is consistent with Saeidnia, Scherr, Julien, Deanna, and Liu's findings24,44,64,68,69. Healthcare professionals doubt that ChatGPT can manage intricate clinical scenarios because its data are not specialized for medicine. This study recommends augmenting ChatGPT's processing capabilities by incorporating medical ___domain-specific datasets and algorithmic optimization, with model training and validation conducted simultaneously to improve its capacity to manage intricate clinical situations.

Sad emotions included self-blame and the powerlessness of technological innovations. Firstly, self-blame is consistent with Liu's findings64. Healthcare professionals may feel self-blame for errors or misdiagnoses during medical procedures if they make wrong decisions based on ChatGPT's suggestions, which may exacerbate their guilt; this concept reflects physicians' heightened sensitivity to their professional responsibilities. Therefore, it is recommended that the accuracy and reliability of the technology in application be enhanced to mitigate the escalation of accountability resulting from technological failures. Secondly, the powerlessness of technological innovations is consistent with Christy's findings70. Healthcare professionals frequently struggle to manage clinical intricacies and resource deficiencies, and the advent of ChatGPT may not have mitigated these pressures; instead, it has intensified physicians' skepticism over the technology's capacity to resolve the issue entirely. Therefore, this study suggests upgrading physicians' training on new technologies to increase their acceptance and confidence.

Comparison between health and non-health professionals

Health professionals and non-health people both express trust issues and concerns about misinformation, highlighting shared skepticism regarding ChatGPT’s reliability. Positive emotions reflect shared optimism about its potential to improve clinical outcomes and reduce workload. However, healthcare professionals show more caution about ChatGPT’s impact on professional roles and medical errors, emphasizing ethical and legal risks. Non-healthcare individuals focus more on personal experiences, such as emotional support and anxiety relief, highlighting the practical and immediate utility of ChatGPT.

Limitations

This study has three limitations. First, the Internet data were biased, excluding the viewpoints of older persons with limited Internet abilities; the emotional attitudes of the older population, as a group with higher health risks, are valuable, and future studies should validate the attitudes of the older population toward the new technology of ChatGPT. Second, although the temporal dimension was incorporated into the analysis, the duration of ChatGPT's application is relatively brief and requires extended observation; consequently, future studies may be undertaken to corroborate findings across multiple temporal nodes. Finally, this study exclusively encompassed content on ChatGPT and clinical medical documents, ignoring content related to ChatGPT and writing skills; consequently, we propose that future studies examine the relationship between clinical medical documents and writing skills.

Conclusions

This study revealed distinct yet overlapping sentiments among healthcare professionals and the public toward ChatGPT's use in clinical practice. Healthcare professionals valued efficiency but voiced concerns about ethical risks and professional accountability. Non-healthcare individuals emphasized emotional support and accessibility while worrying about privacy and misinformation. Addressing these issues, such as enhancing reliability, safeguarding privacy, and defining ChatGPT's role as a support tool, will be key to its integration. Future research should focus on aligning ChatGPT with clinical standards and user needs to ensure it enhances healthcare delivery while maintaining trust and ethical integrity.