Introduction

In early 2020, the world faced a significant public health crisis precipitated by the novel coronavirus disease (COVID-19). Owing to the virus's extended incubation period and high transmissibility, it spread rapidly on a global scale, exacerbated by the interconnectedness of modern systems. As of 26 January 2025, there have been over 777.35 million confirmed cases and more than 7.77 million deaths worldwide (World Health Organization 2025), ranking COVID-19 as the fifth deadliest pandemic in history (Joshi and Shukla 2022). Although “COVID-19 is now an established and ongoing health issue which no longer constitutes a public health emergency of international concern (PHEIC)” (World Health Organization 2023), the pandemic has catalyzed an unprecedented global crisis, impacting individual well-being and the efficacy of health systems and governments, necessitating innovative approaches to address its challenges.

Amid this crisis, artificial intelligence (AI) emerged as a critical tool for addressing the pandemic’s multifaceted challenges, from tracking virus spread to supporting vaccine development and managing healthcare resources (Arora et al. 2021). Defined as the simulation of human intelligence by machines, AI has revolutionized society, economy, and culture (Li 2020; Xu et al. 2021), offering numerous advantages across sectors such as efficiency and speed (Davenport 2018; Spronck et al. 2006), 24/7 availability (Nebeker et al. 2019; Stephens et al. 2019), high precision and accuracy (Dlamini et al. 2020; Partel et al. 2019), cost savings (Salehi and Burgueño 2018; Wamba-Taguimdje et al. 2020), personalized experiences (Ameen et al. 2021; Pataranutaporn et al. 2021), decision-making support (Duan et al. 2019; Jarrahi 2018), and innovation (Cockburn et al. 2019; Verganti et al. 2020). Specifically, during the COVID-19 pandemic, AI has shown potential to improve healthcare outcomes (McCall 2020), enhance surveillance and prediction systems (Alazab et al. 2020; Arora et al. 2021; Hossain et al. 2020; Jin et al. 2020), and facilitate efficient information dissemination (Ahuja et al. 2020; Bunker 2020).

During the early stages of the pandemic, China swiftly implemented AI technologies to manage and mitigate virus spread. AI-enhanced surveillance and monitoring systems were rolled out starting in January 2020; by February, AI-enabled contact tracing and diagnostic tools played key roles in several provinces. In Guangdong, AI algorithms processed vast amounts of data to trace and predict transmission paths, achieving significant reductions in local transmission rates within the first month. Major cities employed AI-based thermal imaging and facial recognition at transport hubs to efficiently identify symptomatic individuals and enforce quarantine measures, helping to curb the spread in densely populated areas.

Furthermore, AI-driven diagnostic tools, like those developed by Alibaba’s DAMO Academy, demonstrated high accuracy in virus detection through CT scans, reducing diagnosis times. AI-based chatbots delivered real-time pandemic information and health guidelines, improving public communication. By March, national AI platforms coordinated response efforts across regions, integrating diverse data sources. AI-enabled drones were deployed in rural areas for disinfection and broadcasting safety measures, minimizing human contact and further reducing virus transmission. These initiatives illustrate AI’s valuable role in strengthening public health responses and underscore its potential in managing global health crises.

However, the widespread use of AI in pandemic response has brought challenges that must be carefully considered, operating at both the individual and societal levels. At the individual level, concerns include the potential impact on mental health during the pandemic (Ćosić et al. 2020), attitudes toward AI-assisted diagnosis and treatment (Abdel Wahed et al. 2020; Hussain et al. 2021), and information privacy (Aman et al. 2021; Hakak et al. 2020). At the societal level, challenges involve the spread of misinformation (Khan et al. 2020; Rashid and Wang 2021), the high cost of technological errors (Taylor et al. 2020; Yahya et al. 2021), and job displacement (Coombs 2020; Dwivedi et al. 2020).

Delving deeper, critical issues emerge. Privacy concerns arise as AI systems rely on large-scale personal data, raising questions about data security and potential misuse of sensitive information (Bai et al. 2021; Gerke et al. 2020). Security issues also exist, as AI systems are vulnerable to hacking and manipulation, posing threats to public safety (Ferrag et al. 2021; Rahman et al. 2020). The deployment of AI raises questions about autonomy and agency, as individuals may feel their decision-making power diminished when relying on AI-assisted systems (Malik et al. 2021; Wang et al. 2021; Zhang et al. 2020a). Moreover, ethical implications regarding utilitarianism and individual freedom in the context of AI deployment during a pandemic require careful examination.

To address these challenges, several strategies can be adopted. Public acceptance of AI technologies can be fostered through education and transparency, aiding individuals in understanding AI’s benefits and limitations (Cresswell et al. 2021; Kim et al. 2021). Establishing social norms can guide ethical AI use and mitigate risks, ensuring responsible deployment aligned with societal values (Anshari et al. 2023; Latkin et al. 2022; Naudé 2020). Technological advancements, such as improved privacy and security measures, can address concerns and build trust (Kumar et al. 2020; Shamman et al. 2023). Furthermore, international cooperation is crucial to develop unified frameworks and standards for AI in health emergencies (Luengo-Oroz et al. 2020; Peiffer-Smadja et al. 2020).

We select China as the focal point for examining AI applications during the COVID-19 pandemic for several reasons. First, as one of the earliest countries to face the outbreak, China provides a template for AI’s role in crisis management. Second, China’s prominence in AI, bolstered by substantial investment and innovation from tech giants like Baidu, Alibaba, and Tencent, positions it as a rich case study. Third, the extensive application of AI technologies across China, driven by its large population and efficient administration, offers a unique vantage point to assess the scalability and impact of AI tools in managing public health crises. Fourth, China’s strategies, including AI deployment, have influenced global public health policies, economics, and logistics. Finally, China’s distinct approach to privacy and data governance provides a comparative perspective to Western ideologies, enriching the ethical debate regarding AI’s role in public health.

Therefore, from China’s perspective, we aim to examine the extensive applications of AI during the COVID-19 pandemic, its significant influences on both individual and societal levels, and the consequent challenges and policy implications. By addressing these topics, this research contributes to existing literature by offering a comprehensive view of AI’s utility and the imperative of addressing critical ethical and social issues to protect individual rights and societal welfare. The insights presented aim to resonate with researchers across disciplines, illuminating the complex interplay between advanced technology and human society amidst a profound historical challenge. This examination informs our understanding of technological interventions and enhances our preparedness for future global health emergencies.

AI applications in various aspects of the COVID-19 pandemic

The COVID-19 pandemic has presented unprecedented challenges to global public health, prompting the adoption of AI technologies across various sectors. In China and internationally, AI has been instrumental in revolutionizing healthcare, streamlining pandemic management, and facilitating public communication. By mid-2020, AI was widely integrated into China’s healthcare ecosystem, notably enhancing the capabilities of public health institutions during the pandemic. A significant proportion of designated COVID-19 hospitals and many urban clinics adopted AI-driven systems for diagnostic imaging, patient monitoring, data management, and predictive analytics.

In densely populated provinces like Guangdong and Zhejiang, numerous local healthcare facilities implemented AI applications, demonstrating extensive geographic and demographic reach. Longitudinal observations indicate that AI adoption in healthcare not only increased significantly during the initial months of the pandemic but also diversified in its applications. Initially focused on diagnosis and contact tracing, AI technology evolved to include predictive health management and operational logistics. This flexible integration of AI was designed to meet the shifting demands of the pandemic, highlighting its vital role in enhancing operational efficiency and improving the responsiveness of healthcare services during and beyond the crisis.

Table 1 provides a comprehensive overview of the AI technologies deployed during China’s COVID-19 response, categorizing them by provider, methodology, functionality, and integration with public health protocols. It illustrates the extensive reach and substantial impact of these AI solutions, emphasizing their pivotal role in improving the efficiency and effectiveness of pandemic response efforts—from diagnostic support to the enforcement of public safety measures.

Table 1 Overview of AI technologies deployed in China’s COVID-19 response.

AI in healthcare

The integration of AI into healthcare has been pivotal in the pandemic response. Pre-pandemic statistics from the World Health Organization indicated significant pressure on China’s healthcare resources, including shortages of physicians, nurses, midwives, and hospital beds (The National Health Workforce Accounts Database 2023). The onset of COVID-19, with nonspecific early symptoms often indistinguishable from other viral infections, necessitated rapid processing of lung CT images for diagnosis, further straining healthcare systems.

AI applications have played a crucial role in alleviating this burden, particularly in the analysis of lung CT scans—a critical diagnostic tool for COVID-19 (Guo et al. 2020; Wang et al. 2021). Baidu’s AI-powered platform, Melody, exemplifies AI’s utility in healthcare, offering remote consultation capabilities that have streamlined diagnostics and patient care during the pandemic. The chatbot assists in symptom collection, preliminary diagnosis, and treatment suggestions, supporting healthcare providers in primary care settings.

Furthermore, AI’s proficiency in parsing CT images has enhanced clinical decision-making and reduced misdiagnosis rates at institutions like the Shanghai Public Health Clinical Center, which implemented the “COVID-19 Intelligent Evaluation System” (Zhang et al. 2020b). This system’s ability to screen suspected COVID-19 patients and assist in early prevention measures demonstrates AI’s potential in medical diagnostics (Guo et al. 2020).

Beyond diagnostics, AI’s deep learning capabilities have expedited the research and development of treatments and vaccines (Arora et al. 2021). By analyzing extensive datasets, AI models identify potential therapeutic compounds more rapidly and economically than traditional research methods. AI’s data-driven approach, leveraging information from hospital networks, equips medical professionals with predictive insights for patient outcomes and resource allocation (Zhang et al. 2020b).

In conclusion, the pandemic has catalyzed the expansion of AI’s role within healthcare, gaining traction and recognition from medical practitioners and the public alike. Through sophisticated analysis and decision-support systems, AI has streamlined diagnostic processes and contributed to advancements in medical research, demonstrating its critical position in the healthcare landscape during times of crisis.

AI in pandemic prevention and control

The rapid transmission of COVID-19 necessitated prompt and effective measures to reduce contact between infected individuals and the broader population. In response, countries including China implemented emergency protocols to restrict movement within and between cities. AI technologies have played a pivotal role in enhancing surveillance and tracking efforts, significantly influencing the identification and management of cases. For example, an AI-driven platform developed by Baidu was instrumental in monitoring the virus’s spread, identifying numerous potential cases within the first few weeks of deployment. This early detection, enabled by AI’s sophisticated data analytics, is believed to have contributed to a noticeable reduction in transmission rates in areas where it was implemented.

Comparative analyses of data before and after the implementation of AI surveillance systems indicate improved response times. Specifically, the average completion time for contact tracing fell from roughly 48 hours to 24 hours, with further reductions as the systems matured. These observations highlight the enhancements in public health response capabilities attributable to AI interventions, demonstrating increases in both the speed and accuracy of pandemic management strategies.

During stringent movement restrictions and logistical challenges, AI-enabled transportation robots from Jingdong (JD.com), one of China’s e-commerce giants, facilitated the delivery of essential supplies while minimizing human contact and the risk of infection. Intelligent disinfection robots and thermal imaging equipment became fixtures in public spaces, contributing to sanitation efforts and enabling rapid fever detection (Tao 2020).

AI algorithms have also been essential in contact tracing and movement analysis, supporting precise containment strategies and efficient resource distribution. The “COVID-19 Analysis and Control Platform,” developed by the Nanjing Edge Intelligence Research Institute, exemplifies this application. It aggregates and analyzes real-time pandemic data across geographic, temporal, and spatial dimensions, providing vital support for regional prevention and control measures (Zhang et al. 2020b).

The economic ramifications of the pandemic have been significant, affecting global trade and financial markets. Ozili and Arun (2023) highlight the impact of extended lockdowns, monetary policy shifts, and travel restrictions on economic activity, as well as the correlation between rising numbers of COVID-19 cases, mortality rates, and global economic indicators such as inflation, unemployment, and the energy commodity index.

In China, the deployment of AI-driven health and travel codes has aided in monitoring the virus’s spread and forecasting population movements. These predictive tools have been instrumental in issuing early warnings of potential outbreaks, facilitating the cautious resumption of economic activities. The strategic use of AI not only underscores its value in public health management but also in mitigating the pandemic’s economic impact by enabling more informed and agile policy responses.

While AI technologies have enhanced surveillance and contact tracing efforts, they have also raised significant privacy and ethical concerns. The extensive use of facial recognition and mobile data tracking has prompted debates about surveillance overreach and the potential normalization of invasive technologies post-pandemic (Huang et al. 2022). Balancing public health priorities with individual rights remains a critical challenge.

AI in managing social sentiment

AI played a crucial role during the COVID-19 pandemic by analyzing social sentiment. AI tools were essential for monitoring public opinion and emotional reactions to pandemic policies and news (Boon-Itt and Skunkan 2020; Hung et al. 2020). The profound impact of COVID-19 on public opinion necessitated accurate and timely information dissemination. Cognitive and social factors, such as morality and law, influence people’s thoughts, emotions, and behaviors, highlighting the media’s critical role during crises (Metzler et al. 2023; Tsao et al. 2021). AI emerged as a key player in curating and prioritizing news content, enabling the public to access factual information while in quarantine.

AI-powered tools employing machine learning were extensively used to monitor social media and digital communication channels, analyzing public sentiment regarding COVID-19. These tools assessed the tone, sentiment, and emotional context of public discussions, effectively tracking shifts in public mood following government announcements or changes in pandemic statistics. This provided critical feedback to policymakers on public reception and compliance (Chandrasekaran et al. 2020; Lwin et al. 2020; Xue et al. 2020). Studies indicate a negative correlation between the volume of media reports and infection cases, suggesting that well-informed populations are better equipped to curb the virus’s spread (Yan et al. 2020; Zhou et al. 2020). Thus, accurate news dissemination is vital for the public’s understanding of COVID-19 and its containment.
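The kind of aggregate mood signal described above can be illustrated with a minimal sketch. The lexicon and scoring rule below are invented for illustration only; the cited systems used far richer machine-learning models trained on large social-media corpora.

```python
# Toy lexicon-based sentiment scorer: a minimal stand-in for the
# machine-learning sentiment pipelines discussed in the text.
# The word lists are hypothetical, chosen only for this example.
POSITIVE = {"recovered", "hope", "safe", "improving", "reopened"}
NEGATIVE = {"lockdown", "outbreak", "fear", "shortage", "deaths"}

def sentiment_score(post: str) -> float:
    """Score in [-1, 1]: positive minus negative tokens over their total."""
    tokens = [t.strip(".,!?").lower() for t in post.split()]
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

def daily_mood(posts: list[str]) -> float:
    """Average sentiment across a day's posts -- the kind of aggregate
    signal that could be tracked after a policy announcement."""
    scores = [sentiment_score(p) for p in posts]
    return sum(scores) / len(scores) if scores else 0.0

posts = [
    "Hospitals report shortage and fear of a new outbreak",
    "Parks reopened today, everyone feels safe and hope is improving",
]
print(round(daily_mood(posts), 2))  # → 0.0 (one negative, one positive post)
```

Tracking this average over time, before and after an announcement, is the basic mechanism behind the shifts in public mood that the cited studies measured at scale.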

With traditional media channels compromised during the pandemic, AI-driven platforms enhanced communication efficiency. AI tools monitored social media to detect and correct misinformation, providing reliable content to alleviate public anxiety. For instance, outlets like China Daily utilized AI to distribute information through WeChat and other platforms, amplifying news reach via technology. Governments used insights from AI-driven sentiment analysis to refine communication strategies, ensuring accurate public health messages and controlling panic and misinformation. This analysis also helped health authorities identify and address public anxieties or misconceptions about vaccines, leading to focused educational campaigns (Hu et al. 2021).

The pandemic also saw a surge in social media tools, including chatbots, pivotal for information dissemination and pandemic monitoring. AI-powered platforms, such as the WHO’s “WHO Health Alert” chatbot on WhatsApp, served as conduits for verified information, mitigating the spread of misinformation (Li and Zhang 2021).

In summary, AI’s role extended beyond supporting health measures to managing social discourse during the pandemic. The technology demonstrated adaptability and scalability, proving critical in navigating complex health crisis challenges by ensuring the public remained informed through reliable and authoritative sources.

Effectiveness of AI technologies

The effectiveness of AI technologies during the COVID-19 pandemic in China can be evaluated through metrics such as diagnostic accuracy, reliability, patient satisfaction, cost-effectiveness, and overall healthcare outcomes. This section synthesizes data from multiple studies to analyze AI’s role.

AI-driven diagnostic tools have shown promising accuracy. Alibaba’s DAMO Academy developed tools achieving a diagnostic accuracy of 96% for COVID-19 detection from CT scans in just 20 s (Nayak et al. 2022; Taleghani and Taghipour 2021). Another development, COVID-Net—a deep convolutional neural network—achieved 93.3% test accuracy for detecting COVID-19 from chest X-ray images (Wang et al. 2020b). These examples highlight AI’s significant impact and reliability in enhancing diagnostic processes during critical health emergencies.

Prior to AI integration, manual handling of epidemiological data and diagnostics was slow and error-prone. AI adoption revolutionized these processes, enhancing data analysis and enabling timely public health decisions. Jin et al. (2020) developed a deep learning-based system for COVID-19 diagnosis using a dataset of 11,356 CT scans from 9025 subjects, including COVID-19, community-acquired pneumonia (CAP), influenza, and non-pneumonia cases. The AI system’s diagnostic accuracy surpassed that of experienced radiologists, achieving areas under the receiver operating characteristic curve (AUC) of 0.9869 for pneumonia-or-non-pneumonia, 0.9727 for CAP-or-COVID-19, and 0.9585 for influenza-or-COVID-19.

Additionally, the AI system significantly reduced processing time, averaging 2.73 s per analysis compared to 6.5 min by radiologists (Jin et al. 2020). This improvement boosts radiologists’ productivity and expedites the diagnostic process, crucial during health crises. These findings underscore AI’s transformative impact in streamlining healthcare operations and enhancing diagnostic accuracy and efficiency.
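The AUC figures reported by Jin et al. have a direct probabilistic reading: the chance that a randomly chosen positive case receives a higher model score than a randomly chosen negative one. The sketch below computes AUC from scratch via that identity; the scores are invented toy values, not data from the cited study.

```python
def auc(scores_pos: list[float], scores_neg: list[float]) -> float:
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity:
    P(score_pos > score_neg), with ties counted as 1/2."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Toy model scores for COVID-19-positive vs. negative CT scans.
pos = [0.95, 0.88, 0.70]
neg = [0.40, 0.75, 0.30]
print(round(auc(pos, neg), 3))  # → 0.889; 1.0 would mean perfect separation
```

An AUC of 0.9869, as reported for the pneumonia-or-non-pneumonia task, thus means the system ranked a positive scan above a negative one in nearly 99% of such pairs.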

The rapid availability of information through AI systems enhanced patient engagement and satisfaction. Liu et al. (2021) conducted a discrete choice experiment to evaluate patients’ preferences for AI-driven diagnostics versus traditional clinician assessments in China. Using various models—including generalized multinomial logit and latent class—they found that 55.8% of respondents preferred AI diagnostics. The model indicated a strong preference for diagnostics with 100% accuracy (OR 4.548, 95% CI 4.048–5.110, P < .001). The latent class model identified accuracy (39.29%) and cost (21.69%) as the highest-valued attributes. These findings suggest growing acceptance of AI in diagnostics, emphasizing the importance of accuracy and cost in patient decision-making.

The global impact of COVID-19 increased mental health disorders, prompting the use of AI-based chatbots for mental health services. Zhu et al. (2021) applied the Theory of Consumption Values to analyze determinants influencing user experience and satisfaction with these chatbots. Surveying 295 users in Wuhan and Chongqing, they found that personalization, enjoyment, learning, and condition significantly enhanced user experience and satisfaction. These insights underscore AI’s critical role in healthcare, both in diagnostics and mental health services, highlighting the need for ongoing technological improvements to meet user needs effectively.

However, high accuracy rates depend on data quality and deployment contexts. Concerns exist about overfitting models to specific datasets, potentially limiting generalizability across populations (Trivedi et al. 2022). Reliance on AI diagnostics without adequate human oversight could lead to missed diagnoses or overdiagnosis, especially when encountering unfamiliar data or rare conditions (Kelly et al. 2019). Rigorous validation of AI systems across diverse settings is essential to mitigate these risks.

Comparative analysis of global AI applications during COVID-19

This section presents a structured comparative analysis of AI technologies used in public health across various countries during the COVID-19 pandemic, as summarized in Table 2. The table outlines variations in public health policies, AI adoption levels, technological innovation, outcomes, and the cultural, political, and economic factors influencing these differences among nations such as the United States, United Kingdom, Sweden, Germany, Italy, Japan, South Africa, and Ecuador. It also highlights the distinctiveness of China’s approach.

Table 2 Comparative analysis of AI technologies in public health across different countries.

The analysis reveals diverse levels of AI adoption and impact on public health, shaped by each country’s unique cultural, economic, and political factors:

  • Cultural factors significantly influence AI adoption, particularly public acceptance of surveillance and technology. For example, Japan’s cultural trust in technology contrasts with Sweden’s cautious approach to privacy and surveillance.

  • Political factors, such as government policies, play a crucial role. China’s centralized and authoritative approach facilitated rapid and widespread AI deployment, in stark contrast to the decentralized policies in the United States, leading to varied adoption levels across states.

  • Economic strength enables higher levels of technological innovation and AI adoption. Countries like Germany and the United States have leveraged substantial resources to integrate advanced AI solutions in healthcare—a feat less feasible in economically constrained nations like Ecuador and South Africa.

This analysis emphasizes the necessity of tailoring AI technologies to local contexts, reflecting each country’s cultural norms, economic capabilities, and political frameworks. Such an approach can guide future global health strategies and AI integrations, ensuring they are culturally sensitive and effectively aligned with national public health policies.

In contrast to countries where AI adoption was minimal or hindered by infrastructural limitations, such as Sweden and Ecuador, China’s approach was comprehensive and top-down. Its response was marked by rapid AI integration and a proactive government stance, facilitating extensive deployment across public health systems. Unlike the decentralized approaches observed in the United States and Italy, China’s centralized health policy enabled quick, uniform AI deployment nationwide. This centralization, combined with its advanced technology sector, fostered rapid scaling and innovation, setting it apart from strategies employed in other regions. Wang et al. (2020c) attribute China’s success in combating COVID-19 to adaptable governance, a culture of moral compliance, trusted collaboration between government and people, and an advanced technical framework encompassing AI, blockchain, cloud computing, big data, and 5G.

This comparative analysis not only highlights the unique challenges and successes each country faced but also provides valuable lessons for managing future global health crises through the effective use of AI.

Immediate practical challenges posed by AI applications during the COVID-19 pandemic

The accelerated integration of AI into society during the COVID-19 pandemic necessitates a critical evaluation of its broader implications. Despite effectively addressing numerous pandemic-related challenges, AI deployment has introduced practical issues requiring careful consideration. This section examines the immediate challenges impacting individuals and society arising from AI use during the pandemic.

Practical challenges impacting individuals

Human mental health during the pandemic

The pandemic significantly disrupted social life, especially in countries like China that experienced severe outbreaks. Lockdown measures necessitated rapid development of digital platforms for remote work, education, and administration—such as “cloud office,” “cloud education,” and “cloud management” solutions—to enforce social distancing guidelines.

Quarantine conditions challenge the inherent social nature of humans, as discussed by Aristotle and Marx (Fetscher 1973; Runciman 2000), by limiting physical interaction and replacing it with digital communication that lacks emotional engagement. Studies have reported a surge in mental health issues during the pandemic, including increased anxiety and depression (Dong et al. 2020; Wang et al. 2020d), indicating an urgent need for comprehensive mental health services (Moreno et al. 2020; Vindegaard and Benros 2020; Xiong et al. 2020), particularly for vulnerable groups like children and adolescents (Deng et al. 2023; Kauhanen et al. 2023).

To address this escalating mental health crisis, numerous AI-powered platforms have been developed globally. The efficacy of AI chatbots in providing psychological support is well-documented. Chatbots utilizing Cognitive Behavioral Therapy (CBT) techniques, such as Woebot and Wysa, have effectively managed anxiety and depression, offering significant emotional support (Beatty et al. 2022; Kretzschmar et al. 2019; Meadows et al. 2020). Research indicates these chatbots are generally well-received, enhance engagement, and potentially improve mental health outcomes (Boucher et al. 2021). For example, a randomized controlled trial with Woebot showed a significant reduction in depression symptoms within 2 weeks, measured by the PHQ-9 (Fitzpatrick et al. 2017). An 8-week pilot study found that increased interactions with the chatbot Tess correlated with decreased anxiety symptoms (Klos et al. 2021).

In China, the CBT-based chatbot XiaoE demonstrated significant short-term and long-term effectiveness and a unique ability to foster relationships with users, enhancing engagement during therapy (He et al. 2022). Similarly, trials with PsyChatbot, a novel Chinese psychological chatbot system, confirmed its effectiveness (Chen et al. 2024). The scalability of these AI solutions extends access to mental health support, particularly in areas lacking professional resources. While these platforms illustrate AI’s potential to provide positive support, they highlight the need for a nuanced approach to technology implementation amid pandemic-induced challenges. Balancing technological opportunities with the human need for direct interaction and emotional support is essential.

Despite potential benefits, the use of AI-powered mental health chatbots is controversial. Critics argue that such chatbots may lack the empathy and nuanced understanding that human therapists provide, potentially leading to inadequate support for users in crisis (Berry et al. 2016; Vaidyam et al. 2019). Concerns exist about chatbots’ ability to handle complex mental health issues, especially when users present with co-morbid conditions or suicidal ideation. Privacy and data security issues also arise due to the sensitive nature of mental health information processed by AI systems (Harman et al. 2012; Shenoy and Appel 2017). Users may hesitate to share personal information, fearing data breaches or misuse. Additionally, the absence of human oversight could lead to ethical dilemmas if chatbots provide inappropriate or harmful advice (Denecke et al. 2021; Omarov et al. 2023). These challenges underscore the need for rigorous evaluation, regulation, and integration of AI chatbots as complementary tools rather than replacements for professional mental health services.

Attitudes toward AI-assisted diagnosis and treatment

Public attitudes toward AI in healthcare during the COVID-19 pandemic reflect broader concerns about the technology’s implications, particularly fears of unequal healthcare outcomes due to biased AI algorithms. These challenges influence public trust and acceptance of AI technologies, highlighting the need for transparent and equitable systems.

The health crisis underscored AI’s transformative potential in enhancing case detection and forecasting viral spread, integrating advanced technologies such as AI, IoT, Big Data, and Machine Learning into healthcare delivery (Vaishya et al. 2020). Despite advancements, public reticence toward AI-assisted healthcare persists, rooted in concerns about displacement of healthcare professionals (Coombs 2020; World Economic Forum 2020), dependability of AI-generated decisions (Albahri et al. 2023; Solaiman and Awad 2024), and equitable distribution of medical resources (Ahad et al. 2024; Huang et al. 2021).

In clinical settings, the impersonal nature of AI may clash with the need for empathy and respect, raising ethical questions (Formosa et al. 2022). Data standardization issues compound these dilemmas, as inconsistencies can cast doubt on AI diagnostic accuracy (Jiang and Wang 2021). While online medical consultations have increased convenience, they often fail to capture nuances of face-to-face evaluations, leading to discrepancies in diagnosis and treatment advice, potentially eroding trust in virtual healthcare services. Bashshur et al. (2020) emphasize that telemedicine may lack the thoroughness of in-person consultations, possibly resulting in diagnostic errors. Greenhalgh et al. (2020) note that the absence of physical examination contributes to these discrepancies. A survey by the American Medical Association (2021) reveals many patients have mixed experiences with telemedicine, citing issues related to lack of personal connection and comprehensive care compared to in-person visits. Online consultations inherently limit holistic evaluation of a patient’s health, affecting self-awareness and potentially delaying critical treatment interventions.

Personal information leakage

The pandemic necessitated sharing personal information to support public health initiatives, under government assurances of privacy protection. However, increased data flow elevated the risk of breaches. For instance, sensitive details of over 7000 individuals from Wuhan or Hubei were inadvertently exposed, leading to discrimination and fraud exploiting public distress (Southern Metropolis Daily 2023). Such breaches often occur during online information dissemination and AI-assisted medical processes, with telecommunications fraud being an immediate consequence.

Fraud has resurged profoundly in the digital era, driven by technological advancements and the globalization of criminal practices. During the pandemic, fraudulent schemes exploiting uncertainties surged (Cross 2021; Ma and McKinnon 2022). Early 2020 saw a proliferation of COVID-19-themed phishing attacks, including scamming, brand impersonation, and Business Email Compromise, in which criminals impersonated health officials to spread false information and exploit public fear (Minnaar 2020).

Cybercriminals exploit psychological vulnerabilities, leveraging pandemic-induced anxiety to facilitate cyber fraud (Ma and McKinnon 2022). They masquerade as health authorities, issuing counterfeit directives, and exploit the lack of rigorous data protection and ethical frameworks to deceive individuals using accurate personal data.

The pandemic’s circumstances make the public particularly susceptible to deception, as individuals tend to follow perceived authority without question, increasing the risk of falling victim to fraud. These incidents underscore the delicate balance between employing AI for pandemic response and ensuring individual privacy, highlighting the need for stringent data protection and security measures.

Han (2023) notes that Telecommunications and Network Fraud (TNF) typically occurs without physical interaction, facilitated by digital communication tools, allowing criminals to target victims across borders. The psychological impact of TNF heightens public anxiety and skepticism toward AI-driven technologies, affecting societal trust.

In conclusion, while AI technologies offer significant benefits in managing COVID-19 complexities, their implementation must account for the social and psychological impacts on individuals, the ethical dimensions of healthcare, and the necessity of rigorous personal data safeguards.

Practical challenges impacting society

Rapid dissemination of social sentiment

Managing social sentiment during health crises like the COVID-19 pandemic is crucial for influencing public behavior and compliance with health guidelines. The rise of AI-enhanced news media has drastically amplified information dissemination, overshadowing traditional print media. Computational propaganda, driven by big data and AI, manipulates social sentiment by collecting, analyzing, and targeting data on digital platforms, often using bots to mimic human interaction and spread information. While this capability has been instrumental in circulating vital information about the virus, it has also facilitated the rapid spread of misinformation.

Early in the pandemic, false reports—for instance, claims about the efficacy of the traditional Chinese medicine “Shuanghuanglian oral liquid” against COVID-19—triggered widespread panic buying and hoarding (Huang and Zhao 2020; Zhang et al. 2022b). The difficulty in discerning accurate information amidst an “infodemic” has profound societal consequences, leading to confusion and hindering pandemic response efforts (Hartley and Vu 2020; Rocha et al. 2021).

On February 2, 2020, the World Health Organization (WHO) highlighted the “infodemic” as a parallel crisis to the pandemic and a significant barrier to effective public health responses (Gallotti et al. 2020; Naeem and Bhatti 2020; World Health Organization 2020). Kouzy et al. (2020) found that among sampled tweets about COVID-19, 24.8% contained misinformation and 17.4% featured unverifiable information, undermining public trust. Similarly, Cinelli et al. (2020) analyzed COVID-19-related information across social media platforms and identified varying levels of misinformation. The sheer volume of online information, coupled with the difficulty of verifying it, exacerbates this crisis. Misinformation proliferates on platforms like WeChat, Weibo, and TikTok, often outpacing official responses. The anonymity of the internet complicates identifying rumor sources, underscoring the need for proactive media approaches to foster positive discourse and counteract the “infodemic” (Chen and Zhang 2021; Xi 2020).

China employed a centralized approach to managing public opinion during the pandemic, using state-controlled media and digital platforms to disseminate uniform health messages nationwide (Lv et al. 2022; Xu et al. 2020; Zou et al. 2020). Extensive surveillance and data analytics were used to monitor virus spread and enforce quarantine measures. This strategy enabled the rapid dissemination of crucial information about hygiene practices and lockdowns, aiding initial containment efforts.

In contrast, the United States adopted a decentralized approach, with state governments and private media playing key roles (Bergquist et al. 2020; Park and Fowler 2021). This diversity encourages scientific debate and innovation, allowing states to tailor strategies to local needs. However, decentralization can lead to inconsistent messaging and polarization, especially when federal and state directives conflict. During the pandemic, conflicting messages about mask-wearing and social distancing led to confusion and politicization (Barry et al. 2021; Hornik et al. 2021).

The U.S. also faced strategic errors early in the pandemic (Bergquist et al. 2020; Carter and May 2020; Schuchat 2020). Emphasizing the use of mechanical ventilators based on preliminary data may have led to overuse and higher mortality in some groups (Dar et al. 2021; Wells et al. 2020). Additionally, some states returned COVID-19-positive patients to elder care facilities, causing outbreaks (Elman et al. 2020; Monahan et al. 2020). A lack of focus on the most vulnerable populations and insufficient consideration of natural immunity complicated public health strategies (Moghadas et al. 2021).

Cultural differences impact the effectiveness of AI-driven public health interventions. In China, high trust in government supports stringent measures and surveillance, emphasizing collective welfare (Gozgor 2022; Wang et al. 2023; Zhao and Hu 2017). In the U.S., lower trust in government and media, rooted in values of individual freedom, can hinder acceptance of such measures. The for-profit nature of the U.S. healthcare system further complicates trust, as interventions may be perceived as prioritizing corporate interests (Horwitz 2005).

This comparative analysis highlights the need for culturally sensitive and adaptable public health policies. China’s centralized approach allows rapid technology deployment but requires careful management of scientific debate to maintain trust. In the U.S., improving messaging consistency and transparency in AI use could enhance compliance and trust. Both countries must support open scientific dialog and adjust policies based on emerging data to ensure effective, ethical, and accepted public health strategies.

Higher cost of technological errors

Since its inception in 1956, AI has evolved into a cornerstone of modern science, playing an integral role in pandemic management. However, as McLuhan (1994, p. 45) suggests, technology extends our capabilities but necessitates new societal balances. The pandemic has exposed the immaturity of certain AI applications and the repercussions of technological failures, such as personal data breaches during critical periods.

In China, reliance on health codes and travel codes for public access has revealed vulnerabilities. System outages and erroneous health status updates caused significant disruptions in areas like Beijing, Xi’an, and Guangzhou, preventing commuters from accessing workplaces and individuals from using transportation (Cheng et al. 2023; Wu et al. 2020; Zhou et al. 2021). These incidents highlight the need for robust maintenance mechanisms and illustrate the societal impact when technology fails (Cong 2021; Jin et al. 2022). Though resolved within hours, these issues sparked debates on social media about the reliability of AI-driven public health systems, temporarily shaking public confidence.

Using the Chinese search term “health code,” Yu et al. (2022) analyzed data from Zhihu, a prominent Chinese Q&A platform, focusing on three types of digital errors: (1) algorithmic bugs from unintended coding consequences; (2) territorial bugs due to discrepancies between health code apps and spatial configurations; and (3) corporeal bugs arising from mismatches between app assumptions and actual user profiles. Their research underscores how these errors affected user experiences with China’s contact tracing systems during the pandemic, enhancing understanding of their impact on algorithmic governance and platform urbanism.

AI replacement of human workers

During the peak of the COVID-19 pandemic, AI technology emerged as a critical tool to alleviate strain on human resources in prevention and control efforts. Deploying drones for large-scale disinfection, using facial recognition coupled with infrared temperature measurement for efficient screening, and introducing autonomous nucleic acid testing apparatus exemplify AI’s role in replacing human labor for basic tasks (Zhang et al. 2020b). These applications reduced viral transmission risks and relieved healthcare staff workloads.

However, the pervasive use of AI across various sectors has triggered concerns about a potential “tech backlash” (Sacasas 2018) and the emergence of “digital scars” (Gallego and Kurer 2022; Marabelli et al. 2021). AI’s advancement into the job market, particularly in replacing low-skill jobs, has become a significant issue (Abdullah and Fakieh 2020; Huang and Rust 2018; Jaiswal et al. 2021; Wang and Siau 2019). The economic repercussions of the pandemic, exacerbated by stringent containment measures like lockdowns, have increased unemployment rates and adversely impacted livelihoods, placing employment concerns alongside health anxieties.

AI’s integration across sectors raises concerns about job security, especially for roles involving routine and repetitive tasks. While AI offers increased efficiency and innovation, it poses a significant risk of displacing workers in basic positions. The McKinsey Global Institute reports that “the pandemic accelerated existing trends in remote work, e-commerce, and automation, with up to 25 percent more workers than previously estimated potentially needing to switch occupations”, suggesting substantial labor market transformation (Lund et al. 2021). This transition may challenge workers in basic roles who may struggle with reskilling or moving into new job categories.

This development is particularly acute in China, where a significant portion of the population is employed in jobs susceptible to automation. While technological progress can free humans from monotonous labor, job displacement from AI adoption has constrained the livelihoods of those affected. Therefore, although AI has been indispensable in pandemic response efforts, its impact on employment necessitates a careful approach from policymakers to balance technological innovation with social welfare and economic stability.

The impact of AI on jobs during the pandemic must also be considered within the broader context of economic and health crises. The pandemic resulted in severe staff shortages, notably in healthcare and retail, driven by health risks and increased demand (Bourgeault et al. 2020; Costa Dias et al. 2020). In some cases, AI technologies were introduced not to replace workers but to support and bridge these gaps. Additionally, the influence of privatization and private equity in healthcare has led to staff cuts and restructuring, impacting employment independently of AI (Alayed et al. 2024; Alonso et al. 2016; Goodair and Reeves 2024). This suggests AI’s role during the pandemic was intertwined with broader economic and social factors affecting jobs.

In conclusion, while AI has the potential to significantly alter the labor market, addressing transitional challenges and supporting displaced workers is crucial. A balanced perspective must recognize both AI’s capacity to displace jobs and its utility in filling labor shortages during crises. Ongoing research and policy dialog are essential to manage these challenges, ensuring AI integration promotes economic stability and workforce development. Policymakers and industry leaders must collaborate on strategies that facilitate reskilling and equitable job creation to mitigate the negative impacts of job displacement.

Ethical considerations in AI deployment during the COVID-19 response

The COVID-19 pandemic, characterized by its abrupt onset and evolving viral strains, has seen decreased infection and fatality rates due to global vaccination efforts and natural epidemiological trends (Kitahara et al. 2023; Liu et al. 2022). Despite this progress, the virus persists, and AI has been instrumental in the multifaceted pandemic response. However, AI remains an evolving technology with challenges in standardization and ethical application. This section explores broader ethical implications—such as privacy, safety, autonomy, and utilitarian concerns—that require higher-level discourse to inform the framework within which these challenges are addressed.

Privacy concerns

In the digital age, data is paramount, but the ease of data collection and manipulation raises significant privacy concerns. The handling of personal information has become critical in Internet governance, with data breaches and privacy violations at the forefront. Privacy in AI is crucial for safeguarding personal data, especially in health monitoring and contact tracing applications used during the pandemic. AI systems handling sensitive information necessitate robust protections to prevent misuse or accidental exposure. For instance, systems tracking individuals’ movements or health statuses must ensure secure data collection strictly for public health objectives, minimizing potential misuse or unauthorized access (Fahey and Hino 2020; Ribeiro-Navarrete et al. 2021; Romero and Young 2022).

AI’s role in the pandemic heavily relies on data analysis involving complex “black box” algorithms that often lack transparency (Durán and Jongsma 2021; Rai 2020; von Eschenbach 2021). Transparency involves making AI processes and decisions clear to all stakeholders, including the public. The opacity of many AI systems poses challenges for auditing and understanding decision-making processes. In healthcare, transparency is essential for maintaining trust, accountability, adherence to ethical standards, and ensuring biases or errors are not perpetuated (Durán and Jongsma 2021; Wischmeyer 2020).

Technical constraints can hinder full disclosure of AI processes, leading to uncertainties in personal data protection. During the pandemic, multiple stakeholders—including AI developers, data repositories, and social media platforms—engaged in data processing. Human intervention introduces additional risks for privacy breaches, underscoring the need for high ethical standards among all personnel with data access.

In China, the widespread use of health code apps, which tracked individuals’ health status and movement history, raised serious privacy concerns (Huang et al. 2022). Implemented via platforms like Alipay and WeChat, these apps collected vast amounts of personal data to assess COVID-19 exposure risks. While effective in controlling the pandemic, incidents of data breaches and unauthorized data sharing were reported, highlighting the tension between public health objectives and individual privacy rights (Cong 2021; Wu et al. 2020).

Despite the public’s willingness to share personal information for pandemic mitigation, legal frameworks for data protection remain imperfect. Industry self-regulatory practices are often insufficient, and public awareness of data security is lacking (Chen 2020). Safeguarding privacy requires vigilant oversight and collaborative efforts among stakeholders to address gaps in the legal system and foster a culture of data security consciousness.

Privacy and transparency are interconnected; a lack of transparency can heighten privacy concerns. If AI system operations are unclear, stakeholders may doubt the security and use of their personal data. Enhancing transparency can mitigate privacy issues by clarifying data management practices. Effective measures involve adopting a “privacy by design” approach in AI development, including anonymization techniques to protect identities (Diver and Schafer 2017; Yanisky-Ravid and Hallisey 2019). Increasing AI interpretability through explainable AI (XAI) can make outputs more comprehensible to users, ensuring compliance with data protection laws like GDPR (General Data Protection Regulation) and HIPAA (Health Insurance Portability and Accountability Act) (Albahri et al. 2023).

Safety concerns

AI deployment during the COVID-19 pandemic demonstrated its ability to alleviate public health burdens but also highlighted safety concerns. Healthcare AI systems delivering personalized recommendations require advanced data integration and interpretation methods surpassing current standards (Kulikowski and Maojo 2021). The rapid mutation of the novel coronavirus complicates AI reliability, necessitating continual system updates that integrate scientific advancements, expert knowledge, and policy support.

These challenges had direct public health implications. AI models initially trained on data from earlier strains risked obsolescence as the virus mutated, potentially compromising diagnostics, prognostics, and patient management accuracy. For example, AI diagnostic tools effective against the original strain might fail to detect new symptoms or patterns from emerging variants (Al Meslamani et al. 2024). Inaccurate AI predictions can result in inappropriate treatments, care delays, or harmful outcomes (Malik et al. 2021).

In the rush to deploy AI diagnostic tools in China, some systems faced criticism for inaccuracies and biases. AI models trained predominantly on data from specific regions may not have performed well when applied to populations in other provinces with varied demographics (Dong et al. 2021; Ye et al. 2020). Reports indicated that AI-assisted CT scan analysis sometimes resulted in false positives or negatives, causing unnecessary anxiety or missed cases (Mlynska et al. 2023). Moreover, the lack of standardized protocols for AI tool validation meant some products entered the market without rigorous testing, raising safety concerns about their reliability and effectiveness in real-world settings (Zhang et al. 2022a).

AI’s reliance on data for decision-making introduces the possibility of bias, especially if algorithm designers inadvertently embed prejudices into the code. This issue was evident in medical image analysis during the pandemic, where AI systems trained on data from specific institutions developed biases that compromised performance on diverse datasets (Sha and Jing 2021). Such biases can propagate inequality in patient diagnosis and healthcare resource distribution, sparking debate over their nature and impact (Delgado et al. 2022; El Naqa et al. 2021; Estiri et al. 2022; Röösli et al. 2021; Williams et al. 2020).

These concerns reflect public apprehension about AI safety and the potential consequences of technological errors. As discussed in the section “Immediate practical challenges posed by AI applications during the COVID-19 pandemic”, AI’s impact on individuals and society is profound, and mistakes can be costly. To mitigate these risks, continuous reflection and proactive management of AI safety are imperative.

Autonomy and agency concerns

The discourse on autonomy and agency in AI has been extensively explored (Formosa 2021; Laitinen and Sahlgren 2021; Prunkl 2022), raising critical questions about the autonomy granted to AI systems and their potential status as moral agents (Brożek and Janik 2019; Chesterman 2020; Floridi 2014). The ethical implications of AI as autonomous decision-makers are controversial. While some argue that AI can act without human biases, they acknowledge that AI lacks the capacity to comprehend the moral weight of its actions (Bonnefon et al. 2023; Pauketat and Anthis 2022).

In healthcare, particularly during the COVID-19 pandemic, AI’s role in decision-making has been both beneficial and scrutinized due to the need for nuanced judgments in resource allocation (Neves et al. 2020). Healthcare professionals benefit from AI’s analytical capabilities amid high workloads and ethical dilemmas, but reliance on AI raises concerns about transparency, accountability, and potential algorithmic biases (Neves et al. 2020). Integrating AI into healthcare requires a deliberate blend of medical expertise, ethical principles, and human judgment, with AI augmenting rather than replacing human decision-makers.

While AI autonomy in healthcare has helped manage pandemic-related demands and alleviate manual labor pressures, it raises concerns about the reliability of AI-driven diagnostics and potential erosion of trust in human providers (Bonnefon et al. 2023). In social media, AI’s influence over user autonomy has been magnified during the pandemic, with algorithms shaping information landscapes and potentially limiting the diversity of user experiences (Sahebi and Formosa 2022).

In China, the collectivist culture and emphasis on societal harmony often supersede individual autonomy (Hofstede Insights 2024). During the pandemic, the public generally accepted AI technologies for surveillance and tracking as necessary for the greater good. However, the mandatory use of health code apps highlights the ethical dilemma between public health priorities and individual rights, as collective welfare tends to outweigh personal autonomy (Yu 2022).

AI-driven management of public opinions can significantly influence perceptions and behaviors, especially during health crises when accurate information is crucial. Without stringent oversight, such systems risk manipulating opinions, suppressing dissent, and reducing viewpoint diversity, undermining human autonomy. The EU AI Act establishes a regulatory framework demanding high standards of transparency, accountability, and fairness in AI applications, emphasizing transparency in operations and decision-making processes during health emergencies. The OECD AI Principles advocate for ethical AI that upholds human rights and democratic values, promoting inclusive growth and sustainable development.

In democratic societies, where freedom of expression and access to diverse information are essential, the role of AI in public opinion management requires careful consideration. AI systems should support, not replace, human decision-making by providing verified information and diverse viewpoints to enable informed discourse. Ethically managing social sentiments with AI during health crises is complex but vital. By adhering to strict ethical standards and implementing measures that ensure transparency, inclusivity, and respect for individual autonomy, AI can enhance public health management. This approach ensures AI supports, rather than undermines, democratic values and personal freedoms crucial to society.

Utilitarianism and freedom concerns

Utilitarianism, as framed by the “Greatest Happiness Principle” (Long 1990; Rosen 1998), posits that actions should aim to maximize general utility, asserting that “each individual counts for exactly as much as another if each experiences an equal quantity of utility of the same kind or quality” (Riley 2009). Jeremy Bentham and John Stuart Mill hold that the principle of utility operates under the dictum that everybody counts for one, and nobody for more than one (Mill 1969).

The COVID-19 pandemic underscores the necessity of global cooperation, as advocated by China’s concept of a shared human destiny (Zeng 2021), which is crucial in managing global health crises (Yuan 2020). Regional collaboration in East Asia—through fiscal policy, health resource sharing, and technological applications for testing—exemplifies an effective response to concurrent health and economic shocks (Kimura et al. 2020). Post-pandemic, cooperative strategies remain vital for enhancing regional preparedness for public health emergencies (Zhu and Liu 2021).

China’s application of AI technologies during the pandemic can be viewed through a utilitarian lens, aiming to achieve the greatest good for the greatest number (Herron and Manuel 2022). Measures like widespread surveillance, mandatory health codes, and AI-driven quarantine enforcement were justified as necessary to protect public health. However, these actions often came at the expense of individual freedoms and privacy (Ishmaev et al. 2021). The tension between collective welfare and personal rights was evident when individuals faced restrictions based on algorithmic assessments without clear avenues for appeal (Liang 2020). This raises ethical questions about the proportionality of such measures and the need for safeguards to protect individual liberties even in emergencies.

Cultural and policy differences influence public reception of pandemic measures. Western liberal democracies often exhibit lower compliance compared to collectivist societies like China (Dohle et al. 2020; Guglielmi et al. 2020; Wang et al. 2020a). For instance, mask-wearing sparked debate and protests in Western countries during peak pandemic periods (Betsch et al. 2020; Cherry et al. 2021; MacIntyre et al. 2021). These differences highlight the diverse cultural-psychological structures shaping national responses and reveal tensions between individual and collective interests.

Western philosophy traditionally values human freedom as subjectivity, which can conflict with pandemic restrictions (Xie 2021). In contrast, influenced by Confucianism, Chinese society aligns with social norms and collective values, demonstrating heightened collective consciousness in pandemic management. The contrasting approaches of Western individualism versus Chinese collectivism reflect their cultural-psychological structures and expose limitations in policies shaped by these influences.

In summary, while Western societies prioritize individual freedom—sometimes overlooking broader societal or ecological interests—China’s collectivist stance must balance collective action with respect for individual autonomy and minority rights (Blokland 2019; Franck 1997; Ho and Chiu 1994; Hui and Villareal 1989; Wang and Liu 2010). Deploying AI in pandemic response demands understanding these cultural nuances and appreciating human values to foster better international cooperation and anticipate varying national attitudes toward health crises.

Recommendations for addressing AI challenges in pandemic management

This section presents strategic recommendations to enhance the deployment and acceptance of AI technologies in healthcare, particularly for pandemic response. These recommendations are divided into general suggestions applicable globally and specific ones targeted at China. The goal is to leverage AI effectively while addressing ethical, legal, and societal challenges to optimize pandemic response and routine healthcare delivery.

Public acceptance

The rapid development of AI during the pandemic has led to inevitable shortcomings. As Yang and Mo (2022) note, societal risks evolve with technological advancements. Public trust in AI has been challenged, with concerns ranging from the reliability of AI-assisted diagnoses to the professionalism of AI-driven remote consultation platforms (see Table 3). Skepticism arises from inherent biases and a lack of familiarity with the technology’s capabilities, particularly regarding safety and reliability. However, history demonstrates that patience and tolerance are key to integrating new technologies into society, and AI is no exception. The application of AI during COVID-19 represents a synergistic effort between human expertise and technological innovation, showcasing adaptability in facing novel crises.

Table 3 Overview of AI applications, challenges, and ethical considerations in the COVID-19 pandemic response.

For example, a robot equipped with diagnostic tools was used to treat the first hospitalized COVID-19 patient in the U.S. (The Guardian 2020). Alibaba DAMO Academy and Alibaba Cloud’s AI, trained on over 5000 COVID-19 cases, achieved a 96% diagnostic accuracy in seconds, alleviating pressure on healthcare systems (Calandra and Favareto 2020; Mahmud and Kaiser 2021; Pham et al. 2020). These applications demonstrate AI’s potential in aiding COVID-19 diagnosis and treatment. Additionally, AI has played a role in pandemic management through intelligent temperature monitors, disinfection robots, and autonomous logistics, contributing to various sectors’ response efforts (see Table 3) (Ruan et al. 2021; Shen et al. 2020; Wang and Wang 2019).

Improving public trust and transparency involves clear communication about AI’s role, capabilities, and limitations in healthcare. Public education campaigns and open forums for feedback can demystify AI technologies. Transparent reporting of AI outcomes and involving the public in ethical discussions can help adapt AI applications to better meet societal expectations and needs.

Progress in AI cannot be separated from real-world application and evaluation. Practical deployment allows us to discern the strengths, weaknesses, and future trajectories of these technologies. Recognizing AI’s contributions during the pandemic, such as conserving resources and enhancing response measures, is vital.

Public understanding of science and technology must evolve, acknowledging AI’s interdisciplinary nature and its relationship with other fields. The advent of new technologies should inspire optimism for human progress supported by technological advancement, rather than induce panic. During the COVID-19 pandemic, AI applications have proven instrumental in combating the virus. Nevertheless, ethical considerations regarding technology integration persist, necessitating ongoing reflection and response to emerging risks and challenges.

Social norms

While AI deployment during the pandemic has been instrumental in public health efforts, the absence of a comprehensive regulatory framework has led to significant challenges. Personal privacy breaches are particularly concerning; despite authorities’ measures, incidents of privacy leaks and associated cybercrimes persist due to inadequate legal protections (Chen 2020). As the public becomes more privacy-conscious, the digital age paradoxically lowers barriers to committing cybercrimes, revealing gaps in current legislation (Ajayi 2016; Dashora 2011; McGuire and Dowling 2013; Saini et al. 2012; Zhang et al. 2012). The continued occurrence of privacy breaches during the pandemic underscores the need for enhanced legal frameworks to safeguard citizen privacy effectively (Buil-Gil et al. 2021; Kemp et al. 2021; Khweiled et al. 2021; Naidoo 2020).

Ethical frameworks guide the responsible development and deployment of AI, ensuring these technologies support equitable health outcomes and maintain patient trust. Table 4 illustrates diverse approaches to AI governance, from stringent data privacy laws in China to innovation-centric strategies in the U.S., and highlights efforts toward international cooperation and standardization through bodies like the UN, G7, and bilateral agreements. Implementing these frameworks involves navigating diverse cultural values and legal systems, which can differ markedly across regions. Initiatives like the EU’s AI Act or the WHO’s guidelines on AI ethics can serve as templates. The primary challenge is ensuring these frameworks are comprehensive and enforceable across jurisdictions.

Table 4 Overview of national, regional, and international efforts on tech/AI regulatory frameworks.

In response to concerns about data privacy and security in AI applications for healthcare and pandemic management, China has strengthened its data protection framework. The Personal Information Protection Law (PIPL), enacted on August 20, 2021, and effective from November 1, 2021, is a comprehensive privacy law akin to the EU’s GDPR but tailored to China’s context. PIPL imposes stringent requirements on the collection, storage, and processing of personal information, ensuring AI applications comply with high data governance standards, thereby addressing critical challenges in deploying AI technologies during the COVID-19 pandemic.

The evolution of AI is closely linked to efforts by key internet companies. In China, Baidu, Alibaba, and Tencent have significantly advanced AI technology and collaborated with the government in pandemic mitigation. However, the focus on data collection, storage, and processing introduces vulnerabilities where personal information may be at risk of theft or misuse (see Table 3). Therefore, protecting privacy must be prioritized across all stages of data handling and by all parties involved, including government and enterprise personnel. This highlights the necessity for professional ethics in AI development and operation.

The pandemic has disrupted societal norms and governmental operations, with AI playing a pivotal role in adapting to these changes. However, as Wang and Wang (2019) assert, technology in social governance carries significant political weight. Pandemic measures have required citizens to surrender certain rights, such as privacy and freedom, for public health. Governments have leveraged AI for improved pandemic control and social governance, raising issues regarding the equitable trade-off between citizen rights and the benefits gained.

Dependence on technology for social management can breed inertia and complacency. Relying on AI to regulate citizen rights without a comprehensive policy framework can leave those rights inadequately protected. As AI becomes more entrenched in social governance, the algorithms guiding decisions begin to function as social norms, raising critical questions about their alignment with legal and ethical standards. The intersection of technology and policy must therefore be approached with caution, to mitigate social risks and ensure that AI is incorporated into governance systems in an accountable manner. From legal, policy, and ethical standpoints, governments must strive to “improve the socialization, legalization, intelligence, and professionalism of social governance” (Xi 2017), especially amid the rapid advancement of AI. Balancing technological growth with the protection of individual rights and societal norms is essential for maintaining trust and upholding democratic values in the digital age.

Technological advancements

The persistent and evolving nature of the COVID-19 pandemic, which continued beyond 2023, defied initial hopes for a swift resolution similar to the SARS outbreak in 2003. The rapid spread and mutation of the virus, coupled with varied international prevention policies, highlight the complexity of global health governance (Xie 2020). AI has emerged as a crucial tool in this ongoing battle but must evolve alongside the changing dynamics of the virus to meet humanity’s shifting needs.

During the pandemic, AI was rapidly deployed across various domains, from intelligent temperature measurement to online health consultation services. This swift integration, driven by urgency, often bypassed the thorough vetting typically associated with new technologies (see Table 3). As technological weaknesses—such as failures in health code platforms—became apparent, the need for continuous AI enhancement and maintenance was evident. AI’s role extends beyond immediate applications; it requires governance through appropriate norms and policies to ensure stability and reliability. Ongoing innovation in science and technology is essential to refine AI’s contribution to pandemic management, emphasizing a dynamic approach to technology development.

China, like other countries, faced significant hurdles despite rapidly deploying AI-driven contact tracing apps. Initially, the contact tracing landscape was fragmented, with various regions developing their own systems that lacked interoperability. This fragmentation led to public confusion and to inefficiencies in data collection and use, delaying effective pandemic responses. Over time, efforts consolidated data on centralized platforms (e.g., WeChat or Alipay) to improve efficacy. A unified national strategy for AI in healthcare can help streamline initiatives across regions and administrative levels; coordination efforts, such as national health strategies integrating AI diagnostics in remote areas, have shown that centralized planning with local adaptations can be effective.

Human history is marked by advancements in productive forces, with science and technology driving societal transformation. Each scientific and technological revolution reshapes society to varying degrees, and AI’s emergence is a contemporary example (Beniger 2009; Hilbert 2020). The prudent application of technology is crucial for maintaining social stability and public safety amid a pandemic. Practical deployment must be accompanied by vigilant adjustments and maintenance to address society’s pressing needs effectively.

Global collaboration

The COVID-19 pandemic reaffirms the concept of humanity’s shared destiny, emphasizing the necessity of solidarity, unity, and cooperative action as our most effective tools (China Daily 2020a, 2020b). Global collaboration is crucial for pooling resources, knowledge, and data essential for developing AI solutions effective across diverse populations and healthcare systems. China’s commitment to international collaboration is evident through its efforts to assist other nations by constructing modern hospitals and sharing critical information about the virus, significantly contributing to global containment and prevention strategies.

China’s strategic approach to managing the pandemic has significantly influenced global health policy and privacy standards. Countries like South Korea and Singapore adopted contact tracing and surveillance strategies similar to China’s technological methods. These nations implemented extensive tracking and data analysis techniques initially pioneered by China, demonstrating the effectiveness of such strategies in controlling the virus’s spread. Internationally, China’s proactive measures helped shape guidelines issued by the World Health Organization (WHO) and the United Nations (UN), particularly concerning the rapid deployment of public health infrastructure and digital surveillance tools.

Initially, COVID-19 presented a high fatality rate, but over time, infections largely manifested as milder or asymptomatic cases, leading to reductions in both fatality and severity. This shift prompted reevaluation and relaxation of stringent control measures worldwide. For instance, Sweden lifted most pandemic restrictions and ceased extensive testing on February 9, 2022, becoming the first nation to officially declare the pandemic over (Reuters 2022). Approximately a year later, on January 8, 2023, China downgraded COVID-19 to a Class B infectious disease, easing quarantine protocols and adjusting medical policies accordingly. As of May 5, 2023, the WHO recognized COVID-19 as an ongoing health issue but no longer a public health emergency of international concern (PHEIC) (World Health Organization 2023).

In our interconnected global landscape, evolving pandemic policies have fostered deeper interactions between China and the international community. China’s approach to pandemic management has been people-centered and data-driven. Despite advancements in addressing public crises, China acknowledges its AI capabilities are not yet on par with more technologically developed regions like Europe and the United States. Throughout the pandemic, China actively engaged with global experts to address these gaps while supporting less advanced countries by constructing infrastructure and sharing medical resources, embodying the principle of a “community of shared destiny” (China Daily 2020a, 2020b; Zeng 2021).

Effective collaboration can be hindered by geopolitical tensions, intellectual property concerns, and varying regulatory standards. Mitigation strategies include establishing international AI health forums, promoting shared ethical standards, and creating bilateral agreements that respect each entity’s interests and regulatory frameworks. In conclusion, while AI technology has played a pivotal role in the pandemic response, disparities in global technological development underscore the importance of international cooperation. History teaches that adversarial competition hinders societal progress. In global public security crises, the ethical imperatives of symbiosis and coexistence must guide international relations and become foundational norms of global governance, heralding a new era of ethical civilization (Xie 2020).

Overall, this section offers a roadmap for enhancing AI integration into healthcare settings globally and within China. By fostering global collaboration, establishing robust ethical frameworks, strengthening data protection, and improving public trust and transparency, AI can be deployed more effectively and ethically. Successful implementation of these recommendations requires continuous evaluation and adaptation to ensure AI technologies meet the evolving challenges of healthcare delivery and pandemic response, balancing innovation with ethical considerations.

Discussion: future research directions

The integration of AI into China’s response to the COVID-19 pandemic has showcased both remarkable advancements and significant challenges. This paper has detailed various AI applications—from medical diagnostics to social sentiment management—and highlighted the practical and ethical issues arising from their use. Understanding these complexities is essential for informing future AI deployment in public health emergencies.

Future research should focus on developing frameworks that balance AI’s benefits with the protection of individual rights. Investigations into long-term impacts of AI implementations during the pandemic are necessary to inform strategies that manage societal changes catalyzed by technology. Additionally, exploring international standards for AI use in public health crises is critical. Research should aim to establish global protocols for data sharing, privacy, and cross-border AI interventions, ensuring ethical deployment across diverse cultural and legal landscapes.

Moreover, interdisciplinary studies combining technology, ethics, law, and social science can provide holistic insights into AI’s role in society. Such research can guide the development of AI systems that are not only technologically advanced but also socially responsible and ethically sound.

Conclusions

This paper has critically analyzed the role of AI in managing the COVID-19 pandemic in China, highlighting both its transformative potential and the challenges it presents. AI technologies have been instrumental in healthcare delivery, infection control, and social sentiment analysis. However, their deployment has raised significant issues regarding mental health, public trust in AI-assisted healthcare, and data privacy at the individual level. Societally, challenges include the rapid spread of misinformation, the consequences of technological failures, and potential job displacement due to automation.

Addressing these challenges requires a multifaceted approach. Enhancing public trust through transparency and education is essential. Establishing robust legal and ethical frameworks can safeguard individual rights and societal norms. Technological advancements must focus on improving AI reliability and mitigating risks. Global collaboration is vital to develop unified standards and share best practices.

In conclusion, while AI has played a pivotal role in China’s pandemic response, it is imperative to navigate the complexities it introduces thoughtfully. Balancing technological innovation with ethical considerations ensures that AI contributes positively to public health without compromising individual liberties or societal values. By addressing these challenges proactively, we can harness AI’s potential to improve outcomes in future health crises, fostering a synergy between technology and humanity for the common good.