Abstract
The COVID-19 pandemic has accelerated the deployment of artificial intelligence (AI) across various domains, notably in healthcare, epidemic management, and public sentiment analysis. Focusing on China as a case study, this paper critically examines AI’s societal and individual impacts during the pandemic. Through a synthesis of literature and case analyses, we highlight AI’s dualistic role—its potential benefits alongside emerging challenges related to privacy, security, autonomy, and freedom. The study emphasizes the crucial importance of public acceptance, normative frameworks, technological advancement, and global collaboration in navigating these challenges. We advocate for comprehensive social policies to govern AI responsibly, ensuring ethical integrity and efficiency in future public health crises. The insights aim to inform policy decisions, guide healthcare stakeholders, and enrich public discourse, promoting a balanced approach to AI in healthcare.
Introduction
In early 2020, the world faced a significant public health crisis precipitated by the novel coronavirus (COVID-19). Due to its extended incubation period and high transmissibility, the virus spread rapidly on a global scale, exacerbated by the interconnectedness of modern systems. As of 26 January 2025, there have been over 777.35 million confirmed cases and more than 7.77 million deaths worldwide (World Health Organization 2025), ranking COVID-19 as the fifth deadliest pandemic in history (Joshi and Shukla 2022). Although “COVID-19 is now an established and ongoing health issue which no longer constitutes a public health emergency of international concern (PHEIC)” (World Health Organization 2023), the pandemic catalyzed an unprecedented global crisis that affected individual well-being and strained the capacity of health systems and governments, necessitating innovative approaches to address its challenges.
Amid this crisis, artificial intelligence (AI) emerged as a critical tool for addressing the pandemic’s multifaceted challenges, from tracking virus spread to supporting vaccine development and managing healthcare resources (Arora et al. 2021). Defined as the simulation of human intelligence by machines, AI has revolutionized society, the economy, and culture (Li 2020; Xu et al. 2021), offering advantages across sectors that include efficiency and speed (Davenport 2018; Spronck et al. 2006), 24/7 availability (Nebeker et al. 2019; Stephens et al. 2019), high precision and accuracy (Dlamini et al. 2020; Partel et al. 2019), cost savings (Salehi and Burgueño 2018; Wamba-Taguimdje et al. 2020), personalized experiences (Ameen et al. 2021; Pataranutaporn et al. 2021), decision-making support (Duan et al. 2019; Jarrahi 2018), and innovation (Cockburn et al. 2019; Verganti et al. 2020). During the COVID-19 pandemic specifically, AI has shown potential to improve healthcare outcomes (McCall 2020), enhance surveillance and prediction systems (Alazab et al. 2020; Arora et al. 2021; Hossain et al. 2020; Jin et al. 2020), and facilitate efficient information dissemination (Ahuja et al. 2020; Bunker 2020).
During the early stages of the pandemic, China swiftly implemented AI technologies to manage and mitigate virus spread. AI-enhanced surveillance and monitoring systems were rolled out in January 2020, and by February, AI-enabled contact tracing and diagnostic tools played key roles in several provinces. In Guangdong, AI algorithms processed vast amounts of data to trace and predict transmission paths, achieving significant reductions in local transmission rates within the first month. Major cities employed AI-based thermal imaging and facial recognition at transport hubs to efficiently identify symptomatic individuals and enforce quarantine measures, helping to curb the spread in densely populated areas.
Furthermore, AI-driven diagnostic tools, like those developed by Alibaba’s DAMO Academy, demonstrated high accuracy in virus detection through CT scans, reducing diagnosis times. AI-based chatbots delivered real-time pandemic information and health guidelines, improving public communication. By March, national AI platforms coordinated response efforts across regions, integrating diverse data sources. AI-enabled drones were deployed in rural areas for disinfection and broadcasting safety measures, minimizing human contact and further reducing virus transmission. These initiatives illustrate AI’s valuable role in strengthening public health responses and underscore its potential in managing global health crises.
However, the widespread use of AI in pandemic response has brought challenges, at both the individual and societal levels, that must be carefully considered. On the individual level, concerns include the potential impact on mental health during the pandemic (Ćosić et al. 2020), attitudes toward AI-assisted diagnosis and treatment (Abdel Wahed et al. 2020; Hussain et al. 2021), and information privacy (Aman et al. 2021; Hakak et al. 2020). At the societal level, challenges involve the spread of misinformation (Khan et al. 2020; Rashid and Wang 2021), the high cost of technological errors (Taylor et al. 2020; Yahya et al. 2021), and job displacement (Coombs 2020; Dwivedi et al. 2020).
Delving deeper, critical issues emerge. Privacy concerns arise as AI systems rely on large-scale personal data, raising questions about data security and potential misuse of sensitive information (Bai et al. 2021; Gerke et al. 2020). Security issues also exist, as AI systems are vulnerable to hacking and manipulation, posing threats to public safety (Ferrag et al. 2021; Rahman et al. 2020). The deployment of AI raises questions about autonomy and agency, as individuals may feel their decision-making power diminished when relying on AI-assisted systems (Malik et al. 2021; Wang et al. 2021; Zhang et al. 2020a). Moreover, ethical implications regarding utilitarianism and individual freedom in the context of AI deployment during a pandemic require careful examination.
To address these challenges, several strategies can be adopted. Public acceptance of AI technologies can be fostered through education and transparency, aiding individuals in understanding AI’s benefits and limitations (Cresswell et al. 2021; Kim et al. 2021). Establishing social norms can guide ethical AI use and mitigate risks, ensuring responsible deployment aligned with societal values (Anshari et al. 2023; Latkin et al. 2022; Naudé 2020). Technological advancements, such as improved privacy and security measures, can address concerns and build trust (Kumar et al. 2020; Shamman et al. 2023). Furthermore, international cooperation is crucial to develop unified frameworks and standards for AI in health emergencies (Luengo-Oroz et al. 2020; Peiffer-Smadja et al. 2020).
We select China as the focal point for examining AI applications during the COVID-19 pandemic for several reasons. First, as one of the earliest countries to face the outbreak, China provides a template for AI’s role in crisis management. Second, China’s prominence in AI, bolstered by substantial investment and innovation from tech giants like Baidu, Alibaba, and Tencent, positions it as a rich case study. Third, the extensive application of AI technologies across China, driven by its large population and efficient administration, offers a unique vantage point to assess the scalability and impact of AI tools in managing public health crises. Fourth, China’s strategies, including AI deployment, have influenced global public health policies, economics, and logistics. Finally, China’s distinct approach to privacy and data governance provides a comparative perspective to Western ideologies, enriching the ethical debate regarding AI’s role in public health.
Therefore, from China’s perspective, we aim to examine the extensive applications of AI during the COVID-19 pandemic, its significant influences on both individual and societal levels, and the consequent challenges and policy implications. By addressing these topics, this research contributes to existing literature by offering a comprehensive view of AI’s utility and the imperative of addressing critical ethical and social issues to protect individual rights and societal welfare. The insights presented aim to resonate with researchers across disciplines, illuminating the complex interplay between advanced technology and human society amidst a profound historical challenge. This examination informs our understanding of technological interventions and enhances our preparedness for future global health emergencies.
AI applications in various aspects of the COVID-19 pandemic
The COVID-19 pandemic has presented unprecedented challenges to global public health, prompting the adoption of AI technologies across various sectors. In China and internationally, AI has been instrumental in revolutionizing healthcare, streamlining pandemic management, and facilitating public communication. By mid-2020, AI was widely integrated into China’s healthcare ecosystem, notably enhancing the capabilities of public health institutions during the pandemic. A significant proportion of designated COVID-19 hospitals and many urban clinics adopted AI-driven systems for diagnostic imaging, patient monitoring, data management, and predictive analytics.
In densely populated provinces like Guangdong and Zhejiang, numerous local healthcare facilities implemented AI applications, demonstrating extensive geographic and demographic reach. Longitudinal observations indicate that AI adoption in healthcare not only increased significantly during the initial months of the pandemic but also diversified in its applications. Initially focused on diagnosis and contact tracing, AI technology evolved to include predictive health management and operational logistics. This flexible integration of AI was designed to meet the shifting demands of the pandemic, highlighting its vital role in enhancing operational efficiency and improving the responsiveness of healthcare services during and beyond the crisis.
Table 1 provides a comprehensive overview of the AI technologies deployed during China’s COVID-19 response, categorizing them by provider, methodology, functionality, and integration with public health protocols. It illustrates the extensive reach and substantial impact of these AI solutions, emphasizing their pivotal role in improving the efficiency and effectiveness of pandemic response efforts—from diagnostic support to the enforcement of public safety measures.
AI in healthcare
The integration of AI into healthcare has been pivotal in the pandemic response. Pre-pandemic statistics from the World Health Organization indicated significant pressure on China’s healthcare resources, including shortages of physicians, nurses, midwives, and hospital beds (The National Health Workforce Accounts Database 2023). The onset of COVID-19, with nonspecific early symptoms often indistinguishable from other viral infections, necessitated rapid processing of lung CT images for diagnosis, further straining healthcare systems.
AI applications have played a crucial role in alleviating this burden, particularly in the analysis of lung CT scans—a critical diagnostic tool for COVID-19 (Guo et al. 2020; Wang et al. 2021). Baidu’s AI-powered platform, Melody, exemplifies AI’s utility in healthcare, offering remote consultation capabilities that have streamlined diagnostics and patient care during the pandemic. The chatbot assists in symptom collection, preliminary diagnosis, and treatment suggestions, supporting healthcare providers in primary care settings.
Furthermore, AI’s proficiency in parsing CT images has enhanced clinical decision-making and reduced misdiagnosis rates at institutions like the Shanghai Public Health Clinical Center, which implemented the “COVID-19 Intelligent Evaluation System” (Zhang et al. 2020b). This system’s ability to screen suspected COVID-19 patients and assist in early prevention measures demonstrates AI’s potential in medical diagnostics (Guo et al. 2020).
Beyond diagnostics, AI’s deep learning capabilities have expedited the research and development of treatments and vaccines (Arora et al. 2021). By analyzing extensive datasets, AI models identify potential therapeutic compounds more rapidly and economically than traditional research methods. AI’s data-driven approach, leveraging information from hospital networks, equips medical professionals with predictive insights for patient outcomes and resource allocation (Zhang et al. 2020b).
In conclusion, the pandemic has catalyzed the expansion of AI’s role within healthcare, gaining traction and recognition from medical practitioners and the public alike. Through sophisticated analysis and decision-support systems, AI has streamlined diagnostic processes and contributed to advancements in medical research, demonstrating its critical position in the healthcare landscape during times of crisis.
AI in pandemic prevention and control
The rapid transmission of COVID-19 necessitated prompt and effective measures to reduce contact between infected individuals and the broader population. In response, countries including China implemented emergency protocols to restrict movement within and between cities. AI technologies have played a pivotal role in enhancing surveillance and tracking efforts, significantly influencing the identification and management of cases. For example, an AI-driven platform developed by Baidu was instrumental in monitoring the virus’s spread, identifying numerous potential cases within the first few weeks of deployment. This early detection, enabled by AI’s sophisticated data analytics, is believed to have contributed to a noticeable reduction in transmission rates in areas where it was implemented.
Comparative analyses of data before and after the implementation of AI surveillance systems indicate improved response times. Specifically, the average completion time for contact tracing fell from roughly 48 hours to 24 hours, and continued to decline thereafter. These observations highlight the enhancements in public health response capabilities attributable to AI interventions, demonstrating increases in both the speed and accuracy of pandemic management strategies.
During stringent movement restrictions and logistical challenges, AI-enabled transportation robots from Jingdong (JD.com), one of China’s e-commerce giants, facilitated the delivery of essential supplies while minimizing human contact and the risk of infection. Intelligent disinfection robots and thermal imaging equipment became fixtures in public spaces, contributing to sanitation efforts and enabling rapid fever detection (Tao 2020).
AI algorithms have also been essential in contact tracing and movement analysis, supporting precise containment strategies and efficient resource distribution. The “COVID-19 Analysis and Control Platform,” developed by the Nanjing Edge Intelligence Research Institute, exemplifies this application. It aggregates and analyzes real-time pandemic data across geographic, temporal, and spatial dimensions, providing vital support for regional prevention and control measures (Zhang et al. 2020b).
The economic ramifications of the pandemic have been significant, affecting global trade and financial markets. Ozili and Arun (2023) highlight the impact of extended lockdowns, monetary policy shifts, and travel restrictions on economic activity, as well as the correlation between rising numbers of COVID-19 cases, mortality rates, and global economic indicators such as inflation, unemployment, and the energy commodity index.
In China, the deployment of AI-driven health and travel codes has aided in monitoring the virus’s spread and forecasting population movements. These predictive tools have been instrumental in issuing early warnings of potential outbreaks, facilitating the cautious resumption of economic activities. The strategic use of AI not only underscores its value in public health management but also in mitigating the pandemic’s economic impact by enabling more informed and agile policy responses.
While AI technologies have enhanced surveillance and contact tracing efforts, they have also raised significant privacy and ethical concerns. The extensive use of facial recognition and mobile data tracking has prompted debates about surveillance overreach and the potential normalization of invasive technologies post-pandemic (Huang et al. 2022). Balancing public health priorities with individual rights remains a critical challenge.
AI in managing social sentiment
AI played a crucial role during the COVID-19 pandemic by analyzing social sentiment. AI tools were essential for monitoring public opinion and emotional reactions to pandemic policies and news (Boon-Itt and Skunkan 2020; Hung et al. 2020). The profound impact of COVID-19 on public opinion necessitated accurate and timely information dissemination. Cognitive and social factors, such as morality and law, influence people’s thoughts, emotions, and behaviors, highlighting the media’s critical role during crises (Metzler et al. 2023; Tsao et al. 2021). AI emerged as a key player in curating and prioritizing news content, enabling the public to access factual information while in quarantine.
AI-powered tools employing machine learning were extensively used to monitor social media and digital communication channels, analyzing public sentiment regarding COVID-19. These tools assessed the tone, sentiment, and emotional context of public discussions, effectively tracking shifts in public mood following government announcements or changes in pandemic statistics. This provided critical feedback to policymakers on public reception and compliance (Chandrasekaran et al. 2020; Lwin et al. 2020; Xue et al. 2020). Studies indicate a negative correlation between the volume of media reports and infection cases, suggesting that well-informed populations are better equipped to curb the virus’s spread (Yan et al. 2020; Zhou et al. 2020). Thus, accurate news dissemination is vital for the public’s understanding of COVID-19 and its containment.
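To make the mechanics concrete, the following minimal Python sketch shows one way such machine-learning sentiment monitoring can be structured: a text classifier is trained on a small set of labeled posts and then used to score a stream of new messages, with the share of negative posts tracked over time. The posts, labels, and model choice here are purely illustrative and do not correspond to any system cited above.

```python
# Minimal, illustrative sketch of ML-based sentiment monitoring of
# pandemic-related posts (not any cited platform's actual pipeline).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples (1 = positive/calm, 0 = negative/anxious).
train_posts = [
    "Hospitals are recovering and supplies have arrived",
    "New cases are falling in our district",
    "I am scared, no beds are available anywhere",
    "Panic buying again, shelves are empty",
]
train_labels = [1, 1, 0, 0]

# Bag-of-words features feeding a simple linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_posts, train_labels)

# Score newly collected posts and report the share flagged as negative,
# a quantity policymakers could track after each announcement.
new_posts = ["Quarantine extended again, very worried",
             "Vaccination site opened nearby"]
neg_share = 1 - model.predict(new_posts).mean()
print(f"Share of negative posts: {neg_share:.0%}")
```

In practice, the platforms described above operate at far larger scale, with multilingual corpora and continuously retrained models, but the basic loop of labeling, training, and scoring is the same.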
With traditional media channels compromised during the pandemic, AI-driven platforms enhanced communication efficiency. AI tools monitored social media to detect and correct misinformation, providing reliable content to alleviate public anxiety. For instance, outlets like China Daily utilized AI to distribute information through WeChat and other platforms, amplifying news reach via technology. Governments used insights from AI-driven sentiment analysis to refine communication strategies, ensuring accurate public health messages and controlling panic and misinformation. This analysis also helped health authorities identify and address public anxieties or misconceptions about vaccines, leading to focused educational campaigns (Hu et al. 2021).
The pandemic also saw a surge in social media tools, including chatbots, pivotal for information dissemination and pandemic monitoring. AI-powered platforms, such as the WHO’s “WHO Health Alert” chatbot on WhatsApp, served as conduits for verified information, mitigating the spread of misinformation (Li and Zhang 2021).
In summary, AI’s role extended beyond supporting health measures to managing social discourse during the pandemic. The technology demonstrated adaptability and scalability, proving critical in navigating complex health crisis challenges by ensuring the public remained informed through reliable and authoritative sources.
Effectiveness of AI technologies
The effectiveness of AI technologies during the COVID-19 pandemic in China can be evaluated through metrics such as diagnostic accuracy, reliability, patient satisfaction, cost-effectiveness, and overall healthcare outcomes. This section synthesizes data from multiple studies to analyze AI’s role.
AI-driven diagnostic tools have shown promising accuracy. Alibaba’s DAMO Academy developed tools achieving a diagnostic accuracy of 96% for COVID-19 detection from CT scans in just 20 s (Nayak et al. 2022; Taleghani and Taghipour 2021). Another development, COVID-Net—a deep convolutional neural network—achieved 93.3% test accuracy for detecting COVID-19 from chest X-ray images (Wang et al. 2020b). These examples highlight AI’s significant impact and reliability in enhancing diagnostic processes during critical health emergencies.
Prior to AI integration, manual handling of epidemiological data and diagnostics was slow and error-prone. AI adoption revolutionized these processes, enhancing data analysis and enabling timely public health decisions. Jin et al. (2020) developed a deep learning-based system for COVID-19 diagnosis using a dataset of 11,356 CT scans from 9025 subjects, including COVID-19, community-acquired pneumonia (CAP), influenza, and non-pneumonia cases. The AI system’s diagnostic accuracy surpassed that of experienced radiologists, achieving areas under the receiver operating characteristic curve (AUC) of 0.9869 for pneumonia-or-non-pneumonia, 0.9727 for CAP-or-COVID-19, and 0.9585 for influenza-or-COVID-19.
Additionally, the AI system significantly reduced processing time, averaging 2.73 s per analysis compared to 6.5 min by radiologists (Jin et al. 2020). This improvement boosts radiologists’ productivity and expedites the diagnostic process, crucial during health crises. These findings underscore AI’s transformative impact in streamlining healthcare operations and enhancing diagnostic accuracy and efficiency.
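For readers unfamiliar with these metrics, the short sketch below illustrates, using hypothetical labels and scores rather than data from the cited studies, how overall accuracy and the area under the receiver operating characteristic curve (AUC) are computed from a classifier’s predicted probabilities.

```python
# Illustrative computation of accuracy and AUC from hypothetical test data
# (not the cited systems or their datasets).
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

# Hypothetical ground-truth labels (1 = COVID-19, 0 = other) and model scores.
y_true   = np.array([1, 1, 1, 0, 0, 0, 1, 0, 1, 0])
y_scores = np.array([0.92, 0.81, 0.67, 0.30, 0.12, 0.45, 0.88, 0.20, 0.55, 0.40])

auc = roc_auc_score(y_true, y_scores)           # threshold-free ranking quality
acc = accuracy_score(y_true, y_scores >= 0.5)   # accuracy at a 0.5 cutoff
print(f"AUC = {auc:.4f}, accuracy = {acc:.1%}")
```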
The rapid availability of information through AI systems enhanced patient engagement and satisfaction. Liu et al. (2021) conducted a discrete choice experiment to evaluate patients’ preferences for AI-driven diagnostics versus traditional clinician assessments in China. Using various models—including generalized multinomial logit and latent class—they found that 55.8% of respondents preferred AI diagnostics. The model indicated a strong preference for diagnostics with 100% accuracy (OR 4.548, 95% CI 4.048–5.110, P < .001). The latent class model identified accuracy (39.29%) and cost (21.69%) as the highest-valued attributes. These findings suggest growing acceptance of AI in diagnostics, emphasizing the importance of accuracy and cost in patient decision-making.
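As a rough illustration of where such figures come from, the following sketch fits a deliberately simplified binary logit model (not Liu et al.’s generalized multinomial logit or latent class models) to simulated choice data and converts the coefficient for a hypothetical “100% accuracy” attribute into an odds ratio with a 95% confidence interval.

```python
# Toy illustration: odds ratio and 95% CI from a simple binary logit fitted
# to simulated choice data. Attribute, effect size, and data are hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
accurate = rng.integers(0, 2, n)             # 1 if the offered diagnostic is "100% accurate"
logit_p = -0.5 + 1.5 * accurate              # assumed true effect on choice propensity
chosen = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(accurate.astype(float))
fit = sm.Logit(chosen, X).fit(disp=False)

odds_ratio = np.exp(fit.params[1])           # exponentiated coefficient
ci_low, ci_high = np.exp(fit.conf_int()[1])  # 95% CI on the odds-ratio scale
print(f"OR = {odds_ratio:.3f} (95% CI {ci_low:.3f}-{ci_high:.3f})")
```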
The global impact of COVID-19 increased mental health disorders, prompting the use of AI-based chatbots for mental health services. Zhu et al. (2021) applied the Theory of Consumption Values to analyze determinants influencing user experience and satisfaction with these chatbots. Surveying 295 users in Wuhan and Chongqing, they found that personalization, enjoyment, learning, and condition significantly enhanced user experience and satisfaction. These insights underscore AI’s critical role in healthcare, both in diagnostics and mental health services, highlighting the need for ongoing technological improvements to meet user needs effectively.
However, high accuracy rates depend on data quality and deployment contexts. Concerns exist about overfitting models to specific datasets, potentially limiting generalizability across populations (Trivedi et al. 2022). Reliance on AI diagnostics without adequate human oversight could lead to missed diagnoses or overdiagnosis, especially when encountering unfamiliar data or rare conditions (Kelly et al. 2019). Rigorous validation of AI systems across diverse settings is essential to mitigate these risks.
Comparative analysis of global AI applications during COVID-19
This section presents a structured comparative analysis of AI technologies used in public health across various countries during the COVID-19 pandemic, as summarized in Table 2. The table outlines variations in public health policies, AI adoption levels, technological innovation, outcomes, and the cultural, political, and economic factors influencing these differences among nations such as the United States, United Kingdom, Sweden, Germany, Italy, Japan, South Africa, and Ecuador. It also highlights the distinctiveness of China’s approach.
The analysis reveals diverse levels of AI adoption and impact on public health, shaped by each country’s unique cultural, economic, and political factors:
- Cultural factors significantly influence AI adoption, particularly public acceptance of surveillance and technology. For example, Japan’s cultural trust in technology contrasts with Sweden’s cautious approach to privacy and surveillance.
- Political factors, such as government policies, play a crucial role. China’s centralized and authoritative approach facilitated rapid and widespread AI deployment, in stark contrast to the decentralized policies in the United States, leading to varied adoption levels across states.
- Economic strength enables higher levels of technological innovation and AI adoption. Countries like Germany and the United States have leveraged substantial resources to integrate advanced AI solutions in healthcare—a feat less feasible in economically constrained nations like Ecuador and South Africa.
This analysis emphasizes the necessity of tailoring AI technologies to local contexts, reflecting each country’s cultural norms, economic capabilities, and political frameworks. Such an approach can guide future global health strategies and AI integrations, ensuring they are culturally sensitive and effectively aligned with national public health policies.
In contrast to countries where AI adoption was minimal or hindered by infrastructural limitations, such as Sweden and Ecuador, China’s approach was comprehensive and top-down. Its response was marked by rapid AI integration and a proactive government stance, facilitating extensive deployment across public health systems. Unlike the decentralized approaches observed in the United States and Italy, China’s centralized health policy enabled quick, uniform AI deployment nationwide. This centralization, combined with its advanced technology sector, fostered rapid scaling and innovation, setting it apart from strategies employed in other regions. Wang et al. (2020c) attribute China’s success in combating COVID-19 to adaptable governance, a culture of moral compliance, trusted collaboration between government and people, and an advanced technical framework encompassing AI, blockchain, cloud computing, big data, and 5G.
This comparative analysis not only highlights the unique challenges and successes each country faced but also provides valuable lessons for managing future global health crises through the effective use of AI.
Immediate practical challenges posed by AI applications during the COVID-19 pandemic
The accelerated integration of AI into society during the COVID-19 pandemic necessitates a critical evaluation of its broader implications. Despite effectively addressing numerous pandemic-related challenges, AI deployment has introduced practical issues requiring careful consideration. This section examines the immediate challenges impacting individuals and society arising from AI use during the pandemic.
Practical challenges impacting individuals
Human mental health during the pandemic
The pandemic significantly disrupted social life, especially in countries like China that experienced severe outbreaks. Lockdown measures necessitated rapid development of digital platforms for remote work, education, and administration—such as “cloud office,” “cloud education,” and “cloud management” solutions—to enforce social distancing guidelines.
Quarantine conditions challenge the inherent social nature of humans, as discussed by Aristotle and Marx (Fetscher 1973; Runciman 2000), by limiting physical interaction and replacing it with digital communication that lacks emotional engagement. Studies have reported a surge in mental health issues during the pandemic, including increased anxiety and depression (Dong et al. 2020; Wang et al. 2020d), indicating an urgent need for comprehensive mental health services (Moreno et al. 2020; Vindegaard and Benros 2020; Xiong et al. 2020), particularly for vulnerable groups like children and adolescents (Deng et al. 2023; Kauhanen et al. 2023).
To address this escalating mental health crisis, numerous AI-powered platforms have been developed globally. The efficacy of AI chatbots in providing psychological support is well-documented. Chatbots utilizing Cognitive Behavioral Therapy (CBT) techniques, such as Woebot and Wysa, have effectively managed anxiety and depression, offering significant emotional support (Beatty et al. 2022; Kretzschmar et al. 2019; Meadows et al. 2020). Research indicates these chatbots are generally well-received, enhance engagement, and potentially improve mental health outcomes (Boucher et al. 2021). For example, a randomized controlled trial with Woebot showed a significant reduction in depression symptoms within 2 weeks, measured by the PHQ-9 (Fitzpatrick et al. 2017). An 8-week pilot study found that increased interactions with the chatbot Tess correlated with decreased anxiety symptoms (Klos et al. 2021).
In China, the CBT-based chatbot XiaoE demonstrated significant short-term and long-term effectiveness and a unique ability to foster relationships with users, enhancing engagement during therapy (He et al. 2022). Similarly, trials with PsyChatbot, a novel Chinese psychological chatbot system, confirmed its effectiveness (Chen et al. 2024). The scalability of these AI solutions extends access to mental health support, particularly in areas lacking professional resources. While these platforms illustrate AI’s potential to provide positive support, they highlight the need for a nuanced approach to technology implementation amid pandemic-induced challenges. Balancing technological opportunities with the human need for direct interaction and emotional support is essential.
Despite potential benefits, the use of AI-powered mental health chatbots is controversial. Critics argue that such chatbots may lack the empathy and nuanced understanding that human therapists provide, potentially leading to inadequate support for users in crisis (Berry et al. 2016; Vaidyam et al. 2019). Concerns exist about chatbots’ ability to handle complex mental health issues, especially when users present with co-morbid conditions or suicidal ideation. Privacy and data security issues also arise due to the sensitive nature of mental health information processed by AI systems (Harman et al. 2012; Shenoy and Appel 2017). Users may hesitate to share personal information, fearing data breaches or misuse. Additionally, the absence of human oversight could lead to ethical dilemmas if chatbots provide inappropriate or harmful advice (Denecke et al. 2021; Omarov et al. 2023). These challenges underscore the need for rigorous evaluation, regulation, and integration of AI chatbots as complementary tools rather than replacements for professional mental health services.
Attitudes toward AI-assisted diagnosis and treatment
Public attitudes toward AI in healthcare during the COVID-19 pandemic reflect broader concerns about the technology’s implications, particularly fears of unequal healthcare outcomes due to biased AI algorithms. These challenges influence public trust and acceptance of AI technologies, highlighting the need for transparent and equitable systems.
The health crisis underscored AI’s transformative potential in enhancing case detection and forecasting viral spread, integrating advanced technologies such as AI, IoT, Big Data, and Machine Learning into healthcare delivery (Vaishya et al. 2020). Despite advancements, public reticence toward AI-assisted healthcare persists, rooted in concerns about displacement of healthcare professionals (Coombs 2020; World Economic Forum 2020), dependability of AI-generated decisions (Albahri et al. 2023; Solaiman and Awad 2024), and equitable distribution of medical resources (Ahad et al. 2024; Huang et al. 2021).
In clinical settings, the impersonal nature of AI may clash with the need for empathy and respect, raising ethical questions (Formosa et al. 2022). Data standardization issues compound these dilemmas, as inconsistencies can cast doubt on AI diagnostic accuracy (Jiang and Wang 2021). While online medical consultations have increased convenience, they often fail to capture nuances of face-to-face evaluations, leading to discrepancies in diagnosis and treatment advice, potentially eroding trust in virtual healthcare services. Bashshur et al. (2020) emphasize that telemedicine may lack the thoroughness of in-person consultations, possibly resulting in diagnostic errors. Greenhalgh et al. (2020) note that the absence of physical examination contributes to these discrepancies. A survey by the American Medical Association (2021) reveals many patients have mixed experiences with telemedicine, citing issues related to lack of personal connection and comprehensive care compared to in-person visits. Online consultations inherently limit holistic evaluation of a patient’s health, affecting self-awareness and potentially delaying critical treatment interventions.
Personal information leakage
The pandemic necessitated sharing personal information to support public health initiatives, under government assurances of privacy protection. However, increased data flow elevated the risk of breaches. For instance, sensitive details of over 7000 individuals from Wuhan or Hubei were inadvertently exposed, leading to discrimination and fraud exploiting public distress (Southern Metropolis Daily 2023). Such breaches often occur during online information dissemination and AI-assisted medical processes, with telecommunications fraud being an immediate consequence.
Fraud has resurged profoundly in the digital era, driven by technological advancements and the globalization of criminal practices. During the pandemic, fraudulent schemes exploiting uncertainties surged (Cross 2021; Ma and McKinnon 2022). Early 2020 saw a proliferation of COVID-19-themed phishing attacks, including scamming, brand impersonation, and Business Email Compromise, in which criminals impersonated health officials to spread false information and exploit public fear (Minnaar 2020).
Cybercriminals exploit psychological vulnerabilities, leveraging pandemic-induced anxiety to facilitate cyber fraud (Ma and McKinnon 2022). They masquerade as health authorities, issuing counterfeit directives, and exploit the lack of rigorous data protection and ethical frameworks to deceive individuals using accurate personal data.
The pandemic’s circumstances make the public particularly susceptible to deception, as individuals tend to follow perceived authority without question, increasing the risk of falling victim to fraud. These incidents underscore the delicate balance between employing AI for pandemic response and ensuring individual privacy, highlighting the need for stringent data protection and security measures.
Han (2023) notes that Telecommunications and Network Fraud (TNF) typically occurs without physical interaction, facilitated by digital communication tools, allowing criminals to target victims across borders. The psychological impact of TNF heightens public anxiety and skepticism toward AI-driven technologies, affecting societal trust.
In conclusion, while AI technologies offer significant benefits in managing COVID-19 complexities, their implementation must balance social and psychological impacts on individuals, ethical dimensions of healthcare, and the necessity for rigorous personal data safeguards.
Practical challenges impacting society
Rapid dissemination of social sentiment
Managing social sentiment during health crises like the COVID-19 pandemic is crucial for influencing public behavior and compliance with health guidelines. The rise of AI-enhanced news media has drastically amplified information dissemination, overshadowing traditional print media. Computational propaganda, driven by big data and AI, manipulates social sentiment by collecting, analyzing, and targeting data on digital platforms, often using bots to mimic human interaction and spread information. While this capability has been instrumental in circulating vital information about the virus, it has also facilitated the rapid spread of misinformation.
Early in the pandemic, false reports—for instance, claims about the efficacy of the traditional Chinese medicine “Shuanghuanglian oral liquid” against COVID-19—triggered widespread panic buying and hoarding (Huang and Zhao 2020; Zhang et al. 2022b). The difficulty in discerning accurate information amidst an “infodemic” has profound societal consequences, leading to confusion and hindering pandemic response efforts (Hartley and Vu 2020; Rocha et al. 2021).
On February 2, 2020, the World Health Organization (WHO) highlighted the “infodemic” as a parallel crisis to the pandemic and a significant barrier to effective public health responses (Gallotti et al. 2020; Naeem and Bhatti 2020; World Health Organization 2020). Kouzy et al. (2020) found that among sampled tweets about COVID-19, 24.8% contained misinformation and 17.4% featured unverifiable information, undermining public trust. Similarly, Cinelli et al. (2020) analyzed COVID-19-related information across social media platforms and identified varying levels of misinformation. The sheer volume and difficulty of verifying online information exacerbate this crisis. Misinformation proliferates on platforms like WeChat, Weibo, and TikTok, often outpacing official responses. The anonymity of the internet complicates identifying rumor sources, underscoring the need for proactive media approaches to foster positive discourse and counteract the “infodemic” (Chen and Zhang 2021; Xi 2020).
China employed a centralized approach to managing public opinion during the pandemic, using state-controlled media and digital platforms to disseminate uniform health messages nationwide (Lv et al. 2022; Xu et al. 2020; Zou et al. 2020). Extensive surveillance and data analytics monitored virus spread and enforced quarantine measures. This strategy enabled rapid dissemination of crucial information about hygiene practices and lockdowns, aiding initial containment efforts.
In contrast, the United States adopted a decentralized approach, with state governments and private media playing key roles (Bergquist et al. 2020; Park and Fowler 2021). This diversity encouraged scientific debate and innovation, allowing states to tailor strategies to local needs. However, decentralization can lead to inconsistent messaging and polarization, especially when federal and state directives conflict. During the pandemic, conflicting messages about mask-wearing and social distancing led to confusion and politicization (Barry et al. 2021; Hornik et al. 2021).
The U.S. also made strategic errors early in the pandemic (Bergquist et al. 2020; Carter and May 2020; Schuchat 2020). Emphasizing the use of mechanical ventilators based on preliminary data may have led to overuse and higher mortality in some groups (Dar et al. 2021; Wells et al. 2020). Additionally, some states returned COVID-19-positive patients to elder care facilities, causing outbreaks (Elman et al. 2020; Monahan et al. 2020). A lack of focus on the most vulnerable populations and insufficient consideration of natural immunity complicated public health strategies (Moghadas et al. 2021).
Cultural differences impact the effectiveness of AI-driven public health interventions. In China, high trust in government supports stringent measures and surveillance, emphasizing collective welfare (Gozgor 2022; Wang et al. 2023; Zhao and Hu 2017). In the U.S., lower trust in government and media, rooted in values of individual freedom, can hinder acceptance of such measures. The for-profit nature of the U.S. healthcare system further complicates trust, as interventions may be perceived as prioritizing corporate interests (Horwitz 2005).
This comparative analysis highlights the need for culturally sensitive and adaptable public health policies. China’s centralized approach allows rapid technology deployment but requires careful management of scientific debate to maintain trust. In the U.S., improving messaging consistency and transparency in AI use could enhance compliance and trust. Both countries must support open scientific dialog and adjust policies based on emerging data to ensure effective, ethical, and accepted public health strategies.
Higher cost of technological errors
Since its inception in 1956, AI has evolved into a cornerstone of modern science, playing an integral role in pandemic management. However, as McLuhan (1994, p. 45) suggests, technology extends our capabilities but necessitates new societal balances. The pandemic has exposed the immaturity of certain AI applications and the repercussions of technological failures, such as personal data breaches during critical periods.
In China, reliance on health codes and travel codes for public access has revealed vulnerabilities. System outages and erroneous health status updates caused significant disruptions in areas like Beijing, Xi’an, and Guangzhou, preventing commuters from accessing workplaces and individuals from using transportation (Cheng et al. 2023; Wu et al. 2020; Zhou et al. 2021). These incidents highlight the need for robust maintenance mechanisms and illustrate the societal impact when technology fails (Cong 2021; Jin et al. 2022). Though resolved within hours, these issues sparked debates on social media about the reliability of AI-driven public health systems, temporarily shaking public confidence.
Using the Chinese search term “health code,” Yu et al. (2022) analyzed data from Zhihu, a prominent Chinese Q&A platform, focusing on three types of digital errors: (1) algorithmic bugs from unintended coding consequences; (2) territorial bugs due to discrepancies between health code apps and spatial configurations; and (3) corporeal bugs arising from mismatches between app assumptions and actual user profiles. Their research underscores how these errors affected user experiences with China’s contact tracing systems during the pandemic, enhancing understanding of their impact on algorithmic governance and platform urbanism.
AI replacement of human workers
During the peak of the COVID-19 pandemic, AI technology emerged as a critical tool to alleviate strain on human resources in prevention and control efforts. Deploying drones for large-scale disinfection, using facial recognition coupled with infrared temperature measurement for efficient screening, and introducing autonomous nucleic acid testing apparatus exemplify AI’s role in replacing human labor for basic tasks (Zhang et al. 2020b). These applications reduced viral transmission risks and relieved healthcare staff workloads.
However, the pervasive use of AI across various sectors has triggered concerns about a potential “tech backlash” (Sacasas 2018) and the emergence of “digital scars” (Gallego and Kurer 2022; Marabelli et al. 2021). AI’s advancement into the job market, particularly in replacing low-skill jobs, has become a significant issue (Abdullah and Fakieh 2020; Huang and Rust 2018; Jaiswal et al. 2021; Wang and Siau 2019). The economic repercussions of the pandemic, exacerbated by stringent containment measures like lockdowns, have increased unemployment rates and adversely impacted livelihoods, placing employment concerns alongside health anxieties.
AI’s integration across sectors raises concerns about job security, especially for roles involving routine and repetitive tasks. While AI offers increased efficiency and innovation, it poses a significant risk of displacing workers in basic positions. The McKinsey Global Institute claims that “the pandemic accelerated existing trends in remote work, e-commerce, and automation, with up to 25 percent more workers than previously estimated potentially needing to switch occupations”, suggesting substantial labor market transformation (Lund et al. 2021). This transition may challenge workers in basic roles who may struggle with reskilling or moving into new job categories.
This development is particularly acute in China, where a significant portion of the population is employed in jobs susceptible to automation. While technological progress can free humans from monotonous labor, job displacement from AI adoption has constrained the livelihoods of those affected. Therefore, although AI has been indispensable in pandemic response efforts, its impact on employment necessitates a careful approach from policymakers to balance technological innovation with social welfare and economic stability.
The impact of AI on jobs during the pandemic must also be considered within the broader context of economic and health crises. The pandemic resulted in severe staff shortages, notably in healthcare and retail, driven by health risks and increased demand (Bourgeault et al. 2020; Costa Dias et al. 2020). In some cases, AI technologies were introduced not to replace workers but to support and bridge these gaps. Additionally, the influence of privatization and private equity in healthcare has led to staff cuts and restructuring, impacting employment independently of AI (Alayed et al. 2024; Alonso et al. 2016; Goodair and Reeves 2024). This suggests AI’s role during the pandemic was intertwined with broader economic and social factors affecting jobs.
In conclusion, while AI has the potential to significantly alter the labor market, addressing transitional challenges and supporting displaced workers is crucial. A balanced perspective must recognize both AI’s capacity to displace jobs and its utility in filling labor shortages during crises. Ongoing research and policy dialog are essential to manage these challenges, ensuring AI integration promotes economic stability and workforce development. Policymakers and industry leaders must collaborate on strategies that facilitate reskilling and equitable job creation to mitigate the negative impacts of job displacement.
Ethical considerations in AI deployment during the COVID-19 response
The COVID-19 pandemic, characterized by its abrupt onset and evolving viral strains, has seen decreased infection and fatality rates due to global vaccination efforts and natural epidemiological trends (Kitahara et al. 2023; Liu et al. 2022). Despite this progress, the virus persists, and AI has been instrumental in the multifaceted pandemic response. However, AI remains an evolving technology with challenges in standardization and ethical application. This section explores broader ethical implications—such as privacy, safety, autonomy, and utilitarian concerns—that require higher-level discourse to inform the framework within which these challenges are addressed.
Privacy concerns
In the digital age, data is paramount, but the ease of data collection and manipulation raises significant privacy concerns. The handling of personal information has become critical in Internet governance, with data breaches and privacy violations at the forefront. Privacy in AI is crucial for safeguarding personal data, especially in health monitoring and contact tracing applications used during the pandemic. AI systems handling sensitive information necessitate robust protections to prevent misuse or accidental exposure. For instance, systems tracking individuals’ movements or health statuses must ensure secure data collection strictly for public health objectives, minimizing potential misuse or unauthorized access (Fahey and Hino 2020; Ribeiro-Navarrete et al. 2021; Romero and Young 2022).
AI’s role in the pandemic heavily relies on data analysis involving complex “black box” algorithms that often lack transparency (Durán and Jongsma 2021; Rai 2020; von Eschenbach 2021). Transparency involves making AI processes and decisions clear to all stakeholders, including the public. The opacity of many AI systems poses challenges for auditing and understanding decision-making processes. In healthcare, transparency is essential for maintaining trust, accountability, adherence to ethical standards, and ensuring biases or errors are not perpetuated (Durán and Jongsma 2021; Wischmeyer 2020).
Technical constraints can hinder full disclosure of AI processes, leading to uncertainties in personal data protection. During the pandemic, multiple stakeholders—including AI developers, data repositories, and social media platforms—engaged in data processing. Human intervention introduces additional risks for privacy breaches, underscoring the need for high ethical standards among all personnel with data access.
In China, the widespread use of health code apps, which tracked individuals’ health status and movement history, raised serious privacy concerns (Huang et al. 2022). Implemented via platforms like Alipay and WeChat, these apps collected vast amounts of personal data to assess COVID-19 exposure risks. While effective in controlling the pandemic, incidents of data breaches and unauthorized data sharing were reported, highlighting the tension between public health objectives and individual privacy rights (Cong 2021; Wu et al. 2020).
Despite the public’s willingness to share personal information for pandemic mitigation, legal frameworks for data protection remain imperfect. Industry self-regulatory practices are often insufficient, and public awareness of data security is lacking (Chen 2020). Safeguarding privacy requires vigilant oversight and collaborative efforts among stakeholders to address gaps in the legal system and foster a culture of data security consciousness.
Privacy and transparency are interconnected; a lack of transparency can heighten privacy concerns. If AI system operations are unclear, stakeholders may doubt the security and use of their personal data. Enhancing transparency can mitigate privacy issues by clarifying data management practices. Effective measures involve adopting a “privacy by design” approach in AI development, including anonymization techniques to protect identities (Diver and Schafer 2017; Yanisky-Ravid and Hallisey 2019). Increasing AI interpretability through explainable AI (XAI) can make outputs more comprehensible to users, ensuring compliance with data protection laws like GDPR (General Data Protection Regulation) and HIPAA (Health Insurance Portability and Accountability Act) (Albahri et al. 2023).
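As one concrete illustration of the “privacy by design” measures mentioned above, the sketch below pseudonymizes personal identifiers with a salted keyed hash before records are shared for analysis. This is a minimal, assumption-laden example rather than a description of any deployed health-code system; real deployments would combine it with key management, access control, and data minimization.

```python
# Minimal pseudonymization sketch: replace a raw identifier with a keyed hash
# so downstream analytics never see raw identities. Illustrative only; the
# identifier below is a fabricated example, not real data.
import hashlib
import hmac
import os

SECRET_SALT = os.urandom(32)  # held by the data controller, never shared


def pseudonymize(identifier: str) -> str:
    """Return a stable, non-reversible token for a personal identifier."""
    return hmac.new(SECRET_SALT, identifier.encode("utf-8"), hashlib.sha256).hexdigest()


record = {"id": "110101199003071234", "district": "Haidian", "risk": "low"}
shared_record = {**record, "id": pseudonymize(record["id"])}
print(shared_record)  # identifier replaced by an opaque token
```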
Safety concerns
AI deployment during the COVID-19 pandemic demonstrated its ability to alleviate public health burdens but also highlighted safety concerns. Healthcare AI systems delivering personalized recommendations require advanced data integration and interpretation methods surpassing current standards (Kulikowski and Maojo 2021). The rapid mutation of the novel coronavirus complicates AI reliability, necessitating continual system updates that integrate scientific advancements, expert knowledge, and policy support.
These challenges had direct public health implications. AI models initially trained on data from earlier strains risked obsolescence as the virus mutated, potentially compromising diagnostics, prognostics, and patient management accuracy. For example, AI diagnostic tools effective against the original strain might fail to detect new symptoms or patterns from emerging variants (Al Meslamani et al. 2024). Inaccurate AI predictions can result in inappropriate treatments, care delays, or harmful outcomes (Malik et al. 2021).
In the rush to deploy AI diagnostic tools in China, some systems faced criticism for inaccuracies and biases. AI models trained predominantly on data from specific regions may not have performed well when applied to populations in other provinces with varied demographics (Dong et al. 2021; Ye et al. 2020). Reports indicated that AI-assisted CT scan analysis sometimes resulted in false positives or negatives, causing unnecessary anxiety or missed cases (Mlynska et al. 2023). Moreover, the lack of standardized protocols for AI tool validation meant some products entered the market without rigorous testing, raising safety concerns about their reliability and effectiveness in real-world settings (Zhang et al. 2022a).
AI’s reliance on data for decision-making introduces the possibility of bias, especially if algorithm designers inadvertently embed prejudices into the code. This issue was evident in medical image analysis during the pandemic, where AI systems trained on data from specific institutions developed biases that compromised performance on diverse datasets (Sha and Jing 2021). Such biases can propagate inequality in patient diagnosis and healthcare resource distribution, sparking debate over their nature and impact (Delgado et al. 2022; El Naqa et al. 2021; Estiri et al. 2022; Röösli et al. 2021; Williams et al. 2020).
These concerns reflect public apprehension about AI safety and the potential consequences of technological errors. As discussed in the section “Immediate practical challenges posed by AI applications during the COVID-19 pandemic”, AI’s impact on individuals and society is profound, and mistakes can be costly. To mitigate these risks, continuous reflection and proactive management of AI safety are imperative.
Autonomy and agency concerns
The discourse on autonomy and agency in AI has been extensively explored (Formosa 2021; Laitinen and Sahlgren 2021; Prunkl 2022), raising critical questions about the autonomy granted to AI systems and their potential status as moral agents (Brożek and Janik 2019; Chesterman 2020; Floridi 2014). The ethical implications of AI as autonomous decision-makers are controversial. While some argue that AI can act without human biases, they acknowledge that AI lacks the capacity to comprehend the moral weight of its actions (Bonnefon et al. 2023; Pauketat and Anthis 2022).
In healthcare, particularly during the COVID-19 pandemic, AI’s role in decision-making has been both beneficial and scrutinized due to the need for nuanced judgments in resource allocation (Neves et al. 2020). Healthcare professionals benefit from AI’s analytical capabilities amid high workloads and ethical dilemmas, but reliance on AI raises concerns about transparency, accountability, and potential algorithmic biases (Neves et al. 2020). Integrating AI into healthcare requires a deliberate blend of medical expertise, ethical principles, and human judgment, with AI augmenting rather than replacing human decision-makers.
While AI autonomy in healthcare has helped manage pandemic-related demands and alleviate manual labor pressures, it raises concerns about the reliability of AI-driven diagnostics and potential erosion of trust in human providers (Bonnefon et al. 2023). In social media, AI’s influence over user autonomy has been magnified during the pandemic, with algorithms shaping information landscapes and potentially limiting the diversity of user experiences (Sahebi and Formosa 2022).
In China, the collectivist culture and emphasis on societal harmony often supersede individual autonomy (Hofstede Insights 2024). During the pandemic, the public generally accepted AI technologies for surveillance and tracking as necessary for the greater good. However, the mandatory use of health code apps highlights the ethical dilemma between public health priorities and individual rights, as collective welfare tends to outweigh personal autonomy (Yu 2022).
AI-driven management of public opinions can significantly influence perceptions and behaviors, especially during health crises when accurate information is crucial. Without stringent oversight, such systems risk manipulating opinions, suppressing dissent, and reducing viewpoint diversity, undermining human autonomy. The EU AI Act establishes a regulatory framework demanding high standards of transparency, accountability, and fairness in AI applications, emphasizing transparency in operations and decision-making processes during health emergencies. The OECD AI Principles advocate for ethical AI that upholds human rights and democratic values, promoting inclusive growth and sustainable development.
In democratic societies, where freedom of expression and access to diverse information are essential, the role of AI in public opinion management requires careful consideration. AI systems should support, not replace, human decision-making by providing verified information and diverse viewpoints to enable informed discourse. Ethically managing social sentiments with AI during health crises is complex but vital. By adhering to strict ethical standards and implementing measures that ensure transparency, inclusivity, and respect for individual autonomy, AI can enhance public health management. This approach ensures AI supports, rather than undermines, democratic values and personal freedoms crucial to society.
Utilitarianism and freedom concerns
Utilitarianism, as framed by the “Greatest Happiness Principle” (Long 1990; Rosen 1998), posits that actions should aim to maximize general utility, asserting that “each individual counts for exactly as much as another if each experiences an equal quantity of utility of the same kind or quality” (Riley 2009). Jeremy Bentham and John Stuart Mill suggest that the principle of utility operates under the notion that each person counts for one and no one for more than one (Mill 1969).
The COVID-19 pandemic underscores the necessity of global cooperation, as advocated by China’s concept of a shared human destiny (Zeng 2021), which is crucial in managing global health crises (Yuan 2020). Regional collaboration in East Asia—through fiscal policy, health resource sharing, and technological applications for testing—exemplifies an effective response to concurrent health and economic shocks (Kimura et al. 2020). Post-pandemic, cooperative strategies remain vital for enhancing regional preparedness for public health emergencies (Zhu and Liu 2021).
China’s application of AI technologies during the pandemic can be viewed through a utilitarian lens, aiming to achieve the greatest good for the greatest number (Herron and Manuel 2022). Measures like widespread surveillance, mandatory health codes, and AI-driven quarantine enforcement were justified as necessary to protect public health. However, these actions often came at the expense of individual freedoms and privacy (Ishmaev et al. 2021). The tension between collective welfare and personal rights was evident when individuals faced restrictions based on algorithmic assessments without clear avenues for appeal (Liang 2020). This raises ethical questions about the proportionality of such measures and the need for safeguards to protect individual liberties even in emergencies.
Cultural and policy differences shape public reception of pandemic measures. Western liberal democracies often exhibit lower compliance than collectivist societies such as China (Dohle et al. 2020; Guglielmi et al. 2020; Wang et al. 2020a). For instance, mask-wearing sparked debate and protests in Western countries during peak pandemic periods (Betsch et al. 2020; Cherry et al. 2021; MacIntyre et al. 2021). These differences highlight the diverse cultural-psychological structures shaping national responses and reveal tensions between individual and collective interests.
Western philosophy has traditionally located human freedom in individual subjectivity, a stance that can conflict with pandemic restrictions (Xie 2021). In contrast, Chinese society, shaped by Confucianism, aligns more readily with social norms and collective values and demonstrated heightened collective consciousness in pandemic management. The contrast between Western individualism and Chinese collectivism reflects these distinct cultural-psychological structures and exposes the limitations of policies shaped by either influence alone.
In summary, while Western societies prioritize individual freedom—sometimes overlooking broader societal or ecological interests—China’s collectivist stance must balance collective action with respect for individual autonomy and minority rights (Blokland 2019; Franck 1997; Ho and Chiu 1994; Hui and Villareal 1989; Wang and Liu 2010). Deploying AI in pandemic response demands understanding these cultural nuances and appreciating human values to foster better international cooperation and anticipate varying national attitudes toward health crises.
Recommendations for addressing AI challenges in pandemic management
This section presents strategic recommendations to enhance the deployment and acceptance of AI technologies in healthcare, particularly for pandemic response. These recommendations are divided into general suggestions applicable globally and specific ones targeted at China. The goal is to leverage AI effectively while addressing ethical, legal, and societal challenges to optimize pandemic response and routine healthcare delivery.
Public acceptance
The rapid development of AI during the pandemic has led to inevitable shortcomings. As Yang and Mo (2022) note, societal risks evolve with technological advancements. Public trust in AI has been challenged, with concerns ranging from the reliability of AI-assisted diagnoses to the professionalism of AI-driven remote consultation platforms (see Table 3). Skepticism arises from inherent biases and a lack of familiarity with the technology’s capabilities, particularly regarding safety and reliability. However, history demonstrates that patience and tolerance are key to integrating new technologies into society, and AI is no exception. The application of AI during COVID-19 represents a synergistic effort between human expertise and technological innovation, showcasing adaptability in facing novel crises.
For example, a robot equipped with diagnostic tools was used to treat the first hospitalized COVID-19 patient in the U.S. (The Guardian 2020). An AI system developed by Alibaba DAMO Academy and Alibaba Cloud, trained on over 5000 COVID-19 cases, achieved 96% diagnostic accuracy within seconds, alleviating pressure on healthcare systems (Calandra and Favareto 2020; Mahmud and Kaiser 2021; Pham et al. 2020). These applications demonstrate AI’s potential in aiding COVID-19 diagnosis and treatment. Additionally, AI has contributed to pandemic management through intelligent temperature monitors, disinfection robots, and autonomous logistics across various sectors’ response efforts (see Table 3) (Ruan et al. 2021; Shen et al. 2020; Wang and Wang 2019).
Improving public trust and transparency involves clear communication about AI’s role, capabilities, and limitations in healthcare. Public education campaigns and open forums for feedback can demystify AI technologies. Transparent reporting of AI outcomes and involving the public in ethical discussions can help adapt AI applications to better meet societal expectations and needs.
Progress in AI cannot be separated from real-world application and evaluation. Practical deployment allows us to discern the strengths, weaknesses, and future trajectories of these technologies. Recognizing AI’s contributions during the pandemic, such as conserving resources and enhancing response measures, is vital.
Public understanding of science and technology must evolve, acknowledging AI’s interdisciplinary nature and its relationship with other fields. The advent of new technologies should inspire optimism for human progress supported by technological advancement, rather than induce panic. During the COVID-19 pandemic, AI applications have proven instrumental in combating the virus. Nevertheless, ethical considerations regarding technology integration persist, necessitating ongoing reflection and response to emerging risks and challenges.
Social norms
While AI deployment during the pandemic has been instrumental in public health efforts, the absence of a comprehensive regulatory framework has led to significant challenges. Personal privacy breaches are particularly concerning; despite authorities’ measures, incidents of privacy leaks and associated cybercrimes persist due to inadequate legal protections (Chen 2020). As the public becomes more privacy-conscious, the digital age paradoxically lowers barriers to committing cybercrimes, revealing gaps in current legislation (Ajayi 2016; Dashora 2011; McGuire and Dowling 2013; Saini et al. 2012; Zhang et al. 2012). The continued occurrence of privacy breaches during the pandemic underscores the need for enhanced legal frameworks to safeguard citizen privacy effectively (Buil-Gil et al. 2021; Kemp et al. 2021; Khweiled et al. 2021; Naidoo 2020).
Ethical frameworks guide the responsible development and deployment of AI, ensuring these technologies support equitable health outcomes and maintain patient trust. Table 4 illustrates diverse approaches to AI governance, from stringent data privacy laws in China to innovation-centric strategies in the U.S., and highlights efforts toward international cooperation and standardization through bodies like the UN, G7, and bilateral agreements. Implementing these frameworks involves navigating diverse cultural values and legal systems, which can differ markedly across regions. Initiatives like the EU’s AI Act or the WHO’s guidelines on AI ethics can serve as templates. The primary challenge is ensuring these frameworks are comprehensive and enforceable across jurisdictions.
In response to concerns about data privacy and security in AI applications for healthcare and pandemic management, China has strengthened its data protection framework. The Personal Information Protection Law (PIPL), enacted on August 20, 2021, and effective from November 1, 2021, is a comprehensive privacy law akin to the EU’s GDPR but tailored to China’s context. PIPL imposes stringent requirements on the collection, storage, and processing of personal information, ensuring AI applications comply with high data governance standards, thereby addressing critical challenges in deploying AI technologies during the COVID-19 pandemic.
The evolution of AI is closely linked to efforts by key internet companies. In China, Baidu, Alibaba, and Tencent have significantly advanced AI technology and collaborated with the government in pandemic mitigation. However, the focus on data collection, storage, and processing introduces vulnerabilities where personal information may be at risk of theft or misuse (see Table 3). Therefore, protecting privacy must be prioritized across all stages of data handling and by all parties involved, including government and enterprise personnel. This highlights the necessity for professional ethics in AI development and operation.
The pandemic has disrupted societal norms and governmental operations, with AI playing a pivotal role in adapting to these changes. However, as Wang and Wang (2019) assert, technology in social governance carries significant political weight. Pandemic measures have required citizens to surrender certain rights, such as privacy and freedom, for public health. Governments have leveraged AI for improved pandemic control and social governance, raising issues regarding the equitable trade-off between citizen rights and the benefits gained.
Dependence on technology for social management can lead to inertia and complacency. Relying on AI to regulate citizen rights without a comprehensive policy framework can leave those rights inadequately protected. As AI becomes more entrenched in social governance, the algorithms guiding decisions begin to function as de facto social norms, raising critical questions about their alignment with legal and ethical standards. The intersection of technology and policy must therefore be approached with caution, mitigating social risks and ensuring that AI is embedded in governance systems in an accountable way. From legal, policy, and ethical standpoints, governments must strive to “improve the socialization, legalization, intelligence, and professionalism of social governance” (Xi 2017), especially amid the rapid advancement of AI. Balancing technological growth with the protection of individual rights and societal norms is essential for maintaining trust and upholding democratic values in the digital age.
Technological advancements
The persistent and evolving nature of the COVID-19 pandemic, which continued beyond 2023, defied initial hopes for a swift resolution similar to the SARS outbreak in 2003. The rapid spread and mutation of the virus, coupled with varied international prevention policies, highlight the complexity of global health governance (Xie 2020). AI has emerged as a crucial tool in this ongoing battle but must evolve alongside the changing dynamics of the virus to meet humanity’s shifting needs.
During the pandemic, AI was rapidly deployed across various domains, from intelligent temperature measurement to online health consultation services. This swift integration, driven by urgency, often bypassed the thorough vetting typically associated with new technologies (see Table 3). As technological weaknesses—such as failures in health code platforms—became apparent, the need for continuous AI enhancement and maintenance was evident. AI’s role extends beyond immediate applications; it requires governance through appropriate norms and policies to ensure stability and reliability. Ongoing innovation in science and technology is essential to refine AI’s contribution to pandemic management, emphasizing a dynamic approach to technology development.
Despite rapidly deploying AI-driven contact tracing apps, China faced significant hurdles similar to those in other countries. Initially, the contact tracing landscape was fragmented, with various regions developing their own systems that lacked interoperability; this fragmentation caused public confusion and inefficiencies in data collection and use, delaying effective pandemic responses. Over time, efforts consolidated data on centralized platforms (e.g., WeChat or Alipay) to improve efficacy. A unified national strategy for AI in healthcare can help streamline such initiatives across regions and administrative levels, and coordination efforts, such as national health strategies integrating AI diagnostics in remote areas, have shown that centralized planning with local adaptations can be effective.
Human history is marked by advancements in productive forces, with science and technology driving societal transformation. Each scientific and technological revolution reshapes society to varying degrees, and AI’s emergence is a contemporary example (Beniger 2009; Hilbert 2020). The prudent application of technology is crucial for maintaining social stability and public safety amid a pandemic. Practical deployment must be accompanied by vigilant adjustments and maintenance to address society’s pressing needs effectively.
Global collaboration
The COVID-19 pandemic reaffirms the concept of humanity’s shared destiny, emphasizing the necessity of solidarity, unity, and cooperative action as our most effective tools (China Daily 2020a, 2020b). Global collaboration is crucial for pooling resources, knowledge, and data essential for developing AI solutions effective across diverse populations and healthcare systems. China’s commitment to international collaboration is evident through its efforts to assist other nations by constructing modern hospitals and sharing critical information about the virus, significantly contributing to global containment and prevention strategies.
China’s strategic approach to managing the pandemic has significantly influenced global health policy and privacy standards. Countries such as South Korea and Singapore adopted contact tracing and surveillance strategies similar to China’s, implementing extensive tracking and data analysis techniques initially pioneered by China and demonstrating the effectiveness of such strategies in controlling the virus’s spread. Internationally, China’s proactive measures helped shape guidelines issued by the World Health Organization (WHO) and the United Nations (UN), particularly concerning the rapid deployment of public health infrastructure and digital surveillance tools.
Initially, COVID-19 presented a high fatality rate, but over time infections increasingly manifested as milder or asymptomatic cases, reducing both case fatality and disease severity. This shift prompted the reevaluation and relaxation of stringent control measures worldwide. For instance, Sweden lifted most pandemic restrictions and ceased extensive testing on February 9, 2022, becoming the first nation to officially declare the pandemic over (Reuters 2022). Approximately a year later, on January 8, 2023, China downgraded COVID-19 to a Class B infectious disease, easing quarantine protocols and adjusting medical policies accordingly. On May 5, 2023, the WHO recognized COVID-19 as an ongoing health issue that no longer constitutes a public health emergency of international concern (PHEIC) (World Health Organization 2023).
In our interconnected global landscape, evolving pandemic policies have fostered deeper interactions between China and the international community. China’s approach to pandemic management has been people-centered and data-driven. Despite advancements in addressing public crises, China acknowledges its AI capabilities are not yet on par with more technologically developed regions like Europe and the United States. Throughout the pandemic, China actively engaged with global experts to address these gaps while supporting less advanced countries by constructing infrastructure and sharing medical resources, embodying the principle of a “community of shared destiny” (China Daily 2020a, 2020b; Zeng 2021).
Achieving effective collaboration can be hindered by geopolitical tensions, intellectual property concerns, and varying regulatory standards. Strategies to mitigate these challenges include establishing international AI health forums, promoting shared ethical standards, and creating bilateral agreements that respect each entity’s interests and regulatory frameworks. In conclusion, while AI technology has played a pivotal role in the pandemic response, disparities in global technological development underscore the importance of international cooperation. History teaches that adversarial competition hinders societal progress. In global public security crises, ethical imperatives of symbiosis and coexistence must guide international relations, becoming foundational norms for global governance—heralding a new era of ethical civilization (Xie 2020).
Overall, this section offers a roadmap for enhancing AI integration into healthcare settings globally and within China. By fostering global collaboration, establishing robust ethical frameworks, strengthening data protection, and improving public trust and transparency, AI can be deployed more effectively and ethically. Successful implementation of these recommendations requires continuous evaluation and adaptation to ensure AI technologies meet the evolving challenges of healthcare delivery and pandemic response, balancing innovation with ethical considerations.
Discussion: future research directions
The integration of AI into China’s response to the COVID-19 pandemic has showcased both remarkable advancements and significant challenges. This paper has detailed various AI applications—from medical diagnostics to social sentiment management—and highlighted the practical and ethical issues arising from their use. Understanding these complexities is essential for informing future AI deployment in public health emergencies.
Future research should focus on developing frameworks that balance AI’s benefits with the protection of individual rights. Investigations into long-term impacts of AI implementations during the pandemic are necessary to inform strategies that manage societal changes catalyzed by technology. Additionally, exploring international standards for AI use in public health crises is critical. Research should aim to establish global protocols for data sharing, privacy, and cross-border AI interventions, ensuring ethical deployment across diverse cultural and legal landscapes.
Moreover, interdisciplinary studies combining technology, ethics, law, and social science can provide holistic insights into AI’s role in society. Such research can guide the development of AI systems that are not only technologically advanced but also socially responsible and ethically sound.
Conclusions
This paper has critically analyzed the role of AI in managing the COVID-19 pandemic in China, highlighting both its transformative potential and the challenges it presents. AI technologies have been instrumental in healthcare delivery, infection control, and social sentiment analysis. However, their deployment has raised significant issues regarding mental health, public trust in AI-assisted healthcare, and data privacy at the individual level. Societally, challenges include the rapid spread of misinformation, the consequences of technological failures, and potential job displacement due to automation.
Addressing these challenges requires a multifaceted approach. Enhancing public trust through transparency and education is essential. Establishing robust legal and ethical frameworks can safeguard individual rights and societal norms. Technological advancements must focus on improving AI reliability and mitigating risks. Global collaboration is vital to develop unified standards and share best practices.
In conclusion, while AI has played a pivotal role in China’s pandemic response, it is imperative to navigate the complexities it introduces thoughtfully. Balancing technological innovation with ethical considerations ensures that AI contributes positively to public health without compromising individual liberties or societal values. By addressing these challenges proactively, we can harness AI’s potential to improve outcomes in future health crises, fostering a synergy between technology and humanity for the common good.
Data availability
Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.
References
Abdel Wahed WY, Hefzy EM, Ahmed MI, Hamed NS (2020) Assessment of knowledge, attitudes, and perception of health care workers regarding COVID-19, a cross-sectional study from Egypt. J Community Health 45:1242–1251. https://doi.org/10.1007/s10900-020-00882-0
Abdullah R, Fakieh B (2020) Health care employees’ perceptions of the use of artificial intelligence applications: survey study. J Med Internet Res 22(5):e17620. https://doi.org/10.2196/17620
Ahad A, Jiangbina Z, Tahir M, Shayea I, Sheikh MA, Rasheed F (2024) 6G and intelligent healthcare: taxonomy, technologies, open issues and future research directions. Internet Things 25:101068. https://doi.org/10.1016/j.iot.2024.101068
Ahuja AS, Reddy VP, Marques O (2020) Artificial intelligence and COVID-19: a multidisciplinary approach. Integr Med Res 9(3):100434. https://doi.org/10.1016/j.imr.2020.100434
Ajayi EFG (2016) Challenges to enforcement of cyber-crimes laws and policy. J Internet Inf Syst 6(1):1–12. https://doi.org/10.5897/JIIS2015.0089
Al Meslamani AZ, Sobrino I, de la Fuente J (2024) Machine learning in infectious diseases: potential applications and limitations. Ann Med 56(1):2362869. https://doi.org/10.1080/07853890.2024.2362869
Alayed TM, Alrumeh AS, Alkanhal IA, Alhuthil RT (2024) Impact of privatization on healthcare system: a systematic review. Saudi J Med Med Sci 12(2):125–133. https://doi.org/10.4103/sjmms.sjmms_510_23
Alazab M, Awajan A, Mesleh A, Abraham A, Jatana V, Alhyari S (2020) COVID-19 prediction and detection using deep learning. Int J Comput Inf Syst Ind Manag Appl 12:168–181
Albahri AS, Duhaim AM, Fadhel MA, Alnoor A, Baqer NS, Alzubaidi L et al. (2023) A systematic review of trustworthy and explainable artificial intelligence in healthcare: assessment of quality, bias risk, and data fusion. Inf Fusion 96:156–191. https://doi.org/10.1016/j.inffus.2023.03.008
Alonso JM, Clifton J, Díaz-Fuentes D (2016) Public private partnerships for hospitals: does privatization affect employment? J Strateg Contract Negot 2(4):313–325. https://doi.org/10.1177/2055563617710619
Aman AHM, Hassan WH, Sameen S, Attarbashi ZS, Alizadeh M, Latiff LA (2021) IoMT amid COVID-19 pandemic: application, architecture, technology, and security. J Netw Comput Appl 174:102886. https://doi.org/10.1016/j.jnca.2020.102886
Ameen N, Tarhini A, Reppel A, Anand A (2021) Customer experiences in the age of artificial intelligence. Comput Hum Behav 114:106548. https://doi.org/10.1016/j.chb.2020.106548
American Medical Association (2021) 2021 Telehealth Survey Report. Available via AMA website. https://www.ama-assn.org/system/files/telehealth-survey-report.pdf
Anshari M, Hamdan M, Ahmad N, Ali E, Haidi H (2023) COVID-19, artificial intelligence, ethical challenges and policy implications. AI Soc 38:707–720. https://doi.org/10.1007/s00146-022-01471-6
Arora G, Joshi J, Mandal RS, Shrivastava N, Virmani R, Sethi T (2021) Artificial intelligence in surveillance, diagnosis, drug discovery and vaccine development against COVID-19. Pathogens 10(8):1048. https://doi.org/10.3390/pathogens10081048
Bai X, Wang H, Ma L, Xu Y, Gan J, Fan Z et al. (2021) Advancing COVID-19 diagnosis with privacy-preserving collaboration in artificial intelligence. Nat Mach Intell 3:1081–1089. https://doi.org/10.1038/s42256-021-00421-z
Barry CL, Anderson KE, Han H, Presskreischer R, McGinty EE (2021) Change over time in public support for social distancing, mask wearing, and contact tracing to combat the COVID-19 pandemic among US adults, April to November 2020. Am J Public Health 111(5):937–948. https://doi.org/10.2105/AJPH.2020.306148
Bashshur R, Doarn CR, Frenk JM, Kvedar JC, Woolliscroft JO (2020) Telemedicine and the COVID-19 pandemic, lessons for the future. Telemed e-Health 26(5):571–573. https://doi.org/10.1089/tmj.2020.29040.rb
Beatty C, Malik T, Meheli S, Sinha C (2022) Evaluating the therapeutic alliance with a free-text CBT conversational agent (Wysa): a mixed-methods study. Front Digit Health 4:847991. https://doi.org/10.3389/fdgth.2022.847991
Beniger J (2009) The control revolution: technological and economic origins of the information society. Harvard University Press, Cambridge, MA
Bourgeault IL, Maier CB, Dieleman M, Ball J, MacKenzie A, Nancarrow S et al. (2020) The COVID-19 pandemic presents an opportunity to develop more sustainable health workforces. Hum Resour Health 18:83. https://doi.org/10.1186/s12960-020-00529-0
Brożek B, Janik B (2019) Can artificial intelligences be moral agents? New Ideas Psychol 54:101–106. https://doi.org/10.1016/j.newideapsych.2018.12.002
Buil-Gil D, Miró-Llinares F, Moneva A, Kemp S, Díaz-Castaño N (2021) Cybercrime and shifts in opportunities during COVID-19: a preliminary analysis in the UK. Eur Soc 23(sup1):S47–S59. https://doi.org/10.1080/14616696.2020.1804973
Bergquist S, Otten T, Sarich N (2020) COVID-19 pandemic in the United States. Health Policy Technol 9(4):623–638. https://doi.org/10.1016/j.hlpt.2020.08.007
Berry N, Lobban F, Emsley R, Bucci S (2016) Acceptability of interventions delivered online and through mobile phones for people who experience severe mental health problems: a systematic review. J Med Internet Res 18(5):e121. https://doi.org/10.2196/jmir.5250
Betsch C, Korn L, Sprengholz P, Felgendreff L, Eitze S, Schmid P, Böhm R (2020) Social and behavioral consequences of mask policies during the COVID-19 pandemic. Proc Natl Acad Sci USA 117(36):21851–21853. https://doi.org/10.1073/pnas.2011674117
Blokland H (2019) Freedom and culture in Western society. Routledge, London
Bonnefon JF, Rahwan I, Shariff A (2023) The moral psychology of artificial intelligence. Annu Rev Psychol 75:653–675. https://doi.org/10.1146/annurev-psych-030123-113559
Boon-Itt S, Skunkan Y (2020) Public perception of the COVID-19 pandemic on Twitter: sentiment analysis and topic modeling study. JMIR Public Health Surveill 6(4):e21978. https://doi.org/10.2196/21978
Boucher EM, Harake NR, Ward HE, Stoeckl SE, Vargas J, Minkel J et al. (2021) Artificially intelligent chatbots in digital mental health interventions: a review. Expert Rev Med Devices 18(sup1):37–49. https://doi.org/10.1080/17434440.2021.2013200
Bunker D (2020) Who do you trust? The digital destruction of shared situational awareness and the COVID-19 infodemic. Int J Inf Manag 55:102201. https://doi.org/10.1016/j.ijinfomgt.2020.102201
Calandra D, Favareto M (2020) Artificial intelligence to fight COVID-19 outbreak impact: an overview. Eur J Soc Impact Circ Econ 1(3):84–104. https://doi.org/10.13135/2704-9906/5067
Carter DP, May PJ (2020) Making sense of the US COVID-19 pandemic response: a policy regime perspective. Adm Theory Prax 42(2):265–277. https://doi.org/10.1080/10841806.2020.1758991
Chandrasekaran R, Mehta V, Valkunde T, Moustakas E (2020) Topics, trends, and sentiments of tweets about the COVID-19 pandemic: temporal infoveillance study. J Med Internet Res 22(10):e22624. https://doi.org/10.2196/22624
Chen B (2020) Legal consideration of personal information protection in the fight against the COVID-19. Soc Sci J (2):23–32
Chen S, Zhang J (2021) Social media management from the perspective of “infodemic”. China Publ J 18:43–46
Chen T, Shen Y, Chen X, Zhang L (2024) PsyChatbot: a psychological counseling agent towards depressed Chinese population based on cognitive behavioural therapy. ACM Trans Asian Low Resour Lang Inf Process. https://doi.org/10.1145/3676962
Cheng ZJ, Zhan Z, Xue M, Zheng P, Lyu J, Ma J et al. (2023) Public health measures and the control of COVID-19 in China. Clin Rev Allergy Immunol 64:1–16. https://doi.org/10.1007/s12016-021-08900-2
Cherry TL, James AG, Murphy J (2021) The impact of public health messaging and personal experience on the acceptance of mask wearing during the COVID-19 pandemic. J Econ Behav Organ 187:415–430. https://doi.org/10.1016/j.jebo.2021.04.006
Chesterman S (2020) Artificial intelligence and the problem of autonomy. Notre Dame J Emerg Tech 1:210–250
China Daily (2020a) Xi Jinping and UN Secretary-General Guterres make phone call. March 13, 2020
China Daily (2020b) Xi Jinping sends condolence message to Chancellor Angela Merkel on COVID-19 outbreak in Germany. March 22, 2020
Cinelli M, Quattrociocchi W, Galeazzi A, Valensise CM, Brugnoli E, Schmidt AL et al. (2020) The COVID-19 social media infodemic. Sci Rep 10(1):1–10. https://doi.org/10.1038/s41598-020-73510-5
Cockburn IM, Henderson R, Stern S (2019) The impact of artificial intelligence on innovation: an exploratory analysis. In: Agrawal A, Gans J, Goldfarb A (eds) The economics of artificial intelligence: an agenda. University of Chicago Press, Chicago, p 115–148
Cong W (2021) From pandemic control to data-driven governance: the case of China’s health code. Front Polit Sci 3:627959. https://doi.org/10.3389/fpos.2021.627959
Coombs C (2020) Will COVID-19 be the tipping point for the intelligent automation of work? A review of the debate and implications for research. Int J Inf Manag 55:102182. https://doi.org/10.1016/j.ijinfomgt.2020.102182
Ćosić K, Popović S, Šarlija M, Kesedžić I (2020) Impact of human disasters and COVID-19 pandemic on mental health: potential of digital psychiatry. Psychiatr Danub 32(1):25–31. https://doi.org/10.24869/psyd.2020.25
Costa Dias M, Joyce R, Postel‐Vinay F, Xu X (2020) The challenges for labour market policy during the Covid‐19 pandemic. Fisc Stud 41(2):371–382. https://doi.org/10.1111/1475-5890.12233
Cresswell K, Tahir A, Sheikh Z, Hussain Z, Domínguez Hernández A, Harrison E et al. (2021) Understanding public perceptions of COVID-19 contact tracing apps: artificial intelligence-enabled social media analysis. J Med Internet Res 23(5):e26618. https://doi.org/10.2196/26618
Cross C (2021) Theorising the impact of COVID-19 on the fraud victimisation of older persons. J Adult Prot 23(2):98–109. https://doi.org/10.1108/JAP-08-2020-0035
Dar M, Swamy L, Gavin D, Theodore A (2021) Mechanical-ventilation supply and options for the COVID-19 pandemic: leveraging all available resources for a limited resource in a crisis. Ann Am Thorac Soc 18(3):408–416. https://doi.org/10.1513/AnnalsATS.202004-317CME
Dashora K (2011) Cyber crime in the society: problems and preventions. J Altern Perspect Soc Sci 3(1):240–259
Davenport TH (2018) The AI advantage: how to put the artificial intelligence revolution to work. MIT Press, Cambridge
Delgado J, de Manuel A, Parra I, Moyano C, Rueda J, Guersenzvaig A et al. (2022) Bias in algorithms of AI systems developed for COVID-19: a scoping review. Bioethical Inq 19:407–419. https://doi.org/10.1007/s11673-022-10200-z
Denecke K, Abd-Alrazaq A, Househ M (2021) Artificial intelligence for chatbots in mental health: opportunities and challenges. In: Househ M, Borycki E, Kushniruk A (eds) Multiple perspectives on artificial intelligence in healthcare. Lecture notes in bioengineering. Springer, Cham, p 115–128
Deng J, Zhou F, Hou W, Heybati K, Lohit S, Abbas U et al. (2023) Prevalence of mental health symptoms in children and adolescents during the COVID‐19 pandemic: a meta‐analysis. Ann NY Acad Sci 1520(1):53–73. https://doi.org/10.1111/nyas.14947
Diver L, Schafer B (2017) Opening the black box: Petri nets and privacy by design. Int Rev Law Comput Technol 31(1):68–90. https://doi.org/10.1080/13600869.2017.1275123
Dlamini Z, Francies FZ, Hull R, Marima R (2020) Artificial intelligence (AI) and big data in cancer and precision oncology. Comput Struct Biotechnol J 18:2300–2311. https://doi.org/10.1016/j.csbj.2020.08.019
Dohle S, Wingen T, Schreiber M (2020) Acceptance and adoption of protective measures during the COVID-19 pandemic: the role of trust in politics and trust in science. Soc Psychol Bull 15(4):1–23. https://doi.org/10.32872/spb.4315
Dong J, Wu H, Zhou D, Li K, Zhang Y, Ji H et al. (2021) Application of big data and artificial intelligence in COVID-19 prevention, diagnosis, treatment and management decisions in China. J Med Syst 45(9):84. https://doi.org/10.1007/s10916-021-01757-0
Dong R, Zhou X, Jiao X, Guo B, Sun L, Wang Q (2020) Psychological status in medical isolation persons during outbreak of COVID-19. Rehabil Med 30(1):7–10. https://doi.org/10.3724/SP.J.1329.2020.01004
Duan Y, Edwards JS, Dwivedi YK (2019) Artificial intelligence for decision making in the era of Big Data–evolution, challenges and research agenda. Int J Inf Manag 48:63–71. https://doi.org/10.1016/j.ijinfomgt.2019.01.021
Durán JM, Jongsma KR (2021) Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI. J Med Ethics 47(5):329–335. https://doi.org/10.1136/medethics-2020-106820
Dwivedi YK, Hughes DL, Coombs C, Constantiou I, Duan Y, Edwards JS et al. (2020) Impact of COVID-19 pandemic on information management research and practice: transforming education, work and life. Int J Inf Manag 55:102211. https://doi.org/10.1016/j.ijinfomgt.2020.102211
El Naqa I, Li H, Fuhrman J, Hu Q, Gorre N, Chen W, Giger ML (2021) Lessons learned in transitioning to AI in the medical imaging of COVID-19. J Med Imaging 8(S1):010902. https://doi.org/10.1117/1.JMI.8.S1.010902
Elman A, Breckman R, Clark S, Gottesman E, Rachmuth L, Reiff M et al. (2020) Effects of the COVID-19 outbreak on elder mistreatment and response in New York City: initial lessons. J Appl Gerontol 39(7):690–699. https://doi.org/10.1177/0733464820924853
Estiri H, Strasser ZH, Rashidian S, Klann JG, Wagholikar KB, McCoy JrTH, Murphy SN (2022) An objective framework for evaluating unrecognized bias in medical AI models predicting COVID-19 outcomes. J Am Med Inform Assoc 29(8):1334–1341. https://doi.org/10.1093/jamia/ocac070
Fahey RA, Hino A (2020) COVID-19, digital privacy, and the social limits on data-focused public health responses. Int J Inf Manag 55:102181. https://doi.org/10.1016/j.ijinfomgt.2020.102181
Ferrag MA, Shu L, Choo KKR (2021) Fighting COVID-19 and future pandemics with the Internet of Things: security and privacy perspectives. IEEE/CAA J Autom Sin 8(9):1477–1499. https://doi.org/10.1109/JAS.2021.1004087
Fetscher I (1973) Karl Marx on human nature. Soc Res 40(3):443–467. http://www.jstor.org/stable/40970148
Fitzpatrick KK, Darcy A, Vierhile M (2017) Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (Woebot): a randomized controlled trial. JMIR Ment Health 4(2):e7785. https://doi.org/10.2196/mental.7785
Floridi L (2014) Artificial agents and their moral nature. In: Kroes P, Verbeek PP (eds) The moral status of technical artefacts. Philosophy of engineering and technology, Vol 17. Springer, Dordrecht, p 185–212. https://doi.org/10.1007/978-94-007-7914-3_11
Formosa P (2021) Robot autonomy vs. human autonomy: social robots, artificial intelligence (AI), and the nature of autonomy. Minds Mach 31:595–616. https://doi.org/10.1007/s11023-021-09579-2
Formosa P, Rogers W, Griep Y, Bankins S, Richards D (2022) Medical AI and human dignity: contrasting perceptions of human and artificially intelligent (AI) decision-making in diagnostic and medical resource allocation contexts. Comput Hum Behav 133:107296. https://doi.org/10.1016/j.chb.2022.107296
Franck TM (1997) Is personal freedom a Western value? Am J Int Law 91(4):593–627. https://doi.org/10.2307/2998096
Gallego A, Kurer T (2022) Automation, digitalization, and artificial intelligence in the workplace: implications for political behavior. Annu Rev Polit Sci 25:463–484. https://doi.org/10.1146/annurev-polisci-051120-104535
Gallotti R, Valle F, Castaldo N, Sacco P, De Domenico M (2020) Assessing the risks of ‘infodemics’ in response to COVID-19 epidemics. Nat Hum Behav 4(12):1285–1293. https://doi.org/10.1038/s41562-020-00994-6
Gerke S, Shachar C, Chai PR et al. (2020) Regulatory, safety, and privacy concerns of home monitoring technologies during COVID-19. Nat Med 26:1176–1182. https://doi.org/10.1038/s41591-020-0994-1
Goodair B, Reeves A (2024) The effect of health-care privatisation on the quality of care. Lancet Public Health 9(3):e199–e206. https://doi.org/10.1016/S2468-2667(24)00003-3
Gozgor G (2022) Global evidence on the determinants of public trust in governments during COVID-19. Appl Res Qual Life 17(2):559–578. https://doi.org/10.1007/s11482-020-09902-6
Greenhalgh T, Wherton J, Shaw S, Morrison C (2020) Video consultations for COVID-19. BMJ 368:m998. https://doi.org/10.1136/bmj.m998
Guglielmi S, Dotti Sani GM, Molteni F, Biolcati F, Chiesi AM, Ladini R et al. (2020) Public acceptability of containment measures during the COVID-19 pandemic in Italy: How institutional confidence and specific political support matter. Int J Sociol Soc Policy 40(9/10):1069–1085. https://doi.org/10.1108/IJSSP-07-2020-0342
Guo H, Zhan H, Zhang Y, Gao X (2020) The application value of artificial intelligence pneumonia assistant diagnosis system in CT screening of suspected COVID-19 patients. J Pract Med 36(13):1729–1732
Hakak S, Khan WZ, Imran M, Choo KKR, Shoaib M (2020) Have you been a victim of COVID-19-related cyber incidents? survey, taxonomy, and mitigation strategies. IEEE Access 8:124134–124144. https://doi.org/10.1109/ACCESS.2020.3006172
Han B (2023) Individual frauds in China: exploring the impact and response to telecommunication network fraud and pig butchering scams. Dissertation, University of Portsmouth
Harman LB, Flite CA, Bond K (2012) Electronic health records: privacy, confidentiality, and security. AMA J Ethics 14(9):712–719. https://doi.org/10.1001/virtualmentor.2012.14.9.stas1-1209
Hartley K, Vu MK (2020) Fighting fake news in the COVID-19 era: policy insights from an equilibrium model. Policy Sci 53(4):735–758. https://doi.org/10.1007/s11077-020-09405-z
He Y, Yang L, Zhu X, Wu B, Zhang S, Qian C, Tian T (2022) Mental health chatbot for young adults with depressive symptoms during the COVID-19 pandemic: single-blind, three-arm randomized controlled trial. J Med Internet Res 24(11):e40719. https://doi.org/10.2196/40719
Herron TL, Manuel T (2022) Ethics of US government policy responses to the COVID‐19 pandemic: a utilitarianism perspective. Bus Soc Rev 127:343–367. https://doi.org/10.1111/basr.12259
Hilbert M (2020) Digital technology and social change: the digital transformation of society from a historical perspective. Dialogues Clin Neurosci 22(2):189–194. https://doi.org/10.31887/DCNS.2020.22.2/mhilbert
Ho DY-F, Chiu C-Y (1994) Component ideas of individualism, collectivism, and social organization: an application in the study of Chinese culture. In: Kim U, Triandis HC, Kâğitçibaşi Ç, Choi S-C, Yoon G (eds) Individualism and collectivism: theory, method, and applications. Sage Publications, Inc., Newbury Park, CA, USA, p 137–156
Hofstede Insights (2024) Country comparison: China. Retrieved from https://www.hofstede-insights.com/country-comparison/china
Hornik R, Kikut A, Jesch E, Woko C, Siegel L, Kim K (2021) Association of COVID-19 misinformation with face mask wearing and social distancing in a nationally representative US sample. Health Commun 36(1):6–14. https://doi.org/10.1080/10410236.2020.1847437
Horwitz JR (2005) Making profits and providing care: comparing nonprofit, for-profit, and government hospitals. Health Aff 24(3):790–801. https://doi.org/10.1377/hlthaff.24.3.790
Hossain MS, Muhammad G, Guizani N (2020) Explainable AI and mass surveillance system-based healthcare framework to combat COVID-19 like pandemics. IEEE Netw 34(4):126–132. https://doi.org/10.1109/MNET.011.2000458
Hu T, Wang S, Luo W, Zhang M, Huang X, Yan Y et al. (2021) Revealing public opinion towards COVID-19 vaccines with Twitter data in the United States: spatiotemporal perspective. J Med Internet Res 23(9):e30854. https://doi.org/10.2196/30854
Huang CH, Batarseh FA, Boueiz A, Kulkarni A, Su PH, Aman J (2021) Measuring outcomes in healthcare economics using artificial intelligence: with application to resource management. Data Policy 3:e30. https://doi.org/10.1017/dap.2021.29
Huang G, Hu A, Chen W (2022) Privacy at risk? Understanding the perceived privacy protection of health code apps in China. Big Data Soc 9(2):20539517221135132. https://doi.org/10.1177/20539517221135132
Huang MH, Rust RT (2018) Artificial intelligence in service. J Serv Res 21(2):155–172. https://doi.org/10.1177/1094670517752459
Huang Y, Zhao N (2020) Generalized anxiety disorder, depressive symptoms and sleep quality during COVID-19 outbreak in China: a web-based cross-sectional survey. Psychiatry Res 288:112954. https://doi.org/10.1016/j.psychres.2020.112954
Hui CH, Villareal MJ (1989) Individualism-collectivism and psychological needs: their relationships in two cultures. J Cross Cult Psychol 20(3):310–323. https://doi.org/10.1177/0022022189203005
Hung M, Lauren E, Hon ES, Birmingham WC, Xu J, Su S et al. (2020) Social network analysis of COVID-19 sentiments: application of artificial intelligence. J Med Internet Res 22(8):e22590. https://doi.org/10.2196/22590
Hussain A, Tahir A, Hussain Z, Sheikh Z, Gogate M, Dashtipour K et al. (2021) Artificial intelligence–enabled analysis of public attitudes on Facebook and Twitter toward COVID-19 vaccines in the United Kingdom and the United States: observational study. J Med Internet Res 23(4):e26627. https://doi.org/10.2196/26627
Ishmaev G, Dennis M, van den Hoven MJ (2021) Ethics in the COVID-19 pandemic: myths, false dilemmas, and moral overload. Ethics Inf Technol 23(Suppl 1):19–34. https://doi.org/10.1007/s10676-020-09568-6
Jaiswal A, Arun CJ, Varma A (2021) Rebooting employees: upskilling for artificial intelligence in multinational corporations. Int J Hum Resour Manag 33(6):1179–1208. https://doi.org/10.1080/09585192.2021.1891114
Jarrahi MH (2018) Artificial intelligence and the future of work: human-AI symbiosis in organizational decision making. Bus Horiz 61(4):577–586. https://doi.org/10.1016/j.bushor.2018.03.007
Jiang C, Wang B (2021) The application of big data and social governance innovation during the epidemic. J Party Sch CPC Urumqi Munic Comm 3:28–34
Jin C, Chen W, Cao Y, Xu Z, Tan Z, Zhang X et al. (2020) Development and evaluation of an artificial intelligence system for COVID-19 diagnosis. Nat Commun 11:5088. https://doi.org/10.1038/s41467-020-18685-1
Jin H, Li B, Jakovljevic M (2020) How China controls the COVID-19 epidemic through public health expenditure and policy? J Med Econ 25(1):437–449. https://doi.org/10.1080/13696998.2022.2054202
Joshi P, Shukla S (2022) The history of pandemics and evolution so far. In: Jain A, Sharma A, Wang J, Ram M (eds) Use of AI, robotics and modelling tools to fight COVID-19. River Publishers, New York, p 1–15
Kauhanen L, Wan Mohd Yunus WMA, Lempinen L, Peltonen K, Gyllenberg D, Mishina K et al. (2023) A systematic review of the mental health changes of children and young people before and during the COVID-19 pandemic. Eur Child Adolesc Psychiatry 32(6):995–1013. https://doi.org/10.1007/s00787-022-02060-0
Kelly CJ, Karthikesalingam A, Suleyman M, Corrado G, King D (2019) Key challenges for delivering clinical impact with artificial intelligence. BMC Med 17:195. https://doi.org/10.1186/s12916-019-1426-2
Kemp S, Buil-Gil D, Moneva A, Miró-Llinares F, Díaz-Castaño N (2021) Empty streets, busy internet: a time-series analysis of cybercrime and fraud trends during COVID-19. J Contemp Crim Justice 37(4):480–501. https://doi.org/10.1177/10439862211027986
Khan R, Shrivastava P, Kapoor A, Tiwari A, Mittal A (2020) Social media analysis with AI: sentiment analysis techniques for the analysis of Twitter COVID-19 data. J Crit Rev 7(9):2761–2774
Khweiled R, Jazzar M, Eleyan D (2021) Cybercrimes during COVID-19 pandemic. Int J Inf Eng Electron Bus 13(2):1–10. https://doi.org/10.5815/ijieeb.2021.02.01
Kim SS, Kim J, Badu-Baiden F, Giroux M, Choi Y (2021) Preference for robot service or human service in hotels? impacts of the COVID-19 pandemic. Int J Hosp Manag 93:102795. https://doi.org/10.1016/j.ijhm.2020.102795
Kimura F, Thangavelu SM, Narjoko D, Findlay C (2020) Pandemic (COVID‐19) policy, regional cooperation and the emerging global production network. Asian Econ J 34(1):3–27. https://doi.org/10.1111/asej.12198
Kitahara K, Nishikawa Y, Yokoyama H, Kikuchi Y, Sakoi M (2023) An overview of the reclassification of COVID-19 under the Infectious Diseases Control Law in Japan. Glob Health Med 5(2):70–74. https://doi.org/10.35772/ghm.2023.01023
Klos MC, Escoredo M, Joerin A, Lemos VN, Rauws M, Bunge EL (2021) Artificial intelligence-based chatbot for anxiety and depression in university students: pilot randomized controlled trial. JMIR Form Res 5(8):e20678. https://doi.org/10.2196/20678
Kouzy R, Abi Jaoude J, Kraitem A, El Alam MB, Karam B, Adib E et al. (2020) Coronavirus goes viral: quantifying the COVID-19 misinformation epidemic on Twitter. Cureus 12(3). https://doi.org/10.7759/cureus.7255
Kretzschmar K, Tyroll H, Pavarini G, Manzini A, Singh I, NeurOx Young People’s Advisory Group (2019) Can your phone be your therapist? young people’s ethical perspectives on the use of fully automated conversational agents (chatbots) in mental health support. Biomed Inform Insights 11:1178222619829083. https://doi.org/10.1177/1178222619829083
Kulikowski C, Maojo VM (2021) COVID-19 pandemic and artificial intelligence: challenges of ethical bias and trustworthy reliable reproducibility? BMJ Health Care Inform 28:e100438. https://doi.org/10.1136/bmjhci-2021-100438
Kumar A, Gupta PK, Srivastava A (2020) A review of modern technologies for tackling COVID-19 pandemic. Diabetes Metab Syndr Clin Res Rev 14(4):569–573. https://doi.org/10.1016/j.dsx.2020.05.008
Laitinen A, Sahlgren O (2021) AI systems and respect for human autonomy. Front Artif Intell 4:151. https://doi.org/10.3389/frai.2021.705164
Latkin CA, Dayton L, Kaufman MR, Schneider KE, Strickland JC, Konstantopoulos A (2022) Social norms and prevention behaviors in the United States early in the COVID-19 pandemic. Psychol Health Med 27(1):162–177. https://doi.org/10.1080/13548506.2021.2004315
Li J, Zhang W (2021) Communication functions and challenges of chatbots in public crises—an observation based on the novel coronavirus pneumonia outbreak. Media 13:47–49
Li R (2020) Artificial intelligence revolution: how AI will change our society, economy, and culture. Simon and Schuster, New York
Liang F (2020) COVID-19 and health code: how digital platforms tackle the pandemic in China. Soc Media Soc 6(3). https://doi.org/10.1177/2056305120947657
Liu T, Tsang W, Huang F, Lau OY, Chen Y, Sheng J et al. (2021) Patients’ preferences for artificial intelligence applications versus clinicians in disease diagnosis during the SARS-CoV-2 pandemic in China: discrete choice experiment. J Med Internet Res 23(2):e22841. https://doi.org/10.2196/22841
Liu Y, Yu Y, Zhao Y, He D (2022) Reduction in the infection fatality rate of Omicron variant compared with previous variants in South Africa. Int J Infect Dis 120:146–149. https://doi.org/10.1016/j.ijid.2022.04.029
Long DG (1990) ‘Utility’ and the ‘Utility Principle’: Hume, Smith, Bentham, Mill. Utilitas 2(1):12–39. https://doi.org/10.1017/S0953820800000431
Luengo-Oroz M, Hoffmann Pham K, Bullock J, Kirkpatrick R, Luccioni A, Rubel S et al. (2020) Artificial intelligence cooperation to support the global response to COVID-19. Nat Mach Intell 2:295–297. https://doi.org/10.1038/s42256-020-0184-3
Lund S, Madgavkar A, Manyika J, Smit S, Ellingrud K, Meaney M, Robinson O (2021) The future of work after COVID-19. McKinsey Global Institute. https://www.mckinsey.com/featured-insights/future-of-work/the-future-of-work-after-covid-19. Accessed on 20 February 2021
Lv A, Luo T, Duckett J (2022) Centralization vs. decentralization in COVID-19 responses: lessons from China. J Health Polit Policy Law 47(3):411–427. https://doi.org/10.1215/03616878-9626908
Lwin MO, Lu J, Sheldenkar A et al. (2020) Global sentiments surrounding the COVID-19 pandemic on Twitter: analysis of Twitter trends. JMIR Public Health Surveill 6(2):e19447. https://doi.org/10.2196/19447
Ma KWF, McKinnon T (2022) COVID-19 and cyber fraud: emerging threats during the pandemic. J Financ Crime 29(2):433–446. https://doi.org/10.1108/JFC-01-2021-0016
MacIntyre CR, Nguyen PY, Chughtai AA, Trent M, Gerber B, Steinhofel K, Seale H (2021) Mask use, risk-mitigation behaviours and pandemic fatigue during the COVID-19 pandemic in five cities in Australia, the UK and USA: a cross-sectional survey. Int J Infect Dis 106:199–207. https://doi.org/10.1016/j.ijid.2021.03.056
Mahmud M, Kaiser MS (2021) Machine learning in fighting pandemics: a COVID-19 case study. In: Santosh K, Joshi A (eds) COVID-19: prediction, decision-making, and its impacts. Lecture notes on data engineering and communications technologies, Vol. 60. Springer, Singapore, p 77–81. https://doi.org/10.1007/978-981-15-9682-7_9
Malik YS, Sircar S, Bhat S, Ansari MI, Pande T, Kumar P et al. (2021) How artificial intelligence may help the COVID‐19 pandemic: pitfalls and lessons for the future. Rev Med Virol 31(5):1–11. https://doi.org/10.1002/rmv.2205
Marabelli M, Vaast E, Li JL (2021) Preventing the digital scars of COVID-19. Eur J Inf Syst 30(2):176–192. https://doi.org/10.1080/0960085X.2020.1863752
McCall B (2020) COVID-19 and artificial intelligence: protecting health-care workers and curbing the spread. Lancet Digit Health 2(4):e166–e167. https://doi.org/10.1016/S2589-7500(20)30054-6
McGuire M, Dowling S (2013) Cyber-crime: a review of the evidence. Summary of key findings and implications. Research Report 75, Home Office, UK, p 1–35
McLuhan M (1994) Understanding media: the extensions of man. MIT Press, Cambridge, MA (First published 1964)
Meadows R, Hine C, Suddaby E (2020) Conversational agents and the making of mental health recovery. Digit Health 6:2055207620966170. https://doi.org/10.1177/2055207620966170
Metzler H, Rimé B, Pellert M, Niederkrotenthaler T, Di Natale A, Garcia D (2023) Collective emotions during the COVID-19 outbreak. Emotion 23(3):844–858. https://doi.org/10.1037/emo0001111
Mill JS (1969) Utilitarianism. In: Robson JM (ed) Collected works of John Stuart Mill, Vol. X. Routledge and University of Toronto Press, London and Toronto
Minnaar A (2020) ‘Gone phishing’: the cynical and opportunistic exploitation of the coronavirus pandemic by cybercriminals. Acta Criminol Afr J Criminol Vict 33(3):28–53. https://hdl.handle.net/10520/ejc-crim-v33-n3-a3
Mlynska L, Malouhi A, Ingwersen M, Güttler F, Gräger S, Teichgräber U (2023) Artificial intelligence for assistance of radiology residents in chest CT evaluation for COVID-19 pneumonia: a comparative diagnostic accuracy study. Acta Radiol 64(6):2104–2110. https://doi.org/10.1177/02841851231162085
Moghadas SM, Sah P, Shoukat A, Meyers LA, Galvani AP (2021) Population immunity against COVID-19 in the United States. Ann Intern Med 174(11):1586–1591. https://doi.org/10.7326/M21-2721
Monahan C, Macdonald J, Lytle A, Apriceno M, Levy SR (2020) COVID-19 and ageism: how positive and negative responses impact older adults and society. Am Psychol 75(7):887–896. https://doi.org/10.1037/amp0000699
Moreno C, Wykes T, Galderisi S, Nordentoft M, Crossley N, Jones N, Arango C (2020) How mental health care should change as a consequence of the COVID-19 pandemic. Lancet Psychiatry 7(9):813–824. https://doi.org/10.1016/S2215-0366(20)30307-2
Naeem SB, Bhatti R (2020) The COVID‐19 ‘infodemic’: a new front for information professionals. Health Inf Libr J 37(3):233–239. https://doi.org/10.1111/hir.12311
Naidoo R (2020) A multi-level influence model of COVID-19 themed cybercrime. Eur J Inf Syst 29(3):306–321. https://doi.org/10.1080/0960085X.2020.1771222
Naudé W (2020) Artificial intelligence vs COVID-19: limitations, constraints and pitfalls. AI Soc 35:761–765. https://doi.org/10.1007/s00146-020-00978-0
Nayak J, Naik B, Dinesh P, Vakula K, Dash PB, Pelusi D (2022) Significance of deep learning for COVID-19: state-of-the-art review. Res Biomed Eng 38:243–266. https://doi.org/10.1007/s42600-021-00135-6
Nebeker C, Torous J, Bartlett Ellis RJ (2019) Building the case for actionable ethics in digital health research supported by artificial intelligence. BMC Med 17:137. https://doi.org/10.1186/s12916-019-1377-7
Neves NM, Bitencourt FB, Bitencourt AG (2020) Ethical dilemmas in COVID-19 times: how to decide who lives and who dies? Rev Assoc Med Bras 66(Suppl 2):106–111. https://doi.org/10.1590/1806-9282.66.S2.106
Omarov B, Narynov S, Zhumanov Z (2023) Artificial intelligence-enabled chatbots in mental health: a systematic review. Comput Mater Contin 74(3):5015–5122. https://doi.org/10.32604/cmc.2023.034655
Ozili PK, Arun T (2023) Spillover of COVID-19: impact on the global economy. In: Akkucuk U (ed) Managing inflation and supply chain disruptions in the global economy. IGI Global, Hershey, PA, USA, p 41–61
Park S, Fowler L (2021) Political and administrative decentralization and responses to COVID-19: comparison of the United States and South Korea. Int J Organ Theory Behav 24(4):289–299. https://doi.org/10.1108/IJOTB-02-2021-0022
Partel V, Kakarla SC, Ampatzidis Y (2019) Development and evaluation of a low-cost and smart technology for precision weed management utilizing artificial intelligence. Comput Electron Agric 157:339–350. https://doi.org/10.1016/j.compag.2018.12.048
Pataranutaporn P, Danry V, Leong J, Punpongsanon P, Novy D, Maes P, Sra M (2021) AI-generated characters for supporting personalized learning and well-being. Nat Mach Intell 3:1013–1022. https://doi.org/10.1038/s42256-021-00417-9
Pauketat JV, Anthis JR (2022) Predicting the moral consideration of artificial intelligences. Comput Human Behav 136:107372. https://doi.org/10.1016/j.chb.2022.107372
Peiffer-Smadja N, Maatoug R, Lescure F-X, D’ortenzio E, Pineau J, King JR (2020) Machine learning for COVID-19 needs global collaboration and data-sharing. Nat Mach Intell 2:293–294. https://doi.org/10.1038/s42256-020-0181-6
Pham QV, Nguyen DC, Huynh-The T, Hwang WJ, Pathirana PN (2020) Artificial intelligence (AI) and big data for coronavirus (COVID-19) pandemic: a survey on the state-of-the-art. IEEE Access 8:130820–130839. https://doi.org/10.1109/ACCESS.2020.3009328
Prunkl C (2022) Human autonomy in the age of artificial intelligence. Nat Mach Intell 4(2):99–101. https://doi.org/10.1038/s42256-022-00449-9
Rahman A, Hossain MS, Alrajeh NA, Alsolami F (2020) Adversarial examples—Security threats to COVID-19 deep learning systems in medical IoT devices. IEEE Internet Things J 8(12):9603–9610. https://doi.org/10.1109/JIOT.2020.3013710
Rai A (2020) Explainable AI: from black box to glass box. J Acad Mark Sci 48:137–141. https://doi.org/10.1007/s11747-019-00710-5
Rashid MT, Wang D (2021) CovidSens: a vision on reliable social sensing for COVID-19. Artif Intell Rev 54:1–25. https://doi.org/10.1007/s10462-020-09852-3
Reuters (2022) Sweden declares pandemic over, despite warnings from scientists. Available online: https://www.reuters.com/world/europe/sweden-declare-pandemic-over-despite-warnings-scientists-2022-02-09/. Accessed 28 Oct 2023
Ribeiro-Navarrete S, Saura JR, Palacios-Marqués D (2021) Towards a new era of mass data collection: assessing pandemic surveillance technologies to preserve user privacy. Technol Forecast Soc Change 167:120681. https://doi.org/10.1016/j.techfore.2021.120681
Riley J (2009) The interpretation of maximizing utilitarianism. Soc Philos Policy 26(1):286–325. https://doi.org/10.1017/S0265052509090128
Rocha YM, de Moura GA, Desidério GA et al. (2021) The impact of fake news on social media and its influence on health during the COVID-19 pandemic: a systematic review. J Public Health 31:1007–1016. https://doi.org/10.1007/s10389-021-01658-z
Romero RA, Young SD (2022) Ethical perspectives in sharing digital data for public health surveillance before and shortly after the onset of the COVID-19 pandemic. Ethics Behav 32(1):22–31. https://doi.org/10.1080/10508422.2021.1884079
Röösli E, Rice B, Hernandez-Boussard T (2021) Bias at warp speed: how AI may contribute to the disparities gap in the time of COVID-19. J Am Med Inform Assoc 28(1):190–192. https://doi.org/10.1093/jamia/ocaa210
Rosen F (1998) Individual sacrifice and the greatest happiness: Bentham on utility and rights. Utilitas 10(2):129–143. https://doi.org/10.1017/S0953820800006051
Ruan K, Wu Z, Xu Q (2021) Smart cleaner: a new autonomous indoor disinfection robot for combating the COVID-19 pandemic. Robotics 10(3):87. https://doi.org/10.3390/robotics10030087
Runciman WG (2000) The social animal. University of Michigan Press, Ann Arbor, MI
Sacasas LM (2018) The tech backlash we really need. The New Atlantis 55:35–42. http://www.jstor.org/stable/26487782
Sahebi S, Formosa P (2022) Social media and its negative impacts on autonomy. Philos Technol 35:70. https://doi.org/10.1007/s13347-022-00567-7
Saini H, Rao YS, Panda TC (2012) Cyber-crimes and their impacts: a review. Int J Eng Res Appl 2(2):202–209
Salehi H, Burgueño R (2018) Emerging artificial intelligence methods in structural engineering. Eng Struct 171:170–189. https://doi.org/10.1016/j.engstruct.2018.05.084
Schuchat A (2020) Public health response to the initiation and spread of pandemic COVID-19 in the United States, February 24-April 21, 2020. MMWR Morb Mortal Wkly Rep 69(18):551–556. https://doi.org/10.15585/mmwr.mm6918e2
Sha D, Jing J (2021) Practical scenarios of artificial intelligence in the prevention and control of COVID-19. Sci Technol Rev 39(15):135–141. https://doi.org/10.3981/j.issn.1000-7857.2021.15.014
Shamman AH, Hadi AA, Ramul AR et al. (2023) The artificial intelligence (AI) role for tackling against COVID-19 pandemic. Mater Today Proc 80(3):3663–3667. https://doi.org/10.1016/j.matpr.2021.07.357
Shen Y, Guo D, Long F, Mateos LA, Ding H, Xiu Z et al. (2020) Robots under COVID-19 pandemic: a comprehensive survey. IEEE Access 9:1590–1615. https://doi.org/10.1109/ACCESS.2020.3045792
Shenoy A, Appel JM (2017) Safeguarding confidentiality in electronic health records. Camb Q Healthc Ethics 26(2):337–341. https://doi.org/10.1017/S0963180116000931
Solaiman E, Awad C (2024) Trust and dependability in Blockch-AI-n-based medical Internet of Things applications: research challenges and future directions. IT Prof 26(3):87–93. https://doi.org/10.1109/MITP.2024.3378582
Southern Metropolis Daily (2020) Available online: https://static.nfapp.southcn.com/content/202001/28/c3029598.html. Accessed 28 Oct 2023
Spronck P, Ponsen M, Sprinkhuizen-Kuyper I, Postma E (2006) Adaptive game AI with dynamic scripting. Mach Learn 63(2):217–248. https://doi.org/10.1007/s10994-006-6205-6
Stephens TN, Joerin A, Rauws M, Werk LN (2019) Feasibility of pediatric obesity and prediabetes treatment support through Tess, the AI behavioral coaching chatbot. Transl Behav Med 9(3):440–447. https://doi.org/10.1093/tbm/ibz043
Taleghani N, Taghipour F (2021) Diagnosis of COVID-19 for controlling the pandemic: a review of the state-of-the-art. Biosens Bioelectron 174:112830. https://doi.org/10.1016/j.bios.2020.112830
Tao F (2020) Virus, intelligent technology and space: a philosophical reflection on COVID-19 pandemic. J Sichuan Norm Univ Soc Sci Ed 47(5):5–12. https://doi.org/10.13734/j.cnki.1000-5315.2020.05.001
Taylor W, Abbasi QH, Dashtipour K, Ansari S, Shah SA, Khalid A, Imran MA (2020) A review of the state of the art in non-contact sensing for COVID-19. Sensors 20(19):5665. https://doi.org/10.3390/s20195665
The Guardian (2020) Coronavirus outbreak: doctors use robot to treat first known US patient. Available online: https://www.theguardian.com/us-news/2020/jan/22/coronavirus-doctors-use-robot-to-treat-first-known-us-patient. Accessed 28 Oct 2023
The National Health Workforce Accounts Database (2023) Available online: https://apps.who.int/nhwaportal/. Accessed 1 Dec 2023
Trivedi A, Robinson C, Blazes M, Ortiz A, Desbiens J, Gupta S, Lavista Ferres JM (2022) Deep learning models for COVID-19 chest X-ray classification: Preventing shortcut learning using feature disentanglement. PLoS ONE 17(10):e0274098. https://doi.org/10.1371/journal.pone.0274098
Tsao SF, Chen H, Tisseverasinghe T, Yang Y, Li L, Butt ZA (2021) What social media told us in the time of COVID-19: a scoping review. Lancet Digit Health 3(3):e175–e194. https://doi.org/10.1016/S2589-7500(20)30315-0
Vaidyam AN, Wisniewski H, Halamka JD, Kashavan MS, Torous JB (2019) Chatbots and conversational agents in mental health: a review of the psychiatric landscape. Can J Psychiatry 64(7):456–464. https://doi.org/10.1177/0706743719828977
Vaishya R, Javaid M, Khan IH, Haleem A (2020) Artificial intelligence (AI) applications for COVID-19 pandemic. Diabetes Metab Syndr 14(4):337–339. https://doi.org/10.1016/j.dsx.2020.04.012
Verganti R, Vendraminelli L, Iansiti M (2020) Innovation and design in the age of artificial intelligence. J Prod Innov Manag 37(3):212–227. https://doi.org/10.1111/jpim.12523
Vindegaard N, Benros ME (2020) COVID-19 pandemic and mental health consequences: systematic review of the current evidence. Brain Behav Immun 89:531–542. https://doi.org/10.1016/j.bbi.2020.05.048
von Eschenbach WJ (2021) Transparency and the black box problem: why we do not trust AI. Philos Technol 34:1607–1622. https://doi.org/10.1007/s13347-021-00477-0
Wamba-Taguimdje SL, Fosso Wamba S, Kala Kamdjoug JR, Tchatchouang Wanko CE (2020) Influence of artificial intelligence (AI) on firm performance: the business value of AI-based transformation projects. Bus Process Manag J 26(7):1893–1924. https://doi.org/10.1108/BPMJ-10-2019-0411
Wang B, Jin S, Yan Q, Xu H, Luo C, Wei L et al. (2021) AI-assisted CT imaging analysis for COVID-19 screening: building and deploying a medical AI system. Appl Soft Comput 98:106897. https://doi.org/10.1016/j.asoc.2020.106897
Wang C, Tang N, Zhen D, Wang XR, Zhang J, Cheong Y, Zhu Q (2023) Need for cognitive closure and trust towards government predicting pandemic behavior and mental health: Comparing United States and China. Curr Psychol 42(26):22823–22836. https://doi.org/10.1007/s12144-022-03327-0
Wang G, Liu ZB (2010) What collective? Collectivism and relationalism from a Chinese perspective. Chin J Commun 3(1):42–63. https://doi.org/10.1080/17544750903528799
Wang J, Jing R, Lai X, Zhang H, Lyu Y, Knoll MD, Fang H (2020a) Acceptance of COVID-19 vaccination during the COVID-19 pandemic in China. Vaccines 8(3):482. https://doi.org/10.3390/vaccines8030482
Wang L, Lin ZQ, Wong A (2020b) COVID-Net: a tailored deep convolutional neural network design for detection of COVID-19 cases from chest X-ray images. Sci Rep 10:19549. https://doi.org/10.1038/s41598-020-76550-z
Wang L, Yan B, Boasson V (2020c) A national fight against COVID‐19: lessons and experiences from China. Aust N Z J Public Health 44(6):502–507. https://doi.org/10.1111/1753-6405.13042
Wang Y, Wang C, Liao Z, Zhang X, Zhao M (2020d) A comparative analysis of anxiety and depression level among people and epidemic characteristics between COVID-19 and SARS. Life Sci Res 24(3):180–186. https://doi.org/10.16605/j.cnki.1007-7847.2020.03.002
Wang W, Siau K (2019) Artificial intelligence, machine learning, automation, robotics, future of work and future of humanity: a review and research agenda. J Database Manag 30(1):61–79. https://doi.org/10.4018/JDM.2019010104
Wang X, Wang L (2019) “Technological Leviathan”: potential risks of artificial intelligence embedded in social governance and government response. E Gov 5:86–93. https://doi.org/10.16582/j.cnki.dzzw.2019.05.009
Wells CR, Fitzpatrick MC, Sah P, Shoukat A, Pandey A, El-Sayed AM et al. (2020) Projecting the demand for ventilators at the peak of the COVID-19 outbreak in the USA. Lancet Infect Dis 20(10):1123–1125. https://doi.org/10.1016/S1473-3099(20)30315-7
Williams JC, Anderson N, Mathis M, Sanford E III, Eugene J, Isom J (2020) Colorblind algorithms: racism in the era of COVID-19. J Natl Med Assoc 112(5):550–552. https://doi.org/10.1016/j.jnma.2020.05.010
Wischmeyer T (2020) Artificial intelligence and transparency: opening the Black Box. In: Wischmeyer T, Rademacher T (eds) Regulating artificial intelligence. Springer, Cham, p 75–101. https://doi.org/10.1007/978-3-030-32361-5_4
World Economic Forum (2020) The future of jobs report 2020. Available online: https://www.weforum.org/reports/the-future-of-jobs-report-2020/digest. Accessed 18 Dec 2020
World Health Organization (2020) Managing the COVID-19 infodemic: promoting healthy behaviors and mitigating the harm from misinformation and disinformation. Available online: https://www.who.int/news/item/23-09-2020-managing-the-covid-19-infodemic. Accessed 12 Oct 2020
World Health Organization (2023) Available online: https://www.who.int/news/item/05-05-2023-statement-on-the-fifteenth-meeting-of-the-international-health-regulations-(2005)-emergency-committee-regarding-the-coronavirus-disease-(covid-19)-pandemic. Accessed 8 Jul 2023
World Health Organization (2025) Available online: https://covid19.who.int/. Accessed 7 Feb 2025
Wu J, Wang J, Nicholas S, Maitland E, Fan Q (2020) Application of big data technology for COVID-19 prevention and control in China: lessons and recommendations. J Med Internet Res 22(10):e21980. https://doi.org/10.2196/21980
Xi J (2017) Report at the 19th National Congress of the Communist Party of China (October 18, 2017). People’s Publishing House, Beijing, p 49
Xi J (2020) Speech at the meeting of the Standing Committee of the Political Bureau of the Central Committee to study the work of responding to the COVID-19 pandemic. Seeking Truth 4:4–12
Xie X (2020) Promoting the construction of a community of human destiny with the international ethics of symbiosis and coexistence: the COVID-19 pandemic as the background of analysis. Academics 7:22–31
Xie Y (2021) Reflection on “subjectivity” under the new epidemic situation. Acad Ethica 10(1):194–204
Xiong J, Lipsitz O, Nasri F, Lui LM, Gill H, Phan L et al. (2020) Impact of COVID-19 pandemic on mental health in the general population: a systematic review. J Affect Disord 277:55–64. https://doi.org/10.1016/j.jad.2020.08.001
Xu W, Wu J, Cao L (2020) COVID-19 pandemic in China: context, experience and lessons. Health Policy Technol 9(4):639–648. https://doi.org/10.1016/j.hlpt.2020.08.006
Xu Y, Liu X, Cao X, Huang C, Liu E, Qian S et al. (2021) Artificial intelligence: a powerful paradigm for scientific research. Innovation 2(4). https://doi.org/10.1016/j.xinn.2021.100179
Xue J, Chen J, Hu R, Chen C, Zheng C, Su Y, Zhu T (2020) Twitter discussions and emotions about the COVID-19 pandemic: machine learning approach. J Med Internet Res 22(11):e20550. https://doi.org/10.2196/20550
Yahya BM, Yahya FS, Thannoun RG (2021) COVID-19 prediction analysis using artificial intelligence procedures and GIS spatial analyst: a case study for Iraq. Appl Geomat 13:481–491. https://doi.org/10.1007/s12518-021-00365-4
Yan Q, Tang Y, Yan D, Wang J, Yang L, Yang X, Tang S (2020) Impact of media reports on the early spread of COVID-19 epidemic. J Theor Biol 502:110385. https://doi.org/10.1016/j.jtbi.2020.110385
Yang X, Mo L (2022) Research on dimensions of trust and mechanism of AI communication. Acad Res 3:43–50
Yanisky-Ravid S, Hallisey SK (2019) Equality and privacy by design: a new model of artificial intelligence data transparency via auditing, certification, and safe harbor regimes. Fordham Urban LJ 46:428
Ye Q, Zhou J, Wu H (2020) Using information technology to manage the COVID-19 pandemic: development of a technical framework based on practical experience in China. JMIR Med Inform 8(6):e19515. https://doi.org/10.2196/19515
Yu H (2022) Living in the era of codes: a reflection on China’s health code system. BioSocieties 19:1–18. https://doi.org/10.1057/s41292-022-00290-8
Yu Y, Brady D, Zhao B (2022) Digital geographies of the bug: a case study of China’s contact tracing systems in the COVID-19. Geoforum 137:94–104. https://doi.org/10.1016/j.geoforum.2022.10.007
Yuan J (2020) Important practice of fighting against novel coronavirus pneumonia: building a community with a shared future for mankind. Int J Soc Sci Educ Res 9(1):2–6. https://doi.org/10.1186/s40249-020-00650-1
Zeng L (2021) Conceptual analysis of China’s Belt and Road Initiative: a road towards a regional community of common destiny. In: Contemporary International Law and China’s peaceful development. Modern China and International Economic Law. Springer, Singapore, p 339–345. https://doi.org/10.1007/978-981-15-8657-6_17
Zhang J, Li Y, Zhou J, Zhu Y, Li L (2022a) Supervision system of AI-based software as a medical device. Strateg Study Chin Acad Eng 24(1):198–204. https://doi.org/10.15302/J-SSCAE-2022.01.021
Zhang K, Liu X, Shen J, Li Z, Sang Y, Wu X et al (2020a) Clinically applicable AI system for accurate diagnosis, quantitative measurements, and prognosis of COVID-19 pneumonia using computed tomography. Cell 181(6):1423–1433. https://doi.org/10.1016/j.cell.2020.04.045
Zhang X, Zhou Y, Zhou F, Pratap S (2022b) Internet public opinion dissemination mechanism of COVID-19: evidence from the Shuanghuanglian event. Data Technol Appl 56(2):283–302. https://doi.org/10.1108/DTA-11-2020-0275
Zhang Y, Xiao Y, Ghaboosi K, Zhang J, Deng H (2012) A survey of cyber crimes. Secur Commun Netw 5(4):422–437. https://doi.org/10.1002/sec.331
Zhang Z, Ji K, Zhao B (2020b) Artificial intelligence for improving public health security risk management: why is it possible and what can be done? Jianghai Acad J 3:13–18+254
Zhao D, Hu W (2017) Determinants of public trust in government: empirical evidence from urban China. Int Rev Admin Sci 83(2):358–377. https://doi.org/10.1177/0020852315582136
Zhou SL, Jia X, Skinner SP, Yang W, Claude I (2021) Lessons on mobile apps for COVID-19 from China. J Saf Sci Resil 2(2):40–49. https://doi.org/10.1016/j.jnlssr.2021.04.002
Zhou W, Wang A, Xia F, Xiao Y, Tang S (2020) Effects of media reporting on mitigating spread of COVID-19 in the early phase of the outbreak. Math Biosci Eng 17(3):2693–2707. https://doi.org/10.3934/mbe.2020147
Zhu H, Liu X (2021) The research on the cooperation mechanism of China, Japan, and South Korea in response to public health emergencies in the post-epidemic era. Dongjiang J 38(2):70–77+128. https://doi.org/10.19410/j.cnki.cn22-5016/c.2021.02.009
Zhu Y, Janssen M, Wang R, Liu Y (2021) It is me, chatbot: working to address the COVID-19 outbreak-related mental health issues in China. Int J Hum Comput Interact 38(12):1182–1194. https://doi.org/10.1080/10447318.2021.1988236
Zou H, Shu Y, Feng T (2020) How Shenzhen, China avoided widespread community transmission: a potential model for successful prevention and control of COVID-19. Infect Dis Poverty 9:89. https://doi.org/10.1186/s40249-020-00714-2
Acknowledgements
During the writing and revision process of this paper, we received invaluable guidance and support from several professors in the Department of Philosophy at Xi’an Jiaotong University. We would like to express our heartfelt gratitude to Professors Jianping Gong, Xin Chang, Wei Wang, Jiaxin Wang, Jian Wang, Zhiwei Zhang, and Jun Liu. Their insightful suggestions and guidance on topic selection, structural design, and content refinement were instrumental in the successful completion of this paper. This work was supported by the MOE (Ministry of Education in China) Project of Humanities and Social Sciences (Grant No. 19YJC720006) and the National Social Science Foundation of China (Grants No. 20CZX059 and No. 20FZXB047).
Author information
Contributions
XD and BS designed the research and drafted the manuscript. CX, JX, and FY reviewed and revised the manuscript. All authors have given their final approval for the version to be published. All authors have agreed to take responsibility for all aspects of the work to ensure that questions about the accuracy or integrity of any part of the work are properly investigated and resolved.
Ethics declarations
Competing interests
XD and FY were members of the Editorial Board of this journal at the time of acceptance for publication. The manuscript was assessed in line with the journal’s standard editorial processes.
Ethical approval
Not applicable, as this article does not contain any studies with human participants.
Informed consent
Not applicable, as this article does not contain any studies with human participants.
Additional information
Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Ding, X., Shang, B., Xie, C. et al. Artificial intelligence in the COVID-19 pandemic: balancing benefits and ethical challenges in China’s response. Humanit Soc Sci Commun 12, 245 (2025). https://doi.org/10.1057/s41599-025-04564-x