Abstract
The deployment of generative artificial intelligence (GAI) has attracted substantial research attention, yet its impact on employee innovation remains debated. Based on Technology-Task Fit (TTF) theory, this study investigates how trust in GAI affects employees’ exploitative and exploratory innovation. Analysis of survey data from 302 Chinese employees reveals that trust in GAI is positively related to exploitative innovation, while demonstrating an inverted U-shaped relationship with both exploratory innovation and the complementarity of these two innovation types. Furthermore, employee-GAI fit amplifies the positive effect of trust in GAI on exploitative innovation and moderates the inverted U-shaped relationship with exploratory innovation by shifting the turning point to the right. Employee-innovation task fit weakens the inverted U-shaped relationship between trust in GAI and exploratory innovation. These findings advance original theoretical insights into the relationship between trust in GAI and ambidextrous innovation and offer actionable guidance for innovation management practitioners.
Introduction
GAI tools, such as ChatGPT, Gemini, Kimi, and Sora, have reduced accessibility barriers, broadened application scenarios, and enhanced creative capabilities, fundamentally reshaping employee workflows and business operations (Liang et al., 2022). Current data indicates that GAI influences 40% of working hours, with 80% of employees worldwide having utilized these tools in work contexts (Statista, 2023). This rapid adoption has led management scholars to emphasize the urgency of investigating AI’s transformative organizational impacts, particularly regarding employee adaptation mechanisms (Pietronudo et al., 2022). Trust has become a key factor shaping cognitive and behavioral responses to GAI implementation (Roesler et al., 2024). Oracle’s global survey reveals that 64% of employees place greater trust in AI systems than in human managers, with over half preferring algorithmic decision support. These phenomena, encompassing both “algorithm aversion” and “algorithm appreciation,” demonstrate the complex and often paradoxical nature of employee trust in GAI. Crucially, research confirms that appropriate trust levels enable effective GAI adoption for innovation, yet over-trust introduces innovation risks and under-trust reduces its effectiveness (Vössing et al., 2022; Choung et al., 2023). This dual nature underscores the theoretical and practical necessity to systematically examine how trust in GAI shapes innovation outcomes while addressing potential trust-related paradoxes.
Scholarly perspectives on the impact of trust in GAI on employee innovation remain polarized. Advocates of human-AI collaboration highlight synergistic effects, contending that GAI’s automation of routine operations and complex data synthesis free employees to concentrate on creative tasks (Hu et al., 2021; Tacheva and Ramasubramanian, 2024; Kong et al., 2024). Conversely, studies on GAI ethics caution that innovation barriers such as algorithmic bias, privacy breaches, intellectual property violations, technical pressures, and overreliance systematically constrain novel idea generation (Wach et al., 2023; Banh and Strobel, 2023).
These theoretical divergences arise from two significant gaps in the literature. First, although trust in GAI exhibits differential impacts on innovation types, existing research frequently conflates exploitative and exploratory innovation (Johnson et al., 2022). Based on TTF theory (Howard and Rose, 2018), we propose an analytical framework that clarifies the distinct operation of trust in GAI across these innovation types. TTF theory asserts that technology effectiveness is determined by the fit between technology characteristics and task characteristics. Specifically, exploitative innovation, which is focused on incremental improvements, fits with GAI’s strengths in executing structured tasks, whereas exploratory innovation, which harnesses GAI’s creative potential to generate novel knowledge, encounters increased risks due to technical uncertainty. These distinct mechanisms require separate analytical approaches for each type of innovation. Furthermore, emerging GAI theories stress the importance of defining boundary conditions, particularly concerning contextual adaptability and user characteristics (Pei et al., 2024).
As GAI evolves, a common issue is the misfit between employees’ GAI capabilities and the rapid pace of GAI development, posing challenges for adapting to new roles and innovation requirements. This misfit among employees, GAI, and innovation tasks ultimately leads to innovation stagnation. Building on TTF theory, we argue that beyond the GAI-innovation task fit underlying the relationship between trust in GAI and ambidextrous innovation, two additional fits are equally critical: employee-GAI fit and employee-innovation task fit. This study integrates these perspectives and proposes that trust in GAI is positively related to exploitative innovation while exhibiting an inverted U-shaped relationship with both exploratory innovation and the complementarity of exploitative and exploratory innovation. Moreover, it investigates the moderating roles of employee-GAI fit and employee-innovation task fit in the relationship between trust in GAI and ambidextrous innovation.
Our research advances three important theoretical insights: (1) it furnishes a detailed comprehension of how trust in GAI influences exploitative and exploratory innovation, thereby elucidating inconsistencies in previous research on its impact on innovation; (2) it enriches the literature by examining how employee-GAI fit and employee-innovation task fit moderate the effects of trust in GAI on ambidextrous innovation; and (3) it extends TTF theory into the emerging ___domain of GAI innovation management.
Theory and hypotheses
TTF theory
TTF theory explains the underlying mechanisms of technology effectiveness by examining the fit among individuals, tasks, and technology across three dimensions: technology-task fit, individual-task fit, and individual-technology fit. In this study, “individual” refers to employees, “task” to innovation tasks, and “technology” to GAI. GAI-innovation task fit reflects the degree to which GAI’s technical functions fit with the requirements of innovation tasks (Goodhue and Thompson, 1995; Ammenwerth et al., 2006). Employee-GAI fit captures the fit between employees’ capabilities in leveraging GAI and GAI’s technical functions, while employee-innovation task fit assesses how well employees’ expertise, experience, and skills fit with the requirements of innovation tasks (Liu et al., 2011).
The differential impact of trust in GAI on exploitative and exploratory innovation arises from variations in how well GAI’s technical functions match the task requirements of each innovation type. Trust enables GAI to function as an effective “agent” in exploitative innovation (Kaplan et al., 2023), where objectives are well-defined, outcomes predictable, and risks relatively low (Jansen et al., 2006). Such tasks align with GAI’s strengths in data analysis, pattern recognition, and automation (Yan and Guan, 2018). Conversely, exploratory innovation entails high complexity and uncertainty, undefined solutions, and unpredictable outcomes (Limaj and Bernroider, 2019). These tasks require creativity, risk-taking, and adaptability—areas where GAI excels in generating novel ideas and synthesizing cross-___domain knowledge under changing conditions. However, GAI’s technical limitations and associated risks constrain its ability to fully support risk-reduction tasks, making the relationship between trust in GAI and exploratory innovation more complex.
Employee-GAI fit and employee-innovation task fit introduce new demands on employees, necessitating both the progressive development of GAI capabilities and the alignment of competencies with evolving task requirements in a dynamic division of labor (Hubschmid-Vierheilig et al., 2020). Howard and Rose (2018) suggest that achieving fit across these three dimensions predicts enhanced performance outcomes. Conversely, any misfit between employees and GAI, or between employees and innovation tasks, results in negative emotional responses, job insecurity, lower performance, and diminished innovation resilience (Arias-Pérez and Vélez-Jaramillo, 2021). Furthermore, human-AI collaboration research substantiates that decision-making efficacy is contingent on both individual and task characteristics (Vincent, 2021). Thus, we propose that employee-GAI fit and employee-innovation task fit moderate the relationship between trust in GAI and ambidextrous innovation. Building on these insights, we construct a hypothesized theoretical model (see Fig. 1).
Trust in GAI
AI technology is undergoing a paradigm shift from analytical to generative models (Sætra, 2023; Baabdullah, 2024). GAI refers to a computer-assisted system capable of generating text, images, audio, and video (Kanitz et al., 2023). It is distinguished by democratization, versatility, and creativity: (1) GAI’s social attributes (e.g., conversational intelligence, social intelligence, and anthropomorphism) make AI accessible to non-experts, including employees, end-users, and SMEs, for the first time (Bilgram and Laarmann, 2023); (2) the large language models underpinning GAI serve as a universal technology applicable across diverse industries (Ma and Huo, 2023); and (3) GAI demonstrates creativity comparable to humans, with its potential most evident in innovation scenarios (Bilgram and Laarmann, 2023).
Trust remains a central topic in scholarly discourse. Trust in GAI stems from interpersonal trust and trust in technology: (1) Drawing on theories of social responses toward computing, intelligent IT artifacts like GAI are perceived as embodying moral attributes such as benevolence (Thatcher et al., 2013). GAI’s anthropomorphic characteristics enable trust to be interpreted through the lens of interpersonal trust, which is defined as one party’s willingness to accept vulnerability grounded in the belief that the other party will prioritize their interests without external oversight (Mayer et al., 1995). (2) As a next-generation technology, GAI also introduces a distinct form of trust in technology (Thiebes et al., 2021), reflecting the belief that an agent will assist individuals in navigating uncertainty and achieving desired outcomes (Lee and See, 2004). Synthesizing these perspectives, this study defines trust in GAI as an individual’s willingness to rely on GAI, based on the belief that it will perform tasks beneficially in uncertain and vulnerable environments.
Research highlights the dual-edged nature of trust in GAI, encompassing both positive and negative outcomes. On the positive side, trust fosters collaboration, connectivity, and operational efficiency, enhancing human-AI team performance (Gillath et al., 2021; Khoa et al., 2023). On the negative side, it entails shifts in decision-making authority, overreliance on GAI, diminished creativity, and challenges in managing technological risks (Glikson and Woolley, 2020; Feng et al., 2024). However, existing studies have yet to offer a comprehensive examination of the combined impact of these contrasting effects.
Trust in GAI and exploitative innovation
Exploitative innovation primarily focuses on refining existing knowledge and improving established products (Jansen et al., 2006). GAI’s technical functions in market feedback analysis and knowledge updating fit well with these tasks, supporting the expectation that trust in GAI positively impacts exploitative innovation. First, exploitative innovation relies on employees’ established knowledge, requiring deep understanding and accumulation of existing fields (Yan and Guan, 2018). Trust in GAI facilitates access to the latest ___domain-specific information, reducing barriers and costs associated with knowledge updates. Chen et al. (2021) suggest that substitutive knowledge coupling is positively related to exploitative innovation. By leveraging GAI to retrieve up-to-date knowledge, employees can integrate new insights while filtering out obsolete information, thereby deepening their knowledge base (Wu et al., 2023).
Second, exploitative innovation demands accurate identification of market needs and swift responses (Berraies et al., 2021; Wael Al-Khatib, 2023). However, employees often face resource constraints when managing vast datasets (Singh et al., 2024). Trust in GAI enables efficient analysis of both structured and unstructured data, including purchasing records, social media feedback, customer reviews, and offline monitoring. Through sentiment analysis and audience segmentation, GAI identifies patterns, relationships, and emerging trends (Shrestha et al., 2021; Wael Al-Khatib, 2023). These insights enhance the prediction of purchasing behaviors and demand shifts across the product lifecycle (Akter et al., 2023). Employees can then integrate GAI-generated insights with market intelligence, strategic objectives, and brand positioning to refine product features and iterations (Bouschery et al., 2023). As such, we hypothesize that:
H1: Trust in GAI is positively related to exploitative innovation.
Trust in GAI and exploratory innovation
Trust in GAI exerts dual effects: it fosters a “GAI empowerment” mechanism when fitted with creative tasks but introduces a “GAI limitation” mechanism when misfitted with risk-reduction tasks. These opposing forces give rise to an inverted U-shaped relationship between trust in GAI and exploratory innovation (see Fig. 2a).
This figure illustrates how trust in GAI affects exploratory innovation: a the inverted U-shaped relationship; b the moderating effect of employee-GAI fit; and c the moderating effect of employee-innovation task fit. a Trust in GAI promotes exploratory innovation through the positive mechanism (GAI empowerment), while also inhibiting it through the negative mechanism (GAI limitation). The combined effect results in an inverted U-shaped relationship. b When employee-GAI fit is high (dashed line), the positive mechanism is amplified, shifting the turning point of the inverted U-shaped curve to the right. c When employee-innovation task fit is high (dashed line), the positive mechanism is amplified while the negative mechanism is mitigated, thereby shifting the turning point to the right and flattening the inverted U-shaped curve.
“GAI empowerment” mechanism
Exploratory innovation revolves around expanding knowledge and developing novel products (Jansen et al., 2006). GAI’s technical functions in knowledge search and creative idea generation fit well with these tasks. First, exploratory innovation thrives on novel knowledge, emerging technologies, and divergent thinking (Bachmann et al., 2021). Nonetheless, human knowledge is constrained by cognitive biases, fixed thinking modes, and personal experience. With its expansive knowledge base and large language model, GAI explores a significantly broader search space. Trust in GAI facilitates the rapid and comprehensive acquisition of external insights and cross-___domain knowledge (Haase and Hanel, 2023). This empowers employees to focus on knowledge exploration and opportunity identification (Tacheva and Ramasubramanian, 2024). Chen et al. (2021) demonstrated that combining knowledge from different fields fosters novel idea generation and breakthrough innovation. By synthesizing diverse knowledge sources, GAI contributes to the creation of unconventional ideas that challenge existing paradigms and drive the development of novel products (Yan and Guan, 2018; Haase and Hanel, 2023).
Beyond knowledge complementation, GAI directly generates creative ideas (Roesler et al., 2024). While creativity was traditionally considered a uniquely human trait (Amabile, 2020), recent studies challenge this notion (Haase and Hanel, 2023). Some research suggests that GAI’s creative output surpasses human creativity (Hermann and Puntoni, 2024), whereas others find no significant difference (Noy and Zhang, 2023) or assert that human ingenuity remains superior (Koivisto and Grassini, 2023). Despite these mixed findings, there is a consensus that GAI possesses a form of creativity distinct from traditional AI, generating innovative solutions previously unforeseeable by employees (Hermann and Puntoni, 2024). When integrated with predictive modeling, trust in GAI aids in identifying flaws in product prototypes, reducing ineffective iterations in traditional R&D processes (Wamba et al., 2023).
“GAI limitation” mechanism
Exploratory innovation is inherently risky and uncertain, with core tasks involving risk mitigation to prevent failure (Yan and Guan, 2018). However, GAI, often described as a “black box” due to its dynamic, opaque, and unpredictable nature (Anthony et al., 2023), lacks human intuition and subconscious decision-making (Jarrahi, 2018; Magni et al., 2024). These limitations are particularly pronounced in exploratory innovation, which involves emotional, social, and ethical dimensions. Trust, in this context, entails vulnerability and exposure to uncontrollable risks (Lewis and Marsh, 2022). Consequently, rather than mitigating risk, trust in GAI introduces new risks and uncertainties, resulting in a misfit with task requirements. These risks include: (1) ethical risks, such as algorithmic bias, security vulnerabilities, data discrimination, misuse, botshit, and unclear accountability (Wach et al., 2023); (2) overreliance, reducing employees’ independent decision-making and creative problem-solving abilities (Keding and Meissner, 2021; Eke, 2023); (3) cognitive biases, including the tendency to accept GAI’s first suggestion due to the “Einstellung effect”, limiting alternative solutions (Doshi and Hauser, 2024); and (4) feasibility neglect, where GAI-generated ideas may overlook resource constraints, technical limitations, and market realities, hindering practical implementation. As trust in GAI increases, these risks escalate, further misfitting GAI’s technical functions with the risk management requirements of exploratory innovation tasks.
In summary, trust in GAI amplifies the benefits of the “GAI empowerment” mechanism. However, as trust reaches excessive levels, the “GAI limitation” mechanism intensifies, ultimately outweighing its advantages. Evidence suggests that over-trust leads to overreliance and misuse of GAI, while under-trust prevents employees from fully leveraging its potential (Kaplan et al., 2023). The optimal level of trust strikes a balance between these competing forces, allowing employees to maximize GAI’s contributions to exploratory innovation. As such, we hypothesize that:
H2: There is a curvilinear relationship between trust in GAI and exploratory innovation, such that trust in GAI initially has a positive effect on exploratory innovation but this positive influence flattens out and then declines at a high level of trust in GAI.
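The curvilinear form hypothesized in H2 can be written as a standard quadratic specification. The notation below is ours, introduced for illustration rather than drawn from the paper’s tables:

```latex
% Quadratic specification for the inverted U-shape in H2:
% EI = exploratory innovation, T = (mean-centered) trust in GAI,
% C = vector of control variables.
\begin{equation*}
  EI_i = \beta_0 + \beta_1 T_i + \beta_2 T_i^2 + \gamma^{\top} C_i + \varepsilon_i,
  \qquad \beta_1 > 0,\; \beta_2 < 0 .
\end{equation*}
% The curve peaks at the turning point
\begin{equation*}
  T^{*} = -\frac{\beta_1}{2\beta_2},
\end{equation*}
% so any moderator that amplifies the positive ("GAI empowerment")
% mechanism -- i.e., raises beta_1 while leaving beta_2 unchanged --
% shifts T^{*} to the right.
```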
Trust in GAI and ambidextrous innovation
From the perspective of situational ambidexterity, exploratory and exploitative innovation are not mutually exclusive but rather complementary and mutually reinforcing (Harmancioglu et al., 2020). This complementarity-based perspective provides a comprehensive lens for analyzing how trust in GAI influences ambidextrous innovation for two reasons: On one hand, trust in GAI enables the system to handle repetitive, data-intensive tasks, allowing employees to concentrate on addressing existing market demands while simultaneously identifying latent opportunities. This division of labor fosters the concurrent advancement of exploitative and exploratory innovation (Tacheva and Ramasubramanian, 2024). On the other hand, trust in GAI facilitates access to external knowledge and resources, helping to mitigate the inherent tensions between exploratory and exploitative innovation (Fahnenstich and Rieger, 2024). Scholars such as Berg (2016) and Li et al. (2023) argue that organizations or individuals capable of balancing both exploratory and exploitative innovation are better positioned to achieve synergistic outcomes and improved performance.
As discussed above, trust in GAI positively influences exploitative innovation, while its relationship with exploratory innovation follows an inverted U-shaped pattern. Our research adopts the measurement framework proposed by Lubatkin (2006), which assesses the complementarity of exploitative and exploratory innovation by summing their respective dimensions (see Fig. 3). Before reaching the turning point of the exploratory innovation curve (point a), the “GAI empowerment” mechanism drives exploratory innovation in parallel with the growth of exploitative innovation, reinforcing their complementarity. GAI facilitates exploratory innovation when exploitative innovation is robust, as accumulated knowledge and experience provide resources for exploration (Li et al., 2023). Conversely, in the presence of extensive exploratory innovation, GAI enhances exploitative innovation by accelerating the commercialization of novel ideas and technologies (Harmancioglu et al., 2020). This interdependence creates a dynamic cycle that sustains both short-term profitability and long-term growth (Berraies et al., 2021).
Beyond point a, the continued linear growth in exploitative innovation partly offsets the nonlinear decline in exploratory innovation caused by the “GAI limitation” mechanism. Nevertheless, as the negative effects of the “GAI limitation” mechanism intensify, this compensatory effect gradually weakens, eventually reaching an optimal level at point b. Once trust in GAI exceeds point b, the complementarity of exploitative and exploratory innovation decreases (Chen et al., 2015). Over-trust leads to overreliance on GAI’s recommendations, causing narrowed resource allocation and neglecting alternative exploratory opportunities (Parasuraman and Manzey, 2010). Furthermore, over-trust in GAI undermines employees’ independent judgment and critical thinking, limiting their ability to address the nuanced challenges inherent in managing exploitative and exploratory innovation (Keding and Meissner, 2021). These factors disrupt the transfer of knowledge and resources between the two innovation types. Hence, we hypothesize that:
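The claim that the complementarity curve peaks later than the exploratory curve (point b lying beyond point a) follows directly from summing the two hypothesized functional forms. A sketch under our own notation, assuming an approximately linear exploitative effect:

```latex
% Hypothesized forms (T = mean-centered trust in GAI):
\begin{align*}
  EI^{exploit} &= \alpha_1 T, & \alpha_1 &> 0, \\
  EI^{explore} &= \beta_1 T + \beta_2 T^2, & \beta_1 &> 0,\; \beta_2 < 0 .
\end{align*}
% Their sum (the Lubatkin-style complementarity index) is still
% an inverted U, but with a later peak:
\begin{equation*}
  CA = (\alpha_1 + \beta_1) T + \beta_2 T^2,
  \qquad
  T^{*}_{CA} = -\frac{\alpha_1 + \beta_1}{2\beta_2}
             = \underbrace{-\frac{\beta_1}{2\beta_2}}_{\text{point } a}
               \underbrace{\,-\,\frac{\alpha_1}{2\beta_2}}_{>\,0},
\end{equation*}
% since alpha_1 > 0 and beta_2 < 0, the complementarity peak
% (point b) lies to the right of point a.
```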
H3: There is a curvilinear relationship between trust in GAI and the complementarity of exploitative and exploratory innovation, such that trust in GAI initially has a positive effect on the complementarity of exploitative and exploratory innovation but this positive influence flattens out and then declines at a high level of trust in GAI.
The moderating role of employee-GAI fit and employee-innovation task fit
From the perspective of TTF theory, the effectiveness of GAI depends on both its inherent characteristics and its fit with the capabilities of its users (Ammenwerth et al., 2006; Liu et al., 2011). The integration of GAI into the workplace requires employees to develop new capabilities (Kanbach et al., 2024), including (1) realization capability, which refers to understanding GAI; (2) utilization capability, which involves interpreting, explaining, and applying GAI-generated outputs; and (3) maintenance capability, which pertains to managing, regulating, and adapting GAI in dynamic business environments (Chowdhury et al., 2023). When employee-GAI fit is high, the system seamlessly integrates into innovation practices, minimizing friction and resistance (Chowdhury et al., 2023). This fit enables trust in GAI to be translated more effectively into work performance, thereby encouraging active engagement with GAI for task completion (Lichtenthaler, 2019). Greater engagement strengthens GAI’s technical functions in analyzing market feedback and updating knowledge, making it fit more closely with the requirements of exploitative innovation tasks (Galati and Bigliardi, 2017). By contrast, low employee-GAI fit creates barriers, as the system may conflict with employees’ established work habits and workflows. Hence, we hypothesize that:
H4a: The positive relationship between trust in GAI and exploitative innovation is strengthened by employee-GAI fit.
We propose that employee-GAI fit amplifies the “GAI empowerment” mechanism of trust in GAI on exploratory innovation while leaving the “GAI limitation” mechanism unaffected. This moderating effect shifts the turning point of the inverted U-shaped relationship to the right (see Fig. 2b). A higher level of employee-GAI fit enhances employees’ capabilities to leverage GAI for exploring new domains and acquiring novel knowledge (Eapen et al., 2023; Chowdhury et al., 2023). Such engagement promotes the discovery of innovative solutions that traditional approaches might overlook (Tran and Murphy, 2023). Moreover, a strong employee-GAI fit facilitates seamless collaboration, reduces cognitive load, and allows employees to focus on creative tasks rather than learning to operate GAI. Nevertheless, the “GAI limitation” mechanism is primarily influenced by technological constraints and employees’ cognitive biases, rather than improvements in employees’ GAI capabilities. Hence, we posit the following hypothesis:
H4b: Employee-GAI fit moderates the inverted U-shaped relationship between trust in GAI and exploratory innovation by shifting the turning point to the right.
We propose that employee-innovation task fit strengthens the positive effect of trust in GAI on exploitative innovation. The rise of GAI is reshaping innovation tasks, requiring a redefinition of roles and a recalibration of employees’ responsibilities (Chowdhury et al., 2023). Within exploitative innovation tasks, employees shift their focus from traditional data collection to integrating their task competencies with GAI-driven market insights (Haefner et al., 2021; Haase and Hanel, 2023). When employees’ expertise, experience, and skills fit with task requirements, they can effectively incorporate GAI-generated outputs into planning and execution. This fit enables them to address challenges efficiently, thus advancing the exploitative innovation process (Jia et al., 2024). In addition, a strong employee-innovation task fit enhances self-efficacy beliefs (Hua and Liu, 2017), which are closely related to improved innovation outcomes (Nieves and Quintana, 2018). Employees who are confident in using GAI for exploitative tasks are better equipped to adapt to market fluctuations. Conversely, when employees’ competencies misfit task requirements, exploitative innovation progress is hindered. As such, we hypothesize that:
H5a: There is a positive moderating effect of employee-innovation task fit on the relationship between trust in GAI and exploitative innovation.
Employee-innovation task fit also moderates the inverted U-shaped relationship between trust in GAI and exploratory innovation by amplifying the “GAI empowerment” mechanism while suppressing the “GAI limitation” mechanism (see Fig. 2c). Within exploratory innovation tasks, employees transition from relying solely on intuition and experience to forming a creative synergy with GAI (Vinchon et al., 2023; Haase and Hanel, 2023). GAI’s technical functions in knowledge search and creative idea generation streamline task completion and enhance the quality of innovative outputs (Huang et al., 2019). When employee-innovation task fit is high, employees effectively integrate their expertise, intuition, and emotional engagement with GAI-generated insights to identify novel opportunities and explore new directions (Wilson and Daugherty, 2019).
Additionally, employee-innovation task fit mitigates the risks associated with GAI by enabling employees to take responsibility for tasks beyond GAI’s technical functions, such as emotional engagement, ethical considerations, and adapting to dynamic situations (Jia et al., 2024; Yin et al., 2024). Employees’ unique experience, emotional intelligence, and abstract reasoning allow them to navigate these complexities, fostering originality and counteracting the constraints of GAI (Berraies et al., 2021). Moreover, employees with strong ___domain expertise and experience rely less on GAI, reducing dependency and alleviating concerns that overreliance may stifle innovation (Saßmannshausen et al., 2021). As such, we hypothesize that:
H5b: Employee-innovation task fit negatively moderates the inverted U-shaped relationship between trust in GAI and exploratory innovation.
Methodology
Sample and data collection
China is home to over 120 firms developing large language models, such as Alibaba Cloud’s Tongyi Qianwen, Baidu’s ERNIE Bot, and ByteDance’s Doubao, providing an optimal empirical setting for our survey. The questionnaire design followed a systematic three-step approach (Wei et al., 2022). First, we developed the questionnaire based on a comprehensive literature review and expert interviews. Next, we refined the questions through in-depth discussions with three employees from Internet firms. A pilot study involving 43 employees further improved the questionnaire’s clarity and validity.
To minimize biases stemming from regional economic disparities, our survey targeted provinces and municipalities across three tiers of competitiveness in China’s new-generation AI technology industry. This included first-tier regions such as Beijing, Guangdong, and Shandong; second-tier regions like Liaoning, Anhui, and Tianjin; and third-tier regions comprising Jilin, Heilongjiang, and Henan. Employees working in R&D, design, and marketing departments in the Internet, finance, manufacturing, and telecommunication industries were invited to participate, as they are more digitally literate and have direct access to GAI technology.
The survey employed a mixed-mode approach: (1) on-site distribution, where questionnaires were handed out during MBA classes and via MBA students to their colleagues, and (2) online distribution, where targeted respondents were invited directly or through departmental heads, with additional support from data collection agencies (Credamo). The survey was conducted from November 2023 to May 2024.
To mitigate common method bias (CMB), the questionnaire was administered in two waves, with an interval of more than two weeks. The first wave covered control variables, trust in GAI, employee-innovation task fit, and employee-GAI fit, while the second wave measured exploratory and exploitative innovation. Each questionnaire was assigned a unique sample identifier to ensure accurate matching across both time points. At the beginning of the survey, an informed consent statement emphasized voluntary participation. A total of 443 responses were collected. After excluding incomplete responses due to respondents changing jobs or resigning mid-survey, and those exhibiting discernible response patterns (such as S, I, or Z types), 302 valid questionnaires remained. The effective response rate was 68.17%. Table 1 presents the descriptive statistics and demographic details of the respondents.
Measures
All scales were based on existing literature (see Table 2). We assessed all measures using a seven-point Likert scale, ranging from “strongly disagree” to “strongly agree”.
Following Keding and Meissner (2021), we adapted four items to measure trust in GAI. Consistent with Mom (2007), we used six items for exploitative innovation and five for exploratory innovation. There are two measurement approaches for the complementarity of exploitative and exploratory innovation: multiplying the two dimensions (He et al., 2004) or summing them (Lubatkin, 2006). Lubatkin (2006) empirically compared these two approaches and found that the summation method yielded the best results, with no significant information loss when the dimensions are combined into a single index. We therefore adopted the summation approach. A four-item scale was adapted from Kim and Gatling (2019) to measure employee-GAI fit. Employee-innovation task fit was evaluated through four items based on Saks and Ashforth (1997) and Hua and Liu (2017).
It has been shown that employee innovation may be affected by variables at both the individual and organizational levels (Ng and Yam, 2019). Thus, this study employed employee gender, age, and education, along with firm size, firm ownership, and firm age, as control variables. Employee gender and firm ownership were coded as dummy variables, while the other control variables were coded as ordered categories ranging from 1 to N.
Reliability and validity
Data analysis was conducted using SPSS 23.0 and AMOS 26.0. As Table 2 shows, the Cronbach's α values for the key variables exceeded 0.70, indicating high reliability. All factor loadings were greater than 0.6, and the average variance extracted (AVE) for each key variable was greater than 0.5, indicating good convergent validity. The square roots of the AVE values were all greater than the corresponding off-diagonal correlations (Fornell and Larcker, 1981), indicating sufficient discriminant validity (see Table 4). Furthermore, with residual correlations included to address shared measurement error, the five-factor model fit the data well (χ2/df = 2.595, RMSEA = 0.073, CFI = 0.917, TLI = 0.901, SRMR = 0.050).
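The reliability and convergent-validity checks reported above can be sketched as follows; the loadings are illustrative values for a hypothetical four-item construct, not the study's actual estimates:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    # items: respondents x items matrix.
    # alpha = k/(k-1) * (1 - sum of item variances / variance of the scale total)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def ave(loadings: np.ndarray) -> float:
    # Average variance extracted from standardized loadings: mean of squared loadings.
    return float(np.mean(loadings ** 2))

# Illustrative standardized loadings (assumed, not from the paper).
loadings = np.array([0.72, 0.78, 0.81, 0.75])
print(ave(loadings))           # > 0.5 indicates adequate convergent validity
print(np.sqrt(ave(loadings)))  # compared with inter-construct correlations (Fornell-Larcker)
```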
Common method bias
Harman’s single-factor test indicated that the first unrotated factor explained 32.981% of the total variance, below the critical threshold of 50% (Podsakoff et al., 2003). Moreover, the single-factor model fit the data poorly (χ2/df = 10.857, RMSEA = 0.181, CFI = 0.442, TLI = 0.389, SRMR = 0.152) compared with the five-factor model. It can therefore be concluded that CMB was not a serious problem.
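One common way to run Harman's single-factor test is to extract unrotated factors from all items and inspect the variance explained by the first factor. A minimal sketch with seeded, simulated items (illustrative data only; the paper's test used the actual survey items):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 300 respondents on 12 items loading on three distinct constructs
# (4 items each), plus item-level noise. Illustrative data only.
n = 300
factors = rng.normal(size=(n, 3))
loadings = np.kron(np.eye(3), np.ones((1, 4)))  # block loading pattern
items = factors @ loadings + rng.normal(scale=1.0, size=(n, 12))

# Unrotated principal factors of the item correlation matrix.
corr = np.corrcoef(items, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
first_factor_share = eigvals[0] / eigvals.sum()

# Harman's criterion: CMB is flagged when a single factor explains > 50% of variance.
print(f"first factor explains {first_factor_share:.1%}")
```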
Analysis and results
Table 3 presents descriptive statistics and correlations for the variables in the model, and Table 4 provides the results of the tests on exploitative innovation. To mitigate multicollinearity, the variables included in the interaction terms were mean-centered. M2 shows that the coefficient of trust in GAI on exploitative innovation was positive and significant (β = 0.271, p < 0.001), while M3 shows that the coefficient of the squared term of trust in GAI was non-significant (β = −0.048, p = 0.231 > 0.05). These results indicate that trust in GAI positively influences exploitative innovation, supporting H1. To visualize this relationship, a scatter plot was created with trust in GAI on the horizontal axis and exploitative innovation on the vertical axis (Fig. 4). Most data points fall within the 95% confidence interval, the linear fit was statistically significant, and the relationship was nearly linear, further supporting H1.
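The mean-centering step can be sketched as follows, using seeded simulated scores to show why centering reduces the collinearity between main effects and their product term (illustrative data; the variable names are assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 302  # matches the reported sample size

trust = rng.normal(6.0, 1.0, n)  # raw 7-point-scale-like scores (illustrative)
fit = rng.normal(5.5, 1.0, n)

# Mean-center before forming the product term.
trust_c = trust - trust.mean()
fit_c = fit - fit.mean()

X_raw = np.column_stack([trust, fit, trust * fit])
X_cen = np.column_stack([trust_c, fit_c, trust_c * fit_c])

def max_abs_corr(X):
    # Largest absolute pairwise correlation among the predictors.
    c = np.corrcoef(X, rowvar=False)
    return np.abs(c[np.triu_indices_from(c, k=1)]).max()

# The uncentered product correlates strongly with its components;
# centering removes most of this nonessential collinearity.
print(max_abs_corr(X_raw), max_abs_corr(X_cen))
```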
The coefficient of the interaction between trust in GAI and employee-GAI fit in M5 was positive and significant (β = 0.278, p < 0.001), supporting H4a. Figure 5 illustrates that when employee-GAI fit was high, the relationship between trust in GAI and exploitative innovation was stronger, as evidenced by a steeper slope, than when fit was low. This suggests that as employee-GAI fit increases, the positive impact of trust in GAI on exploitative innovation intensifies; that is, employee-GAI fit enhances the positive effect of trust in GAI on exploitative innovation. The moderating effect of employee-innovation task fit on the relationship between trust in GAI and exploitative innovation is reported in M7. The coefficient of the interaction between trust in GAI and employee-innovation task fit was non-significant (β = 0.070, p = 0.124 > 0.05), so H5a was not supported. This may be because GAI completes exploitative innovation tasks with a high degree of automation and therefore requires less updating of employees' expertise, skills, and experience.
The regression equation of trust in GAI on exploratory innovation was specified as EI2 = β0 + β1 TAI + β2 TAI². For the main effect to take an inverted U shape, three conditions must be satisfied (Haans et al., 2016): (1) the coefficient of the squared term of TAI is significantly negative; (2) the slope is significantly positive at the minimum value of TAI and significantly negative at the maximum value; and (3) the turning point lies within the valid range of TAI. We standardized the variables included in the interaction terms; the results are shown in Table 5. As shown in M9, the coefficient of the squared term of TAI on EI2 was significantly negative (β2 = −0.341, p < 0.001). The slope of the curve is k = 0.185 − 0.682 TAI, where TAI ranged from −2.359 to 2.060. At TAI = −2.359 the slope was significantly positive, and at TAI = 2.060 it was significantly negative. The turning point, −β1/(2β2) = 0.271, lies within the valid range of TAI (see Fig. 6a). Thus, H2 was supported.
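The three conditions can be checked numerically from the reported M9 coefficients; note that the significance tests for the end-point slopes require the coefficient covariance matrix, which this sketch does not reproduce:

```python
# Inverted-U check following Haans et al. (2016), using the M9 coefficients:
# EI2 = b0 + b1*TAI + b2*TAI^2
b1, b2 = 0.185, -0.341
tai_min, tai_max = -2.359, 2.060

def slope(tai):
    # dEI2/dTAI = b1 + 2*b2*TAI
    return b1 + 2 * b2 * tai

turning_point = -b1 / (2 * b2)

# Condition 1: negative squared term; 2: slope flips sign over the data range;
# 3: the turning point lies inside the observed range of TAI.
print(b2 < 0, slope(tai_min) > 0 > slope(tai_max),
      tai_min < turning_point < tai_max, round(turning_point, 3))
```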
Fig. 6 How employee-GAI fit and employee-innovation task fit moderate the relationship between trust in GAI and exploratory innovation: a main effect of trust in GAI; b employee-GAI fit as a moderator; and c employee-innovation task fit as a moderator. a Positive and negative mechanisms coexist, and their combined effect results in an inverted U-shaped curve with a turning point at 0.271. b A new positive mechanism emerges, shifting the turning point of the inverted U-shaped curve to 0.600. c New positive and negative mechanisms emerge, shifting the turning point of the inverted U-shaped curve to 0.984 and flattening the curve.
Similarly, the regression equation of trust in GAI on the complementarity of exploitative and exploratory innovation was specified as EI1 + EI2 = δ0 + δ1 TAI + δ2 TAI². M14 revealed a significantly negative coefficient for the squared term of TAI (δ2 = −0.355, p < 0.001). The slope of the curve is k = 0.482 − 0.710 TAI, with k significantly positive at TAI = −2.359 and significantly negative at TAI = 2.060. The calculated turning point, TAI = 0.679, lies within the valid range of TAI, supporting H3.
Regarding the moderating effect of employee-GAI fit on the relationship between trust in GAI and exploratory innovation, the regression equation was specified as EI2 = θ0 + θ1 TAI + θ2 TAI² + θ3 EGF + θ4 TAI * EGF + e. According to M10, θ1 = 0.166 (p = 0.003 < 0.01), θ2 = −0.367 (p < 0.001), and θ4 = 0.275 (p < 0.001). The horizontal coordinate of the turning point is −(θ1 + θ4 EGF)/(2θ2). At high levels of EGF, the turning point shifted to the right, to 0.600. The coefficient of the interaction between EGF and the squared term of TAI was positive but non-significant (0.087, p = 0.080 > 0.05). These results suggest that employee-GAI fit shifted the turning point of the curve through its influence on the “GAI empowerment” mechanism, without affecting the “GAI limitation” mechanism (see Fig. 6b). Therefore, H4b was supported.
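The rightward shift of the turning point can be reproduced from the reported M10 coefficients, treating high and low EGF as ±1 SD of the standardized moderator:

```python
# Turning point of the moderated inverted U (M10 coefficients):
# EI2 = t0 + t1*TAI + t2*TAI^2 + t3*EGF + t4*TAI*EGF
t1, t2, t4 = 0.166, -0.367, 0.275

def turning_point(egf):
    # Setting dEI2/dTAI = t1 + 2*t2*TAI + t4*EGF = 0 gives the turning point.
    return -(t1 + t4 * egf) / (2 * t2)

# High/low EGF = +/- 1 SD of the standardized moderator.
high, low = turning_point(1.0), turning_point(-1.0)
print(high, low)  # high is ~0.60, matching the reported 0.600 up to coefficient rounding
```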
Likewise, the regression equation was specified as EI2 = λ0 + λ1 TAI + λ2 TAI² + λ3 TAI * ETF + λ4 TAI² * ETF + λ5 ETF + e. According to M12, λ3 = 0.213 (p < 0.001) and λ4 = 0.132 (p < 0.01), showing that ETF significantly moderated the relationship between TAI and EI2. The mean of ETF plus or minus one standard deviation was used as the grouping criterion. Under high-ETF conditions, the simple slope at the low point of TAI was k = 1.285 > 0, and the simple slope at the high point of TAI was k = −0.328 < 0. Setting the first derivative with respect to TAI to zero yields the turning point TAI* = −(λ1 + λ3 ETF)/(2(λ2 + λ4 ETF)), which equals 0.984 for high ETF and −0.053 for low ETF. The turning point shifted to the right because λ1λ4 − λ2λ3 > 0 (Haans et al., 2016). These results suggest that employee-innovation task fit strengthened the positive “GAI empowerment” mechanism by shifting the turning point to the right and weakened the “GAI limitation” mechanism by flattening the curve (see Fig. 6c). H5b was therefore supported.
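The turning-point formula and Haans et al.'s shift-direction rule can be verified numerically; λ1 and λ2 are not reported in the text, so the sketch assumes illustrative values chosen to be consistent with the reported turning points of 0.984 and −0.053:

```python
# Turning point when the moderator interacts with both TAI and TAI^2 (M12):
# TAI* = -(l1 + l3*ETF) / (2*(l2 + l4*ETF))   (Haans et al., 2016)
l3, l4 = 0.213, 0.132   # coefficients reported for M12
l1, l2 = 0.165, -0.324  # illustrative values (not reported in the text)

def turning_point(etf):
    return -(l1 + l3 * etf) / (2 * (l2 + l4 * etf))

# Haans et al.'s sign rule: the turning point moves right as the moderator
# increases when l1*l4 - l2*l3 > 0.
shifts_right = l1 * l4 - l2 * l3 > 0

print(round(turning_point(1.0), 3), round(turning_point(-1.0), 3), shifts_right)
```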
Discussion
Conclusion
This paper investigates how trust in GAI influences employees’ exploitative and exploratory innovation. Grounded in TTF theory, we constructed a theoretical model and empirically tested it using survey data from 302 employees in China, yielding several significant findings.
First, aligning with prior research indicating that trust in GAI positively impacts productivity, business performance, and human-AI team effectiveness while potentially leading to negative outcomes such as technology obsolescence, time loss, misuse, security violations, and disenfranchisement (Khoa et al., 2023; Feng et al., 2024), we reveal a more nuanced “double-edged sword” effect of trust in GAI on innovation. Johnson et al. (2022) highlighted that GAI’s effects vary because exploitative and exploratory innovation require different sets of functions, making it crucial to examine their distinct influences. Further empirical studies by Wael Al-Khatib (2023) and Singh et al. (2024) indicated a positive correlation between GAI adoption and ambidextrous innovation, yet they did not address the risks and challenges posed by GAI in various innovation contexts nor explore the complementarity of exploitative and exploratory innovation. These research gaps motivate our study.
(1) Our analysis reveals a positive relationship between trust in GAI and exploitative innovation. According to TTF theory, this effect can be attributed to the high certainty of exploitative innovation tasks (Jansen et al., 2006). Trust in GAI enables employees to utilize GAI efficiently for repetitive and data-intensive activities, such as updating knowledge and tracking market feedback, which fits with tasks aimed at enhancing existing knowledge and products. Consistent with our findings, prior research has interpreted AI in innovation through the lens of dynamic capabilities. For example, Dong and Fan (2024) and Kumar et al. (2024) found that AI capability positively impacts exploitative innovation, and Wael Al-Khatib et al. (2023) demonstrated that GAI capability enhances exploitative innovation in digital supply chains. These findings provide indirect support for our conclusions.
(2) Our study reveals a curvilinear relationship between trust in GAI and exploratory innovation. This relationship is driven by the linear positive effects of the “GAI empowerment” mechanism and the curvilinear negative effects of the “GAI limitation” mechanism. Based on TTF theory, GAI’s technical functions in knowledge search and creative idea generation enhance idea uniqueness and facilitate knowledge integration across domains, which fits well with innovation tasks focused on expanding new knowledge and products. However, these functions have limitations when addressing tasks requiring emotional complexity, intricate social dynamics, and ethical judgments. Consequently, over-trust in GAI may lead to idea homogeneity, overreliance, feasibility neglect, and moral risks, making it challenging to fit risk-reduction tasks. It should be noted that studies on human-AI collaboration in innovation often do not distinguish explicitly between exploitative and exploratory innovation. Instead, discussions tend to focus on exploratory innovation, generally assuming that GAI is positively associated with innovation. For instance, Eapen et al. (2023) identified five mechanisms through which GAI promotes high-quality innovation, while Bilgram and Laarmann (2023) emphasized GAI’s critical role throughout the innovation process. In contrast, Sætra (2023) cautioned against GAI’s risks in complex problem-solving, and meta-analyses by Hancock et al. (2011) and Kaplan et al. (2023) revealed that both over-trust and under-trust in AI diminish the value of human-AI interactions. Our study integrates these perspectives, demonstrating that trust in GAI simultaneously facilitates and constrains exploratory innovation through its empowerment and limitation mechanisms.
(3) This study identifies an inverted U-shaped relationship between trust in GAI and the complementarity of exploitative and exploratory innovation. Extending the contextual ambidexterity framework, we find that the concurrent rise of exploitative innovation and the “GAI empowerment” mechanism in exploratory innovation enhances their complementarity. Notably, our findings introduce a novel insight: when the linear growth in exploitative innovation fails to offset the “GAI limitation” mechanism in exploratory innovation, it constrains their complementarity. This discovery contributes to ongoing discussions on the challenges of achieving complementarity of exploitative and exploratory innovation. Prior research has largely attributed ambidexterity to organizational and structural factors, such as innovation climate, organizational trust, structural differentiation, and leadership (Berraies and Zine El Abidine, 2019). Our study advances this literature by identifying trust in GAI as a critical antecedent variable influencing innovation complementarity.
Second, while Vincent (2021) highlighted the moderating role of individual and task characteristics in human-AI collaboration, this study extends this understanding in the context of GAI and innovation: (1) For the relationship between trust in GAI and exploitative innovation, employee-GAI fit strengthens the positive impact, whereas employee-innovation task fit does not. This may be because exploitative innovation relies more on analyzing structured market data, which fits with the strengths of GAI. Thus, the role of trust in GAI in exploitative innovation depends more on employees’ capabilities to utilize GAI effectively than on their skills or experience. (2) For the relationship between trust in GAI and exploratory innovation, employee-GAI fit amplifies the “GAI empowerment” mechanism. In contrast, employee-innovation task fit enhances the “GAI empowerment” mechanism and mitigates the “GAI limitation” mechanism. Existing research presents two opposing views on the skills and expertise necessary for effective GAI utilization. Jia et al. (2024) suggested that GAI fosters creativity, particularly among higher-skilled employees, whereas Noy and Zhang (2023) indicated that GAI provides greater benefits to lower-skilled employees. Kanbach et al. (2024) argued that GAI democratizes technology and knowledge access, reducing the need for expertise in innovation. Our research reconciles these views by demonstrating that GAI’s accessibility and usability enable innovation across skill levels. In fact, the role of trust in GAI in exploratory innovation is not solely determined by employees’ skills but also by the fit among employees’ GAI capabilities, task requirements, and technical functions.
Theoretical contributions
Our study contributes to the literature in three key aspects. First, our findings offer a thorough insight into how trust in GAI shapes exploitative and exploratory innovation, thereby enriching existing knowledge. Although studies on GAI in the workplace are increasing, the significance of trust in GAI as an innovation driver remains underexplored. Recent calls for research include Usai et al. (2021), who urged further investigation into the factors determining efficient technology utilization from cognitive and managerial perspectives; Mariani and Dwivedi (2024), who emphasized the necessity to delve into the causal link between GAI and innovation outcomes; and Baabdullah (2024), who highlighted the need for empirical studies on GAI and innovation. In response, our research provides robust empirical evidence on how trust in GAI influences both exploitative and exploratory innovation. More importantly, this study uncovers potential constraints on exploratory innovation under high trust conditions. Building on these insights, it advances the research on the positive relationship between GAI adoption and ambidextrous innovation in three significant ways (Singh et al., 2024): by examining both the positive and negative effects of GAI, by differentiating the roles and outcomes of GAI in exploitative and exploratory innovation, and by analyzing GAI’s influence on the complementarity between these two types of innovation.
Although existing studies on human-AI collaboration and GAI ethics address the multifaceted roles of trust in GAI in innovation, a deeper and more nuanced understanding remains lacking. We contribute to the literature by empirically showing that trust in GAI facilitates employees in refining existing products. Notably, we uncover an inverted U-shaped relationship between trust in GAI and exploratory innovation, suggesting that the relationship is more complicated than previously assumed. This novel finding demonstrates a “double-edged sword” effect of trust in GAI on innovation and helps reconcile conflicting perspectives in the literature. In doing so, we enrich scholarly discussion on GAI’s dual role as a facilitator and constraint in innovation.
Second, we extend the literature by examining the moderating role of employee-GAI fit and employee-innovation task fit. By integrating technology, individual, and task characteristics, our research offers a holistic understanding of how trust in GAI shapes innovation. Prior studies suggest that the benefits of GAI require employees to develop the necessary capabilities and refine their task competencies to translate insights into tangible innovations (Jia et al., 2024; Mariani and Dwivedi, 2024). There is limited understanding of how enhancing employees’ capabilities fosters GAI-driven innovation. Our findings identify two crucial factors influencing employee-GAI collaboration in innovation: employee-GAI fit and employee-innovation task fit. We confirm the strategic value of these fits in leveraging trust in GAI for innovation and elucidate their differentiated roles in exploitative versus exploratory innovation. These insights underscore that harnessing GAI for innovation is a sophisticated process, contingent on employees’ GAI capabilities to effectively deploy the technology alongside its technical functions and on their expertise, experience, and skills that fit with innovation task requirements.
Third, this research expands the application and depth of TTF theory. Traditionally employed to examine the interplay among tasks, technology, user responses, and performance (Howard and Rose, 2018), TTF theory is now extended to the ___domain of GAI technology and innovation behavior. This extension represents a significant advancement in applying TTF theory to emerging technologies. Specifically, we refine the theory’s fit mechanism by analyzing how trust in GAI influences ambidextrous innovation. Our findings reveal not only the positive effects of good fit but also the negative consequences of poor fit on innovation. This bidirectional perspective provides a more comprehensive understanding of the fit mechanism within TTF theory.
Practical implications
These findings yield several management implications for innovation practices. First, fostering trust in GAI enables employees to harness its technological potential in supporting both exploitative and exploratory innovation. It is imperative, especially for those less familiar with GAI technology, to actively engage in collaborative practices and to apply GAI to innovation tasks that align well with its strengths (Kanbach et al., 2024). At the same time, employees should recognize that GAI is not a cure-all for innovation challenges, particularly in domains that require emotional intelligence and ethical judgment where human strengths prevail. Over-trust in GAI may lead to negative outcomes such as employee complacency, increased technological risks, and hindered exploratory innovation, while also impeding the resource sharing and knowledge exchange that are vital for complementing exploitative and exploratory innovation. This issue is particularly relevant for employees proficient in GAI, who may be prone to over-trusting the technology in their pursuit of novelty in innovation (Jia et al., 2024). Meanwhile, firms should integrate human-centric considerations into technology adoption by establishing robust digital infrastructure, providing emotional support, fostering an innovation-friendly culture, and ensuring a psychologically safe learning environment.
Second, successful GAI implementation for innovation further requires careful attention to employee-GAI fit and employee-innovation task fit. Employee-GAI fit highlights the importance of fitting employees’ GAI capabilities with GAI’s technical functions to maximize innovation potential. Firms should offer tailored training programs to address any gaps in employees’ GAI capabilities and provide targeted development opportunities. Similarly, employee-innovation task fit underscores the need to align employees’ task competencies with the requirements of innovation tasks. Employees should enhance their expertise in areas where GAI is less effective. Firms, in turn, should strategically assemble teams based on specialized skills and experience to optimize contributions.
Limitations and future research directions
First, although China serves as an appropriate context for examining the impact of trust in GAI on ambidextrous innovation, future studies should verify our findings in other economic and cultural settings to ensure broader applicability. Second, given that trust in GAI is inherently multidimensional, further studies should explore how its affective and cognitive dimensions distinctly shape employee innovation outcomes (Glikson and Woolley, 2020). Finally, future work should examine other individual and task characteristics, such as digital affinity and task complexity, that may uniquely influence the relationship between GAI and innovation (Vincent, 2021).
Data availability
The foundational data used in this study have been uploaded as supplementary files.
References
Akter S, Hossain MA, Sajib S et al. (2023) A framework for AI-powered service innovation capability: review and agenda for future research. Technovation 125:102768. https://doi.org/10.1016/j.technovation.2023.102768
Amabile TM (2020) Creativity, artificial intelligence, and a world of surprises. Acad Manag Discov. 6(2):351–354. https://doi.org/10.5465/amd.2019.0075
Ammenwerth E, Iller C, Mahler C (2006) IT-adoption and the interaction of task, technology and individuals: a fit framework and a case study. BMC Med Inf Decis Mak 6(1):1. https://doi.org/10.1186/1472-6947-6-3
Anthony C, Bechky AB, Fayard A (2023) “Collaborating” with AI: taking a system view to explore the future of work. Organ Sci 34(5):1672–1694. https://doi.org/10.1287/orsc.2022.1651
Arias-Pérez J, Vélez-Jaramillo J (2021) Understanding knowledge hiding under technological turbulence caused by artificial intelligence and robotics. J Knowl Manag 26(2):1476–1491. https://doi.org/10.1108/JKM-01-2021-0058
Baabdullah AM (2024) Generative conversational AI agent for managerial practices: the role of IQ dimensions, novelty seeking and ethical concerns. Technol Forecast Soc Chang 198:122951. https://doi.org/10.1016/j.techfore.2023.122951
Bachmann JT, Ohlies I, Flatten T (2021) Effects of entrepreneurial marketing on new ventures’ exploitative and exploratory innovation: the moderating role of competitive intensity and firm size. Ind Mark Manag 92:87–100. https://doi.org/10.1016/j.indmarman.2020.10.002
Banh L, Strobel G (2023) Generative artificial intelligence. Electron Mark. 33(1):63. https://doi.org/10.1007/s12525-023-00680-1
Berg JM (2016) Balancing on the creative highwire: Forecasting the success of novel ideas in organizations. Admin Sci Q 61(3):433–468. https://doi.org/10.1177/0001839216642211
Berraies S, Hamza KA, Chtioui R (2021) Distributed leadership and exploratory and exploitative innovations: mediating roles of tacit and explicit knowledge sharing and organizational trust. J Knowl Manag 25(5):1287–1318. https://doi.org/10.1108/JKM-04-2020-0311
Berraies S, Zine El Abidine S (2019) Do leadership styles promote ambidextrous innovation? Case of knowledge-intensive firms. J Knowl Manag 23(5):836–859. https://doi.org/10.1108/JKM-09-2018-0566
Bilgram V, Laarmann F (2023) Accelerating innovation with generative AI: AI-augmented digital prototyping and innovation methods. IEEE Eng Manag Rev 51(2):18–25. https://doi.org/10.1109/EMR.2023.3272799
Bouschery SG, Blazevic V, Piller FT (2023) Augmenting human innovation teams with artificial intelligence: exploring transformer-based language models. J Prod Innov Manag 40(1):139–153. https://doi.org/10.1111/jpim.12656
Chen H, Yaoa Y, Zhou H (2021) How does knowledge coupling affect exploratory and exploitative innovation? The chained mediation role of organisational memory and knowledge creation. Technol Anal Strat Manag 33(6):713–727. https://doi.org/10.1080/09537325.2020.1840543
Chen L, Zhang L, Zhao N (2015) Exploring the nonlinear relationship between challenge stressors and employee voice: the effects of leader-member exchange and organisation-based self-esteem. Pers Individ Dif 83:24–30. https://doi.org/10.1016/j.paid.2015.03.043
Choung H, David P, Ross A (2023) Trust and ethics in AI. AI Soc 38:733–745. https://doi.org/10.1007/s00146-022-01473-4
Chowdhury S, Dey P, Joel-Edgar S et al. (2023) Unlocking the value of artificial intelligence in human resource management through AI capability framework. Hum Resour Manag Rev 33:100899. https://doi.org/10.1016/j.hrmr.2022.100899
Dong W, Fan X (2024) Research on the influence mechanism of artificial intelligence capability on ambidextrous innovation. J Electr Syst 20(3):246–262
Doshi AR, Hauser OP (2024) Generative AI enhances individual creativity but reduces the collective diversity of novel content. Sci Adv 10(28):eadn5290. https://doi.org/10.1126/sciadv.adn5290
Eapen TT, Finkenstadt DJ, Folk J et al. (2023) How Generative AI can augment human creativity. Harv Bus Rev 101(5):56–64. https://doi.org/10.1108/JSBED-09-2023-508
Eke DO (2023) ChatGPT and the rise of generative AI: Threat to academic integrity? J Responsible Technol 13:100060. https://doi.org/10.1016/j.jrt.2023.100060
Fahnenstich H, Rieger T, Roesler E (2024) Trusting under risk - comparing human to AI decision support agents. Comput Hum Behav 153:108107. https://doi.org/10.1016/j.chb.2023.108107
Feng CM, Botha E, Pitt L (2024) From HAL to GenAI: Optimizing chatbot impacts with CARE. Bus Horiz 67:537–548. https://doi.org/10.1016/j.bushor.2024.04.012
Fornell C, Larcker DF (1981) Evaluating structural equation models with unobservable variables and measurement error. J Mark Res 18(1):39–50. https://doi.org/10.1177/002224378101800104
Galati F, Bigliardi B (2017) Does different NPD Project’s characteristics lead to the establishment of different NPD Networks? A knowledge perspective. Technol Anal Strat Manag 29(10):1196–1209. https://doi.org/10.1080/09537325.2016.1277581
Gillath O, Ai T, Branicky MS et al. (2021) Attachment and trust in artificial intelligence. Comput Hum Behav 115:106607. https://doi.org/10.1016/j.chb.2020.106607
Glikson E, Woolley AW (2020) Human trust in artificial intelligence: review of empirical research. Acad Manag Ann 14(1):627–660. https://doi.org/10.5465/annals.2018.0057
Goodhue DL, Thompson RL (1995) Task-technology fit and individual performance. MIS Q 19(2):213–236. https://doi.org/10.2307/249689
Haans RFJ, Pieters C, He Z (2016) Thinking about U: theorizing and testing U- and inverted U-shaped relationships in strategy research. Strateg Manag J 37(12):1177–1195. https://doi.org/10.1002/smj.2399
Haase J, Hanel PH (2023) Artificial muses: generative artificial intelligence chatbots have risen to human-level creativity. J Creat Behav 33(4):100066. https://doi.org/10.1016/j.yjoc.2023.100066
Haefner N, Wincent J, Parida V et al. (2021) Artificial intelligence and innovation management: a review, framework, and research agenda. Technol Forecast Soc Chang 162:120392. https://doi.org/10.1016/j.techfore.2020.120392
Hancock PA, Billings DR, Schaefer KE (2011) A meta-analysis of factors affecting trust in human-robot interaction. Hum Factors 53(5):517–527. https://doi.org/10.1177/0018720811417254
Harmancioglu N, Sääksjärvi M, Hultink EJ (2020) Cannibalize and combine? The impact of ambidextrous innovation on organizational outcomes under market competition. Ind Mark Manag 85:44–57. https://doi.org/10.1016/j.indmarman.2019.07.005
He ZL, Wong PK (2004) Exploration vs. exploitation: an empirical test of the ambidexterity hypothesis. Organ Sci 15(4):481–494. https://doi.org/10.1287/orsc.1040.0078
Hermann E, Puntoni S (2024) Artificial intelligence and consumer behavior: from predictive to generative AI. J Bus Res 180:114720. https://doi.org/10.1016/j.jbusres.2024.114720
Howard MC, Rose JC (2018) Refining and extending task-technology fit theory: creation of two task-technology fit scales and empirical clarification of the construct. Inf Manag 56:103134. https://doi.org/10.1016/j.im.2018.12.002
Hu P, Lu Y, Gong Y (2021) Dual humanness and trust in conversational AI: a person-centered approach. Comput Hum Behav 119:106727. https://doi.org/10.1016/j.chb.2021.106727
Hua Y, Liu AMM (2017) An investigation of person-culture fit and person-task fit on ICT adoption in the Hong Kong construction industry. Architect Eng Desig Manag. https://doi.org/10.1080/17452007.2017.1324399
Huang MH, Rust R, Maksimovic V (2019) The feeling economy: managing in the next generation of artificial intelligence (AI). Calif Manag Rev 61(4):43–65. https://doi.org/10.1177/0008125619863436
Hubschmid-Vierheilig E, Rohrer M, Mitsakis F (2020) Digital competence revolution and human resource development in the United Kingdom and Switzerland. Future HRD 1:53–91
Jansen JJP, Van Den Bosch FAJ, Volberda HW (2006) Exploratory innovation, exploitative innovation, and performance: effects of organizational antecedents and environmental moderators. Manag Sci 52(11):1661–1674. https://doi.org/10.1287/mnsc.1060.0576
Jarrahi MH (2018) Artificial intelligence and the future of work: human-AI symbiosis in organizational decision making. Bus Horiz 61(4):577–586. https://doi.org/10.1016/j.bushor.2018.03.007
Jia N, Luo X, Fang Z et al. (2024) When and how artificial intelligence augments employee creativity. Acad Manag J 67(1):5–32. https://doi.org/10.5465/amj.2022.0426
Johnson PC, Laurell C, Ots M et al. (2022) Digital innovation and the effects of artificial intelligence on firms’ research and development - Automation or augmentation, exploration or exploitation? Technol Forecast Soc Chang 179:121636. https://doi.org/10.1016/j.techfore.2022.121636
Kanbach DK, Heiduk L, Blueher G et al. (2024) The GenAI is out of the bottle: generative artificial intelligence from a business model innovation perspective. Rev Manag Sci 18(2):1189–1220. https://doi.org/10.1007/s11846-023-00696-z
Kanitz R, Gonzalez K, Briker R et al. (2023) Augmenting organizational change and strategy activities: leveraging generative artificial intelligence. J Appl Behav Sci 59:345–363. https://doi.org/10.1177/00218863231168974
Kaplan AD, Kessler TT, Brill JC et al. (2023) Trust in artificial intelligence: meta-analytic findings. Hum Factors 65:337–359. https://doi.org/10.1177/00187208211013988
Keding C, Meissner P (2021) Managerial overreliance on AI-augmented decision-making processes: how the use of AI-based advisory systems shapes choice behavior in R&D investment decisions. Technol Forecast Soc Chang 171:120970. https://doi.org/10.1016/j.techfore.2021.120970
Khoa DT, Gip HQ, Guchait P et al. (2023) Competition or collaboration for human-robot relationship: a critical reflection on future cobotics in hospitality. Int J Contemp Hosp Manag 35(4):2202–2215. https://doi.org/10.1108/IJCHM-04-2022-0434
Kim J, Gatling A (2019) Impact of employees’ job, organizational and technology fit on engagement and organizational citizenship behavior. J Hosp Tour Technol 10(4):323–338. https://doi.org/10.1108/JHTT-04-2018-0029
Koivisto M, Grassini S (2023) Best humans still outperform artificial intelligence in a creative divergent thinking task. Sci Rep 13(1):13601. https://doi.org/10.1038/s41598-023-40858-3
Kong H, Yin Z, Chon K et al. (2024) How does artificial intelligence (AI) enhance hospitality employee innovation? The roles of exploration, AI trust, and proactive personality. J Hosp Mark Manag 33(2):261–287. https://doi.org/10.1080/19368623.2023.2258116
Kumar V, Kumar S, Chatterjee S et al. (2024) Artificial intelligence (AI) capabilities and the R&D performance of organizations: the moderating role of environmental dynamism. IEEE Trans Eng Manag 71:11522–11532. https://doi.org/10.1109/TEM.2024.3423669
Lee JD, See KA (2004) Trust in automation: designing for appropriate reliance. Hum Factors 46(1):50–80. https://doi.org/10.1518/hfes.46.1.50_30392
Lewis PR, Marsh S (2022) What is it like to trust a rock? A functionalist perspective on trust and trustworthiness in artificial intelligence. Cognit Syst Res 72:33–49. https://doi.org/10.1016/j.cogsys.2021.11.001
Li PP, Liu H, Li Y et al. (2023) Exploration-exploitation duality with both tradeoff and synergy: the curvilinear interaction effects of learning modes on innovation types. Manag Organ Rev 19(3):498–532. https://doi.org/10.1017/mor.2022.49
Liang X, Guo G, Shu L et al. (2022) Investigating the double-edged sword effect of AI awareness on employee’s service innovative behavior. Tour Manag 92:104564. https://doi.org/10.1016/j.tourman.2022.104564
Lichtenthaler U (2019) Extremes of acceptance: employee attitudes toward artificial intelligence. J Bus Strateg 41(1):39–45. https://doi.org/10.1108/JBS-12-2018-0204
Limaj E, Bernroider EWN (2019) The roles of absorptive capacity and cultural balance for exploratory and exploitative innovation in SMEs. J Bus Res 94:137–153. https://doi.org/10.1016/j.jbusres.2017.10.052
Liu Y, Lee Y, Chen AN (2011) Evaluating the effects of task-individual-technology fit in multi-DSS models context: a two-phase view. Decis Support Syst 51(4):688–700. https://doi.org/10.1016/j.dss.2011.03.009
Lubatkin MH, Simsek Z, Ling Y et al. (2006) Ambidexterity and performance in small- to medium-sized firms: the pivotal role of top management team behavioral integration. J Manag 32(5):646–672. https://doi.org/10.1177/0149206306290712
Ma X, Huo Y (2023) Are users willing to embrace ChatGPT? Exploring the factors on the acceptance of chatbots from the perspective of AIDUA framework. Technol Soc 75:102362. https://doi.org/10.1016/j.techsoc.2023.102362
Magni F, Park J, Chao MM (2024) Humans as creativity gatekeepers: are we biased against AI creativity? J Bus Psychol 39:643–656. https://doi.org/10.1007/s10869-023-09910-x
Mariani M, Dwivedi YK (2024) Generative artificial intelligence in innovation management: a preview of future research developments. J Bus Res 175:114542. https://doi.org/10.1016/j.jbusres.2024.114542
Mayer RC, Davis JH, Schoorman FD (1995) An integrative model of organizational trust. Acad Manag Rev 20:709–734. https://doi.org/10.2307/258792
Mom TJ, Van Den Bosch FA, Volberda HW (2007) Investigating managers’ exploration and exploitation activities: the influence of top-down, bottom-up, and horizontal knowledge inflows. J Manag Stud 44(7):910–931. https://doi.org/10.1111/j.1467-6486.2007.00697.x
Ng TWH, Yam KC (2019) When and why does employee creativity fuel deviance? Key psychological mechanisms. J Appl Psychol 104(6):1144–1163. https://doi.org/10.1037/apl0000397
Nieves J, Quintana A (2018) Human resource practices and innovation in the hotel industry: the mediating role of human capital. Tour Hosp Res 18(1):72–83. https://doi.org/10.1177/1467358415624137
Noy S, Zhang W (2023) Experimental evidence on the productivity effects of generative artificial intelligence. Science 381(6654):187–192. https://doi.org/10.1126/science.adh2586
Parasuraman R, Manzey DH (2010) Complacency and bias in human use of automation: an attentional integration. Hum Factors 52(3):381–410. https://doi.org/10.1177/0018720810376055
Pei J, Wang H, Peng Q et al. (2024) Saving face: leveraging artificial intelligence-based negative feedback to enhance employee job performance. Hum Resour Manag. https://doi.org/10.1002/hrm.22226
Pietronudo MC, Croidieu G, Schiavone F (2022) A solution looking for problems? A systematic literature review of the rationalizing influence of artificial intelligence on decision-making in innovation management. Technol Forecast Soc Chang 182:121828. https://doi.org/10.1016/j.techfore.2022.121828
Podsakoff PM, MacKenzie SB, Lee JY et al. (2003) Common method biases in behavioral research: a critical review of the literature and recommended remedies. J Appl Psychol 88(5):879–903. https://doi.org/10.1037/0021-9010.88.5.879
Roesler E, Vollmann M, Manzey D et al. (2024) The dynamics of human-robot trust: attitude and behavior - Exploring the effects of anthropomorphism and type of failure. Comput Hum Behav 150:108008. https://doi.org/10.1016/j.chb.2023.108008
Sætra HS (2023) Generative AI: here to stay, but for good? Technol Soc 75:102372. https://doi.org/10.1016/j.techsoc.2023.102372
Saks AM, Ashforth BE (1997) A longitudinal investigation of the relationships between job information sources, applicant perceptions of fit, and work outcomes. Pers Psychol 50(2):395–426. https://doi.org/10.1111/j.1744-6570.1997.tb00913.x
Saßmannshausen T, Burggräf P, Wagner J et al. (2021) Trust in artificial intelligence within production management - An exploration of antecedents. Ergonomics 64(11):1333–1350. https://doi.org/10.1080/00140139.2021.1909755
Shrestha YR, Krishna V, Von Krogh G (2021) Augmenting organizational decision-making with deep learning algorithms: principles, promises, and challenges. J Bus Res 123:588–603. https://doi.org/10.1016/j.jbusres.2020.09.068
Singh K, Chatterjee S, Mariani M (2024) Applications of generative AI and future organizational performance: the mediating role of explorative and exploitative innovation and the moderating role of ethical dilemmas and environmental dynamism. Technovation 133:103021. https://doi.org/10.1016/j.technovation.2024.103021
Statista (2023) Perceived reliability of ChatGPT in South Korea as of March 2023. Statista Dataset. https://www.statista.com/statistics/1381444/south-korea-chatgpt-result-reliability-perception/
Tacheva Z, Ramasubramanian S (2024) Challenging AI Empire: toward a decolonial and queer framework of data resurgence. Authorea Preprints. https://doi.org/10.31124/advance.22012724
Thatcher JB, Carter M, Li X et al. (2013) A classification and investigation of trustees in B-to-C e-commerce: general vs. specific trust. Commun Assoc Inf Syst 32(1):107–134. https://doi.org/10.17705/1CAIS.03204
Thiebes S, Lins S, Sunyaev A (2021) Trustworthy artificial intelligence. Electron Mark 31:447–464. https://doi.org/10.1007/s12525-020-00441-4
Tran H, Murphy PJ (2023) Editorial: generative artificial intelligence and entrepreneurial performance. J Small Bus Enterp Dev 30(5):853–856. https://doi.org/10.1108/JSBED-09-2023-508
Usai A, Fiano A, Messeni Petruzzelli A et al. (2021) Unveiling the impact of the adoption of digital technologies on firms’ innovation performance. J Bus Res 133:327–336. https://doi.org/10.1016/j.jbusres.2021.04.035
Vincent VU (2021) Integrating intuition and artificial intelligence in organizational decision-making. Bus Horiz 64(5):425–438. https://doi.org/10.1016/j.bushor.2021.02.008
Vinchon F, Lubart T, Bartolotta S et al. (2023) Artificial intelligence & creativity: a manifesto for collaboration. J Creat Behav. https://doi.org/10.1002/jocb.597
Vössing M, Kühl N, Lind M et al. (2022) Designing transparency for effective human-AI collaboration. Inf Syst Front 24(3):877–895. https://doi.org/10.1007/s10796-022-10284-3
Wach K, Duong CD, Ejdys J et al. (2023) The dark side of generative artificial intelligence: a critical analysis of controversies and risks of ChatGPT. Enter Reg Dev 11:7–30. https://doi.org/10.15678/EBER.2023.110201
Al-Khatib AW (2023) Drivers of generative artificial intelligence to fostering exploitative and exploratory innovation: a TOE framework. Technol Soc 75:102403. https://doi.org/10.1016/j.techsoc.2023.102403
Wamba SF, Queiroz MM, Jabbour CJC et al. (2023) Are both generative AI and ChatGPT game changers for 21st-Century operations and supply chain excellence? Int J Prod Econ 265:109015. https://doi.org/10.1016/j.ijpe.2023.109015
Wei Z, Huang W, Wang Y et al. (2022) When does servitization promote product innovation? The moderating roles of product modularization and organization formalization. Technovation 117:102594. https://doi.org/10.1016/j.technovation.2022.102594
Wilson HJ, Daugherty PR (2019) Collaborative intelligence: Humans and AI are joining forces. Harv Bus Rev. https://hometownhealthonline.com/wp-content/uploads/2019/02/ai2-R1804J-PDF-ENG.pdf
Wu D, Liu T, Yang W et al. (2023) Knowledge coupling and organizational resilience: the moderating effect of market orientation. Sci Technol Soc 28(3):444–462. https://doi.org/10.1177/09717218231178343
Yan Y, Guan J (2018) Social capital, exploitative and exploratory innovations: the mediating roles of ego-network dynamics. Technol Forecast Soc Chang 126:244–258. https://doi.org/10.1016/j.techfore.2017.09.004
Yin M, Jiang S, Niu X (2024) Can AI really help? The double-edged sword effect of AI assistant on employees’ innovation behavior. Comput Hum Behav 150:107987. https://doi.org/10.1016/j.chb.2023.107987
Acknowledgements
This research was funded by the Natural Science Foundation of Heilongjiang Province (Grant No. LH2024G007), Ministry of Education Humanities and Social Science Research Project (Grant No. 24YJC630214), Postdoctoral Fellowship Program of CPSF (Grant No. GZC20233443), Philosophy and Social Science Foundation of Heilongjiang Province (Grant No. 23XZT046), General Program of China Postdoctoral Science Foundation (Grant No. 2024M764208), and Heilongjiang Postdoctoral Fund (Grant No. LBH-Z24149).
Author information
Contributions
Conceptualization, L.X.Y. and W.T.D.; methodology, W.T.D., S.F., and L.X.Y.; software, L.X.Y. and W.T.D.; validation, W.T.D. and S.F.; writing—original draft preparation, L.X.Y.; writing—review and editing, W.T.D. and S.F. All authors have read and agreed to the published version of the manuscript.
Ethics declarations
Competing interests
The authors declare no competing interests.
Ethical approval
This study received ethical approval from the Ethics Committee of Harbin Engineering University (Ethics approval number: 2023072301), granted on 23 July 2023, prior to the commencement of the research. The approval remains valid until 23 July 2025 and covers all aspects of the study, including participant recruitment, data collection, and data analysis. All procedures were conducted in accordance with relevant guidelines and regulations for research involving human participants, including the Declaration of Helsinki, with particular attention to safeguarding participants’ rights, safety, and well-being, as well as ensuring scientific integrity.
Informed consent
Informed consent was obtained from all participants before they began the survey. The survey was conducted between November 2023 and May 2024 using two methods: (1) on-site distribution by W.T.D. and S.F. (MBA mentors), who provided a written informed consent form; and (2) online invitations via Credamo and email, which included an electronic informed consent page. Participants were informed of the study’s purpose, assured that participation was voluntary, and told they could withdraw at any time without penalty. They were also informed that their responses would be anonymized, securely stored, and used solely for academic research. On-site participants signed a written consent form, while online participants indicated consent by checking a box and clicking the “Agree and Continue” button prior to survey access. No identifiable personal data were collected or published.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Lin, X., Wang, T. & Sheng, F. Exploring the dual effect of trust in GAI on employees’ exploitative and exploratory innovation. Humanit Soc Sci Commun 12, 663 (2025). https://doi.org/10.1057/s41599-025-04956-z