Introduction

GAI tools, such as ChatGPT, Gemini, Kimi, and Sora, have reduced accessibility barriers, broadened application scenarios, and enhanced creative capabilities, fundamentally reshaping employee workflows and business operations (Liang et al., 2022). Current data indicate that GAI influences 40% of working hours, with 80% of employees worldwide having used these tools in work contexts (Statista, 2023). This rapid adoption has led management scholars to emphasize the urgency of investigating AI’s transformative organizational impacts, particularly regarding employee adaptation mechanisms (Pietronudo et al., 2022). Trust has become a key factor shaping cognitive and behavioral responses to GAI implementation (Roesler et al., 2024). Oracle’s global survey reveals that 64% of employees place greater trust in AI systems than in their human managers, with over half preferring algorithmic decision support. These phenomena, encompassing both “algorithm aversion” and “algorithm appreciation,” demonstrate the complex and often paradoxical nature of employee trust in GAI. Crucially, research confirms that appropriate trust levels enable effective GAI adoption for innovation, whereas over-trust introduces innovation risks and under-trust reduces GAI’s effectiveness (Vössing et al., 2022; Choung et al., 2023). This dual nature underscores the theoretical and practical necessity of systematically examining how trust in GAI shapes innovation outcomes while addressing potential trust-related paradoxes.

Scholarly perspectives on the impact of trust in GAI on employee innovation remain polarized. Advocates of human-AI collaboration highlight synergistic effects, contending that GAI’s automation of routine operations and complex data synthesis frees employees to concentrate on creative tasks (Hu et al., 2021; Tacheva and Ramasubramanian, 2024; Kong et al., 2024). Conversely, studies on GAI ethics caution that innovation barriers such as algorithmic bias, privacy breaches, intellectual property violations, technical pressures, and overreliance systematically constrain novel idea generation (Wach et al., 2023; Banh and Strobel, 2023).

These theoretical divergences arise from two significant gaps in the literature. First, although trust in GAI exhibits differential impacts on innovation types, existing research frequently conflates exploitative and exploratory innovation (Johnson et al., 2022). Based on TTF theory (Howard and Rose, 2018), we propose an analytical framework that clarifies the distinct operation of trust in GAI across these innovation types. TTF theory asserts that technology effectiveness is determined by the fit between technology characteristics and task characteristics. Specifically, exploitative innovation, which is focused on incremental improvements, fits with GAI’s strengths in executing structured tasks, whereas exploratory innovation, which harnesses GAI’s creative potential to generate novel knowledge, encounters increased risks due to technical uncertainty. These distinct mechanisms require separate analytical approaches for each type of innovation. Furthermore, emerging GAI theories stress the importance of defining boundary conditions, particularly concerning contextual adaptability and user characteristics (Pei et al., 2024).

As GAI evolves, a common issue is the misfit between employees’ GAI capabilities and the rapid pace of GAI development, posing challenges for adapting to new roles and innovation requirements. This misfit among employees, GAI, and innovation tasks ultimately leads to innovation stagnation. Building on TTF theory, besides the technology-task fit between trust in GAI and ambidextrous innovation, two additional fits are equally critical: employee-GAI fit and employee-innovation task fit. This study integrates these perspectives and proposes that trust in GAI is positively related to exploitative innovation while exhibiting an inverted U-shaped relationship with both exploratory innovation and the complementarity of exploitative and exploratory innovation. Moreover, it investigates the moderating roles of employee-GAI fit and employee-innovation task fit in the relationship between trust in GAI and ambidextrous innovation.

Our research advances three important theoretical insights: (1) it furnishes a detailed comprehension of how trust in GAI influences exploitative and exploratory innovation, thereby elucidating inconsistencies in previous research on its impact on innovation; (2) it enriches the literature by examining how employee-GAI fit and employee-innovation task fit moderate the effects of trust in GAI on ambidextrous innovation; and (3) it extends TTF theory into the emerging ___domain of GAI innovation management.

Theory and hypotheses

TTF theory

TTF theory explains the underlying mechanisms of technology effectiveness by examining the fit among individuals, tasks, and technology across three dimensions: technology-task fit, individual-task fit, and individual-technology fit. In this study, “individual” refers to employees, “task” to innovation tasks, and “technology” to GAI. GAI-innovation task fit reflects the degree to which GAI’s technical functions fit with the requirements of innovation tasks (Goodhue and Thompson, 1995; Ammenwerth et al., 2006). Employee-GAI fit captures the fit between employees’ capabilities in leveraging GAI and GAI’s technical functions, while employee-innovation task fit assesses how well employees’ expertise, experience, and skills fit with the requirements of innovation tasks (Liu et al., 2011).

The differential impact of trust in GAI on exploitative and exploratory innovation arises from variations in how well GAI’s technical functions match the task requirements of each innovation type. Trust enables GAI to function as an effective “agent” in exploitative innovation (Kaplan et al., 2023), where objectives are well-defined, outcomes predictable, and risks relatively low (Jansen et al., 2006). Such tasks align with GAI’s strengths in data analysis, pattern recognition, and automation (Yan and Guan, 2018). Conversely, exploratory innovation entails high complexity and uncertainty, undefined solutions, and unpredictable outcomes (Limaj and Bernroider, 2019). These tasks require creativity, risk-taking, and adaptability—areas where GAI excels in generating novel ideas and synthesizing cross-___domain knowledge under changing conditions. However, GAI’s technical limitations and associated risks constrain its ability to fully support risk-reduction tasks, making the relationship between trust in GAI and exploratory innovation more complex.

Employee-GAI fit and employee-innovation task fit introduce new demands on employees, necessitating both the progressive development of GAI capabilities and the alignment of competencies with evolving task requirements in a dynamic division of labor (Hubschmid-Vierheilig et al., 2020). Howard and Rose (2018) suggest that achieving fit across these three dimensions predicts enhanced performance outcomes. Conversely, any misfit between employees and GAI, or between employees and innovation tasks, results in negative emotional responses, job insecurity, lower performance, and diminished innovation resilience (Arias-Pérez and Vélez-Jaramillo, 2021). Furthermore, human-AI collaboration research substantiates that decision-making efficacy is contingent on both individual and task characteristics (Vincent, 2021). Thus, we propose that employee-GAI fit and employee-innovation task fit moderate the relationship between trust in GAI and ambidextrous innovation. Building on these insights, we construct a hypothesized theoretical model (see Fig. 1).

Fig. 1: Hypothesized model.

Trust in GAI

AI technology is undergoing a paradigm shift from analytical to generative models (Sætra, 2023; Baabdullah, 2024). GAI refers to a computer-assisted system capable of generating text, images, audio, and video (Kanitz et al., 2023). It is distinguished by democratization, versatility, and creativity: (1) GAI’s social attributes (e.g., conversational intelligence, social intelligence, and anthropomorphism) make AI accessible to non-experts, including employees, end-users, and SMEs, for the first time (Bilgram and Laarmann, 2023); (2) the large language models underpinning GAI serve as a universal technology applicable across diverse industries (Ma and Huo, 2023); and (3) GAI demonstrates creativity comparable to humans, with its potential most evident in innovation scenarios (Bilgram and Laarmann, 2023).

Trust remains a central topic in scholarly discourse. Trust in GAI stems from interpersonal trust and trust in technology: (1) Drawing on theories of social responses toward computing, intelligent IT artifacts like GAI are perceived as embodying moral attributes such as benevolence (Thatcher et al., 2013). GAI’s anthropomorphic characteristics enable trust to be interpreted through the lens of interpersonal trust, which is defined as one party’s willingness to accept vulnerability grounded in the belief that the other party will prioritize their interests without external oversight (Mayer et al., 1995). (2) As a next-generation technology, GAI also introduces a distinct form of trust in technology (Thiebes et al., 2021), reflecting the belief that an agent will assist individuals in navigating uncertainty and achieving desired outcomes (Lee and See, 2004). Synthesizing these perspectives, this study defines trust in GAI as an individual’s willingness to rely on GAI, based on the belief that it will perform tasks beneficially in uncertain and vulnerable environments.

Research highlights the double-edged nature of trust in GAI, encompassing both positive and negative outcomes. On the positive side, trust fosters collaboration, connectivity, and operational efficiency, enhancing human-AI team performance (Gillath et al., 2021; Khoa et al., 2023). On the negative side, it entails shifts in decision-making authority, overreliance on GAI, diminished creativity, and challenges in managing technological risks (Glikson and Woolley, 2020; Feng et al., 2024). However, existing studies have yet to offer a comprehensive examination of the combined impact of these contrasting effects.

Trust in GAI and exploitative innovation

Exploitative innovation primarily focuses on refining existing knowledge and improving established products (Jansen et al., 2006). GAI’s technical functions in market feedback analysis and knowledge updating fit well with these tasks, supporting the expectation that trust in GAI positively impacts exploitative innovation. First, exploitative innovation relies on employees’ established knowledge, requiring deep understanding and accumulation of existing fields (Yan and Guan, 2018). Trust in GAI facilitates access to the latest ___domain-specific information, reducing barriers and costs associated with knowledge updates. Chen et al. (2021) suggest that substitutive knowledge coupling is positively related to exploitative innovation. By leveraging GAI to retrieve up-to-date knowledge, employees can integrate new insights while filtering out obsolete information, thereby deepening their knowledge base (Wu et al., 2023).

Second, exploitative innovation demands accurate identification of market needs and swift responses (Berraies et al., 2021; Wael Al-Khatib, 2023). However, employees often face resource constraints when managing vast datasets (Singh et al., 2024). Trust in GAI enables efficient analysis of both structured and unstructured data, including purchasing records, social media feedback, customer reviews, and offline monitoring. Through sentiment analysis and audience segmentation, GAI identifies patterns, relationships, and emerging trends (Shrestha et al., 2021; Wael Al-Khatib, 2023). These insights enhance the prediction of purchasing behaviors and demand shifts across the product lifecycle (Akter et al., 2023). Employees can then integrate GAI-generated insights with market intelligence, strategic objectives, and brand positioning to refine product features and iterations (Bouschery et al., 2023). As such, we hypothesize that:

H1: Trust in GAI is positively related to exploitative innovation.

Trust in GAI and exploratory innovation

Trust in GAI exerts dual effects: it fosters a “GAI empowerment” mechanism when fitted with creative tasks but introduces a “GAI limitation” mechanism when misfitted with risk-reduction tasks. These opposing forces give rise to an inverted U-shaped relationship between trust in GAI and exploratory innovation (see Fig. 2a).

Fig. 2: Relationship between trust in GAI and exploratory innovation.

This figure illustrates how trust in GAI affects exploratory innovation: a the inverted U-shaped relationship; b the moderating effect of employee-GAI fit; and c the moderating effect of employee-innovation task fit. a Trust in GAI promotes exploratory innovation through the positive mechanism (GAI empowerment), while also inhibiting it through the negative mechanism (GAI limitation). The combined effect results in an inverted U-shaped relationship. b When employee-GAI fit is high (dashed line), the positive mechanism is amplified, shifting the turning point of the inverted U-shaped curve to the right. c When employee-innovation task fit is high (dashed line), the positive mechanism is amplified while the negative mechanism is mitigated, thereby shifting the turning point to the right and flattening the inverted U-shaped curve.

“GAI empowerment” mechanism

Exploratory innovation revolves around expanding knowledge and developing novel products (Jansen et al., 2006). GAI’s technical functions in knowledge search and creative idea generation fit well with these tasks. First, exploratory innovation thrives on novel knowledge, emerging technologies, and divergent thinking (Bachmann et al., 2021). Nonetheless, human knowledge is constrained by cognitive biases, fixed thinking modes, and personal experience. With its expansive knowledge base and large language model, GAI explores a significantly broader search space. Trust in GAI facilitates the rapid and comprehensive acquisition of external insights and cross-___domain knowledge (Haase and Hanel, 2023). This empowers employees to focus on knowledge exploration and opportunity identification (Tacheva and Ramasubramanian, 2024). Chen et al. (2021) demonstrated that combining knowledge from different fields fosters novel idea generation and breakthrough innovation. By synthesizing diverse knowledge sources, GAI contributes to the creation of unconventional ideas that challenge existing paradigms and drive the development of novel products (Yan and Guan, 2018; Haase and Hanel, 2023).

Beyond knowledge complementation, GAI directly generates creative ideas (Roesler et al., 2024). While creativity was traditionally considered a uniquely human trait (Amabile, 2020), recent studies challenge this notion (Haase and Hanel, 2023). Some research suggests that GAI’s creative output surpasses human creativity (Hermann and Puntoni, 2024), whereas others find no significant difference (Noy and Zhang, 2023) or assert that human ingenuity remains superior (Koivisto and Grassini, 2023). Despite these mixed findings, there is a consensus that GAI possesses a form of creativity distinct from traditional AI, generating innovative solutions previously unforeseeable by employees (Hermann and Puntoni, 2024). When integrated with predictive modeling, trust in GAI aids in identifying flaws in product prototypes, reducing ineffective iterations in traditional R&D processes (Wamba et al., 2023).

“GAI limitation” mechanism

Exploratory innovation is inherently risky and uncertain, with core tasks involving risk mitigation to prevent failure (Yan and Guan, 2018). However, GAI, often described as a “black box” due to its dynamic, opaque, and unpredictable nature (Anthony et al., 2023), lacks human intuition and subconscious decision-making (Jarrahi, 2018; Magni et al., 2024). These limitations are particularly pronounced in exploratory innovation, which involves emotional, social, and ethical dimensions. Trust, in this context, entails vulnerability and exposure to uncontrollable risks (Lewis and Marsh, 2022). Consequently, rather than mitigating risk, trust in GAI introduces new risks and uncertainties, resulting in a misfit with task requirements. These risks include: (1) ethical risks, such as algorithmic bias, security vulnerabilities, data discrimination, misuse, botshit, and unclear accountability (Wach et al., 2023); (2) overreliance, reducing employees’ independent decision-making and creative problem-solving abilities (Keding and Meissner, 2021; Eke, 2023); (3) cognitive biases, including the tendency to accept GAI’s first suggestion due to the “Einstellung effect”, limiting alternative solutions (Doshi and Hauser, 2024); and (4) feasibility neglect, where GAI-generated ideas may overlook resource constraints, technical limitations, and market realities, hindering practical implementation. As trust in GAI increases, these risks escalate, further misfitting GAI’s technical functions with the risk management requirements of exploratory innovation tasks.

In summary, trust in GAI amplifies the benefits of the “GAI empowerment” mechanism. However, as trust reaches excessive levels, the “GAI limitation” mechanism intensifies, ultimately outweighing its advantages. Evidence suggests that over-trust leads to overreliance and misuse of GAI, while under-trust prevents employees from fully leveraging its potential (Kaplan et al., 2023). The optimal level of trust strikes a balance between these competing forces, allowing employees to maximize GAI’s contributions to exploratory innovation. As such, we hypothesize that:

H2: There is a curvilinear relationship between trust in GAI and exploratory innovation, such that trust in GAI initially has a positive effect on exploratory innovation but this positive influence flattens out and then declines at a high level of trust in GAI.

Trust in GAI and ambidextrous innovation

From the perspective of situational ambidexterity, exploratory and exploitative innovation are not mutually exclusive but rather complementary and mutually reinforcing (Harmancioglu et al., 2020). This complementarity-based perspective provides a comprehensive lens for analyzing how trust in GAI influences ambidextrous innovation for two reasons: On one hand, trust in GAI enables the system to handle repetitive, data-intensive tasks, allowing employees to concentrate on addressing existing market demands while simultaneously identifying latent opportunities. This division of labor fosters the concurrent advancement of exploitative and exploratory innovation (Tacheva and Ramasubramanian, 2024). On the other hand, trust in GAI facilitates access to external knowledge and resources, helping to mitigate the inherent tensions between exploratory and exploitative innovation (Fahnenstich and Rieger, 2024). Scholars such as Berg (2016) and Li et al. (2023) argue that organizations or individuals capable of balancing both exploratory and exploitative innovation are better positioned to achieve synergistic outcomes and improved performance.

As discussed above, trust in GAI positively influences exploitative innovation, while its relationship with exploratory innovation follows an inverted U-shaped pattern. Our research adopts the measurement framework proposed by Lubatkin (2006), which assesses the complementarity of exploitative and exploratory innovation by summing their respective dimensions (see Fig. 3). Before reaching the turning point of the exploratory innovation curve (point a), the “GAI empowerment” mechanism drives exploratory innovation in parallel with the growth of exploitative innovation, reinforcing their complementarity. GAI facilitates exploratory innovation when exploitative innovation is robust, as accumulated knowledge and experience provide resources for exploration (Li et al., 2023). Conversely, in the presence of extensive exploratory innovation, GAI enhances exploitative innovation by accelerating the commercialization of novel ideas and technologies (Harmancioglu et al., 2020). This interdependence creates a dynamic cycle that sustains both short-term profitability and long-term growth (Berraies et al., 2021).

Fig. 3: Relationship between trust in GAI and complementarity of exploitative and exploratory innovation.

Beyond point a, the continued linear growth in exploitative innovation partly offsets the nonlinear decline in exploratory innovation caused by the “GAI limitation” mechanism. Nevertheless, as the negative effects of the “GAI limitation” mechanism intensify, this compensatory effect gradually weakens, eventually reaching an optimal level at point b. Once trust in GAI exceeds point b, the complementarity of exploitative and exploratory innovation decreases (Chen et al., 2015). Over-trust leads to overreliance on GAI’s recommendations, causing narrowed resource allocation and neglecting alternative exploratory opportunities (Parasuraman and Manzey, 2010). Furthermore, over-trust in GAI undermines employees’ independent judgment and critical thinking, limiting their ability to address the nuanced challenges inherent in managing exploitative and exploratory innovation (Keding and Meissner, 2021). These factors disrupt the transfer of knowledge and resources between the two innovation types. Hence, we hypothesize that:

H3: There is a curvilinear relationship between trust in GAI and the complementarity of exploitative and exploratory innovation, such that trust in GAI initially has a positive effect on the complementarity of exploitative and exploratory innovation but this positive influence flattens out and then declines at a high level of trust in GAI.

The moderating role of employee-GAI fit and employee-innovation task fit

From the perspective of TTF theory, the effectiveness of GAI depends on both its inherent characteristics and its fit with the capabilities of its users (Ammenwerth et al., 2006; Liu et al., 2011). The integration of GAI into the workplace requires employees to develop new capabilities (Kanbach et al., 2024), including (1) realization capability, which refers to understanding GAI; (2) utilization capability, which involves interpreting, explaining, and applying GAI-generated outputs; and (3) maintenance capability, which pertains to managing, regulating, and adapting GAI in dynamic business environments (Chowdhury et al., 2023). When employee-GAI fit is high, the system integrates seamlessly into innovation practices, minimizing friction and resistance (Chowdhury et al., 2023). This fit enables trust in GAI to be translated more effectively into work performance, thereby encouraging active engagement with GAI for task completion (Lichtenthaler, 2019). Greater engagement strengthens GAI’s technical functions in analyzing market feedback and updating knowledge, making it fit more closely with the requirements of exploitative innovation tasks (Galati and Bigliardi, 2017). By contrast, low employee-GAI fit creates barriers, as the system may conflict with employees’ established work habits and workflows. Hence, we hypothesize that:

H4a: The positive relationship between trust in GAI and exploitative innovation is strengthened by employee-GAI fit.

We propose that employee-GAI fit amplifies the “GAI empowerment” mechanism of trust in GAI on exploratory innovation while leaving the “GAI limitation” mechanism unaffected (see Fig. 2b). This moderating effect shifts the turning point of the inverted U-shaped relationship to the right. A higher level of employee-GAI fit enhances employees’ capabilities to leverage GAI for exploring new domains and acquiring novel knowledge (Eapen et al., 2023; Chowdhury et al., 2023). Such engagement promotes the discovery of innovative solutions that traditional approaches might overlook (Tran and Murphy, 2023). Moreover, a strong employee-GAI fit facilitates seamless collaboration, reduces cognitive load, and allows employees to focus on creative tasks rather than learning to operate GAI. Nevertheless, the “GAI limitation” mechanism is primarily influenced by technological constraints and employees’ cognitive biases, rather than improvements in employees’ GAI capabilities. Hence, we posit the following hypothesis:

H4b: Employee-GAI fit moderates the inverted U-shaped relationship between trust in GAI and exploratory innovation by shifting the turning point to the right.

We propose that employee-innovation task fit strengthens the positive effect of trust in GAI on exploitative innovation. The rise of GAI is reshaping innovation tasks, requiring a redefinition of roles and a recalibration of employees’ responsibilities (Chowdhury et al., 2023). Within exploitative innovation tasks, employees shift their focus from traditional data collection to integrating their task competencies with GAI-driven market insights (Haefner et al., 2021; Haase and Hanel, 2023). When employees’ expertise, experience, and skills fit with task requirements, they can effectively incorporate GAI-generated outputs into planning and execution. This fit enables them to address challenges efficiently, thus advancing the exploitative innovation process (Jia et al., 2024). In addition, a strong employee-innovation task fit enhances self-efficacy beliefs (Hua and Liu, 2017), which are closely related to improved innovation outcomes (Nieves and Quintana, 2018). Employees who are confident in using GAI for exploitative tasks are better equipped to adapt to market fluctuations. Conversely, when employees’ competencies do not fit task requirements, progress in exploitative innovation is hindered. As such, we hypothesize that:

H5a: There is a positive moderating effect of employee-innovation task fit on the relationship between trust in GAI and exploitative innovation.

Employee-innovation task fit also moderates the inverted U-shaped relationship between trust in GAI and exploratory innovation by amplifying the “GAI empowerment” mechanism while suppressing the “GAI limitation” mechanism (see Fig. 2c). Within exploratory innovation tasks, employees transition from relying solely on intuition and experience to forming a creative synergy with GAI (Vinchon et al., 2023; Haase and Hanel, 2023). GAI’s technical functions in knowledge search and creative idea generation streamline task completion and enhance the quality of innovative outputs (Huang et al., 2019). When employee-innovation task fit is high, employees effectively integrate their expertise, intuition, and emotional engagement with GAI-generated insights to identify novel opportunities and explore new directions (Wilson and Daugherty, 2019).

Additionally, employee-innovation task fit mitigates the risks associated with GAI by enabling employees to take responsibility for tasks beyond GAI’s technical functions, such as emotional engagement, ethical considerations, and adapting to dynamic situations (Jia et al., 2024; Yin et al., 2024). Employees’ unique experience, emotional intelligence, and abstract reasoning allow them to navigate these complexities, fostering originality and counteracting the constraints of GAI (Berraies et al., 2021). Moreover, employees with strong ___domain expertise and experience rely less on GAI, reducing dependency and alleviating concerns that overreliance may stifle innovation (Saßmannshausen et al., 2021). As such, we hypothesize that:

H5b: Employee-innovation task fit moderates the inverted U-shaped relationship between trust in GAI and exploratory innovation by shifting the turning point to the right and flattening the curve.

Methodology

Sample and data collection

China is home to over 120 firms developing large language models, such as Alibaba Cloud’s Tongyi Qianwen, Baidu’s ERNIE Bot, and ByteDance’s Doubao, providing an optimal empirical setting for our survey. The questionnaire design followed a systematic three-step approach (Wei et al., 2022). First, we developed the questionnaire based on a comprehensive literature review and expert interviews. Next, we refined the questions through in-depth discussions with three employees from Internet firms. A pilot study involving 43 employees further improved the questionnaire’s clarity and validity.

To minimize biases stemming from regional economic disparities, our survey targeted provinces and municipalities across three tiers of competitiveness in China’s new-generation AI technology industry. This included first-tier regions such as Beijing, Guangdong, and Shandong; second-tier regions like Liaoning, Anhui, and Tianjin; and third-tier regions comprising Jilin, Heilongjiang, and Henan. Employees working in R&D, design, and marketing departments in the Internet, finance, manufacturing, and telecommunication industries were invited to participate, as they are more digitally literate and have direct access to GAI technology.

The survey employed a mixed-mode approach: (1) on-site distribution, where questionnaires were handed out during MBA classes and via MBA students to their colleagues, and (2) online distribution, where targeted respondents were invited directly or through departmental heads, with additional support from data collection agencies (Credamo). The survey was conducted from November 2023 to May 2024.

To mitigate common method bias (CMB), the questionnaire was administered in two waves, with an interval of more than two weeks. The first wave covered control variables, trust in GAI, employee-innovation task fit, and employee-GAI fit, while the second wave measured exploratory and exploitative innovation. Each questionnaire was assigned a unique sample identifier to ensure accurate matching across both time points. At the beginning of the survey, an informed consent statement emphasized voluntary participation. A total of 443 responses were collected. After excluding responses left incomplete because respondents changed jobs or resigned mid-survey, as well as those exhibiting patterned answering (e.g., S-, I-, or Z-shaped response patterns), 302 valid questionnaires remained, for an effective response rate of 68.17%. Table 1 presents the descriptive statistics and demographic details of the respondents.

Table 1 Sample characteristics (N = 302).

Measures

All scales were based on existing literature (see Table 2). We assessed all measures using a seven-point Likert scale, ranging from “strongly disagree” to “strongly agree”.

Table 2 Measures and validation.

Following Keding and Meissner (2021), we adapted four items to measure trust in GAI. Consistent with Mom (2007), we used six items for exploitative innovation and five for exploratory innovation. The complementarity of exploitative and exploratory innovation can be measured in two ways: by multiplying the two dimensions (He et al., 2004) or by summing them (Lubatkin, 2006). Lubatkin (2006) compared these two approaches empirically and found that the summation method performed best, with no significant loss of information when the dimensions are combined into a single index. We therefore adopted the summation approach. A four-item scale was adapted from Kim and Gatling (2019) to measure employee-GAI fit. Employee-innovation task fit was evaluated through four items based on Saks and Ashforth (1997) and Hua and Liu (2017).
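To make the scoring concrete, the sketch below computes the construct scores and the summation-based complementarity index; it is a minimal illustration, and the data file and column names are hypothetical placeholders rather than the study’s actual instrument.

```python
import pandas as pd

# Hypothetical wide-format file: one row per respondent, Likert items scored 1-7.
df = pd.read_csv("wave2_items.csv")  # placeholder file name

exploit_items = [f"exploit_{i}" for i in range(1, 7)]  # six exploitative items (Mom, 2007)
explore_items = [f"explore_{i}" for i in range(1, 6)]  # five exploratory items (Mom, 2007)

# Construct scores as item means.
df["exploitative"] = df[exploit_items].mean(axis=1)
df["exploratory"] = df[explore_items].mean(axis=1)

# Complementarity via summation (Lubatkin, 2006) rather than the
# multiplicative index (He et al., 2004).
df["complementarity"] = df["exploitative"] + df["exploratory"]
```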

Prior work shows that employee innovation may be affected by variables at both the individual and organizational levels (Ng and Yam, 2019). This study therefore included employee gender, age, and education, along with firm size, firm ownership, and firm age, as control variables. Employee gender and firm ownership were coded as dummy variables, while the remaining control variables were coded as ordered categories (1 to N).

Reliability and validity

Data analysis was conducted using SPSS 23.0 and AMOS 26.0. As shown in Table 2, the Cronbach’s α values for the key variables exceeded 0.70, indicating high reliability. All factor loadings were greater than 0.6, and the average variance extracted (AVE) for each key variable was greater than 0.5, indicating adequate convergent validity. The square root of each construct’s AVE exceeded its correlations with the other constructs (Fornell and Larcker, 1981), indicating sufficient discriminant validity (see Table 3). Furthermore, with residual correlations included to address shared measurement error, the five-factor model demonstrated a good fit to the data (χ²/df = 2.595, RMSEA = 0.073, CFI = 0.917, TLI = 0.901, SRMR = 0.050).
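For readers replicating these checks outside SPSS/AMOS, the following sketch illustrates the computations described above (Cronbach’s α, AVE, and the Fornell-Larcker comparison). The loadings and correlation shown are illustrative values, not the study’s estimates.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a block of items (rows = respondents)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

def ave(std_loadings) -> float:
    """Average variance extracted from standardized CFA loadings."""
    loadings = np.asarray(std_loadings, dtype=float)
    return float(np.mean(loadings ** 2))

# Illustrative standardized loadings for the four trust-in-GAI items.
trust_ave = ave([0.78, 0.81, 0.74, 0.69])   # convergent validity if > 0.50

# Fornell-Larcker criterion: the square root of each construct's AVE should
# exceed its correlations with every other construct (0.41 is illustrative).
print(np.sqrt(trust_ave) > 0.41)
```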

Common method bias

Harman’s single-factor test indicated that the first unrotated factor accounted for 32.981% of the total variance, below the 50% threshold (Podsakoff et al., 2003). In addition, the fit of the single-factor model (χ²/df = 10.857, RMSEA = 0.181, CFI = 0.442, TLI = 0.389, SRMR = 0.152) was markedly worse than that of the five-factor model. We therefore conclude that CMB was not a serious problem.
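The Harman check can be reproduced as in the minimal sketch below, assuming the pooled item responses are available in a data frame (the 0.50 cutoff follows Podsakoff et al., 2003).

```python
import numpy as np
import pandas as pd

def harman_single_factor_share(items: pd.DataFrame) -> float:
    """Share of total variance captured by the first unrotated factor.

    Approximated by the largest eigenvalue of the item correlation matrix
    divided by the matrix trace (the number of items).
    """
    corr = np.corrcoef(items.to_numpy(), rowvar=False)
    eigenvalues = np.linalg.eigvalsh(corr)
    return float(eigenvalues.max() / eigenvalues.sum())

# share = harman_single_factor_share(df[all_scale_items])  # df/all_scale_items as in the sketches above
# CMB is typically judged serious only if share > 0.50; the study reports roughly 0.330.
```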

Analysis and results

Table 3 presents the descriptive statistics and correlations for the variables in the model, and Table 4 reports the results of the tests on exploitative innovation. To mitigate multicollinearity, we mean-centered the variables included in the interaction terms. M2 shows that the coefficient of trust in GAI on exploitative innovation was positive and significant (β = 0.271, p < 0.001), while M3 shows that the coefficient of the squared term of trust in GAI was non-significant (β = −0.048, p = 0.231 > 0.05). These results indicate that trust in GAI positively influences exploitative innovation, supporting H1. To visualize this relationship, we plotted exploitative innovation against trust in GAI (see Fig. 4). Most data points fall within the 95% confidence interval, the linear fit is statistically significant, and the relationship is nearly linear, further supporting H1.
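A minimal sketch of the hierarchical specification used for H1 is shown below; the synthetic data and the reduced set of controls are illustrative stand-ins for the matched survey sample, not the study’s data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data (N = 302), for illustration only.
rng = np.random.default_rng(42)
n = 302
df = pd.DataFrame({
    "trust": rng.normal(5.0, 1.0, n),
    "gender": rng.integers(0, 2, n),
    "age": rng.integers(1, 6, n),
})
df["exploitative"] = 0.27 * df["trust"] + rng.normal(0.0, 1.0, n)

# Mean-center the focal predictor before building squared/interaction terms.
df["trust_c"] = df["trust"] - df["trust"].mean()

# M2: linear effect only; M3 adds the squared term. A significant positive
# linear coefficient with a non-significant squared term supports H1.
m2 = smf.ols("exploitative ~ trust_c + gender + age", data=df).fit()
m3 = smf.ols("exploitative ~ trust_c + I(trust_c**2) + gender + age", data=df).fit()
print(m3.summary())
```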

Table 3 Descriptive statistics and correlation matrix.
Table 4 Regression results for the study hypothesis on exploitative innovation.
Fig. 4: Scatter plot of trust in GAI and exploitative innovation.

Note: The upper and lower lines represent the 95% confidence interval, and the middle line indicates the fitted line.

The coefficient of the interaction between trust in GAI and employee-GAI fit in M5 was positive and significant (β = 0.278, p < 0.001), supporting H4a. Figure 5 illustrates that when employee-GAI fit was high, the relationship between trust in GAI and exploitative innovation was stronger, as evidenced by a steeper slope, than when the fit was low. This suggests that as employee-GAI fit increases, the positive impact of trust in GAI on exploitative innovation intensifies. The moderating effect of employee-innovation task fit on the relationship between trust in GAI and exploitative innovation is reported in M7: the interaction coefficient of trust in GAI and employee-innovation task fit was non-significant (β = 0.070, p = 0.124 > 0.05), so H5a was not supported. This may be because GAI automates much of the work in exploitative innovation tasks, so these tasks draw less on employees’ expertise, skills, and experience.

Fig. 5: The moderating effect of employee-GAI fit in the relationship between trust in GAI and exploitative innovation.

The regression equation of trust in GAI on exploratory innovation was specified as EI₂ = β₀ + β₁TAI + β₂TAI². For the main effect to be an inverted U-shaped relationship, three conditions must be satisfied (Haans et al., 2016): (1) the coefficient of the squared term of TAI is significantly negative; (2) the slope is significantly positive at the minimum value of TAI and significantly negative at its maximum value; and (3) the turning point lies within the valid range of TAI. We standardized the variables included in the interaction terms. The results are shown in Table 5. As shown in M9, the coefficient of the squared term of TAI on EI₂ was significantly negative (β₂ = −0.341, p < 0.001). The slope of the curve was k = 0.185 − 0.682TAI, where TAI ranged from −2.359 to 2.060. At TAI = −2.359 the slope was significantly positive, and at TAI = 2.060 it was significantly negative. The turning point, −β₁/(2β₂) = 0.271, lay within the valid range of TAI (see Fig. 6a). Thus, H2 was supported.
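The three conditions can be verified directly from the reported coefficients, as in the following sketch (a simple arithmetic check using the M9 values quoted above, not a re-estimation of the model).

```python
# Checking the three inverted-U conditions (Haans et al., 2016) with the
# coefficients reported for M9 (TAI standardized).
b1, b2 = 0.185, -0.341
tai_min, tai_max = -2.359, 2.060

def slope(tai: float) -> float:
    return b1 + 2 * b2 * tai               # dEI2/dTAI

turning_point = -b1 / (2 * b2)

print(b2 < 0)                                  # condition 1: negative squared term
print(slope(tai_min) > 0, slope(tai_max) < 0)  # condition 2: slope signs at the extremes
print(tai_min < turning_point < tai_max)       # condition 3: turning point inside the data range
print(round(turning_point, 3))                 # 0.271, matching the reported value
```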

Table 5 Regression results for the study hypothesis on exploratory innovation.
Fig. 6: The moderating role of employee-GAI fit and employee-innovation task fit in the relationship between trust in GAI and exploratory innovation.

This figure illustrates how employee-GAI fit and employee-innovation task fit moderate the relationship between trust in GAI and exploratory innovation: a main effect of trust in GAI; b employee-GAI fit as a moderator; and c employee-innovation task fit as a moderator. a Positive and negative mechanisms coexist, and their combined effect results in an inverted U-shaped curve with a turning point at 0.271. b A new positive mechanism emerges, shifting the turning point of the inverted U-shaped curve to 0.600. c New positive and negative mechanisms emerge, shifting the turning point of the inverted U-shaped curve to 0.984 and flattening the curve.

Similarly, the regression equation of trust in GAI on the complementarity of exploitative and exploratory innovation was specified as EI₁ + EI₂ = δ₀ + δ₁TAI + δ₂TAI². In M14, the coefficient of the squared term of TAI on EI₁ + EI₂ was significantly negative (δ₂ = −0.355, p < 0.001). The slope of the curve was k = 0.482 − 0.710TAI, significantly positive at TAI = −2.359 and significantly negative at TAI = 2.060. The turning point, TAI = 0.679, lay within the valid range of TAI, supporting H3.

Regarding the moderating effect of employee-GAI fit on the relationship between trust in GAI and exploratory innovation, the regression equation was EI₂ = θ₀ + θ₁TAI + θ₂TAI² + θ₃EGF + θ₄TAI × EGF + e. According to the results of M10, θ₁ = 0.166 (p = 0.003 < 0.01), θ₂ = −0.367 (p < 0.001), and θ₄ = 0.275 (p < 0.001). The horizontal coordinate of the turning point is −(θ₁ + θ₄EGF)/(2θ₂); at high levels of EGF, the turning point shifted to the right, to 0.600. The coefficient of the interaction between EGF and the squared term of TAI was positive but non-significant (θ = 0.087, p = 0.080 > 0.05). These results suggest that employee-GAI fit affected only the turning point of the curve through its influence on the “GAI empowerment” mechanism, without impacting the “GAI limitation” mechanism (see Fig. 6b). Therefore, H4b was supported.
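The turning-point shift can be reproduced from the M10 coefficients quoted above; the mean- and low-EGF values below are computed only for illustration and are not reported in the text.

```python
# Turning point of EI2 = θ0 + θ1·TAI + θ2·TAI² + θ3·EGF + θ4·TAI·EGF + e,
# using the M10 coefficients reported above (EGF standardized).
t1, t2, t4 = 0.166, -0.367, 0.275

def turning_point(egf: float) -> float:
    return -(t1 + t4 * egf) / (2 * t2)

print(round(turning_point(1.0), 3))    # high EGF (+1 SD): ~0.60, as reported
print(round(turning_point(0.0), 3))    # mean EGF (not reported in the text)
print(round(turning_point(-1.0), 3))   # low EGF (-1 SD, not reported in the text)
```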

Likewise, the regression equation was EI₂ = λ₀ + λ₁TAI + λ₂TAI² + λ₃TAI × ETF + λ₄TAI² × ETF + λ₅ETF + e. According to M12, λ₃ = 0.213 (p < 0.001) and λ₄ = 0.132 (p < 0.01), showing that ETF significantly moderated the relationship between TAI and EI₂. The mean of ETF plus or minus one standard deviation was used to form the high and low groups. Under high ETF conditions, the simple slope at the low point of TAI was k = 1.285 > 0, and the simple slope at the high point of TAI was k = −0.328 < 0. Setting the first-order derivative with respect to TAI to zero yields the turning point TAI* = −(λ₁ + λ₃ETF)/(2λ₂ + 2λ₄ETF), which equals 0.984 for high ETF and −0.053 for low ETF. The turning point shifts to the right because λ₁λ₄ − λ₂λ₃ > 0 (Haans et al., 2016). These results suggest that employee-innovation task fit amplified the positive “GAI empowerment” mechanism by shifting the turning point to the right and mitigated the “GAI limitation” mechanism by flattening the curve (see Fig. 6c). H5b was therefore supported.
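The sketch below encodes the turning-point formula and the shift-direction test for the quadratic-interaction model. Only λ₃ and λ₄ are quoted in the text above; the λ₁ and λ₂ values used here are illustrative placeholders, not the study’s M12 estimates.

```python
# Turning point and shift direction for
# EI2 = λ0 + λ1·TAI + λ2·TAI² + λ3·TAI·ETF + λ4·TAI²·ETF + λ5·ETF + e
# (Haans et al., 2016).
def turning_point(l1, l2, l3, l4, etf):
    return -(l1 + l3 * etf) / (2 * (l2 + l4 * etf))

def shifts_right(l1, l2, l3, l4):
    # dTAI*/dETF has the sign of λ1·λ4 − λ2·λ3.
    return l1 * l4 - l2 * l3 > 0

l3, l4 = 0.213, 0.132        # reported in the text (M12)
l1, l2 = 0.17, -0.33         # illustrative placeholders, not the study's estimates

print(shifts_right(l1, l2, l3, l4))             # True -> turning point moves right with ETF
print(turning_point(l1, l2, l3, l4, etf=1.0))   # high ETF (+1 SD)
print(turning_point(l1, l2, l3, l4, etf=-1.0))  # low ETF (-1 SD)
# With λ2 < 0 and λ4 > 0, the quadratic term weakens as ETF rises,
# which is what flattens the inverted U-shaped curve.
```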

Discussion

Conclusion

This paper investigates how trust in GAI influences employees’ exploitative and exploratory innovation. Grounded in TTF theory, we constructed a theoretical model and empirically tested it using survey data from 302 employees in China, yielding several significant findings.

First, aligning with prior research indicating that trust in GAI positively impacts productivity, business performance, and human-AI team effectiveness while potentially leading to negative outcomes such as technology obsolescence, time loss, misuse, security violations, and disenfranchisement (Khoa et al., 2023; Feng et al., 2024), we reveal a more nuanced “double-edged sword” effect of trust in GAI on innovation. Johnson et al. (2022) highlighted that GAI’s effects vary because exploitative and exploratory innovation require different sets of functions, so it is crucial to examine their distinct influences. Empirical studies by Wael Al-Khatib (2023) and Singh et al. (2024) indicated a positive correlation between GAI adoption and ambidextrous innovation, yet they did not address the risks and challenges posed by GAI in various innovation contexts, nor did they explore the complementarity of exploitative and exploratory innovation. These research gaps motivate our study.

(1) Our analysis reveals a positive relationship between trust in GAI and exploitative innovation. According to TTF theory, this effect can be attributed to the high certainty of exploitative innovation tasks (Jansen et al., 2006). Trust in GAI enables employees to utilize GAI efficiently for repetitive and data-intensive activities, such as updating knowledge and tracking market feedback, which fits with tasks aimed at enhancing existing knowledge and products. Consistent with our findings, prior research has interpreted AI in innovation through the lens of dynamic capabilities. For example, Dong and Fan (2024) and Kumar et al. (2024) found that AI capability positively impacts exploitative innovation, and Wael Al-Khatib et al. (2023) demonstrated that GAI capability enhances exploitative innovation in digital supply chains. These findings provide indirect support for our conclusions.

(2) Our study reveals a curvilinear relationship between trust in GAI and exploratory innovation. This relationship is driven by the linear positive effects of the “GAI empowerment” mechanism and the curvilinear negative effects of the “GAI limitation” mechanism. Based on TTF theory, GAI’s technical functions in knowledge search and creative idea generation enhance idea uniqueness and facilitate knowledge integration across domains, which fits well with innovation tasks focused on expanding new knowledge and products. However, these functions have limitations when addressing tasks requiring emotional complexity, intricate social dynamics, and ethical judgments. Consequently, over-trust in GAI may lead to idea homogeneity, overreliance, feasibility neglect, and moral risks, making it challenging to fit risk-reduction tasks. It should be noted that studies on human-AI collaboration in innovation often do not distinguish explicitly between exploitative and exploratory innovation. Instead, discussions tend to focus on exploratory innovation, generally assuming that GAI is positively associated with innovation. For instance, Eapen et al. (2023) identified five mechanisms through which GAI promotes high-quality innovation, while Bilgram and Laarmann (2023) emphasized GAI’s critical role throughout the innovation process. In contrast, Sætra (2023) cautioned against GAI’s risks in complex problem-solving, and meta-analyses by Hancock et al. (2011) and Kaplan et al. (2023) revealed that both over-trust and under-trust in AI diminish the value of human-AI interactions. Our study integrates these perspectives, demonstrating that trust in GAI simultaneously facilitates and constrains exploratory innovation through its empowerment and limitation mechanisms.

(3) This study identifies an inverted U-shaped relationship between trust in GAI and the complementarity of exploitative and exploratory innovation. Extending the contextual ambidexterity framework, we find that the concurrent rise of exploitative innovation and the “GAI empowerment” mechanism in exploratory innovation enhances their complementarity. Notably, our findings introduce a novel insight: when the linear growth in exploitative innovation fails to offset the “GAI limitation” mechanism in exploratory innovation, it constrains their complementarity. This discovery contributes to ongoing discussions on the challenges of achieving complementarity of exploitative and exploratory innovation. Prior research has largely attributed ambidexterity to organizational and structural factors, such as innovation climate, organizational trust, structural differentiation, and leadership (Berraies and Zine El Abidine, 2019). Our study advances this literature by identifying trust in GAI as a critical antecedent variable influencing innovation complementarity.

Second, while Vincent (2021) highlighted the moderating role of individual and task characteristics in human-AI collaboration, this study extends this understanding in the context of GAI and innovation: (1) For the relationship between trust in GAI and exploitative innovation, employee-GAI fit strengthens the positive impact, whereas employee-innovation task fit does not. This may be because exploitative innovation relies more on analyzing structured market data, which fits with the strengths of GAI. Thus, the role of trust in GAI in exploitative innovation depends more on employees’ capabilities to utilize GAI effectively than on their skills or experience. (2) For the relationship between trust in GAI and exploratory innovation, employee-GAI fit amplifies the “GAI empowerment” mechanism. In contrast, employee-innovation task fit enhances the “GAI empowerment” mechanism and mitigates the “GAI limitation” mechanism. Existing research presents two opposing views on the skills and expertise necessary for effective GAI utilization. Jia et al. (2024) suggested that GAI fosters creativity, particularly among higher-skilled employees, whereas Noy and Zhang (2023) indicated that GAI provides greater benefits to lower-skilled employees. Kanbach et al. (2024) argued that GAI democratizes technology and knowledge access, reducing the need for expertise in innovation. Our research reconciles these views by demonstrating that GAI’s accessibility and usability enable innovation across skill levels. In fact, the role of trust in GAI in exploratory innovation is not solely determined by employees’ skills but also by the fit among employees’ GAI capabilities, task requirements, and technical functions.

Theoretical contributions

Our study contributes to the literature in three key aspects. First, our findings offer a thorough insight into how trust in GAI shapes exploitative and exploratory innovation, thereby enriching existing knowledge. Although studies on GAI in the workplace are increasing, the significance of trust in GAI as an innovation driver remains underexplored. Recent calls for research include Usai et al. (2021), who urged further investigation into the factors determining efficient technology utilization from cognitive and managerial perspectives; Mariani and Dwivedi (2024), who emphasized the necessity to delve into the causal link between GAI and innovation outcomes; and Baabdullah (2024), who highlighted the need for empirical studies on GAI and innovation. In response, our research provides robust empirical evidence on how trust in GAI influences both exploitative and exploratory innovation. More importantly, this study uncovers potential constraints on exploratory innovation under high trust conditions. Building on these insights, it advances the research on the positive relationship between GAI adoption and ambidextrous innovation in three significant ways (Singh et al., 2024): by examining both the positive and negative effects of GAI, by differentiating the roles and outcomes of GAI in exploitative and exploratory innovation, and by analyzing GAI’s influence on the complementarity between these two types of innovation.

Although existing studies on human-AI collaboration and GAI ethics address the multifaceted roles of trust in GAI in innovation, a deeper and more nuanced understanding remains lacking. We contribute to the literature by empirically showing that trust in GAI facilitates employees in refining existing products. Notably, we uncover an inverted U-shaped relationship between trust in GAI and exploratory innovation, suggesting that the relationship is more complicated than previously assumed. This novel finding demonstrates a “double-edged sword” effect of trust in GAI on innovation and helps reconcile conflicting perspectives in the literature. In doing so, we enrich scholarly discussion on GAI’s dual role as a facilitator and constraint in innovation.

Second, we extend the literature by examining the moderating role of employee-GAI fit and employee-innovation task fit. By integrating technology, individual, and task characteristics, our research offers a holistic understanding of how trust in GAI shapes innovation. Prior studies suggest that the benefits of GAI require employees to develop the necessary capabilities and refine their task competencies to translate insights into tangible innovations (Jia et al., 2024; Mariani and Dwivedi, 2024). There is limited understanding of how enhancing employees’ capabilities fosters GAI-driven innovation. Our findings identify two crucial factors influencing employee-GAI collaboration in innovation: employee-GAI fit and employee-innovation task fit. We confirm the strategic value of these fits in leveraging trust in GAI for innovation and elucidate their differentiated roles in exploitative versus exploratory innovation. These insights underscore that harnessing GAI for innovation is a sophisticated process, contingent on employees’ GAI capabilities to effectively deploy the technology alongside its technical functions and on their expertise, experience, and skills that fit with innovation task requirements.

Third, this research expands the application and depth of TTF theory. Traditionally employed to examine the interplay among tasks, technology, user responses, and performance (Howard and Rose, 2018), TTF theory is now extended to the ___domain of GAI technology and innovation behavior. This extension represents a significant advancement in applying TTF theory to emerging technologies. Specifically, we refine the theory’s fit mechanism by analyzing how trust in GAI influences ambidextrous innovation. Our findings reveal not only the positive effects of good fit but also the negative consequences of poor fit on innovation. This bidirectional perspective provides a more comprehensive understanding of the fit mechanism within TTF theory.

Practical implications

These findings yield several management implications for innovation practices. First, fostering trust in GAI enables employees to harness its technological potential in supporting both exploitative and exploratory innovation. It is imperative, especially for those less familiar with GAI technology, to actively engage in collaborative practices and to apply GAI to innovation tasks that align well with its strengths (Kanbach et al., 2024). At the same time, employees should recognize that GAI is not a cure-all for innovation challenges, particularly in domains that require emotional intelligence and ethical judgment where human strengths prevail. Over-trust in GAI may lead to negative outcomes such as employee complacency, increased technological risks, and hindered exploratory innovation, while also impeding the resource sharing and knowledge exchange that are vital for complementing exploitative and exploratory innovation. This issue is particularly relevant for employees proficient in GAI, who may be prone to over-trusting the technology in their pursuit of novelty in innovation (Jia et al., 2024). Meanwhile, firms should integrate human-centric considerations into technology adoption by establishing robust digital infrastructure, providing emotional support, fostering an innovation-friendly culture, and ensuring a psychologically safe learning environment.

Second, successful GAI implementation for innovation further requires careful attention to employee-GAI fit and employee-innovation task fit. Employee-GAI fit highlights the importance of fitting employees’ GAI capabilities with GAI’s technical functions to maximize innovation potential. Firms should offer tailored training programs to address any gaps in employees’ GAI capabilities and provide targeted development opportunities. Similarly, employee-innovation task fit underscores the need to align employees’ task competencies with the requirements of innovation tasks. Employees should enhance their expertise in areas where GAI is less effective. Firms, in turn, should strategically assemble teams based on specialized skills and experience to optimize contributions.

Limitations and future research directions

First, although China serves as an appropriate context for examining the impact of trust in GAI on ambidextrous innovation, future studies should verify our findings in other economic and cultural settings to ensure broader applicability. Second, given that trust in GAI is inherently multidimensional, further studies should explore how its affective and cognitive dimensions distinctly shape employee innovation outcomes (Glikson and Woolley, 2020). Additionally, future work should examine additional individual and task characteristics, such as digital affinity and task complexity, that may uniquely influence the relationship between GAI and innovation (Vincent, 2021).