Introduction

The dissemination and exchange of information among people, and between people and society, are important forms of communication and collaboration in human society. Information dissemination has driven the progress of human society and the development of civilizations, and it sustains the continuous interaction between human society and the natural ecology. With the development of technology, information dissemination has shifted from the traditional dissemination of a single piece of information to the simultaneous dissemination of multiple pieces of information, which reflects real life more comprehensively1,2. When multiple pieces of information are disseminated at the same time, a competitive relationship arises between them, similar to the competition between people. For example, the dissemination of rumors competes with the dissemination of rumor-refuting information3,4, and the dissemination of positive emotions competes with the dissemination of negative emotions5; the dissemination of one side restrains the dissemination of the other. Research on the dissemination mechanisms and control measures of multi-information can therefore effectively address the related practical problems.

Studying multi-information dissemination with qualitative analysis methods is commonplace, whereas quantitative analyses of this problem are rare. Early information dissemination models were constructed mainly by analogy with infectious disease models. Kermack and McKendrick built the famous SIR compartment model6 to explore the Black Death and the plague; the infected population in each compartment of the SIR model is analogous to the population disseminating information. Afterwards, the "threshold theory" was put forward7, which provides good inspiration for studying the scope and trend of information dissemination. In recent studies, researchers have applied infectious disease models to the prevention and treatment of HBV8. Farhan and Aljuaydi and Shah et al. designed models that account for asymptomatic carriers, and a fractional modeling approach with vaccination and treatment was used to study the spread of HBV9. Infectious disease models have also been applied to mosquito-borne, epidemic-prone diseases10. Some scholars have used infectious disease models to study the dissemination mode and control measures of yellow fever11, and Ndeffo-Mbah et al. calculated the basic reproduction number12 for yellow fever. Jan et al. adopted fractional differential equations to study infectious disease transmission, which has gradually become a research hotspot13,14,15. Fractional differential equations can likewise be constructed to analyze the dissemination of information.

In the 1950s, a few scholars began to apply the modeling and theoretical analysis methods of virus spread in biology to the study of information dissemination16,17. The most famous among them are Daley and Kendall, who improved the infectious disease model and constructed the DK rumor dissemination model18,19,20. The DK model laid the foundation for extending biological research into the field of information dissemination.

Based on the amount of information involved, information dissemination models can be divided into single-information dissemination21 and multi-information dissemination22. In most cases, multiple pieces of information are disseminated together in social systems, so it is more practical to study the problem of multi-information dissemination23,24,25. Xiao et al. considered the dynamic changes in anti-rumor information, constructed an information driving mechanism based on evolutionary game theory, and proposed the SKIR model based on the competition between rumors and anti-rumors26. Considering the coexistence of rumors and authoritative information in social networks, Zhang et al. proposed the \(IS_1S_2C_1C_2R_1R_2\) model27 with super disseminators, super authoritative-information disseminators, rumor suppressors, and authoritative-information suppressors. Shen et al. proposed a new emotion-based two-layer 2E-SIR model for analyzing the impact of cross-platform dissemination and dissemination sources on the spread of emotional information; the results show that the dissemination speed of positive information is affected by the dissemination of negative information, which fosters negative emotions and can easily lead to social disorder28. With the development of the information dissemination field, optimal control has been used as a mathematical tool in the control of information dissemination models. Optimal control methods for dissemination dynamics were previously used to study the spread and control of cancer cells29,30, SARS31,32, and HIV33. Subsequently, scholars applied these disease-control methods to the dissemination and control of rumors34 and of information35.

The problem of multi-information dissemination has been extensively studied in previous research. However, systematic studies that consider two types of information in a competitive relationship are relatively few. In fact, when multiple pieces of information coexist, the content expressed by different pieces of information can easily be opposed. If negative information that is harmful to social development eventually "defeats" positive information that is beneficial to it, social development is hindered. Therefore, it is important to devise methods for controlling negative information. To address this problem, this paper constructs a competitive information dissemination model and discusses, through theoretical analysis and numerical simulations, the conditions under which positive and negative information continue to spread or disappear. Moreover, in this study, biological modeling and theoretical analysis methods are applied to the field of information dissemination; applying the approach of this paper in reverse, to the study of competition among biological populations, might also be an interesting direction.

The rest of this paper is organized as follows. In “The model” section, a competitive information dissemination model SPA2G2R is constructed. In “Stability analysis of the model” section, the basic reproduction number \(R_0\) is calculated, and the local and global stability of the information-free equilibrium point and the information-existence equilibrium points are analyzed. In “Sensitivity analysis” section, a sensitivity analysis of the important control parameters in competitive information dissemination is performed. In “The optimal control model” section, the existence of an optimal control and the optimal control strategy for the competitive information dissemination model are established. In “Numerical simulations” section, the influence of parameter changes on information dissemination with a competitive relationship and the effect of the optimal control strategy are analyzed through numerical simulations. The conclusion is presented in the last section.

The model

In this paper, an open virtual community is considered. At any given time t, the group size is variable, and the total group size is denoted by N(t). All individuals can be classified into five categories: (1) Susceptible people, represented by S(t), who have not yet received the information but readily adopt it; (2) Information disseminators who are subject to guidance mechanisms such as publicity and education and therefore disseminate mainly positive information, denoted by P(t); (3) Information disseminators who have not received any guidance mechanism and whose information is typically negative, denoted by A(t); (4) The two types of information disseminators subject to isolation control, denoted by \(G_1(t)\) and \(G_2(t)\), respectively; (5) People who no longer disseminate either type of information, represented by \(R_1(t)\) and \(R_2(t)\).

In the model developed in this paper, educational guidance is assumed to make the group more likely to disseminate positive information. The guidance mechanism is manifested in two ways. On the one hand, it acts at the source of dissemination: publicity efforts are increased so that as many people as possible who are exposed to the information disseminate positive information. On the other hand, it acts during the dissemination process: educational efforts are increased so that people who have been exposed to negative information can switch to disseminating positive information through self-learning and improved cognition. At the same time, when the management department needs the information to disappear, some disseminators lose interest in the information because of its limited timeliness and stop disseminating it, while the remaining disseminators are subject to information isolation by the relevant management department, which reduces their enthusiasm for dissemination, so they also stop disseminating it. In this paper, it is assumed that positive-information disseminators must be influenced by the various guidance mechanisms in order to disseminate positive information.

In this paper, we construct a SPA2G2R model to reflect the phenomenon of competitive information dissemination. The flow chart of the model is given in Fig. 1.

Fig. 1

The flow chart of the model.

The parameters of the SPA2G2R model are described below:

  • In a social system, the number of individuals generally changes over time. Therefore, this paper defines B as the number of individuals entering the whole system per unit time. At the same time, \(\mu\) is defined as the proportion of individuals moving out, taking into account the possibility of individuals withdrawing from the social system due to force majeure.

  • When information begins to be disseminated in the social system, easy adopters have a certain probability of contacting the information dissemination groups. The contact rate with disseminators of positive information is defined as \(\alpha _1\), and the contact rate with disseminators of negative information is defined as \(\alpha _2\). At the same time, easy adopters have a certain probability \(\theta _1\) of being influenced by the guidance mechanism.

  • When two types of information are disseminated in the social system at the same time, there is a certain probability that the two types of information disseminators will contact each other. Thus, the contact rate between the two types of information disseminators is defined as \(\beta\). Similarly, the disseminator that disseminates negative information has a certain probability \(\theta _2\) to be influenced by self-learning or guidance mechanisms such as publicity.

  • When the two types of information are no longer needed in the social system, some information disseminators will, with probabilities \(\gamma _1\) and \(\gamma _2\), actively choose to give up disseminating the information because of its limited timeliness, while the remaining disseminators will be subject to information isolation by the management department with probabilities \(\lambda _1\) and \(\lambda _2\), thus moving into the information isolation groups \(G_1\) and \(G_2\). In addition, the information disseminated by the isolated groups receives little attention, which reduces the disseminators' enthusiasm; finally, they choose to stop disseminating the information with probabilities \(\varepsilon _1\) and \(\varepsilon _2\).

The parameters of SPA2G2R model are summarized in Table 1.

Table 1 The parameters description of SPA2G2R model.

The system dynamics are mathematically expressed as follows:

$$\begin{aligned} \left\{ {\begin{array}{*{20}{l}} {\frac{{dS}}{{dt}}} = {B - {\alpha _1}{\theta _1}SP - {\alpha _2}SA - \mu S,}\\ {\frac{{dP}}{{dt}}} = {{\alpha _1}{\theta _1}SP + \beta {\theta _2}AP - {\lambda _1}P - {\gamma _1}P - \mu P,}\\ {\frac{{dA}}{{dt}}} = {{\alpha _2}SA - \beta {\theta _2}AP - {\lambda _2}A - {\gamma _2}A - \mu A,}\\ {\frac{{d{G_1}}}{{dt}}} = {{\lambda _1}P - {\varepsilon _1}{G_1} - \mu {G_1},}\\ {\frac{{d{G_2}}}{{dt}}} = {{\lambda _2}A - {\varepsilon _2}{G_2} - \mu {G_2},}\\ {\frac{{d{R_1}}}{{dt}}} = {{\gamma _1}P + {\varepsilon _1}{G_1} - \mu {R_1},}\\ {\frac{{d{R_2}}}{{dt}}} = {{\gamma _2}A + {\varepsilon _2}{G_2} - \mu {R_2},} \end{array}} \right. \end{aligned}$$
(1)

where

$$\begin{aligned} \begin{array}{l} B> 0,\mu> 0,{\varepsilon _1}> 0,{\varepsilon _2}> 0,{\gamma _1}> 0,{\gamma _2} > 0,{\alpha _1} \in (0,1],{\alpha _2} \in (0,1],\beta \in (0,1],{\theta _1} \in (0,1],{\theta _2} \in (0,1],{\lambda _1} \in (0,1],{\lambda _2} \in (0,1], \end{array} \end{aligned}$$

and

$$\begin{aligned} S(t) + P(t) + A(t) + {G_1}(t) + {G_2}(t) + {R_1}(t) + {R_2}(t) = N(t). \end{aligned}$$

It is easy to see that \(\frac{{dN(t)}}{{dt}} = B - \mu N\) from system (1), so \(N(t) = \left( {{N_0} - \frac{B}{\mu }} \right) {e^{ - \mu t}} + \frac{B}{\mu },\) where \({N_0} = N(0).\) It follows that \(\mathop {\lim }\nolimits _{t \rightarrow \infty } N(t) = \frac{B}{\mu }\). That is to say, the total population N(t) of this virtual community does not tend to infinity over time; its limit is the ratio of the number of people entering per unit time, B, to the proportion of people leaving, \(\mu\). Thus the solutions of system (1) are positively invariant and bounded in the region shown below.

$$\begin{aligned} \Gamma = \left\{ \left( {S(t),P(t),A(t),{G_1}(t),{G_2}(t),{R_1}(t),{R_2}(t)} \right) \in R_7^ +: S(t) + P(t) + A(t) + {G_1}(t) + {G_2}(t) + {R_1}(t) + {R_2}(t) \le \frac{B}{\mu } \right\} .\end{aligned}$$
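As a numerical check on the boundedness argument above, system (1) can be integrated directly. The following Python sketch (all parameter values and initial conditions are illustrative assumptions, not values taken from this paper) integrates the SPA2G2R equations and verifies that the total population N(t) approaches \(B/\mu\).

```python
# Minimal sketch: numerically integrating system (1) with scipy.
# All parameter values and initial conditions are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

p = dict(B=1.0, mu=0.1, alpha1=0.4, alpha2=0.3, beta=0.2,
         theta1=0.6, theta2=0.5, lam1=0.2, lam2=0.2,
         gam1=0.1, gam2=0.1, eps1=0.3, eps2=0.3)

def spa2g2r(t, y, p):
    S, P, A, G1, G2, R1, R2 = y
    dS  = p['B'] - p['alpha1']*p['theta1']*S*P - p['alpha2']*S*A - p['mu']*S
    dP  = p['alpha1']*p['theta1']*S*P + p['beta']*p['theta2']*A*P \
          - (p['lam1'] + p['gam1'] + p['mu'])*P
    dA  = p['alpha2']*S*A - p['beta']*p['theta2']*A*P \
          - (p['lam2'] + p['gam2'] + p['mu'])*A
    dG1 = p['lam1']*P - (p['eps1'] + p['mu'])*G1
    dG2 = p['lam2']*A - (p['eps2'] + p['mu'])*G2
    dR1 = p['gam1']*P + p['eps1']*G1 - p['mu']*R1
    dR2 = p['gam2']*A + p['eps2']*G2 - p['mu']*R2
    return [dS, dP, dA, dG1, dG2, dR1, dR2]

y0 = [5.0, 0.5, 0.5, 0.0, 0.0, 0.0, 0.0]          # assumed initial state
sol = solve_ivp(spa2g2r, (0, 200), y0, args=(p,))
print("N(t_f) =", sol.y[:, -1].sum(), "vs B/mu =", p['B']/p['mu'])
```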

Stability analysis of the model

Solution of the basic reproduction number \(R_0\)

The basic reproduction number \(R_0\) represents the expected number of next-generation information disseminators that a single individual can produce. The basic reproduction number \(R_0\) of system (1) can be defined by the next generation matrix36.

Let \(X = {(P,A,{G_1},{G_2},{R_1},{R_2},S)^T}\), then system (1) can be written as:

$$\begin{aligned} \frac{{dX}}{{dt}} = {{\mathscr{F}}}(X) - {{\mathscr{V}}}(X), \end{aligned}$$

where

$$\begin{aligned}{{\mathscr{F}}}(X) = \left( {\begin{array}{*{20}{c}} {{\alpha _1}{\theta _1}SP + \beta {\theta _2}AP}\\ {{\alpha _2}SA}\\ 0\\ 0\\ 0\\ 0\\ 0 \end{array}} \right) ,{{\mathscr{V}}}(X) = \left( {\begin{array}{*{20}{c}} {{\lambda _1}P + {\gamma _1}P + \mu P}\\ {\beta {\theta _2}AP + {\lambda _2}A + {\gamma _2}A + \mu A}\\ { - {\lambda _1}P + {\varepsilon _1}{G_1} + \mu {G_1}}\\ { - {\lambda _2}A + {\varepsilon _2}{G_2} + \mu {G_2}}\\ { - {\gamma _1}P - {\varepsilon _1}{G_1} + \mu {R_1}}\\ { - {\gamma _2}A - {\varepsilon _2}{G_2} + \mu {R_2}}\\ { - B + {\alpha _1}{\theta _1}SP + {\alpha _2}SA + \mu S} \end{array}} \right) . \end{aligned}$$
(2)

Calculating the Jacobian matrices of \({{\mathscr{F}}}(X)\) and \({{\mathscr{V}}}(X)\) in Eq. (2) separately, and taking the submatrices corresponding to the variables P and A that are directly related to information dissemination, we obtain:

$$\begin{aligned} F = \left( {\begin{array}{*{20}{l}} {{\alpha _1}{\theta _1}S + \beta {\theta _2}A}& \quad {\beta {\theta _2}P}\\ 0& \quad {{\alpha _2}S} \end{array}} \right) ,V = \left( {\begin{array}{*{20}{l}} {{\lambda _1} + {\gamma _1} + \mu }& \quad 0\\ {\beta {\theta _2}A}& \quad {\beta {\theta _2}P + {\lambda _2} + {\gamma _2} + \mu } \end{array}} \right) . \end{aligned}$$
(3)

Here, F and V represent the infection and transition matrices, respectively37. The next generation matrix can then be represented as:

$$\begin{aligned}F{V^{ - 1}} = \left( {\begin{array}{*{20}{l}} {\frac{{B{\alpha _1}{\theta _1}}}{{\mu ({\lambda _1} + {\gamma _1} + \mu )}}}& \quad 0\\ 0& \quad {\frac{{B{\alpha _2}}}{{\mu ({\lambda _2} + {\gamma _2} + \mu )}}} \end{array}} \right) . \end{aligned}$$
(4)

The basic reproduction number \(R_0\) of system (1) is the spectral radius of the next generation matrix \(F{V^ {-1} }\). Matrix (4) has two eigenvalues, representing the basic reproduction numbers of positive and negative information in system (1)9. They can be calculated as:

$$\begin{aligned} R_0^P = \rho {(F{V^{ - 1}})_1} = \frac{{B{\alpha _1}{\theta _1}}}{{\mu ({\lambda _1} + {\gamma _1} + \mu )}},\quad R_0^A = \rho {(F{V^{ - 1}})_2} = \frac{{B{\alpha _2}}}{{\mu ({\lambda _2} + {\gamma _2} + \mu )}}. \end{aligned}$$
(5)
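The two reproduction numbers in Eq. (5) are simple closed-form expressions and can be evaluated directly. A minimal sketch, with illustrative (assumed) parameter values:

```python
# Minimal sketch: evaluating the basic reproduction numbers of Eq. (5).
# Parameter values are illustrative assumptions.
def reproduction_numbers(B, mu, alpha1, theta1, alpha2, lam1, gam1, lam2, gam2):
    R0_P = B * alpha1 * theta1 / (mu * (lam1 + gam1 + mu))  # positive information
    R0_A = B * alpha2 / (mu * (lam2 + gam2 + mu))           # negative information
    return R0_P, R0_A

R0_P, R0_A = reproduction_numbers(B=1.0, mu=0.1, alpha1=0.4, theta1=0.6,
                                  alpha2=0.3, lam1=0.2, gam1=0.1,
                                  lam2=0.2, gam2=0.1)
print(f"R0^P = {R0_P:.3f}, R0^A = {R0_A:.3f}")
```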

Existence of equilibrium point

Using the basic reproduction numbers \(R_0^P\) and \(R_0^A\) as critical values, the equilibrium points E = (S, P, A, \({G_1},{G_2},{R_1},{R_2})\) of system (1) can be divided into the information-free equilibrium point and the information-existence equilibrium points. The information-free equilibrium point is \({E^0}=({B/\mu },0,0,0,0,0,0)\) if \(R_0^P < 1\) and \(R_0^A < 1\). This means that the information will eventually disappear from the social system and will no longer be disseminated over time. By verifying the stability of the information-free equilibrium point, the stability conditions for the various populations at this point can be obtained.

Over time, information can continue to be disseminated in the social system under the following three conditions:

(1) The information-existence equilibrium point is \({E^{1,*}}=({S^{1,*}},{P^{1,*}},{A^{1,*}},G_1^{1,*},G_2^{1,*},R_1^{1,*},R_2^{1,*})\) when \(R_0^P > 1\) and \(R_0^A > 1\). This means that both types of information can continue to disseminate in the social system over time. The information-existence equilibrium point \({E^{1,*}}\) should satisfy:

$$\begin{aligned} \left\{ \begin{array}{l} B - {\alpha _1}{\theta _1}{S^{1,*}}{P^{1,*}} - {\alpha _2}{S^{1,*}}{A^{1,*}} - \mu {S^{1,*}} = 0,\\ {\alpha _1}{\theta _1}{S^{1,*}}{P^{1,*}} + \beta {\theta _2}{A^{1,*}}{P^{1,*}} - {\lambda _1}{P^{1,*}} - {\gamma _1}{P^{1,*}} - \mu {P^{1,*}} = 0,\\ {\alpha _2}{S^{1,*}}{A^{1,*}} - \beta {\theta _2}{A^{1,*}}{P^{1,*}} - {\lambda _2}{A^{1,*}} - {\gamma _2}{A^{1,*}} - \mu {A^{1,*}} = 0,\\ {\lambda _1}{P^{1,*}} - {\varepsilon _1}G_1^{1,*} - \mu G_1^{1,*} = 0,\\ {\lambda _2}{A^{1,*}} - {\varepsilon _2}G_2^{1,*} - \mu G_2^{1,*} = 0,\\ {\gamma _1}{P^{1,*}} + {\varepsilon _1}G_1^{1,*} - \mu R_1^{1,*} = 0,\\ {\gamma _2}{A^{1,*}} + {\varepsilon _2}G_2^{1,*} - \mu R_2^{1,*} = 0. \end{array} \right. \end{aligned}$$
(6)

The information-existence equilibrium point \({E^{1,*}} = ({S^{1,*}},{P^{1,*}},{A^{1,*}},G_1^{1,*},G_2^{1,*},R_1^{1,*},R_2^{1,*})\) can be deduced as the following equations by solving Eq. (6):

$$\begin{aligned} {S^{1,*}}&= \frac{{B\beta {\theta _2}}}{{{\alpha _2}({\lambda _1} + {\gamma _1} + \mu ) - {\alpha _1}{\theta _1}({\lambda _2} + {\gamma _2} + \mu ) + \mu \beta {\theta _2}}},\nonumber \\ {P^{1,*}}&= \frac{{B{\alpha _2}\beta {\theta _2} - {\alpha _2}({\lambda _1} + {\gamma _1} + \mu )({\lambda _2} + {\gamma _2} + \mu ) + {\alpha _1}{\theta _1}{{({\lambda _2} + {\gamma _2} + \mu )}^2} - \mu \beta {\theta _2}({\lambda _2} + {\gamma _2} + \mu )}}{{\beta {\theta _2}[{\alpha _2}({\lambda _1} + {\gamma _1} + \mu ) - {\alpha _1}{\theta _1}({\lambda _2} + {\gamma _2} + \mu ) + \mu \beta {\theta _2}]}},\nonumber \\ {A^{1,*}}&= \frac{{{\alpha _2}{{({\lambda _1} + {\gamma _1} + \mu )}^2} - {\alpha _1}{\theta _1}({\lambda _1} + {\gamma _1} + \mu )({\lambda _2} + {\gamma _2} + \mu ) + \mu \beta {\theta _2}({\lambda _1} + {\gamma _1} + \mu ) - B{\alpha _1}{\theta _1}\beta {\theta _2}}}{{\beta {\theta _2}[{\alpha _2}({\lambda _1} + {\gamma _1} + \mu ) - {\alpha _1}{\theta _1}({\lambda _2} + {\gamma _2} + \mu ) + \mu \beta {\theta _2}]}}. \end{aligned}$$
(7)

It can be known that \({S^{1,*}}> 0,\) \({P^{1,*}}> 0,\) \({A^{1,*}} > 0\) if \(\beta ,{\theta _1} \ge {\lambda _1} + {\gamma _1} + \mu ,\ {\lambda _2} + {\gamma _2} + \mu\), \({\alpha _1} > \mu\) and \({\alpha _2} > {\alpha _1},\beta ,{\theta _1},{\theta _2}\). Therefore, the equilibrium point \({E^{1,*}} = ({S^{1,*}},{P^{1,*}},{A^{1,*}},G_1^{1,*},G_2^{1,*},R_1^{1,*},R_2^{1,*})\) of system (1) exists.
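The closed-form expressions in Eq. (7) can be checked numerically by substituting them back into the first three equations of system (6); the residuals should vanish up to rounding error. A minimal sketch with illustrative (assumed) parameter values follows; positivity of \(P^{1,*}\) and \(A^{1,*}\) still has to be verified separately under the conditions stated above.

```python
# Minimal sketch: evaluating the coexistence equilibrium of Eq. (7) and
# checking that it satisfies the first three equations of system (6).
# Parameter values are illustrative assumptions.
def coexistence_equilibrium(B, mu, alpha1, theta1, alpha2, beta, theta2,
                            lam1, gam1, lam2, gam2):
    d1 = lam1 + gam1 + mu                      # lambda1 + gamma1 + mu
    d2 = lam2 + gam2 + mu                      # lambda2 + gamma2 + mu
    denom = alpha2 * d1 - alpha1 * theta1 * d2 + mu * beta * theta2
    S = B * beta * theta2 / denom
    P = (B * alpha2 * beta * theta2 - alpha2 * d1 * d2
         + alpha1 * theta1 * d2**2 - mu * beta * theta2 * d2) / (beta * theta2 * denom)
    A = (alpha2 * d1**2 - alpha1 * theta1 * d1 * d2
         + mu * beta * theta2 * d1 - B * alpha1 * theta1 * beta * theta2) / (beta * theta2 * denom)
    return S, P, A

p = dict(B=1.0, mu=0.1, alpha1=0.4, theta1=0.6, alpha2=0.5, beta=0.2,
         theta2=0.5, lam1=0.2, gam1=0.1, lam2=0.2, gam2=0.1)
S, P, A = coexistence_equilibrium(**p)
# Residuals of the S, P, A equations of system (6); all should be ~0.
r1 = p['B'] - p['alpha1']*p['theta1']*S*P - p['alpha2']*S*A - p['mu']*S
r2 = p['alpha1']*p['theta1']*S*P + p['beta']*p['theta2']*A*P - (p['lam1']+p['gam1']+p['mu'])*P
r3 = p['alpha2']*S*A - p['beta']*p['theta2']*A*P - (p['lam2']+p['gam2']+p['mu'])*A
print("S, P, A =", S, P, A)
print("residuals:", r1, r2, r3)
```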

(2) The information-existence equilibrium point is \({E^{2,*}} = ({S^{2,*}},{P^{2,*}},G_1^{2,*},R_1^{2,*})\) when \(R_0^P > 1\) and \(R_0^A < 1\). This means that only the dissemination group P will persist, while the dissemination group A will eventually disappear over time. The information-existence equilibrium point \({E^{2,*}}\) should satisfy:

$$\begin{aligned} \left\{ \begin{array}{l} B - {\alpha _1}{\theta _1}{S^{2,*}}{P^{2,*}} - \mu {S^{2,*}} = 0,\\ {\alpha _1}{\theta _1}{S^{2,*}}{P^{2,*}} - {\lambda _1}{P^{2,*}} - {\gamma _1}{P^{2,*}} - \mu {P^{2,*}} = 0,\\ {\lambda _1}{P^{2,*}} - {\varepsilon _1}G_1^{2,*} - \mu G_1^{2,*} = 0,\\ {\gamma _1}{P^{2,*}} + {\varepsilon _1}G_1^{2,*} - \mu R_1^{2,*} = 0. \end{array} \right. \end{aligned}$$
(8)

The information-existence equilibrium point \({E^{2,*}} = ({S^{2,*}},{P^{2,*}},G_1^{2,*},R_1^{2,*})\) can be deduced as the following equations by solving Eq. (8):

$$\begin{aligned}{S^{2,*}} = \frac{{{\lambda _1} + {\gamma _1} + \mu }}{{{\alpha _1}{\theta _1}}},\quad {P^{2,*}} = \frac{{B{\alpha _1}{\theta _1} - \mu ({\lambda _1} + {\gamma _1} + \mu )}}{{{\alpha _1}{\theta _1}({\lambda _1} + {\gamma _1} + \mu )}}. \end{aligned}$$
(9)

It can be known that \({S^{2,*}}> 0,\) \({P^{2,*}} > 0\) if \({\theta _1} \ge {\lambda _1} + {\gamma _1} + \mu\) and \({\alpha _1} > \mu\). Therefore, the equilibrium point \({E^{2,*}} = ({S^{2,*}},{P^{2,*}},G_1^{2,*},R_1^{2,*})\) of system (1) exists.

(3) The information-existence equilibrium point is \({E^{3,*}} = ({S^{3,*}},{A^{3,*}},G_2^{3,*},R_2^{3,*})\) when \(R_0^P < 1\) and \(R_0^A > 1\). This means that only the dissemination group A will persist, while the dissemination group P will eventually disappear over time. The information-existence equilibrium point \({E^{3,*}}\) should satisfy:

$$\begin{aligned} \left\{ {\begin{array}{*{20}{l}} {B - {\alpha _2}{S^{3,*}}{A^{3,*}} - \mu {S^{3,*}} = 0,}\\ {{\alpha _2}{S^{3,*}}{A^{3,*}} - {\lambda _2}{A^{3,*}} - {\gamma _2}{A^{3,*}} - \mu {A^{3,*}} = 0,}\\ {{\lambda _2}{A^{3,*}} - {\varepsilon _2}G_2^{3,*} - \mu G_2^{3,*} = 0,}\\ {{\gamma _2}{A^{3,*}} + {\varepsilon _2}G_2^{3,*} - \mu R_2^{3,*} = 0.} \end{array}} \right. \end{aligned}$$
(10)

The information-existence equilibrium point \({E^{3,*}} = ({S^{3,*}},{A^{3,*}},G_2^{3,*},R_2^{3,*})\) can be deduced as the following equations by solving Eq. (10):

$$\begin{aligned} {S^{3,*}} = \frac{{{\lambda _2} + {\gamma _2} + \mu }}{{{\alpha _2}}},\quad {A^{3,*}} = \frac{{B{\alpha _2} - \mu ({\lambda _2} + {\gamma _2} + \mu )}}{{{\alpha _2}({\lambda _2} + {\gamma _2} + \mu )}}. \end{aligned}$$
(11)

It can be known that \({S^{3,*}}> 0,{A^{3,*}} > 0\) if \({\alpha _2} > {\lambda _2} + {\gamma _2} + \mu\). Therefore, the equilibrium point \({E^{3,*}}\) = \(({S^{3,*}},{A^{3,*}},G_2^{3,*},R_2^{3,*})\) of system (1) exists.
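Note that Eq. (9) can be rewritten as \({P^{2,*}} = \mu (R_0^P - 1)/({\alpha _1}{\theta _1})\) and Eq. (11) as \({A^{3,*}} = \mu (R_0^A - 1)/{\alpha _2}\), so the positivity of each boundary equilibrium is equivalent to the corresponding basic reproduction number exceeding one. A minimal sketch evaluating both boundary equilibria, with illustrative (assumed) parameter values:

```python
# Minimal sketch: boundary equilibria of Eqs. (9) and (11). P^{2,*} > 0
# exactly when R_0^P > 1, and A^{3,*} > 0 exactly when R_0^A > 1.
# Parameter values are illustrative assumptions.
def boundary_equilibria(B, mu, alpha1, theta1, alpha2, lam1, gam1, lam2, gam2):
    d1 = lam1 + gam1 + mu
    d2 = lam2 + gam2 + mu
    S2 = d1 / (alpha1 * theta1)                           # Eq. (9)
    P2 = (B * alpha1 * theta1 - mu * d1) / (alpha1 * theta1 * d1)
    S3 = d2 / alpha2                                      # Eq. (11)
    A3 = (B * alpha2 - mu * d2) / (alpha2 * d2)
    return (S2, P2), (S3, A3)

(S2, P2), (S3, A3) = boundary_equilibria(B=1.0, mu=0.1, alpha1=0.4, theta1=0.6,
                                         alpha2=0.5, lam1=0.2, gam1=0.1,
                                         lam2=0.2, gam2=0.1)
print("E2*: S =", S2, "P =", P2)
print("E3*: S =", S3, "A =", A3)
```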

Stability of equilibrium point

Firstly, the local and global asymptotic stability of the information-free equilibrium point \({E^0} = ({B/\mu },0,0,0,0,0,0)\) is proved as follows:

Theorem 1

If \(R_0^P < 1\) and \(R_0^A < 1,\) the information-free equilibrium point \({E^0} = ({B/\mu },0,0,0,0,0,0)\) is locally asymptotically stable.

Proof

The Jacobian matrix of system (1) at the information-free equilibrium point \({E^0} = ({B/\mu },0,0,0,0,0,0)\) can be written as:

$$\begin{aligned} J({E^0}) = \left[ {\begin{array}{*{20}{c}} { - \mu }& \quad { - {\alpha _1}{\theta _1}{S^*}}& \quad { - {\alpha _2}{S^*}}& \quad 0& \quad 0& \quad 0& \quad 0\\ 0& \quad {{\alpha _1}{\theta _1}{S^*} - ({\lambda _1} + {\gamma _1} + \mu )}& \quad 0& \quad 0& \quad 0& \quad 0& \quad 0\\ 0& \quad 0& \quad {{\alpha _2}{S^*} - ({\lambda _2} + {\gamma _2} + \mu )}& \quad 0& \quad 0& \quad 0& \quad 0\\ 0& \quad {{\lambda _1}}& \quad 0& \quad { - {\varepsilon _1} - \mu }& \quad 0& \quad 0& \quad 0\\ 0& \quad 0& \quad {{\lambda _2}}& \quad 0& \quad { - {\varepsilon _2} - \mu }& \quad 0& \quad 0\\ 0& \quad 0& \quad 0& \quad {{\varepsilon _1}}& \quad 0& \quad { - \mu }& \quad 0\\ 0& \quad 0& \quad 0& \quad 0& \quad {{\varepsilon _2}}& \quad 0& \quad { - \mu } \end{array}} \right] . \end{aligned}$$
(12)

The negative eigenvalues of \(J({E^0})\) can be easily obtained as \({\Lambda _{01}} = {\Lambda _{02}} = {\Lambda _{03}} = - \mu< 0,\) \({\Lambda _{04}} = - {\varepsilon _1} - \mu< 0,\) \({\Lambda _{05}} = - {\varepsilon _2} - \mu < 0.\) And the other eigenvalues are \({\Lambda _{06}} = \frac{{B{\alpha _1}{\theta _1} - \mu ({\lambda _1} + {\gamma _1} + \mu )}}{\mu }\) and \({\Lambda _{07}} = \frac{{B{\alpha _2} - \mu ({\lambda _2} + {\gamma _2} + \mu )}}{\mu }.\) Thus it can be seen that \({\Lambda _{06}}< 0,{\Lambda _{07}} < 0\) if \(R_0^P < 1\) and \(R_0^A < 1\). Hence, the information-free equilibrium point \({E^0} = ({B/\mu },0,0,0,0,0,0)\) of system (1) is locally asymptotically stable based on the Routh–Hurwitz criterion. \(\square\)
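Theorem 1 can also be checked numerically by assembling the Jacobian (12) at \({E^0} = (B/\mu, 0, \ldots, 0)\) and inspecting its spectrum. A minimal sketch, with illustrative (assumed) parameter values chosen so that \(R_0^P < 1\) and \(R_0^A < 1\):

```python
# Minimal sketch: building the Jacobian of Eq. (12) at E^0 and checking
# that all eigenvalues have negative real parts when R_0^P < 1, R_0^A < 1.
# Parameter values are illustrative assumptions.
import numpy as np

B, mu = 1.0, 0.5
alpha1, theta1, alpha2 = 0.1, 0.5, 0.1
lam1, gam1, lam2, gam2 = 0.2, 0.1, 0.2, 0.1
eps1, eps2 = 0.3, 0.3

S0 = B / mu
J = np.array([
    [-mu, -alpha1*theta1*S0, -alpha2*S0, 0, 0, 0, 0],
    [0, alpha1*theta1*S0 - (lam1+gam1+mu), 0, 0, 0, 0, 0],
    [0, 0, alpha2*S0 - (lam2+gam2+mu), 0, 0, 0, 0],
    [0, lam1, 0, -eps1-mu, 0, 0, 0],
    [0, 0, lam2, 0, -eps2-mu, 0, 0],
    [0, 0, 0, eps1, 0, -mu, 0],
    [0, 0, 0, 0, eps2, 0, -mu],
])
print(np.linalg.eigvals(J).real)   # all entries should be negative
```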

Theorem 2

If \(B{\alpha _1}{\theta _1} \le {\mu ^2}\) and \(B{\alpha _2} \le {\mu ^2},\) the information-free equilibrium point \({E^0} = ({B/\mu },0,0,0,0,0,0)\) of system (1) is globally asymptotically stable.

Proof

It is easy to know that \(S(t) + P(t) + A(t) + {G_1}(t) + {G_2}(t) + {R_1}(t) + {R_2}(t) = N(t)\) and satisfy \(\frac{{dN(t)}}{{dt}} = B - \mu N\). It illustrates that:

$$\begin{aligned} \mathop {\lim \sup }\limits _{t \rightarrow \infty } N(t) \le \frac{B}{\mu }. \end{aligned}$$
(13)

For \(t \ge 0,\) the positive invariant set of system (1) can be written as:

$$\begin{aligned} \Gamma = \left\{ {\left( {S(t),P(t),A(t),{G_1}(t),{G_2}(t),{R_1}(t),{R_2}(t)} \right) \in R_7^ +:S(t) + P(t) + A(t) + {G_1}(t) + {G_2}(t) + {R_1}(t) + {R_2}(t) \le \frac{B}{\mu }} \right\} . \end{aligned}$$
(14)

Then, the Lyapunov function can be constructed as \(\mathbf{{L}}(t) = P(t) + A(t) + {G_1}(t) + {G_2}(t) + {R_1}(t) + {R_2}(t),\) and \(\mathbf{{L}}^{\prime}(t)\) can be computed as:

$$\begin{aligned} {\mathbf{{L}}^{\prime} (t)}&= {{\alpha _1}{\theta _1}SP + \beta {\theta _2}AP - {\lambda _1}P - {\gamma _1}P - \mu P + {\alpha _2}SA - \beta {\theta _2}AP}\nonumber \\ &\quad { - {\lambda _2}A - {\gamma _2}A - \mu A + {\lambda _1}P - {\varepsilon _1}{G_1} - \mu {G_1} + {\lambda _2}A - {\varepsilon _2}{G_2}}\nonumber \\ & \quad { - \mu {G_2} + {\gamma _1}P + {\varepsilon _1}{G_1} - \mu {R_1} + {\gamma _2}A + {\varepsilon _2}{G_2} - \mu {R_2}}\nonumber \\&= {( - \mu + {\alpha _1}{\theta _1}S)P + ( - \mu + {\alpha _2}S)A - \mu ({G_1} + {G_2} + {R_1} + {R_2})}\nonumber \\ & \le {\left( - \mu + \frac{{B{\alpha _1}{\theta _1}}}{\mu }\right) P + \left( - \mu + \frac{{B{\alpha _2}}}{\mu }\right) A - \mu ({G_1} + {G_2} + {R_1} + {R_2}),} \end{aligned}$$
(15)

it is easy to know that \(\mathbf{{L}}^{\prime}(t) \le 0\) if \(S \le \frac{B}{\mu }\), \(B{\alpha _1}{\theta _1} \le {\mu ^2}\) and \(B{\alpha _2} \le {\mu ^2}\).

In addition, \(\mathbf{{L}}^{\prime}(t) = 0\) holds if and only if \(S(t) = {S^0},P = A = {G_1} = {G_2} = {R_1} = {R_2} = 0\). From system (1), it is known that \({E^0}\) is the only solution in \(\Gamma\) when \(\mathbf{{L}}^{\prime}(t) = 0\). Therefore, every solution of system (1) approaches \({E^0}\) as \(t \rightarrow \infty\) based on the Lyapunov–LaSalle Invariance Principle37. Hence, the information-free equilibrium point \({E^0} = ({B/\mu },0,0,0,0,0,0)\) of system (1) is globally asymptotically stable. \(\square\)

Next, the local and global asymptotic stability of the information-existence equilibrium points is proved. Since system (1) has three types of information-existence equilibria, corresponding to the cases in which both types of information disseminate simultaneously or only one type of information continues to disseminate, it is necessary to prove the local and global asymptotic stability of \({E^{1,*}} = ({S^{1,*}},{P^{1,*}},{A^{1,*}},G_1^{1,*},G_2^{1,*},R_1^{1,*},R_2^{1,*})\), \({E^{2,*}} = ({S^{2,*}},{P^{2,*}},G_1^{2,*},R_1^{2,*})\), and \({E^{3,*}} = ({S^{3,*}},{A^{3,*}},G_2^{3,*},R_2^{3,*})\) in the three scenarios in turn.

(1) Firstly, we prove the local and global asymptotic stability of the equilibrium point \({E^{1,*}}\), at which both types of information disseminate simultaneously, when \(R_0^P > 1\) and \(R_0^A > 1\).

Theorem 3

If \(R_0^P > 1,\) \(R_0^A > 1,\) \(\beta ,{\theta _1} \ge {\lambda _1} + {\gamma _1} + \mu ,{\lambda _2} + {\gamma _2} + \mu ,\) \({\alpha _1} > \mu\) and \({\alpha _2} > {\alpha _1},\beta ,{\theta _1},{\theta _2},\) the information-existence equilibrium point \({E^{1,*}} = ({S^{1,*}},{P^{1,*}},{A^{1,*}},G_1^{1,*},G_2^{1,*},R_1^{1,*},R_2^{1,*})\) of system (1) is locally asymptotically stable.

Proof

The Jacobian matrix of system (1) at the information-existence equilibrium point \({E^{1,*}} = ({S^{1,*}},{P^{1,*}},{A^{1,*}},G_1^{1,*},G_2^{1,*},R_1^{1,*},R_2^{1,*})\) can be written as:

$$\begin{aligned} { J({E^{1,*}}) = \left[ {\begin{array}{*{20}{c}} { - {\alpha _1}{\theta _1}{P^{1,*}} - {\alpha _2}{A^{1,*}} - \mu }& { - {\alpha _1}{\theta _1}{S^{1,*}}}& { - {\alpha _2}{S^{1,*}}}& 0& 0& 0& 0\\ {{\alpha _1}{\theta _1}{P^{1,*}}}& {{\alpha _1}{\theta _1}{S^{1,*}} + \beta {\theta _2}{A^{1,*}} - ({\lambda _1} + {\gamma _1} + \mu )}& {\beta {\theta _2}{P^{1,*}}}& 0& 0& 0& 0\\ {{\alpha _2}{A^{1,*}}}& { - \beta {\theta _2}{A^{1,*}}}& {{\alpha _2}{S^{1,*}} - \beta {\theta _2}{P^{1,*}} - ({\lambda _2} + {\gamma _2} + \mu )}& 0& 0& 0& 0\\ 0& {{\lambda _1}}& 0& { - {\varepsilon _1} - \mu }& 0& 0& 0\\ 0& 0& {{\lambda _2}}& 0& { - {\varepsilon _2} - \mu }& 0& 0\\ 0& 0& 0& {{\varepsilon _1}}& 0& { - \mu }& 0\\ 0& 0& 0& 0& {{\varepsilon _2}}& 0& { - \mu } \end{array}} \right] .} \end{aligned}$$
(16)

The negative eigenvalues of \(J({E^{1,*}})\) can be easily obtained as \({\Lambda _{11}} = {\Lambda _{12}} = - \mu < 0\), \({\Lambda _{13}} = - {\varepsilon _1} - \mu< 0,\) \({\Lambda _{14}} = - {\varepsilon _2} - \mu < 0\) and the other eigenvalues are the characteristic roots of \(\ \left| {hE - J({E^{1,*}})} \right|\), where:

$$\begin{aligned} { \left| {hE - J({E^{1*}})} \right| = \left| {\begin{array}{*{20}{c}} {h + {\alpha _1}{\theta _1}{P^{1*}} + {\alpha _2}{A^{1*}} + \mu }& {{\alpha _1}{\theta _1}{S^{1*}}}& {{\alpha _2}{S^{1*}}}\\ { - {\alpha _1}{\theta _1}{P^{1*}}}& {h - {\alpha _1}{\theta _1}{S^{1*}} - \beta {\theta _2}{A^{1*}} + ({\lambda _1} + {\gamma _1} + \mu )}& { - \beta {\theta _2}{P^{1*}}}\\ { - {\alpha _2}{A^{1*}}}& {\beta {\theta _2}{A^{1*}}}& {h - {\alpha _2}{S^{1*}} + \beta {\theta _2}{P^{1*}} + ({\lambda _2} + {\gamma _2} + \mu )} \end{array}} \right| = 0.} \end{aligned}$$
(17)

The eigenvalues of Eq. (17) can be obviously obtained as:

$$\begin{aligned} {\left| {hE - J({E^{1*}})} \right| }&= {{h^3} + ({\alpha _1}{\theta _1}{P^{1*}} + {\alpha _2}{A^{1*}} + \mu ){h^2} + \left[ {\alpha _1^2\theta _1^2{S^{1*}}{P^{1*}} - ({\lambda _1} + {\gamma _1} + \mu )({\lambda _2} + {\gamma _2} + \mu )} \right. }\nonumber \\ &\quad {\left. { +\, {\alpha _1}{\theta _1}({\lambda _2} + {\gamma _2} + \mu ){S^{1*}} + {\alpha _2}({\lambda _1} + {\gamma _1} + \mu ){S^{1*}} - {\alpha _1}{\alpha _2}{\theta _1}{S^{1*2}} + \alpha _2^2{S^{1*}}{A^{1*}}} \right] h}\nonumber \\ & \quad { +\, \alpha _2^2({\lambda _1} + {\gamma _1} + \mu ){S^{1*}}{A^{1*}} - {\alpha _1}{\alpha _2}{\theta _1}({\lambda _1} + {\gamma _1} + \mu ){S^{1*}}{P^{1*}} - \mu {\alpha _1}{\alpha _2}{\theta _1}{S^{1*2}}}\nonumber \\ &\quad { -\, {\alpha _1}{\theta _1}({\lambda _1} + {\gamma _1} + \mu )({\lambda _2} + {\gamma _2} + \mu ){P^{1*}} + \alpha _1^2\theta _1^2({\lambda _2} + {\gamma _2} + \mu ){S^{1*}}{P^{1*}}}\nonumber \\ &\quad { +\, {\alpha _1}{\alpha _2}{\theta _1}({\lambda _1} + {\gamma _1} + \mu ){S^{1*}} - {\alpha _2}({\lambda _1} + {\gamma _1} + \mu )({\lambda _2} + {\gamma _2} + \mu ){A^{1*}}}\nonumber \\ &\quad { - \, \mu ({\lambda _1} + {\gamma _1} + \mu )({\lambda _2} + {\gamma _2} + \mu ) + \mu {\alpha _1}{\theta _1}({\lambda _2} + {\gamma _2} + \mu ){S^{1*}} + \mu {\alpha _2}({\lambda _1} + {\gamma _1} + \mu ){S^{1*}} = 0.} \end{aligned}$$
(18)

We then write the characteristic polynomial as a cubic with coefficients \({a_3},{a_2},{a_1},{a_0}\) in order to determine the remaining eigenvalues of the Jacobian (16). Hence, Eq. (18) can be rewritten as:

$$\begin{aligned}{a_3}{h^3} + {a_2}{h^2} + {a_1}h + {a_0} = 0,\end{aligned}$$

where

$$\left\{ {\begin{array}{*{20}{l}} {{a_3}} = {1,}\\ {{a_2}} = {{\alpha _1}{\theta _1}{P^{1*}} + {\alpha _2}{A^{1*}} + \mu ,}\\ {{a_1}} = {\alpha _1^2\theta _1^2{S^{1*}}{P^{1*}} - ({\lambda _1} + {\gamma _1} + \mu )({\lambda _2} + {\gamma _2} + \mu ) + {\alpha _1}{\theta _1}({\lambda _2} + {\gamma _2} + \mu ){S^{1*}}}\\ \quad \quad { + {\alpha _2}({\lambda _1} + {\gamma _1} + \mu ){S^{1*}} - {\alpha _1}{\alpha _2}{\theta _1}{S^{1*2}} + \alpha _2^2{S^{1*}}{A^{1*}},}\\ {{a_0}} = {\alpha _2^2({\lambda _1} + {\gamma _1} + \mu ){S^{1*}}{A^{1*}} - {\alpha _1}{\alpha _2}{\theta _1}({\lambda _1} + {\gamma _1} + \mu ){S^{1*}}{P^{1*}} - \mu {\alpha _1}{\alpha _2}{\theta _1}{S^{1*2}}}\\ \quad \quad { - {\alpha _1}{\theta _1}({\lambda _1} + {\gamma _1} + \mu )({\lambda _2} + {\gamma _2} + \mu ){P^{1*}} + \alpha _1^2\theta _1^2({\lambda _2} + {\gamma _2} + \mu ){S^{1*}}{P^{1*}}}\\ \quad \quad { + {\alpha _1}{\alpha _2}{\theta _1}({\lambda _1} + {\gamma _1} + \mu ){S^{1*}} - {\alpha _2}({\lambda _1} + {\gamma _1} + \mu )({\lambda _2} + {\gamma _2} + \mu ){A^{1*}}}\\ \quad \quad { - \mu ({\lambda _1} + {\gamma _1} + \mu )({\lambda _2} + {\gamma _2} + \mu ) + \mu {\alpha _1}{\theta _1}({\lambda _2} + {\gamma _2} + \mu ){S^{1*}} + \mu {\alpha _2}({\lambda _1} + {\gamma _1} + \mu ){S^{1*}}.} \end{array}} \right.$$

And

$$\begin{aligned} {{a_2}{a_1} - {a_3}{a_0}}&= {\alpha _1^3\theta _1^3{S^{1*}}{P^{1*2}} + 2{\alpha _1}{\alpha _2}{\theta _1}({\lambda _1} + {\gamma _1} + \mu ){S^{1*}}{P^{1*}} + {\alpha _1}\alpha _2^2{\theta _1}{S^{1*}}{A^{1*}}{P^{1*}} - \alpha _1^2{\alpha _2}\theta _1^2{S^{1*2}}{P^{1*}}}\\ &\quad { +\, \alpha _1^2{\alpha _2}\theta _1^2{S^{1*}}{A^{1*}}{P^{1*}} + {\alpha _1}{\alpha _2}{\theta _1}({\lambda _2} + {\gamma _2} + \mu ){S^{1*}}{A^{1*}} + {\alpha _2}{S^{1*}}{A^{1*2}} + {\alpha _1}\alpha _2^2{\theta _1}{S^{1*2}}{A^{1*}}}\\ & \quad { +\, \mu \alpha _1^2\theta _1^2{S^{1*}}{P^{1*}} + \mu \alpha _2^2{S^{1*}}{A^{1*}} - {\alpha _1}{\alpha _2}{\theta _1}({\lambda _1} + {\gamma _1} + \mu ){S^{1*}}.} \end{aligned}$$

By the Routh–Hurwitz criterion, the information-existence equilibrium point \({E^{1,*}} = ({S^{1,*}},{P^{1,*}},{A^{1,*}},G_1^{1,*},G_2^{1,*},R_1^{1,*},R_2^{1,*})\) is locally asymptotically stable if the conditions (1) \({a_3},{a_2},{a_1},{a_0} > 0\) and (2) \({a_2}{a_1} - {a_3}{a_0} > 0\) hold. It is easy to see that \({a_3},{a_2} > 0\). In addition, \({a_1},{a_0} > 0\) and \({a_2}{a_1} - {a_3}{a_0} > 0\) if \(R_0^P > 1\), \(R_0^A > 1\), \(\beta ,{\theta _1} \ge {\lambda _1} + {\gamma _1} + \mu ,{\lambda _2} + {\gamma _2} + \mu\), \({\alpha _1} > \mu\), and \({\alpha _2} > {\alpha _1},\beta ,{\theta _1},{\theta _2}\). In this case, the Routh–Hurwitz criterion is satisfied. Hence, the information-existence equilibrium point \({E^{1,*}} = ({S^{1,*}},{P^{1,*}},{A^{1,*}},G_1^{1,*},G_2^{1,*},R_1^{1,*},R_2^{1,*})\) of system (1) is locally asymptotically stable. \(\square\)
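Rather than transcribing the lengthy coefficients \(a_2, a_1, a_0\), the Routh–Hurwitz conditions of Theorem 3 can be checked numerically by forming the upper-left \(3 \times 3\) block of the Jacobian (16) at \({E^{1,*}}\) and reading the coefficients off its characteristic polynomial. A minimal sketch with illustrative (assumed) parameter values; the equilibrium is computed from the relations \({\alpha _2}{S^{1,*}} - \beta {\theta _2}{P^{1,*}} = {\lambda _2} + {\gamma _2} + \mu\) and \({\alpha _1}{\theta _1}{S^{1,*}} + \beta {\theta _2}{A^{1,*}} = {\lambda _1} + {\gamma _1} + \mu\), which are equivalent to Eq. (7).

```python
# Minimal sketch: numerical check of the Routh-Hurwitz conditions at E^{1,*}.
# Parameter values are illustrative assumptions.
import numpy as np

B, mu = 1.0, 0.1
alpha1, theta1, alpha2 = 0.4, 0.6, 0.5
beta, theta2 = 0.2, 0.5
lam1, gam1, lam2, gam2 = 0.2, 0.1, 0.2, 0.1
d1, d2 = lam1 + gam1 + mu, lam2 + gam2 + mu

denom = alpha2*d1 - alpha1*theta1*d2 + mu*beta*theta2
S = B*beta*theta2 / denom                      # S^{1,*} from Eq. (7)
P = (alpha2*S - d2) / (beta*theta2)            # equilibrium relation for P^{1,*}
A = (d1 - alpha1*theta1*S) / (beta*theta2)     # equilibrium relation for A^{1,*}

# Upper-left 3x3 block of the Jacobian (16) at E^{1,*}.
M = np.array([
    [-alpha1*theta1*P - alpha2*A - mu, -alpha1*theta1*S, -alpha2*S],
    [alpha1*theta1*P, alpha1*theta1*S + beta*theta2*A - d1, beta*theta2*P],
    [alpha2*A, -beta*theta2*A, alpha2*S - beta*theta2*P - d2],
])
a3, a2, a1, a0 = np.poly(M)                    # coefficients of det(hI - M)
print("a_i > 0:", a2 > 0 and a1 > 0 and a0 > 0)
print("a2*a1 - a3*a0 > 0:", a2*a1 - a3*a0 > 0)
```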

Theorem 4

If \(R_0^P > 1\) and \(R_0^A > 1,\) the information-existence equilibrium point \({E^{1,*}} = ({S^{1,*}},{P^{1,*}},{A^{1,*}},\) \(G_1^{1,*},G_2^{1,*},R_1^{1,*},R_2^{1,*})\) of system (1) is globally asymptotically stable.

Proof

Let us construct a Lyapunov function \({Z_1}(t)\) as:

$$\begin{aligned}{Z_1}(t) = {\left[ {\left( S(t) - {S^{1,*}}\right) + \left( P(t) - {P^{1,*}}\right) + \left( A(t) - {A^{1,*}}\right) + \left( {G_1}(t) - G_1^{1,*}\right) + \left( {G_2}(t) - G_2^{1,*}\right) + \left( {R_1}(t) - R_1^{1,*}\right) + \left( {R_2}(t) - R_2^{1,*}\right) } \right] ^2},\end{aligned}$$
(19)

and \({Z'_1}(t)\) can be computed as:

$$\begin{aligned} {{{Z}_{1}^{\prime}}(t)}&= {2\left[ {\left( S(t) - {S^{1,*}}\right) + \left( P(t) - {P^{1,*}}\right) + \left( A(t) - {A^{1,*}}\right) + \left( {G_1}(t) - G_1^{1,*}\right) } \right. }\nonumber \\ & \quad {\left. { + \left( {G_2}(t) - G_2^{1,*}\right) + \left( {R_1}(t) - R_1^{1,*}\right) + \left( {R_2}(t) - R_2^{1,*}\right) } \right] }\nonumber \\ & \quad { \times \left[ {S'(t) + P'(t) + A'(t) + {{G}_{1}^{\prime}}(t) + {{G}_{2}^{\prime}}(t) + {{R}_{1}^{\prime}}(t) + {{R}_{2}^{\prime}}(t)} \right] }\nonumber \\&= {2\left[ {\left( S(t) - {S^{1,*}}\right) + \left( P(t) - {P^{1,*}}\right) + \left( A(t) - {A^{1,*}}\right) + \left( {G_1}(t) - G_1^{1,*}\right) } \right. }\nonumber \\ & \quad {\left. { + \left( {G_2}(t) - G_2^{1,*}\right) + \left( {R_1}(t) - R_1^{1,*}\right) + \left( {R_2}(t) - R_2^{1,*}\right) } \right] }\nonumber \\ &\quad { \times \left[ {B - \mu S - \mu P - \mu A - \mu {G_1} - \mu {G_2} - \mu {R_1} - \mu {R_2}} \right] .} \end{aligned}$$
(20)

Because of the existence of \({E^{1,*}} = ({S^{1,*}},{P^{1,*}},{A^{1,*}},G_1^{1,*},G_2^{1,*},R_1^{1,*},R_2^{1,*})\), it can be known that \(B - \mu {S^{1,*}} - \mu {P^{1,*}} - \mu {A^{1,*}} - \mu G_1^{1,*} - \mu G_2^{1,*} - \mu R_1^{1,*} - \mu R_2^{1,*} = 0\), i.e., \(B = \mu {S^{1,*}} + \mu {P^{1,*}} + \mu {A^{1,*}} + \mu G_1^{1,*} + \mu G_2^{1,*} + \mu R_1^{1,*} + \mu R_2^{1,*}\). Then, Eq. (20) can be computed as:

$$\begin{aligned} {{{Z}_{1}^{\prime}}(t)}&= {2\left[ {\left( S(t) - {S^{1,*}}\right) + \left( P(t) - {P^{1,*}}\right) + \left( A(t) - {A^{1,*}}\right) + \left( {G_1}(t) - G_1^{1,*}\right) } \right. }\nonumber \\ &\quad {\left. { + \left( {G_2}(t) - G_2^{1,*}\right) + \left( {R_1}(t) - R_1^{1,*}\right) + \left( {R_2}(t) - R_2^{1,*}\right) } \right] }\nonumber \\ &\quad { \times \left[ {\mu {S^{1,*}} + \mu {P^{1,*}} + \mu {A^{1,*}} + \mu G_1^{1,*} + \mu G_2^{1,*} + \mu R_1^{1,*} + \mu R_2^{1,*}} \right. }\nonumber \\ & \quad {\left. { - \mu S - \mu P - \mu A - \mu {G_1} - \mu {G_2} - \mu {R_1} - \mu {R_2}} \right] }\nonumber \\&= { - 2\mu \left[ {\left( S - {S^{1,*}}\right) + \left( P - {P^{1,*}}\right) + \left( A - {A^{1,*}}\right) + \left( {G_1} - G_1^{1,*}\right) } \right. }\nonumber \\ &\quad {{{\left. { + \left( {G_2} - G_2^{1,*}\right) + \left( {R_1} - R_1^{1,*}\right) + \left( {R_2} - R_2^{1,*}\right) } \right] }^2} \le 0.} \end{aligned}$$
(21)

Besides that, \({Z'_1}(t) = 0\) holds if and only if \(S(t) = {S^{1,*}},\) \(P(t) = {P^{1,*}},\) \(A(t) = {A^{1,*}},\) \({G_1}(t) = G_1^{1,*},\) \({G_2}(t) = G_2^{1,*},\) \({R_1}(t) = R_1^{1,*},\) \({R_2}(t) = R_2^{1,*}\). Hence, the information-existence equilibrium point \({E^{1,*}}\) = \(({S^{1,*}},{P^{1,*}},{A^{1,*}},\) \(G_1^{1,*},G_2^{1,*},R_1^{1,*},R_2^{1,*})\) of system (1) is globally asymptotically stable based on Lyapunov–LaSalle Invariance Principle37. \(\square\)

(2) Secondly, we prove the local and global asymptotic stability of the equilibrium point \({E^{2,*}}\), at which only the dissemination group P persists while the dissemination group A eventually disappears, when \(R_0^P > 1\) and \(R_0^A < 1\).

Theorem 5

If \({\theta _1} \ge {\lambda _1} + {\gamma _1} + \mu ,\) \({\alpha _1} > \mu ,\) \(R_0^P > 1\) and \(R_0^A < 1,\) the information-existence equilibrium point \({E^{2,*}}=({S^{2,*}},{P^{2,*}},G_1^{2,*},R_1^{2,*})\) of system (1) is locally asymptotically stable.

Proof

The Jacobian matrix of system (1) at the information-existence equilibrium point \({E^{2,*}} = ({S^{2,*}},{P^{2,*}},G_1^{2,*},R_1^{2,*})\) can be written as:

$$\begin{aligned} J({E^{2,*}}) = \left[ {\begin{array}{*{20}{c}} { - {\alpha _1}{\theta _1}{P^{2,*}} - \mu }& { - {\alpha _1}{\theta _1}{S^{2,*}}}& 0& 0& 0& 0& 0\\ {{\alpha _1}{\theta _1}{P^{2,*}}}& {{\alpha _1}{\theta _1}{S^{2,*}} - ({\lambda _1} + {\gamma _1} + \mu )}& 0& 0& 0& 0& 0\\ 0& {{\lambda _1}}& 0& { - {\varepsilon _1} - \mu }& 0& 0& 0\\ 0& 0& 0& 0& { - {\varepsilon _2} - \mu }& 0& 0\\ 0& 0& 0& {{\varepsilon _1}}& 0& { - \mu }& 0\\ 0& 0& 0& 0& {{\varepsilon _2}}& 0& { - \mu } \end{array}} \right] . \end{aligned}$$
(22)

The negative eigenvalues of \(J({E^{2,*}})\) can be easily obtained as \({\Lambda _{21}} = {\Lambda _{22}} = - \mu < 0\), \({\Lambda _{23}} = - {\varepsilon _1} - \mu < 0\), \({\Lambda _{24}} = - {\varepsilon _2} - \mu < 0\) and the other eigenvalues are the characteristic roots of \(\ \left| {hE - J({E^{2,*}})} \right|\), where

$$\begin{aligned} \left| {hE - J({E^{2*}})} \right| = \left| {\begin{array}{*{20}{c}} {h + {\alpha _1}{\theta _1}{P^{2*}} + \mu }& \quad {{\alpha _1}{\theta _1}{S^{2*}}}\\ { - {\alpha _1}{\theta _1}{P^{2*}}}& \quad {h - {\alpha _1}{\theta _1}{S^{2*}} + ({\lambda _1} + {\gamma _1} + \mu )} \end{array}} \right| = 0. \end{aligned}$$
(23)

The eigenvalues of Eq. (23) can be obviously obtained as:

$$\begin{aligned} {\left| {hE - J({E^{2*}})} \right| }&= {{h^2} + \left[ {{\alpha _1}{\theta _1}{P^{2*}} - {\alpha _1}{\theta _1}{S^{2*}} + ({\lambda _1} + {\gamma _1} + \mu ) + \mu } \right] h}\nonumber \\ & \quad { +\, {\alpha _1}{\theta _1}({\lambda _1} + {\gamma _1} + \mu ){P^{2*}} - {\alpha _1}{\theta _1}\mu {S^{2*}} + \mu ({\lambda _1} + {\gamma _1} + \mu ) = 0.} \end{aligned}$$
(24)

Further calculation yields the solution of Eq. (24) as follows:

$$\begin{aligned} {{\Lambda _{25}}}&= {\frac{1}{2}\left\{ {\left[ {{\alpha _1}{\theta _1}{S^{2*}} - {\alpha _1}{\theta _1}{P^{2*}} - ({\lambda _1} + {\gamma _1} + 2\mu )} \right] } \right. }\\ &\quad {\left. { +\, \sqrt{{{\left[ {{\alpha _1}{\theta _1}{P^{2*}} - {\alpha _1}{\theta _1}{S^{2*}} + ({\lambda _1} + {\gamma _1} + 2\mu )} \right] }^2} - 4\left[ {{\alpha _1}{\theta _1}({\lambda _1} + {\gamma _1} + \mu ){P^{2*}} - {\alpha _1}{\theta _1}\mu {S^{2*}} + \mu ({\lambda _1} + {\gamma _1} + \mu )} \right] } } \right\} ,} \end{aligned}$$

and

$$\begin{aligned} {{\Lambda _{26}}}& = {\frac{1}{2}\left\{ {\left[ {{\alpha _1}{\theta _1}{S^{2*}} - {\alpha _1}{\theta _1}{P^{2*}} - ({\lambda _1} + {\gamma _1} + 2\mu )} \right] } \right. }\\ & \quad {\left. { - \sqrt{{{\left[ {{\alpha _1}{\theta _1}{P^{2*}} - {\alpha _1}{\theta _1}{S^{2*}} + ({\lambda _1} + {\gamma _1} + 2\mu )} \right] }^2} - 4\left[ {{\alpha _1}{\theta _1}({\lambda _1} + {\gamma _1} + \mu ){P^{2*}} - {\alpha _1}{\theta _1}\mu {S^{2*}} + \mu ({\lambda _1} + {\gamma _1} + \mu )} \right] } } \right\} ,} \end{aligned}$$

it is easy to know that \({\Lambda _{25}} < 0\) and \({\Lambda _{26}} < 0\) if \({\theta _1} \ge {\lambda _1} + {\gamma _1} + \mu\), \({\alpha _1} > \mu\), \(R_0^P > 1\) and \(R_0^A < 1\). Hence, the information-existence equilibrium point \({E^{2,*}} = ({S^{2,*}},{P^{2,*}},G_1^{2,*},R_1^{2,*})\) of system (1) is locally asymptotically stable. \(\square\)

Theorem 6

If \(R_0^P > 1\) and \(R_0^A < 1,\) the information-existence equilibrium point \({E^{2,*}} = ({S^{2,*}},{P^{2,*}},G_1^{2,*},R_1^{2,*})\) of system (1) is globally asymptotically stable.

Proof

Let us construct a Lyapunov function \({Z_2}(t)\) as:

$$\begin{aligned}{Z_2}(t) = {\left[ {\left( S(t) - {S^{2,*}}\right) + \left( P(t) - {P^{2,*}}\right) + \left( {G_1}(t) - G_1^{2,*}\right) + \left( {R_1}(t) - R_1^{2,*}\right) } \right] ^2}, \end{aligned}$$
(25)

and \({Z'_2}(t)\) can be computed as:

$$\begin{aligned} {{{Z}_{2}^{\prime}}(t)}&= {2\left[ {\left( S(t) - {S^{2,*}}\right) + \left( P(t) - {P^{2,*}}\right) + \left( {G_1}(t) - G_1^{2,*}\right) + \left( {R_1}(t) - R_1^{2,*}\right) } \right] }\nonumber \\ & \quad { \times \left[ {S'(t) + P'(t) + {{G}_{1}^{\prime}}(t) + {{R}_{1}^{\prime}}(t)} \right] }\nonumber \\& = {2\left[ {\left( S(t) - {S^{2,*}}\right) + \left( P(t) - {P^{2,*}}\right) + \left( {G_1}(t) - G_1^{2,*}\right) + \left( {R_1}(t) - R_1^{2,*}\right) } \right] }\nonumber \\ &\quad { \times \left[ {B - \mu S - \mu P - \mu {G_1} - \mu {R_1}} \right] .} \end{aligned}$$
(26)

Because of the existence of \({E^{2,*}} = ({S^{2,*}},{P^{2,*}},G_1^{2,*},R_1^{2,*})\), it can be known that \(B - \mu {S^{2,*}} - \mu {P^{2,*}} - \mu G_1^{2,*} - \mu R_1^{2,*} = 0\), i.e., \(B = \mu {S^{2,*}} + \mu {P^{2,*}} + \mu G_1^{2,*} + \mu R_1^{2,*}\). Then, Eq. (26) can be computed as:

$$\begin{aligned} {{{Z}_{2}^{\prime}}(t)}& = {2\left[ {\left( S(t) - {S^{2,*}}\right) + \left( P(t) - {P^{2,*}}\right) + \left( {G_1}(t) - G_1^{2,*}\right) + \left( {R_1}(t) - R_1^{2,*}\right) } \right] }\nonumber \\ & \quad { \times \left[ {\mu {S^{2,*}} + \mu {P^{2,*}} + \mu G_1^{2,*} + \mu R_1^{2,*} - \mu S - \mu P - \mu {G_1} - \mu {R_1}} \right] }\nonumber \\& = { - 2\mu {{\left[ {\left( S(t) - {S^{2,*}}\right) + \left( P(t) - {P^{2,*}}\right) +\left( {G_1}(t) - G_1^{2,*}\right) + \left( {R_1}(t) - R_1^{2,*}\right) } \right] }^2} \le 0.} \end{aligned}$$
(27)

Besides that, \({Z'_2}(t) = 0\) holds if and only if \(S(t) = {S^{2,*}},P(t) = {P^{2,*}},{G_1}(t) = G_1^{2,*},{R_1}(t) = R_1^{2,*}\). Hence, the information-existence equilibrium point \({E^{2,*}} = ({S^{2,*}},{P^{2,*}},G_1^{2,*},R_1^{2,*})\) of system (1) is globally asymptotically stable based on Lyapunov–LaSalle Invariance Principle37. \(\square\)

(3) Thirdly, we prove the local and global asymptotic stability of the equilibrium point \({E^{3,*}}\), at which only the dissemination group A persists while the dissemination group P eventually disappears, when \(R_0^P < 1\) and \(R_0^A > 1\).

Theorem 7

If \({\alpha _2} > {\lambda _2} + {\gamma _2} + \mu ,\) \(R_0^P < 1\) and \(R_0^A > 1,\) the information-existence equilibrium point \({E^{3,*}} = ({S^{3,*}},{A^{3,*}},G_2^{3,*},R_2^{3,*})\) of system (1) is locally asymptotically stable.

Proof

The Jacobian matrix of system (1) at the information-existence equilibrium point \({E^{3,*}} = ({S^{3,*}},{A^{3,*}},G_2^{3,*},R_2^{3,*})\) can be written as:

$$\begin{aligned} J({E^{3,*}}) = \left[ {\begin{array}{*{20}{c}} { - {\alpha _2}{A^{3,*}} - \mu }& 0& { - {\alpha _2}{S^{3,*}}}& 0& 0& 0& 0\\ {{\alpha _2}{A^{3,*}}}& 0& {{\alpha _2}{S^{3,*}} - ({\lambda _2} + {\gamma _2} + \mu )}& 0& 0& 0& 0\\ 0& 0& 0& { - {\varepsilon _1} - \mu }& 0& 0& 0\\ 0& 0& {{\lambda _2}}& 0& { - {\varepsilon _2} - \mu }& 0& 0\\ 0& 0& 0& {{\varepsilon _1}}& 0& { - \mu }& 0\\ 0& 0& 0& 0& {{\varepsilon _2}}& 0& { - \mu } \end{array}} \right] . \end{aligned}$$
(28)

The negative eigenvalues of \(J({E^{3,*}})\) can be easily obtained as \({\Lambda _{31}} = {\Lambda _{32}} = - \mu < 0\), \({\Lambda _{33}} = - {\varepsilon _1} - \mu < 0\), \({\Lambda _{34}} = - {\varepsilon _2} - \mu < 0\) and the other eigenvalues are the characteristic roots of \(\ \left| {hE - J({E^{3,*}})} \right|\), where

$$\begin{aligned} \left| {hE - J({E^{3,*}})} \right| = \left| {\begin{array}{*{20}{c}} {h + {\alpha _2}{A^{3,*}} + \mu }& \quad {{\alpha _2}{S^{3,*}}}\\ { - {\alpha _2}{A^{3,*}}}& \quad {h - {\alpha _2}{S^{3,*}} + ({\lambda _2} + {\gamma _2} + \mu )} \end{array}} \right| = 0. \end{aligned}$$
(29)

The eigenvalues of Eq. (29) can be obviously obtained as:

$$\begin{aligned} \left| {hE - J({E^{3,*}})} \right| = {h^2} + \left[ {{\alpha _2}{A^{3,*}} - {\alpha _2}{S^{3,*}} + \left( {\lambda _2} + {\gamma _2} + \mu \right) + \mu } \right] h + {\alpha _2}\left( {\lambda _2} + {\gamma _2} + \mu \right) {A^{3,*}} - {\alpha _2}\mu {S^{3,*}} + \mu \left( {\lambda _2} + {\gamma _2} + \mu \right) = 0. \end{aligned}$$
(30)

Further calculation yields the solution of Eq. (30) as follows:

$$\begin{aligned} {{\Lambda _{35}}}& = {\frac{1}{2}\left\{ {\left[ {{\alpha _2}{S^{3,*}} - {\alpha _2}{A^{3,*}} - \left( {\lambda _2} + {\gamma _2} + 2\mu \right) } \right] } \right. }\\ & \quad {\left. { + \sqrt{{{\left[ {{\alpha _2}{A^{3,*}} - {\alpha _2}{S^{3,*}} + \left( {\lambda _2} + {\gamma _2} + 2\mu \right) } \right] }^2} - 4\left[ {{\alpha _2}\left( {\lambda _2} + {\gamma _2} + \mu \right) {A^{3,*}} - {\alpha _2}\mu {S^{3,*}} + \mu \left( {\lambda _2} + {\gamma _2} + \mu \right) } \right] } } \right\} ,} \end{aligned}$$

and

$$\begin{aligned} {{\Lambda _{36}}}& = {\frac{1}{2}\left\{ {\left[ {{\alpha _2}{S^{3,*}} - {\alpha _2}{A^{3,*}} - \left( {\lambda _2} + {\gamma _2} + 2\mu \right) } \right] } \right. }\\ &\quad {\left. { - \sqrt{{{\left[ {{\alpha _2}{A^{3,*}} - {\alpha _2}{S^{3,*}} + \left( {\lambda _2} + {\gamma _2} + 2\mu \right) } \right] }^2} - 4\left[ {{\alpha _2}\left( {\lambda _2} + {\gamma _2} + \mu \right) {A^{3,*}} - {\alpha _2}\mu {S^{3,*}} + \mu \left( {\lambda _2} + {\gamma _2} + \mu \right) } \right] } } \right\} ,} \end{aligned}$$

it is easy to know that \({\Lambda _{35}} < 0\) and \({\Lambda _{36}} < 0\) if \({\alpha _2} > {\lambda _2} + {\gamma _2} + \mu\), \(R_0^P < 1\) and \(R_0^A > 1\). Hence, the information-existence equilibrium point \({E^{3,*}} = ({S^{3,*}},{A^{3,*}},G_2^{3,*},R_2^{3,*})\) of system (1) is locally asymptotically stable. \(\square\)

Theorem 8

If \(R_0^P < 1\) and \(R_0^A > 1,\) the information-existence equilibrium point \({E^{3,*}} = ({S^{3,*}},{A^{3,*}},G_2^{3,*},R_2^{3,*})\) of system (1) is globally asymptotically stable.

Proof

Let us construct a Lyapunov function \({Z_3}(t)\) as:

$$\begin{aligned} {Z_3}(t) = {\left[ {\left( S(t) - {S^{3,*}}\right) + \left( A(t) - {A^{3,*}}\right) + \left( {G_2}(t) - G_2^{3,*}\right) + \left( {R_2}(t) - R_2^{3,*}\right) } \right] ^2}, \end{aligned}$$
(31)

and \({Z'_3}(t)\) can be computed as:

$$\begin{aligned} {{{Z}_{3}^{\prime}}(t)}= & {2\left[ {\left( S(t) - {S^{3,*}}\right) + \left( A(t) - {A^{3,*}}\right) + \left( {G_2}(t) - G_2^{3,*}\right) + \left( {R_2}(t) - R_2^{3,*}\right) } \right] }\nonumber \\ & { \times \left[ {S'(t) + A'(t) + {{G}_{2}^{\prime}}(t) + {{R}_{2}^{\prime}}(t)} \right] }\nonumber \\= & {2\left[ {\left( S(t) - {S^{3,*}}\right) + \left( A(t) - {A^{3,*}}\right) + \left( {G_2}(t) - G_2^{3,*}\right) + \left( {R_2}(t) - R_2^{3,*}\right) } \right] }\nonumber \\ & { \times \left[ {B - \mu S - \mu A - \mu {G_2} - \mu {R_2}} \right] ,} \end{aligned}$$
(32)

Because of the existence of \({E^{3,*}} = ({S^{3,*}},{A^{3,*}},G_2^{3,*},R_2^{3,*})\), it can be known that \(B - \mu {S^{3,*}} - \mu {A^{3,*}} - \mu G_2^{3,*} - \mu R_2^{3,*} = 0\), i.e., \(B = \mu {S^{3,*}} + \mu {A^{3,*}} + \mu G_2^{3,*} + \mu R_2^{3,*}\). Then, Eq. (32) can be computed as:

$$\begin{aligned} {{{Z}_{3}^{\prime}}(t)} & = {2\left[ {\left( S(t) - {S^{3,*}}\right) + \left( A(t) - {A^{3,*}}\right) + \left( {G_2}(t) - G_2^{3,*}\right) + \left( {R_2}(t) - R_2^{3,*}\right) } \right] }\nonumber \\ &\quad { \times \left[ {\mu {S^{3,*}} + \mu {A^{3,*}} + \mu G_2^{3,*} + \mu R_2^{3,*} - \mu S - \mu A - \mu {G_2} - \mu {R_2}} \right] }\nonumber \\ & = { - 2\mu {{\left[ {\left( S(t) - {S^{3,*}}\right) + \left( A(t) - {A^{3,*}}\right) + \left( {G_2}(t) - G_2^{3,*}\right) + \left( {R_2}(t) - R_2^{3,*}\right) } \right] }^2} \le 0.} \end{aligned}$$
(33)

Besides that, \({Z'_3}(t) = 0\) holds if and only if \(S(t) = {S^{3,*}},A(t) = {A^{3,*}},{G_2}(t) = G_2^{3,*},{R_2}(t) = R_2^{3,*}\). Hence, the information-existence equilibrium point \({E^{3,*}} = ({S^{3,*}},{A^{3,*}},G_2^{3,*},R_2^{3,*})\) of system (1) is globally asymptotically stable based on Lyapunov–LaSalle Invariance Principle37. \(\square\)

Sensitivity analysis

In order to analyze the effect of the variables \({\alpha _1}\) and \({\theta _1}\) on the basic reproduction number \(R_0^P\), it is necessary to perform a sensitivity analysis of \(R_0^P\). It has been given above that \(R_0^P = \rho {(F{V^{ - 1}})_1} = \frac{{B{\alpha _1}{\theta _1}}}{{\mu ({\lambda _1} + {\gamma _1} + \mu )}}\), so we calculate:

$$\begin{aligned} \frac{{\partial R_0^P}}{{\partial {\alpha _1}}} = \frac{{B{\theta _1}}}{{\mu ({\lambda _1} + {\gamma _1} + \mu )}} > 0, \end{aligned}$$
(34)

and

$$\begin{aligned} \frac{{\partial R_0^P}}{{\partial {\theta _1}}} = \frac{{B{\alpha _1}}}{{\mu \left( {\lambda _1} + {\gamma _1} + \mu \right) }} > 0. \end{aligned}$$
(35)

Thus it can be seen from Eqs. (34) and (35) that \(R_0^P\) is positively correlated with \({\alpha _1}\) and \({\theta _1}\). That is to say, the basic reproduction number increases with the parameters \({\alpha _1}\) and \({\theta _1}\), as shown in Fig. 2A. As the natural exposure rate of positive information increases, more people are exposed to the information. In addition, the educational guidance provided by the relevant management departments plays a role in expanding the dissemination of positive information.

Further, in order to analyze the effect of the variable \({\alpha _2}\) on the basic reproduction number \(R_0^A\), it is necessary to perform a sensitivity analysis of \(R_0^A\). It has been given above that \(R_0^A = \rho {(F{V^{ - 1}})_2} = \frac{{B{\alpha _2}}}{{\mu ({\lambda _2} + {\gamma _2} + \mu )}}\), so we calculate:

$$\begin{aligned} \frac{{\partial R_0^A}}{{\partial {\alpha _2}}} = \frac{B}{{\mu \left( {\lambda _2} + {\gamma _2} + \mu \right) }} > 0. \end{aligned}$$
(36)
Fig. 2

The sensitivity analysis of the basic reproduction number (A) \(R_0^P\) and (B) \(R_0^A\).

Thus it can be seen from Eq. (36) that \(R_0^A\) is positively correlated with \({\alpha _2}\). That is to say, the basic reproduction number increases with the parameter \({\alpha _2}\), as shown in Fig. 2B. As the natural exposure rate of negative information increases, negative information spreads more widely. An effective means of curbing the spread of negative information is therefore to control the flow of negative information disseminators among the population.
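The sensitivity expressions (34)–(36) can be visualized by sweeping the relevant parameters, in the same spirit as Fig. 2. A minimal sketch with illustrative (assumed) baseline values:

```python
# Minimal sketch: a parameter sweep in the spirit of Fig. 2, evaluating
# R_0^P of Eq. (5) on a grid of (alpha_1, theta_1) and R_0^A over alpha_2.
# Baseline parameter values are illustrative assumptions.
import numpy as np

B, mu = 1.0, 0.1
lam1, gam1, lam2, gam2 = 0.2, 0.1, 0.2, 0.1

alpha1, theta1 = np.meshgrid(np.linspace(0.01, 1.0, 50),
                             np.linspace(0.01, 1.0, 50))
R0P = B * alpha1 * theta1 / (mu * (lam1 + gam1 + mu))   # increasing in both
alpha2 = np.linspace(0.01, 1.0, 50)
R0A = B * alpha2 / (mu * (lam2 + gam2 + mu))            # increasing in alpha_2
print("R0^P range:", R0P.min(), "to", R0P.max())
print("R0^A range:", R0A.min(), "to", R0A.max())
```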

The optimal control model

Based on the competitive information dissemination model established above, two control objectives are proposed here to effectively guide and publicize the dissemination of the various types of information in a scientific manner, while also accurately controlling information isolation. On the one hand, when information must be disseminated, the number of people guided by education and publicity should gradually increase in order to maximize the dissemination of positive information. On the other hand, when information does not need to be disseminated, the management department should be encouraged to maximize information isolation so that the information can be managed quickly. To achieve this, the model’s four proportionality constants \(\theta _1\), \(\theta _2\), \(\lambda _1\), and \(\lambda _2\) are converted into time-dependent control variables \(\theta _1(t)\), \(\theta _2(t)\), \(\lambda _1(t)\), and \(\lambda _2(t)\). The control variables \(\theta _1(t)\) and \(\theta _2(t)\) are used to control the proportion of people who follow the management department’s publicity and education. In general, the media or other forms of publicity can be used to encourage the group to accept positive information, while universal education is strengthened so that the group can disseminate positive information through its own understanding. The control variables \(\lambda _1(t)\) and \(\lambda _2(t)\) control the proportion of people subjected to information isolation by the management department. In general, strengthening supervision and information screening can reduce the audience reached by the information, gradually reducing its heat and eventually leading to its disappearance.

Hence, an objective function can be proposed as:

$$\begin{aligned}J({\theta _1},{\theta _2},{\lambda _1},{\lambda _2}) = \int _0^{{t_f}} {\left[ {P(t) + {G_1}(t) + {G_2}(t) - {{c_1}/2}\theta _1^2(t) - {{c_2}/2}\theta _2^2(t) - {{c_3}/2}\lambda _1^2(t) - {{c_4}/2}\lambda _2^2(t)} \right] dt,}\end{aligned}$$
(37)

and satisfy the follow state system

$$\begin{aligned} \left\{ {\begin{array}{*{20}{l}} {\frac{{dS}}{{dt}}} = {B - {\alpha _1}{\theta _1}(t)SP - {\alpha _2}SA - \mu S,}\\ {\frac{{dP}}{{dt}}} = {{\alpha _1}{\theta _1}(t)SP + \beta {\theta _2}(t)AP - {\lambda _1}(t)P - {\gamma _1}P - \mu P,}\\ {\frac{{dA}}{{dt}}} = {{\alpha _2}SA - \beta {\theta _2}(t)AP - {\lambda _2}(t)A - {\gamma _2}A - \mu A,}\\ {\frac{{d{G_1}}}{{dt}}} = {{\lambda _1}(t)P - {\varepsilon _1}{G_1} - \mu {G_1},}\\ {\frac{{d{G_2}}}{{dt}}} = {{\lambda _2}(t)A - {\varepsilon _2}{G_2} - \mu {G_2},}\\ {\frac{{d{R_1}}}{{dt}}} = {{\gamma _1}P + {\varepsilon _1}{G_1} - \mu {R_1},}\\ {\frac{{d{R_2}}}{{dt}}} = {{\gamma _2}A + {\varepsilon _2}{G_2} - \mu {R_2}.} \end{array}} \right. \end{aligned}$$
(38)

The initial conditions for state system (38) satisfy:

$$\begin{aligned}{S(0) = {S_0},P(0) = {P_0},A(0) = {A_0},{G_1}(0) = {G_{1,0}},{G_2}(0) = {G_{2,0}},{R_1}(0) = {R_{1,0}},{R_2}(0) = {R_{2,0}},}\end{aligned}$$

where

$$\begin{aligned}{{\theta _1}(t),{\theta _2}(t),{\lambda _1}(t),{\lambda _2}(t) \in U,}\end{aligned}$$

and

$$\begin{aligned}U \buildrel \Delta \over = \left\{ {\left( {\theta _1},{\theta _2},{\lambda _1},{\lambda _2}\right) |\left( {\theta _1}(t),{\theta _2}(t),{\lambda _1}(t),{\lambda _2}(t)\right) \; measurable, \; 0 \le {\theta _1}(t),{\theta _2}(t),{\lambda _1}(t),{\lambda _2}(t) \le 1,\forall t \in \left[ {0,{t_f}} \right] } \right\} ,\end{aligned}$$

Here, U is the admissible control set, the control interval is \([0, t_f]\), and \(c_1\), \(c_2\), \(c_3\), \(c_4\) are positive weight coefficients representing the strength and relative importance of the four control measures.
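
As an illustration of how the controlled state system (38) can be evaluated numerically, the following is a minimal Python sketch of its right-hand side. The state ordering (S, P, A, G1, G2, R1, R2), the parameter dictionary, and all variable names are chosen here for illustration only and are not part of the original model code.

```python
import numpy as np

def state_rhs(x, u, p):
    """Right-hand side of the controlled state system (38).

    x: state vector (S, P, A, G1, G2, R1, R2)
    u: control vector (theta1, theta2, lambda1, lambda2) at the current time
    p: dict of model parameters B, alpha1, alpha2, beta, gamma1, gamma2,
       eps1, eps2, mu
    """
    S, P, A, G1, G2, R1, R2 = x
    th1, th2, la1, la2 = u
    dS  = p["B"] - p["alpha1"] * th1 * S * P - p["alpha2"] * S * A - p["mu"] * S
    dP  = (p["alpha1"] * th1 * S * P + p["beta"] * th2 * A * P
           - la1 * P - p["gamma1"] * P - p["mu"] * P)
    dA  = (p["alpha2"] * S * A - p["beta"] * th2 * A * P
           - la2 * A - p["gamma2"] * A - p["mu"] * A)
    dG1 = la1 * P - p["eps1"] * G1 - p["mu"] * G1
    dG2 = la2 * A - p["eps2"] * G2 - p["mu"] * G2
    dR1 = p["gamma1"] * P + p["eps1"] * G1 - p["mu"] * R1
    dR2 = p["gamma2"] * A + p["eps2"] * G2 - p["mu"] * R2
    return np.array([dS, dP, dA, dG1, dG2, dR1, dR2])
```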

Theorem 9

There exists an optimal control quadruple \(\left( {\theta _1^*,\theta _2^*,\lambda _1^*,\lambda _2^*} \right) \in U\) such that:

$$\begin{aligned}J\left( \theta _1^*,\theta _2^*,\lambda _1^*,\lambda _2^*\right) = \max \left\{ J\left( {\theta _1},{\theta _2},{\lambda _1},{\lambda _2}\right) :\left( {\theta _1},{\theta _2},{\lambda _1},{\lambda _2}\right) \in U\right\} . \end{aligned}$$
(39)

Proof

Let \(X(t) = {(S(t),P(t),A(t),{G_1}(t),{G_2}(t),{R_1}(t),{R_2}(t))^T}\) and

$$\begin{aligned}\mathbb {L} \left( {t;X(t),{\theta _1}(t),{\theta _2}(t),{\lambda _1}(t),{\lambda _2}(t)} \right) = P(t) + {G_1}(t) + {G_2}(t) - {{c_1}/2}\theta _1^2(t) - {{c_2}/2}\theta _2^2(t) - {{c_3}/2}\lambda _1^2(t) - {{c_4}/2}\lambda _2^2(t). \end{aligned}$$
(40)

The existence of an optimal control quadruple requires that: (i) the set of control variables and state variables is nonempty, (ii) the control set U is convex and closed, (iii) the right-hand side of the state system is bounded by a linear function in the state and control variables, (iv) the integrand of the objective functional is convex on U, (v) there exist constants \({d_1},{d_2} > 0\) and \(\rho > 1\) such that the integrand of the objective functional satisfies:

$$\begin{aligned}-\mathbb {L} (t;X(t),{\theta _1};{\theta _2};{\lambda _1};{\lambda _2}) \ge {d_1}{\left( {\left| {{\theta _1}} \right| ^2} + {\left| {{\theta _2}} \right| ^2} + {\left| {{\lambda _1}} \right| ^2} + {\left| {{\lambda _2}} \right| ^2}\right) ^{\rho / 2}} - {d_2}. \end{aligned}$$
(41)

Conditions (i)–(iii) clearly hold, so it remains only to verify conditions (iv) and (v). From state system (38), the following inequalities are easily obtained:

$$\begin{aligned}{S' \le B,P' \le {\alpha _1}{\theta _1}(t)SP + \beta {\theta _2}(t)AP,A' \le {\alpha _2}SA,{{G}_{1}^{\prime}} \le {\lambda _1}(t)P,{{G}_{2}^{\prime}} \le {\lambda _2}(t)A.}\end{aligned}$$

Hence, condition (iv) is established. Moreover, for any \(t \ge 0\) there is a positive constant M such that \(\left| {X(t)} \right| \le M\), and therefore

$$\begin{aligned} { -\mathbb {L} (t;X(t),{\theta _1};{\theta _2};{\lambda _1};{\lambda _2})}& = {{{\left( {c_1}\theta _1^2(t) + {c_2}\theta _2^2(t) + {c_3}\lambda _1^2(t) + {c_4}\lambda _2^2(t)\right) } / 2}}\nonumber \\ & \quad { - P(t) - {G_1}(t) - {G_2}(t)}\nonumber \\ & \ge {{d_1}{{\left( {{\left| {{\theta _1}} \right| }^2} + {{\left| {{\theta _2}} \right| }^2} + {{\left| {{\lambda _1}} \right| }^2} + {{\left| {{\lambda _2}} \right| }^2}\right) }^{\rho /2}} - 2M.} \end{aligned}$$
(42)

Taking \({d_1} = \min \left\{ {\frac{{{c_1}}}{2},\frac{{{c_2}}}{2},\frac{{{c_3}}}{2},\frac{{{c_4}}}{2}} \right\}\), \({d_2} = 2M\), and \(\rho = 2\), condition (v) is satisfied. Hence, an optimal control exists. \(\square\)

Theorem 10

For the optimal control quadruple \(\left( {\theta _1^*,\theta _2^*,\lambda _1^*,\lambda _2^*} \right)\) of state system (38), there exist adjoint variables \({\delta _1},{\delta _2},{\delta _3},{\delta _4},{\delta _5},{\delta _6},{\delta _7}\) that satisfy:

$$\begin{aligned} \left\{ {\begin{array}{*{20}{l}} {\frac{{d{\delta _1}}}{{dt}}} = {\left( {{\delta _1} - {\delta _2}} \right) {\alpha _1}{\theta _1}(t)P + \left( {{\delta _1} - {\delta _3}} \right) {\alpha _2}A + {\delta _1}\mu ,}\\ {\frac{{d{\delta _2}}}{{dt}}} = {1 + \left( {{\delta _1} - {\delta _2}} \right) {\alpha _1}{\theta _1}(t)S + \left( {{\delta _3} - {\delta _2}} \right) \beta {\theta _2}(t)A + \left( {{\delta _2} - {\delta _4}} \right) {\lambda _1}(t) + \left( {{\delta _2} - {\delta _6}} \right) {\gamma _1} + {\delta _2}\mu ,}\\ {\frac{{d{\delta _3}}}{{dt}}} = {\left( {{\delta _1} - {\delta _3}} \right) {\alpha _2}S + \left( {{\delta _3} - {\delta _2}} \right) \beta {\theta _2}(t)P + \left( {{\delta _3} - {\delta _5}} \right) {\lambda _2}(t) + \left( {{\delta _3} - {\delta _7}} \right) {\gamma _2} + {\delta _3}\mu ,}\\ {\frac{{d{\delta _4}}}{{dt}}} = {1 + \left( {{\delta _4} - {\delta _6}} \right) {\varepsilon _1} + {\delta _4}\mu ,}\\ {\frac{{d{\delta _5}}}{{dt}}} = {1 + \left( {{\delta _5} - {\delta _7}} \right) {\varepsilon _2} + {\delta _5}\mu ,}\\ {\frac{{d{\delta _6}}}{{dt}}} = {{\delta _6}\mu ,}\\ {\frac{{d{\delta _7}}}{{dt}}} = {{\delta _7}\mu .} \end{array}} \right.\end{aligned}$$
(43)

With boundary conditions:

$$\begin{aligned}{\delta _1}({t_f}) = {\delta _2}({t_f}) = {\delta _3}({t_f}) = {\delta _4}({t_f}) = {\delta _5}({t_f}) = {\delta _6}({t_f}) = {\delta _7}({t_f}) = 0.\end{aligned}$$

In addition, the optimal control quadruple \(\left( {\theta _1^*,\theta _2^*,\lambda _1^*,\lambda _2^*} \right)\) of state system (38) is given by:

$$\begin{aligned} \theta _1^*(t) & = \min \left\{ {1,\max \left\{ {0,\frac{{\left( {\delta _1} - {\delta _2}\right) {\alpha _1}SP}}{{{c_1}}}} \right\} } \right\} ,\theta _2^*(t) = \min \left\{ {1,\max \left\{ {0,\frac{{\left( {\delta _3} - {\delta _2}\right) \beta AP}}{{{c_2}}}} \right\} } \right\} ,\\ \lambda _1^*(t) & = \min \left\{ {1,\max \left\{ {0,\frac{{\left( {\delta _2} - {\delta _4}\right) P}}{{{c_3}}}} \right\} } \right\} ,\lambda _2^*(t) = \min \left\{ {1,\max \left\{ {0,\frac{{\left( {\delta _3} - {\delta _5}\right) A}}{{{c_4}}}} \right\} } \right\} . \end{aligned}$$

Proof

Define an augmented Hamiltonian with penalty terms, derive the expressions for the optimal control parameters from this function, and finally obtain the optimal control system. The augmented Hamiltonian can be written as:

$$\begin{aligned} H& = { - P(t) - {G_1}(t) - {G_2}(t) + {{c_1}/2}\theta _1^2(t) + {{c_2}/2}\theta _2^2(t) + {{c_3}/2}\lambda _1^2(t) + {{c_4}/2}\lambda _2^2(t)}\nonumber \\ &\quad { + \, {\delta _1}\left[ {B - {\alpha _1}{\theta _1}(t)SP - {\alpha _2}SA - \mu S} \right] + {\delta _2}\left[ {{\alpha _1}{\theta _1}(t)SP + \beta {\theta _2}(t)AP - {\lambda _1}(t)P - {\gamma _1}P - \mu P} \right] }\nonumber \\ & \quad { +\, {\delta _3}\left[ {{\alpha _2}SA - \beta {\theta _2}(t)AP - {\lambda _2}(t)A - {\gamma _2}A - \mu A} \right] + {\delta _4}\left[ {{\lambda _1}(t)P - {\varepsilon _1}{G_1} - \mu {G_1}} \right] }\nonumber \\ & \quad { +\, {\delta _5}\left[ {{\lambda _2}(t)A - {\varepsilon _2}{G_2} - \mu {G_2}} \right] + {\delta _6}\left[ {{\gamma _1}P + {\varepsilon _1}{G_1} - \mu {R_1}} \right] + {\delta _7}\left[ {{\gamma _2}A + {\varepsilon _2}{G_2} - \mu {R_2}} \right] }\nonumber \\ &\quad { -\, {\omega _{11}}{\theta _1}(t) - {\omega _{12}}(1 - {\theta _1}(t)) - {\omega _{21}}{\theta _2}(t) - {\omega _{22}}(1 - {\theta _2}(t)) - {\omega _{31}}{\lambda _1}(t) - {\omega _{32}}(1 - {\lambda _1}(t))}\nonumber \\ & \quad { -\, {\omega _{41}}{\lambda _2}(t) - {\omega _{42}}(1 - {\lambda _2}(t)).} \end{aligned}$$
(44)

The penalty terms satisfy \({\omega _{ij}}(t) \ge 0\), with \({\omega _{11}}(t){\theta _1}(t) = {\omega _{12}}(t)(1 - {\theta _1}(t)) = 0\) at the optimal control \(\theta _1^*\), \({\omega _{21}}(t){\theta _2}(t) = {\omega _{22}}(t)(1 - {\theta _2}(t)) = 0\) at the optimal control \(\theta _2^*\), \({\omega _{31}}(t){\lambda _1}(t) = {\omega _{32}}(t)(1 - {\lambda _1}(t)) = 0\) at the optimal control \(\lambda _1^*\), and \({\omega _{41}}(t){\lambda _2}(t) = {\omega _{42}}(t)(1 - {\lambda _2}(t)) = 0\) at the optimal control \(\lambda _2^*\).

Based on the Pontryagin maximum principle, the adjoint system can be written as:

$$\begin{aligned}\frac{{d{\delta _1}}}{{dt}} = - \frac{{\partial H}}{{\partial S}},\frac{{d{\delta _2}}}{{dt}} = - \frac{{\partial H}}{{\partial P}},\frac{{d{\delta _3}}}{{dt}} = - \frac{{\partial H}}{{\partial A}},\frac{{d{\delta _4}}}{{dt}} = - \frac{{\partial H}}{{\partial {G_1}}},\frac{{d{\delta _5}}}{{dt}} = - \frac{{\partial H}}{{\partial {G_2}}},\frac{{d{\delta _6}}}{{dt}} = - \frac{{\partial H}}{{\partial {R_1}}},\frac{{d{\delta _7}}}{{dt}} = - \frac{{\partial H}}{{\partial {R_2}}},\end{aligned}$$

and the boundary conditions of the adjoint system are

$$\begin{aligned}{\delta _1}\big ({t_f}\big ) = {\delta _2}\big ({t_f}\big ) = {\delta _3}\big ({t_f}\big ) = {\delta _4}\big ({t_f}\big ) = {\delta _5}\big ({t_f}\big ) = {\delta _6}\big ({t_f}\big ) = {\delta _7}\big ({t_f}\big ) = 0.\end{aligned}$$

Since the derivations of the optimal control formulae for the parameters \(\theta _1^*\), \(\theta _2^*\), \(\lambda _1^*\), and \(\lambda _2^*\) are similar, the derivation for \(\theta _1^*\) is given here as an example.

The optimality condition with respect to \({\theta _1}\) is:

$$\begin{aligned}\frac{{\partial H}}{{\partial {\theta _1}}} = {c_1}{\theta _1}(t) - {\delta _1}{\alpha _1}SP + {\delta _2}{\alpha _1}SP - {\omega _{11}} + {\omega _{12}} = 0, \end{aligned}$$
(45)

and the optimal control formula with penalty terms can be obtained as:

$$\begin{aligned}\theta _1^* = \frac{1}{{{c_1}}}\left[ {({\delta _1} - {\delta _2}){\alpha _1}SP + {\omega _{11}} - {\omega _{12}}} \right] . \end{aligned}$$
(46)

To obtain the final optimal control formula without \({\omega _{11}}\) and \({\omega _{12}}\), the following three cases need to be considered.

(1) On the set \(\left\{ {\left. t \right| 0< \theta _1^*(t) < 1} \right\}\), let \({\omega _{11}}(t) = {\omega _{12}}(t) = 0\); then the optimal control formula can be written as:

$$\begin{aligned}\theta _1^*(t) = \frac{1}{{{c_1}}}({\delta _1} - {\delta _2}){\alpha _1}SP.\end{aligned}$$

(2) On the set \(\left\{ {\left. t \right| \theta _1^*(t) = 1} \right\}\), let \({\omega _{11}}(t) = 0\); then the optimal control formula can be written as:

$$\begin{aligned}1 = \theta _1^*(t) = \frac{1}{{{c_1}}}\left[ {({\delta _1} - {\delta _2}){\alpha _1}SP - {\omega _{12}}} \right] .\end{aligned}$$

Since \({\omega _{12}}(t) \ge 0\), it follows that \(\frac{1}{{{c_1}}}({\delta _1} - {\delta _2}){\alpha _1}SP \ge 1\).

(3) On the set \(\left\{ {\left. t \right| \theta _1^*(t) = 0} \right\}\), let \({\omega _{12}}(t) = 0\); then the optimal control formula can be written as:

$$\begin{aligned}0 = \theta _1^*(t) = \frac{1}{{{c_1}}}\left[ {({\delta _1} - {\delta _2}){\alpha _1}SP + {\omega _{11}}} \right] .\end{aligned}$$

Combining the three cases above, the final optimal control formula for \(\theta _1^*(t)\) can be written as:

$$\begin{aligned}\theta _1^*(t) = \min \left\{ {1,\max \left\{ {0,\frac{{({\delta _1} - {\delta _2}){\alpha _1}SP}}{{{c_1}}}} \right\} } \right\} .\end{aligned}$$

Similarly, the final optimal control formulae for \(\theta _2^*(t)\), \(\lambda _1^*(t)\) and \(\lambda _2^*(t)\) can be obtained as:

$$\begin{aligned}\theta _2^*(t) = \min \left\{ {1,\max \left\{ {0,\frac{{({\delta _3} - {\delta _2})\beta AP}}{{{c_2}}}} \right\} } \right\} ,\lambda _1^*(t) = \min \left\{ {1,\max \left\{ {0,\frac{{({\delta _2} - {\delta _4})P}}{{{c_3}}}} \right\} } \right\} ,\lambda _2^*(t) = \min \left\{ {1,\max \left\{ {0,\frac{{({\delta _3} - {\delta _5})A}}{{{c_4}}}} \right\} } \right\} .\end{aligned}$$

In summary, the optimal control system, which consists of the state system (38) and the adjoint system (43) together with the optimality conditions, the initial conditions, and the boundary conditions, is:

$$\begin{aligned}\left\{ \begin{array}{l} {\frac{{dS}}{{dt}}} = {B - {\alpha _1}\min \left\{ {1,\max \left\{ {0,\frac{{({\delta _1} - {\delta _2}){\alpha _1}SP}}{{{c_1}}}} \right\} } \right\} (t)SP - {\alpha _2}SA - \mu S,}\\ {\frac{{dP}}{{dt}}} = {{\alpha _1}\min \left\{ {1,\max \left\{ {0,\frac{{({\delta _1} - {\delta _2}){\alpha _1}SP}}{{{c_1}}}} \right\} } \right\} (t)SP - \min \left\{ {1,\max \left\{ {0,\frac{{({\delta _2} - {\delta _4})P}}{{{c_3}}}} \right\} } \right\} (t)P}\\ \quad \quad \quad { + \beta \min \left\{ {1,\max \left\{ {0,\frac{{({\delta _3} - {\delta _2})\beta AP}}{{{c_2}}}} \right\} } \right\} (t)AP - {\gamma _1}P - \mu P,}\\ {\frac{{dA}}{{dt}}} = {{\alpha _2}SA - \beta \min \left\{ {1,\max \left\{ {0,\frac{{({\delta _3} - {\delta _2})\beta AP}}{{{c_2}}}} \right\} } \right\} (t)AP}\\ \quad \quad \quad { - \min \left\{ {1,\max \left\{ {0,\frac{{({\delta _3} - {\delta _5})A}}{{{c_4}}}} \right\} } \right\} (t)A - {\gamma _2}A - \mu A,}\\ {\frac{{d{G_1}}}{{dt}}} = {\min \left\{ {1,\max \left\{ {0,\frac{{({\delta _2} - {\delta _4})P}}{{{c_3}}}} \right\} } \right\} (t)P - {\varepsilon _1}{G_1} - \mu {G_1},}\\ {\frac{{d{G_2}}}{{dt}}} = {\min \left\{ {1,\max \left\{ {0,\frac{{({\delta _3} - {\delta _5})A}}{{{c_4}}}} \right\} } \right\} (t)A - {\varepsilon _2}{G_2} - \mu {G_2},}\\ {\frac{{d{R_1}}}{{dt}}} = {{\gamma _1}P + {\varepsilon _1}{G_1} - \mu {R_1},}\\ {\frac{{d{R_2}}}{{dt}}} = {{\gamma _2}A + {\varepsilon _2}{G_2} - \mu {R_2},} \\ {\frac{{d{\delta _1}}}{{dt}}} = {\left( {{\delta _1} - {\delta _2}} \right) {\alpha _1}\min \left\{ {1,\max \left\{ {0,\frac{{({\delta _1} - {\delta _2}){\alpha _1}SP}}{{{c_1}}}} \right\} } \right\} (t)P + \left( {{\delta _1} - {\delta _3}} \right) {\alpha _2}A + {\delta _1}\mu ,}\\ {\frac{{d{\delta _2}}}{{dt}}} = {1 + \left( {{\delta _1} - {\delta _2}} \right) {\alpha _1}\min \left\{ {1,\max \left\{ {0,\frac{{({\delta _1} - {\delta _2}){\alpha _1}SP}}{{{c_1}}}} \right\} } \right\} (t)S}\\ \quad \quad \quad { + \left( {{\delta _3} - {\delta _2}} \right) \beta \min \left\{ {1,\max \left\{ {0,\frac{{({\delta _3} - {\delta _2})\beta AP}}{{{c_2}}}} \right\} } \right\} (t)A}\\ \quad \quad \quad { + \left( {{\delta _2} - {\delta _4}} \right) \min \left\{ {1,\max \left\{ {0,\frac{{({\delta _2} - {\delta _4})P}}{{{c_3}}}} \right\} } \right\} (t) + \left( {{\delta _2} - {\delta _6}} \right) {\gamma _1} + {\delta _2}\mu ,}\\ {\frac{{d{\delta _3}}}{{dt}}} = {\left( {{\delta _1} - {\delta _3}} \right) {\alpha _2}S + \left( {{\delta _3} - {\delta _2}} \right) \beta \min \left\{ {1,\max \left\{ {0,\frac{{({\delta _3} - {\delta _2})\beta AP}}{{{c_2}}}} \right\} } \right\} (t)P}\\ \quad \quad \quad { + \left( {{\delta _3} - {\delta _5}} \right) \min \left\{ {1,\max \left\{ {0,\frac{{({\delta _3} - {\delta _5})A}}{{{c_4}}}} \right\} } \right\} (t) + \left( {{\delta _3} - {\delta _7}} \right) {\gamma _2} + {\delta _3}\mu ,}\\ {\frac{{d{\delta _4}}}{{dt}}} = {1 + \left( {{\delta _4} - {\delta _6}} \right) {\varepsilon _1} + {\delta _4}\mu ,}\\ {\frac{{d{\delta _5}}}{{dt}}} = {1 + \left( {{\delta _5} - {\delta _7}} \right) {\varepsilon _2} + {\delta _5}\mu ,}\\ {\frac{{d{\delta _6}}}{{dt}}} = {{\delta _6}\mu ,}\\ {\frac{{d{\delta _7}}}{{dt}}} = {{\delta _7}\mu ,} \end{array} \right. \end{aligned}$$

and

$$\begin{aligned} S(0) &= {S_0},P(0) = {P_0},A(0) = {A_0},{G_1}(0) = {G_{1,0}},{G_2}(0) = {G_{2,0}},{R_1}(0) = {R_{1,0}},{R_2}(0) = {R_{2,0}}, \\ {\delta _1}\big ({t_f}\big ) &= {\delta _2}\big ({t_f}\big ) = {\delta _3}\big ({t_f}\big ) = {\delta _4}\big ({t_f}\big ) = {\delta _5}\big ({t_f}\big ) = {\delta _6}\big ({t_f}\big ) = {\delta _7}\big ({t_f}\big ) = 0. \end{aligned}$$

\(\square\)
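
Before turning to the simulations, the adjoint system (43) and the clamped control formulas above can likewise be written as evaluable functions. The following minimal Python sketch uses the same illustrative conventions as the state sketch given earlier, with delta denoting the adjoint vector \((\delta_1,\ldots,\delta_7)\) and c the weight vector \((c_1,\ldots,c_4)\); all names are assumptions made for illustration.

```python
import numpy as np

def adjoint_rhs(delta, x, u, p):
    """Right-hand side of the adjoint system (43)."""
    S, P, A, G1, G2, R1, R2 = x
    th1, th2, la1, la2 = u
    d1, d2, d3, d4, d5, d6, d7 = delta
    dd1 = (d1 - d2) * p["alpha1"] * th1 * P + (d1 - d3) * p["alpha2"] * A + d1 * p["mu"]
    dd2 = (1 + (d1 - d2) * p["alpha1"] * th1 * S + (d3 - d2) * p["beta"] * th2 * A
           + (d2 - d4) * la1 + (d2 - d6) * p["gamma1"] + d2 * p["mu"])
    dd3 = ((d1 - d3) * p["alpha2"] * S + (d3 - d2) * p["beta"] * th2 * P
           + (d3 - d5) * la2 + (d3 - d7) * p["gamma2"] + d3 * p["mu"])
    dd4 = 1 + (d4 - d6) * p["eps1"] + d4 * p["mu"]
    dd5 = 1 + (d5 - d7) * p["eps2"] + d5 * p["mu"]
    dd6 = d6 * p["mu"]
    dd7 = d7 * p["mu"]
    return np.array([dd1, dd2, dd3, dd4, dd5, dd6, dd7])

def project_controls(x, delta, p, c):
    """Clamped optimal controls theta1*, theta2*, lambda1*, lambda2* onto [0, 1]."""
    S, P, A = x[0], x[1], x[2]
    d1, d2, d3, d4, d5 = delta[0], delta[1], delta[2], delta[3], delta[4]
    clamp = lambda v: min(1.0, max(0.0, v))
    th1 = clamp((d1 - d2) * p["alpha1"] * S * P / c[0])
    th2 = clamp((d3 - d2) * p["beta"] * A * P / c[1])
    la1 = clamp((d2 - d4) * P / c[2])
    la2 = clamp((d3 - d5) * A / c[3])
    return np.array([th1, th2, la1, la2])
```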

Numerical simulations

Common methods for solving ordinary differential equations include the fourth-order Runge–Kutta (RK4), Euler, and modified Euler methods. Among them, the RK4 method has significantly higher accuracy and better stability for the same step size and is suitable for more complex problems with higher accuracy requirements. Therefore, the RK4 method is used in this section for the numerical simulations that validate the theoretical results of system (1). A review of the relevant literature shows that the parameter values used in most models are not based on a unified standard; assigning parameter values according to the stability conditions of the model is widely applied. For practical research problems, the parameters can be adjusted appropriately by combining the background of the problem with the model while satisfying the relevant theorems. Therefore, referring to the related literature38,39 and to the local and global asymptotic stability conditions for the equilibria \(E^0,\) \(E^{1,*},\) \(E^{2,*}\) and \(E^{3,*}\) given in Theorems 1 to 8, the parameter values used to verify the theoretical results of system (1) are listed in Table 2.
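
For reference, a single classical fourth-order Runge–Kutta step for an autonomous right-hand side \(x' = f(x)\) with step size h can be sketched as follows; f stands for any of the right-hand-side functions sketched above, and the names are illustrative.

```python
def rk4_step(f, x, h):
    """One classical RK4 step for an autonomous ODE x' = f(x)."""
    k1 = f(x)
    k2 = f(x + 0.5 * h * k1)
    k3 = f(x + 0.5 * h * k2)
    k4 = f(x + h * k3)
    return x + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
```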

Table 2 The numerical simulation values for theoretical analysis of system 1.

First, Theorems 1 and 2, concerning the local and global asymptotic stability of the information-free equilibrium \(E^0\), are verified. Figure 3 depicts the stable state of system (1) at the equilibrium point \(E^0\) and the evolution of the different groups over time. As shown in Fig. 3, the group S(t) gradually rises until it stabilizes, while the groups \(P(t),A(t),{G_1}(t),{G_2}(t),{R_1}(t),{R_2}(t)\) gradually decline to stability. Finally, the system reaches a stable state at the equilibrium point. At this point, the system contains only the group S(t); the other groups vanish, and information disappears from the social system. In addition, to verify the robustness of the results, several other sets of parameter values satisfying the stability conditions given by the theorems are also selected. Similar results are obtained, indicating that different parameter values do not affect the stability of the information-free equilibrium \(E^0\).

Fig. 3

The stability of information-free equilibrium \(E^0\) of system 1.

Second, the local and global asymptotic stability of the information-existence equilibria in Theorems 3 to 8 is demonstrated. The information dissemination model with a competitive relationship proposed in this paper admits three stable states: the equilibrium point \(E^{1,*}\), at which the two pieces of information disseminate in the system simultaneously, and the equilibrium points \(E^{2,*}\) and \(E^{3,*}\), at which one piece of information disseminates alone while the other disappears. Figure 4 shows the model's stable state at the equilibrium point \(E^{1,*}\), where both pieces of information exist, as well as the changes in the various groups over time. Figure 4 illustrates that the trends of the various groups over time differ from those at the information-free equilibrium \(E^0\). Here, the group S(t) gradually declines until it stabilizes, while the groups \(P(t), A(t), {G_1}(t), {G_2}(t), {R_1}(t), {R_2}(t)\) do not tend to 0 as time changes when the equilibrium \(E^{1,*}\) reaches a steady state. This indicates that the two pieces of information can continue to spread throughout the social system.

Fig. 4

The stability of information-existence equilibrium \(E^{1,*}\) of system 1.

Figure 5 confirms the model's stable states at the information-existence equilibrium points \(E^{2,*}\) and \(E^{3,*}\), as well as the evolution of the different groups over time. Figure 5A shows that the groups \(S(t),P(t),{G_1}(t),{R_1}(t)\) do not tend to 0 as time changes when the equilibrium \(E^{2,*}\) reaches a stable state, whereas the groups \(A(t),{G_2}(t),{R_2}(t)\) tend to 0 over time. At the equilibrium point \(E^{2,*}\), positive information can continue to disseminate while negative information eventually disappears. Figure 5B shows that, over time, the groups \(S(t),A(t),{G_2}(t),{R_2}(t)\) do not tend to 0 at the equilibrium point \(E^{3,*}\), whereas the groups \(P(t),{G_1}(t),{R_1}(t)\) tend to 0. At the equilibrium point \(E^{3,*}\), negative information can still be disseminated, while positive information eventually disappears.

Fig. 5

The stability of information-existence equilibrium (A) \(E^{2,*}\) and (B) \(E^{3,*}\) of system 1.

To examine the impact of the optimal control quadruple \(\left( {\theta _1^*,\theta _2^*,\lambda _1^*,\lambda _2^*} \right)\) on the evolution of the various groups of system (1), the strategies “optimal control”, “middle control measure”, “single control measure”, and “constant control measure” are applied to the observed groups to analyze the effectiveness of the proposed optimal control.

Next, let \(B = 1,{\alpha _1} = {\alpha _2} = \beta = 0.3,{\gamma _1} = {\gamma _2} = {\varepsilon _1} = {\varepsilon _2} = \mu = 0.1\), and apply the optimal control quadruple \(\left( {\theta _1^*,\theta _2^*,\lambda _1^*,\lambda _2^*}\right)\). Figure 6 shows the density of the groups \(P(t),A(t),{G_1}(t),{G_2}(t)\) over time under the various control strategies. It can be seen from Fig. 6B and D that when the optimal control is adopted, the negative information disseminating group A(t) and the negative information isolation group \(G_2(t)\) gradually decline until they tend to 0, demonstrating that the optimal control can effectively prevent the spread of negative information. Figure 6A shows that, under the optimal control, the positive information disseminating group P(t) does not reach the optimal state; therefore, the middle control measure needs to be adjusted.
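
The trajectories in Figs. 6 and 7 can, in principle, be reproduced with a standard forward-backward sweep: integrate the state forward with RK4 under the current controls, integrate the adjoint system backward from \(\delta_i(t_f) = 0\), update the controls with the clamped formulas, and repeat until convergence. The sketch below reuses the state_rhs, adjoint_rhs, project_controls, and rk4_step sketches given earlier; the weight coefficients, initial group sizes, time horizon, number of steps, and relaxation factor are illustrative assumptions rather than values reported in this paper.

```python
import numpy as np

# Parameters from the text; the weights, initial state, horizon, step count,
# and relaxation factor below are illustrative assumptions.
p = {"B": 1.0, "alpha1": 0.3, "alpha2": 0.3, "beta": 0.3,
     "gamma1": 0.1, "gamma2": 0.1, "eps1": 0.1, "eps2": 0.1, "mu": 0.1}
c = [1.0, 1.0, 1.0, 1.0]                              # assumed weights c1..c4
x0 = np.array([5.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0])    # assumed initial groups
tf, n = 50.0, 1000
h = tf / n

u = np.full((n + 1, 4), 0.5)                 # initial guess for the controls
for it in range(200):                        # forward-backward sweep iterations
    # Forward pass: state trajectory under the current controls.
    x = np.zeros((n + 1, 7)); x[0] = x0
    for k in range(n):
        x[k + 1] = rk4_step(lambda z: state_rhs(z, u[k], p), x[k], h)
    # Backward pass: adjoint trajectory, integrated backward from delta(tf) = 0.
    d = np.zeros((n + 1, 7))
    for k in range(n, 0, -1):
        d[k - 1] = rk4_step(lambda z: adjoint_rhs(z, x[k], u[k], p), d[k], -h)
    # Control update via the clamped formulas, with relaxation.
    u_new = np.array([project_controls(x[k], d[k], p, c) for k in range(n + 1)])
    if np.max(np.abs(u_new - u)) < 1e-6:     # simple convergence check
        break
    u = 0.5 * u + 0.5 * u_new
```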

Fig. 6

The densities of (A) P(t), (B) A(t), (C) \({G_1}(t)\), (D) \({G_2}(t)\) change over time under different control strategies.

Finally, the middle control measure is changed while the other parameters remain unchanged. Figure 7 shows the density of the groups \(P(t),A(t),{G_1}(t),{G_2}(t)\) over time under the various control strategies. Figure 7A shows that, under the optimal control, both the positive information disseminating group P(t) and the positive information isolation group \(G_1(t)\) reach the optimal state, indicating that the dissemination of positive information is maximized and that, once the information becomes invalid, its dissemination can be regulated effectively. Figure 7B and D show that, under the optimal control, the negative information disseminating group A(t) and the negative information isolation group \(G_2(t)\) gradually decline until they tend to 0. This indicates that the negative information gradually disappears, and no additional resources are required to control its dissemination.

Fig. 7

The densities of (A) P(t), (B) A(t), (C) \({G_1}(t)\), (D) \({G_2}(t)\) change over time under different control strategies.

In summary, the numerical simulations show the following. (1) Under the stability conditions, each group in the model eventually tends to a stable state. (2) The conditions for promoting or curbing the dissemination of information can be found by adjusting the value of the basic reproduction number, and the dissemination conditions of each type of information agree with the results of the theoretical analysis. (3) The control parameters selected in this paper play an important role in promoting the dissemination of positive information and curbing the dissemination of negative information. On the one hand, to encourage the dissemination of positive information, guidance and education mechanisms can be adopted so that the population absorbs positive information to the greatest extent; this not only guides the public to cultivate awareness at the source, but also strengthens the authority of the relevant administrative departments in educating those who spread negative information. On the other hand, when the dissemination of information gets out of control and positive information cannot be distinguished from negative information in time, isolating both types of information can play a buffering role; that is, the relevant administrative departments make full use of their authority to guide the public to stop disseminating all kinds of information. Similarly, competing factors of this kind also have important applications in the field of infectious disease prevention and control. For example, mosquitoes are the root cause of the spread of the yellow fever virus, but increasing the vaccination rate, which competes with transmission, can effectively reduce the probability of infection, and isolating patients who fall ill can further reduce the risk of spreading. The results of this study therefore have implications for extending the information dissemination model with competitive relationships to the spread of diseases in the future.

Conclusions

The phenomenon of multiple pieces of information disseminating together has brought challenges to the monitoring of information. When the dissemination of information gets out of control, promptly curbing the dissemination of multiple pieces of information can provide a buffer time for information control. Taking into account the mutual competition in the dissemination of multiple pieces of information, this paper constructs an SPA2G2R model with a competitive relationship; the model also accounts for the control of information through isolation measures. The basic reproduction number of the model is calculated by means of the next-generation matrix method, and the stability analysis of the model is carried out using the Routh–Hurwitz stability criterion and the second method of Lyapunov. Taking the influence rate of the guidance mechanism and the isolation rate as the optimal control parameters, the expressions for the optimal control parameters and the optimal control system are derived with the aid of Pontryagin's maximum principle. Numerical simulations demonstrate the various states and changes of information dissemination. The results show that: (1) when multiple pieces of information disseminate together, they restrain each other; each piece of information has an equilibrium point, and each can reach a stable state; (2) by optimally controlling the influence rate of the guidance mechanism, disseminators of negative information can be prompted to transform into disseminators of positive information; (3) by optimally controlling the isolation rate, the impact of out-of-control information dissemination can be eliminated in a timely manner.

The research in this paper studies the problem of information dissemination by means of the modeling and theoretical analysis methods of biology, and it can also be extended to the study of the spread of infectious diseases. For example, for the spread and control of the yellow fever virus, the competitive information dissemination framework can be used to construct a model that considers two competing influencing factors, namely the vaccination rate and mosquito control. This provides a research perspective for the control of infectious diseases, such as yellow fever, that have both treatment mechanisms and harmful agents.

Limitations of the work

The research in this paper helps to weaken the impact of negative information on society, mainly by providing solutions through expanding the dissemination of positive information; however, there are many other ways to control the dissemination of negative information. In addition, this paper considers information dissemination in a deterministic environment and does not account for the impact of random perturbations. Similarly, the model constructed in this paper is based on ordinary differential equations, while fractional-order differential equation modeling can address a wider range of problems. In future research, in terms of research content, we will study methods of controlling negative information from more angles to avoid its competition with positive information. In terms of the research environment, we will further construct an information dissemination model that takes random perturbations into account. In terms of research methods, we will construct an information dissemination model using fractional-order differential equations. Finally, extending from biology to other disciplinary fields, we will apply the concept of competitive relationships more widely to the modeling of infectious diseases such as yellow fever.