Summary

We show through modelling that the circuit will work!

We are the first to analyse the repression-based antithetic integral controller architecture.

We conducted pre-experimental parameter analysis to optimise our design strategy.

Overview

We constructed and implemented a variety of models to guide our experimental design, to answer fundamental questions about our system, and to probe aspects of our project that we could not investigate experimentally.
We used a variety of modelling techniques, tailoring each to answer a specific question efficiently. Using multiple approaches also allowed us to cross-validate the individual models, to assess which assumptions are valid, and to capitalise on the strengths of each model.
Specifically, we first considered the ordinary differential equations that describe the time evolution of the species in our system. To derive these ODEs we considered the fluxes that determine the concentration of each species in the system. To support our analytical derivation of the relevant ODEs we used SimBiology to build our model and confirm our ODEs. In this, we used a model which included both transcription and translation.
The parameters of the ODEs were obtained from the literature where possible, supplemented by first-principles calculations. These ODEs could then be solved deterministically and stochastically to obtain the time evolution of the species in the system in the presence of a variety of perturbations (described in SimBiology as events).
In parallel, we used a reduced set of ODEs (combining the effects of transcription and translation) to explore the stability conditions of our system using the mathematics of control theory. We also performed extensive parameter optimisation and sensitivity analysis, which guided our experimental design throughout the project; this was crucial for reducing the sheer volume of circuits we needed to build and test in the finite time available. The information obtained from these models fed into our experimental design, and the experimental results were in turn incorporated back into the models by refining our parameters, allowing us to complete many design-build-test-learn cycles. See the section 'Control Theory Modelling' for a complete description of the model, including results and how they were integrated into our wider project by influencing our experimental design.

Our project is furthermore heavily based on the mathematics of control theory and therefore we felt it was important to include a description of the relevant mathematics that we used to design our project and build our models. See ‘Theoretical Background’ for a summary of the most important concepts that we used and built upon in our modelling endeavours.

For the documentation of our code, please visit the following GitHub repository.


1. Gene Regulatory Network (GRN) Simulation of Varying Complexity

1.1. Modelling the GRN of the antithetic feedback controller

Here the GRN of the antithetic feedback motif was used to simulate the response of our system to perturbation. Modelling efforts in the literature (such as Briat et al. 2019) have focused on a simplified system in which the dynamics of transcription and translation are combined. This is a common simplification when simulating genetic circuits, since the difference in time scale between the two processes often allows transcription to be assumed to be at steady state (as it occurs more rapidly). However, a more realistic model including both transcription and translation was used here for two primary reasons. Firstly, to test whether robust adaptation is still achieved when transcription and translation are treated separately, and thus to test whether the assumptions in previous models (and in separate modelling efforts within this project) are valid; it is especially important to test these assumptions in cases where the system is highly non-linear, as ours is. Secondly, by using this more realistic model of the network, we can understand the parameters of the system better: we can set parameters specific to transcription and translation rates (which can be determined from the literature or 'back-of-the-envelope' calculations), and these are easier to interpret than a combined 'production rate' parameter. In this way we can insert parameters that reflect the characteristics of the parts we will use more accurately, which is important when choosing which parts to use in our assemblies; post-experiment, we can fit this more detailed model to our results to deduce these more physically meaningful parameters (such as the translation rate of a specific protein).

Here we first use Hill activation in the model, for the activation of Z2 by X and of X by Z1, and then simplify the model by linearising the Hill activation function to give linear activation.

Presented below is the model derivation, a detailed description of the model, the results from the simulations and then a discussion of the implications of the results, including how they influenced our experimental work and helped us to understand our lab results, and a consideration of future work and next steps.

Model Derivation


Here we describe the features and derivation of the ODE-based mathematical model of the proposed antithetic integral control system GRN.


Model Assumptions


  1. We assume that protein degradation is predominantly due to dilution from cell division and not from active degradation of protein species.
  2. We use a continuous and deterministic ODE model to represent the system dynamics. Reactions are represented through mass-action kinetics and population growth is not explicitly accounted for.
  3. For all species in the system it is assumed that the variation in plasmid copy number is not large enough to significantly affect the dynamics and can be neglected from the model.
  4. We assume that interactions between the sigma factors and RNA polymerase can be neglected.
  5. We assume a cell volume equal to that of a typical E. coli cell, 1 fL.


Model Diagram


Figure 1. A diagram of the Gene Regulatory Network for the Antithetic Integral Controller. Circles represent processes and ovals represent species. Blue circles = basal transcription; lilac circles = translation; green circles = activation (Hill activation in this case); red circles = degradation; white circle = binding/unbinding of the antithetic molecules.

System of Ordinary Differential Equations (ODEs)


Variables:


Symbol Variable
[MZ1] mRNA concentration for Z1
[MZ2] mRNA concentration for Z2
[MX] mRNA concentration for X
[MR] mRNA concentration for reporter
[PZ1] Z1 protein concentration
[PZ2] Z2 protein concentration
[PX] X protein concentration
[PR] reporter protein concentration
[C] Z1-Z2 complex concentration

Table 1. Variables in the ODEs describing the antithetic integral controller.

Parameters (Hill activation):

Symbol Parameter Units
χ0,i Basal rate of transcription for species i nM min⁻¹
χ1,i Maximal rate of transcription for species i nM min⁻¹
ki Rate of translation of species i from its mRNA min⁻¹
γi Rate of mRNA degradation for species i min⁻¹
δi Rate of protein degradation for species i min⁻¹
ni Hill coefficient for activation by species i -
Ki Dissociation constant for activation by species i nM
η⁺ Rate of association of the Z1-Z2 complex nM⁻¹ min⁻¹
η⁻ Rate of dissociation of the Z1-Z2 complex min⁻¹

Table 2. Parameters associated with the antithetic integral controller using Hill activation.

Equations (Hill activation):

$$\frac{d[M_{Z1}]}{dt}=\chi_{0,Z1} - \gamma_{Z1}[M_{Z1}]$$

$$\frac{d[P_{Z1}]}{dt} = k_{Z1}[M_{Z1}] - \delta_{Z1}[P_{Z1}] - (\eta^+[P_{Z1}][P_{Z2}] - \eta^-[C])$$

$$\frac{d[M_{Z2}]}{dt} = -\gamma_{Z2}[M_{Z2}] + \frac{\chi_{1,Z2}[P_X]^{n_x}}{K_X + [P_X]^{n_x}} + \chi_{0,Z2}$$

$$\frac{d[P_{Z2}]}{dt} = k_{Z2}[M_{Z2}] - \delta_{Z2}[P_{Z2}] - (\eta^+[P_{Z1}][P_{Z2}] - \eta^-[C])$$

$$\frac{d[M_X]}{dt} = \frac{\chi_{1,X}[P_{Z1}]^{n_{Z1}}}{K_{Z1} + [P_{Z1}]^{n_{Z1}}} - \gamma_X[M_X] + \chi_{0,X}$$

$$\frac{d[P_X]}{dt} = k_X[M_X] - \delta_X[P_X]$$

$$\frac{d[M_R]}{dt} = \frac{\chi_{1,R}[P_{Z1}]^{n_{Z1}}}{K_{Z1} + [P_{Z1}]^{n_{Z1}}} - \gamma_R[M_R] + \chi_{0,R}$$

$$\frac{d[P_R]}{dt} = k_R[M_R] - \delta_R[P_R]$$

$$\frac{d[C]}{dt} = (\eta^+[P_{Z1}][P_{Z2}]-\eta^-[C]) - \delta_C[C]$$

Parameter choice (Hill activation):

Parameter Baseline Value Reference
χ0,i 90 nM min⁻¹ Stricker et al. (2008)
χ1,i 1800 nM min⁻¹ Stricker et al. (2008)
ki 81 min⁻¹ Stricker et al. (2008)
γi 0.54 min⁻¹ Stricker et al. (2008)
δi 0.03 min⁻¹ Annunziata et al. (2017)
nZ1 2 U. Alon, An Introduction to Systems Biology
KZ1 3000 nM Rhodius et al. (2013)
nX 2.3 Meyer et al. (2019)
KX 4300 nM Meyer et al. (2019)
η⁺ 0.0108 nM⁻¹ min⁻¹ Stricker et al. (2008)
η⁻ 0.000108 min⁻¹ Stricker et al. (2008)

Table 3. Choice of parameters for the antithetic integral control circuit used in our project.

Note that the parameters chosen for the promoter induced by the X species were those associated with the CinR inducible promoter, as documented in Meyer et al. (2019), because this is the inducible promoter we are using for the activation based antithetic integral feedback motif (see section: Circuit design).
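Our simulations were built and run in SimBiology, but the same ODE system can be integrated directly in Python. Below is a minimal sketch using SciPy with the baseline values from Table 3; using a single shared χ0, χ1, k, γ and δ across species, and setting the complex degradation rate δC equal to the protein dilution rate, are simplifying assumptions of this sketch, not part of our SimBiology model.

```python
# Sketch: deterministic integration of the Hill-activation ODE model above.
# Baseline parameters from Table 3 (units: nM and min); shared per-species
# values and delta_C = 0.03 /min are assumptions of this sketch.
import numpy as np
from scipy.integrate import solve_ivp

p = dict(chi0=90.0, chi1=1800.0, k=81.0, gam=0.54, delta=0.03,
         nZ1=2.0, KZ1=3000.0, nX=2.3, KX=4300.0,
         eta_p=0.0108, eta_m=0.000108)

def rhs(t, y, p):
    MZ1, PZ1, MZ2, PZ2, MX, PX, MR, PR, C = y
    seq = p['eta_p'] * PZ1 * PZ2 - p['eta_m'] * C        # Z1-Z2 sequestration flux
    hillZ1 = PZ1**p['nZ1'] / (p['KZ1'] + PZ1**p['nZ1'])  # activation by Z1
    hillX = PX**p['nX'] / (p['KX'] + PX**p['nX'])        # activation by X
    return [p['chi0'] - p['gam'] * MZ1,                          # d[MZ1]/dt
            p['k'] * MZ1 - p['delta'] * PZ1 - seq,               # d[PZ1]/dt
            p['chi0'] + p['chi1'] * hillX - p['gam'] * MZ2,      # d[MZ2]/dt
            p['k'] * MZ2 - p['delta'] * PZ2 - seq,               # d[PZ2]/dt
            p['chi0'] + p['chi1'] * hillZ1 - p['gam'] * MX,      # d[MX]/dt
            p['k'] * MX - p['delta'] * PX,                       # d[PX]/dt
            p['chi0'] + p['chi1'] * hillZ1 - p['gam'] * MR,      # d[MR]/dt
            p['k'] * MR - p['delta'] * PR,                       # d[PR]/dt
            seq - p['delta'] * C]                                # d[C]/dt

# Run to steady state, apply a 1.5x perturbation to [P_X] (cf. Figure 2),
# then continue the integration.
pre = solve_ivp(rhs, (0, 1000), np.zeros(9), args=(p,), method='LSODA')
y1 = pre.y[:, -1].copy()
y1[5] *= 1.5
post = solve_ivp(rhs, (0, 1000), y1, args=(p,), method='LSODA')
```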

Results (Hill activation)

Below are the simulations showing the response of the system to perturbations in the concentration of X, in the rate of translation of X and in the rate of degradation of X. In each case the system is able to adapt to the perturbation and return to the steady state reference point.

Figure 2. Response of the system to a 1.5× perturbation in the concentration of X with the antithetic integral control motif.

Figure 3. Response of the system to a 1.5× increase in the degradation rate of X with the antithetic integral control motif.

Figure 4. Response of the system to a 1.5× increase in the translation rate of X with the antithetic integral control motif.

1.2. Global Sensitivity Analysis (GSA)

In GSA, model quantities are varied together to evaluate simultaneously the relative contribution of each quantity to a model response. Here, GSA was used to determine which parameters of the system the response of X is most sensitive to. This helps us make an informed decision on how best to perturb the system in the lab, by choosing perturbations that we know the concentration of our protein of interest, represented by the variable X, is most sensitive to.
Furthermore, by identifying which parameters X is most sensitive to, we can focus our efforts on the accurate characterisation of those parameters, since uncertainties in them will be more influential than uncertainties in parameters that X is less sensitive to.
SimBiology performs a decomposition of the model output (response) variance by calculating the first- and total-order Sobol indices. The first-order Sobol index gives the fraction of the overall response variance that can be attributed to variations in an input parameter alone. The total-order Sobol index gives the fraction of the overall response variance that can be attributed to joint parameter variations.
The first-order sensitivity index indicates the effect of an individual parameter, whereas the total-order index captures the interactions of that parameter with the other parameters and their combined influence on the output. Thus, the difference between first- and total-order indices indicates to what extent the inputs interact with each other.
To this end, a global sensitivity analysis was performed using 20 inputs, the parameters of our circuit model, with the output variable being the concentration of protein X at steady state (t = 1000 s). The parameters were sampled from uniform distributions spanning an order of magnitude below and above the values used in the deterministic and stochastic simulations (with the exception of the Hill coefficients, which were sampled from a normal distribution over a narrower range, since the uncertainty in these is smaller).
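We ran the GSA in SimBiology, which handles the sampling and Sobol decomposition internally. The same workflow can be reproduced with the open-source SALib package; the sketch below is an illustration of the pattern (not our production code), shown on the reduced three-state model with three inputs for brevity. The full 20-parameter analysis follows the same structure.

```python
# Sketch: Saltelli sampling + Sobol decomposition with SALib, using the
# reduced antithetic model as the output function. Bounds span an order of
# magnitude either side of the baseline values, as described in the text.
import numpy as np
from scipy.integrate import odeint
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {'num_vars': 3,
           'names': ['theta_1', 'theta_2', 'mu_1'],
           'bounds': [[5e-4, 5e-2], [5e-5, 5e-3], [0.07, 7.0]]}

def steady_state_x(params):
    th1, th2, mu1 = params
    eta, gp = 0.018 / 60, 0.00038
    def rhs(y, t):
        x, z1, z2 = y
        return [th1 * z1 - gp * x,
                mu1 - eta * z1 * z2,
                th2 * x - eta * z1 * z2]
    # Integrate long enough for the transient to decay; return x at the end.
    return odeint(rhs, [0.0, 0.0, 0.0], [0.0, 6e4])[-1, 0]

X = saltelli.sample(problem, 256)               # N*(2D+2) parameter sets
Y = np.array([steady_state_x(row) for row in X])
Si = sobol.analyze(problem, Y)
print(Si['S1'])   # first-order Sobol indices
print(Si['ST'])   # total-order Sobol indices
```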

GSA Results:

Table 4 documents the Sobol index values for the parameters used in the GSA.



Parameter First order index Total order index
χ0,Z1 0.1237 0.6514
χ0,Z2 0 0
χ0,X 0.0169 0.0141
χ1,Z2 0 0
χ1,X 0 0
kZ1 0 0
kZ2 0.1243 0.6581
kX 0.1574 0.7491
γZ1 0.1243 0.6581
γZ2 0.1237 0.6516
γX 0.0061 0.0021
δZ1 0 0
δZ2 0 0
δX 0.0488 0.0290
nZ1 0 0
nX 0 0
KZ1 0 0
KX 0 0
η+ 0 0
η- 0
Table 4. Sobol indices for the GSA.


Table 5 ranks the influence of the input parameters on the output X according to their first order Sobol index.



Rank (highest to lowest) Parameter
1 kX
2 kZ2
3 γZ1
4 γZ2
5 χ0,Z1
6 γX
7 χ0,X
8 δX
9 χ0,Z2
10 KX
11 η+
12 nX
13 KZ1
14 kZ1
15 δZ2
16 nZ1
17 δZ1
18 χ1,X
19 χ1,Z2
20 η-
Table 5. The influence of input parameters on the output according to their first-order Sobol index from the GSA.


Table 6 ranks the influence of the input parameters on the output X according to their total order Sobol index.



Rank (highest to lowest) Parameter
1 kX
2 kZ2
3 γZ1
4 γZ2
5 χ0,Z1
6 δX
7 χ0,X
8 γX
9 η+
10 χ0,Z2
11 δZ2
12 χ1,Z2
13 χ1,X
14 kZ1
15 KZ1
16 nX
17 KX
18 nZ1
19 δZ1
20 η-
Table 6. The influence of input parameters on the output according to their total-order Sobol index from the GSA.

These results show that for the most important parameters (those with Sobol indices greater than 0 to 4 decimal places), the order of importance does not change between first-order and total-order indices, suggesting that interactions between parameters do not greatly affect the sensitivity of X to those parameters. That said, there is an increase in index value for some parameters, so interactions between parameters are not completely absent. This analysis, and the resulting ranking of parameter influence, can be used to choose which parameters should be changed to perturb the system: specifically, the highest-ranked parameters, because X is most sensitive to variation in these at steady state. We want to perturb the system so that we can then see whether it adapts using our integral control motif; to test whether our control circuit is effective, we therefore need a clear, strong perturbation.

1.3. Linearising Hill Activation

To simplify the model we make the additional assumption that:
Hill activation dynamics can be approximated by linear mass-action kinetics.
This reduces the system of ODEs as follows.

Parameters (linear activation):


Symbol Parameter Units
χ0,i Basal rate of transcription for species i nM min⁻¹
χ1,i Maximal rate of transcription for species i nM min⁻¹
ki Rate of translation of species i from its mRNA min⁻¹
γi Rate of mRNA degradation for species i min⁻¹
δi Rate of protein degradation for species i min⁻¹
η⁺ Rate of association of the Z1-Z2 complex nM⁻¹ min⁻¹
η⁻ Rate of dissociation of the Z1-Z2 complex min⁻¹
Table 7. Parameters for the antithetic integral control model after linearising the Hill activation function.

Equations (linear activation):

$$\frac{d[M_{Z1}]}{dt}=\chi_{0,Z1} - \gamma_{Z1}[M_{Z1}]$$

$$\frac{d[P_{Z1}]}{dt} = k_{Z1}[M_{Z1}] - \delta_{Z1}[P_{Z1}] - (\eta^+[P_{Z1}][P_{Z2}] - \eta^-[C])$$

$$\frac{d[M_{Z2}]}{dt} = -\gamma_{Z2}[M_{Z2}] + \chi_{1,Z2}[P_X] + \chi_{0,Z2}$$

$$\frac{d[P_{Z2}]}{dt} = k_{Z2}[M_{Z2}] - \delta_{Z2}[P_{Z2}] - (\eta^+[P_{Z1}][P_{Z2}] - \eta^-[C])$$

$$\frac{d[M_X]}{dt} = \chi_{1,X}[P_{Z1}] - \gamma_X[M_X] + \chi_{0,X}$$

$$\frac{d[P_X]}{dt} = k_X[M_X] - \delta_X[P_X]$$

$$\frac{d[M_R]}{dt} = \chi_{1,R}[P_{Z1}] - \gamma_R[M_R] + \chi_{0,R}$$

$$\frac{d[P_R]}{dt} = k_R[M_R] - \delta_R[P_R]$$

$$\frac{d[C]}{dt} = (\eta^+[P_{Z1}][P_{Z2}]-\eta^-[C]) - \delta_C[C]$$


The parameter values used in this system of equations remain the same as those used above.
The purpose of linearising the system is so that it can be modelled stochastically, as detailed in the following section.


1.4. Stochastic Model

A stochastic simulation is a simulation of a system whose variables can change stochastically (randomly) with individual probabilities. Stochastic simulations are in many ways more realistic than deterministic ODEs, since they use molecule numbers rather than concentrations (which is important when dealing with small populations of a species) and they model the inherent randomness of biological reactions. One algorithm well suited to the stochastic simulation of biological networks is the Gillespie algorithm. The algorithm is particularly useful for simulating reactions within cells, where the number of reagents is low and keeping track of individual molecules is computationally feasible.
Briefly, the Gillespie algorithm incorporates the stochastic nature of biological networks by, at each iteration, drawing (1) the time until the next reaction and (2) which reaction occurs, from probability distributions weighted by the propensity functions of the reactions. See Gillespie (1977) for more details.

This method is the basis of the stochastic simulation algorithm used in SimBiology.

The time evolution of probability distributions is used in a Chemical Master Equation (CME) to describe the dynamics of a chemical system, however directly solving for this distribution is impractical for most realistic problems. The stochastic simulation algorithm (SSA), used by the stochastic solver in SimBiology, instead efficiently generates individual simulations that are consistent with the CME, by simulating each reaction using its propensity function. Thus, analyzing multiple stochastic simulations to determine the probability distribution is more efficient than directly solving the CME.

Both the Gillespie algorithm and the stochastic simulation algorithm become computationally impractical for large populations and fast reactions (as was the case with the more complex CME of our network).
Because stochastic simulations sample the probabilistic nature of the reactions in biological systems, multiple runs are required and the resulting ensemble is then analysed. The stochastic simulation of the more complex model, including both transcription and translation, took a long time to run, so running an ensemble simulation was not practical. We therefore used the simpler model, which combines transcription and translation, to run a stochastic simulation ensemble.
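SimBiology's stochastic solver implements the SSA described above. For illustration, a self-contained Gillespie sketch of the reduced (combined transcription-translation) antithetic network might look as follows; the reaction set and the use of the activation-case rate constants from the control theory section are assumptions of this sketch, and the 1.5× jump in X at t = 5000 s mirrors the perturbation in Figures 5-6.

```python
# Sketch: direct Gillespie SSA for the reduced antithetic network.
# State y = (X, Z1, Z2) in molecule numbers; rate constants in /s.
import numpy as np

rng = np.random.default_rng(0)
mu1, th1, th2, eta, gp = 0.7, 0.005, 0.0005, 0.018 / 60, 0.00038

# Reactions: Z1 production, Z1-actuated X production, X-dependent Z2
# production, Z1+Z2 annihilation, dilution of X.
stoich = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 1], [0, -1, -1], [-1, 0, 0]])

def propensities(y):
    x, z1, z2 = y
    return np.array([mu1, th1 * z1, th2 * x, eta * z1 * z2, gp * x])

t, y, perturbed = 0.0, np.array([0, 0, 0]), False
ts, xs = [0.0], [0]
while t < 8000.0:
    if not perturbed and t >= 5000.0:
        y[0] = int(1.5 * y[0])                 # 1.5x jump in X molecules
        perturbed = True
    a = propensities(y)
    a0 = a.sum()
    if a0 == 0:
        break
    t += rng.exponential(1.0 / a0)             # time to next reaction
    y = y + stoich[rng.choice(5, p=a / a0)]    # which reaction fires
    ts.append(t); xs.append(y[0])
```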


Stochastic Model Results:

Depicted below are stochastic simulations of the number of X molecules from t = 0 to t = 8000 seconds, with a perturbation of a 1.5× increase in the number of X molecules at t = 5000 seconds. Fig. 5 shows an individual run of the stochastic simulation and Fig. 6 shows an ensemble simulation comprising 10 runs. These results show that our model circuit adapts robustly to perturbations, as in the deterministic case, even when stochasticity is considered. The perturbations and the system response are above the noise level, and the trajectory of the number of X molecules over time does not vary significantly between successive stochastic simulation runs.



Figure 5. Stochastic simulation of the antithetic integral control motif with linear activation: The number of X molecules with time with a perturbation of a 1.5x increase in X molecules at t=5000 seconds.

Figure 6. Ensemble stochastic simulation of the antithetic integral control motif with linear activation: the number of X molecules over time, with a perturbation of a 1.5× increase in X molecules at t = 5000 seconds. 10 runs of the simulation are plotted.

2. Mathematical and Parameter analysis




3. Control Theory

3.1 What is control theory and why are we using it?

Control theory is a branch of engineering that deals with the control of dynamical systems, with the aim of designing systems with good stability that can compensate for uncertainty, reject disturbances and attenuate noise, leading to good performance. Control theory is applied across a wide range of engineering disciplines: for example, in electrical engineering, where capacitors, resistors, operational amplifiers, etc. are combined to create a circuit that gives a desired response, or in mechanical engineering, where the movement of systems is described by defined systems of differential equations. Despite its use in these other areas, applying control theory in synthetic biology to create genetic circuits has been difficult due to the intrinsic non-linearity and unpredictability of biological systems. Here we take you on a journey through the system analysis of the antithetic integral controller with control theory, and we hope you will come to appreciate both the usefulness and the limitations of applying control theory in synthetic biology.

3.2. Understanding the system in a control sense

The aim of the antithetic integral controller is to allow X to be expressed at the same level even in response to uncertainties in parameters, or perturbation. The antithetic integral controller contains Z1 and Z2 as the controller species. Z1 acts as an actuator, directly controlling the amount of X being produced, while Z2 acts as a sensor, whose production depends on the amount of X present. Z1 and Z2 annihilate each other, and this annihilation serves as the error signal of the system.

The equations of our system are as follows:
For activation:
$$\dot x = \theta_1 z_1 - \gamma_p x$$
$$\dot z_{1} = \mu_1 - \eta z_1 z_2 - \gamma_c z_1$$
$$\dot z_{2} = \theta_2 x - \eta z_1 z_2 - \gamma_c z_2$$
For repression:
$$\dot x = \frac{\alpha}{\theta_{1} z_{1} + 1} -\gamma x$$
$$\dot z_{1} = \mu_{1} - \eta z_{1} z_{2}$$
$$\dot z_{2} = \frac{\mu_{2}}{\theta_{2} x + 1} - \eta z_{1} z_{2}$$

The following images (Figure 7 and Figure 8) show the Chemical Reaction Networks (CRNs) of the activation and repression systems respectively.


Figure 7. Chemical Reaction Network of the activation system
Figure 8. Chemical Reaction Network of the repression system

In control theory, block diagrams are often utilised to represent a system. Our system can be summarised with the following block diagram (Figure 9).



Figure 9. Block diagram of the antithetic integral controller system

What are block diagrams?


In control systems, the goal is to produce a controlled output Y(s) from the input U(s). The plant/process P(s) is the system we are trying to control. In order to control a system, a controller C(s) is designed. The transfer function of this open loop system is G(s) = C(s)P(s). Figure 10 shows the block diagram of an open loop system. The definition and importance of transfer functions are discussed further below.

Figure 10. Block diagram of a general control system

3.3. How to set the system up for control theory analysis?


In this section, we will explain the reasoning behind the steps we take to get closer to analysing the system with the control theory toolbox, as well as the maths on how to do it with our system.


3.3.1. Equilibrium

The equilibrium point of an ODE is the value to which its solution converges. In a simple system, the equations can easily be solved analytically. However, in more complicated systems, numerical methods have to be used, as there may not be a closed-form solution. The equilibrium position is especially important for non-linear systems, as linearisation about the equilibrium position is necessary for analysing the stability of the control system.
The equilibrium positions of x, z1 and z2 are plotted over the range γc = 0 to γc = 0.1 for both activation (Figure 11) and repression (Figure 12). When γc = 0, an analytical solution for the equilibrium positions can easily be obtained.
For activation:
$$x_{eq} = \frac{\mu_1}{\theta_2}$$, $$z_{1eq} = \frac{\mu_1 \gamma_p}{\theta_1 \theta_2}$$, $$z_{2eq} = \frac{\theta_1 \theta_2}{\eta \gamma_p}$$
For repression:
$$x_{eq} = \frac{\mu_{2} - \mu_{1}}{\mu_{1} \theta_{2}}$$, $$z_{1eq} = \frac{\alpha \mu_{1} \theta_{2} - \gamma\left(\mu_{2} - \mu_{1}\right)}{\gamma \theta_{1} \left(\mu_{2} -\mu_{1}\right)}$$, $$z_{2eq} = \frac{\gamma \theta_{1}\left( \mu_{1} \mu_{2} - \mu_{1}^{2} \right)}{\alpha \eta \mu_{1} \theta_{2} - \gamma \eta \left(\mu_{2} - \mu_{1}\right)}$$
With γc not equal to 0, however, an analytical solution is difficult to obtain. Therefore, a numerical solution is obtained through odeint in Python, and the way the equilibrium position changes with γc is plotted. This range was chosen because γc = 0.00038, as calculated from the dilution rate of E. coli. It is interesting to note that the equilibrium position follows a monotonic trend for both activation and repression as γc changes within the physical range of values.
The following parameter values are used when analysing the activation system: θ1 = 0.005, γp = 0.00038, η = 0.018/60, µ1 = 0.7, θ2 = 0.0005.
The following parameter values are used when analysing the repression system: α = 10/60, θ1 = 0.05, γp = 0.00038, η = 0.018/60, µ2 = 1.5, µ1 = 0.7, θ2 = 0.005.
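The sweep behind Figures 11 and 12 can be sketched as follows: for each value of γc, the non-linear ODEs are integrated with odeint until the transient has decayed, and the final state is taken as the equilibrium. The sketch below uses the activation-system parameters listed above; the repression case is analogous.

```python
# Sketch: numerically tracing the equilibrium of the activation system as a
# function of gamma_c, by integrating to a long final time with odeint.
import numpy as np
from scipy.integrate import odeint

th1, gp, eta, mu1, th2 = 0.005, 0.00038, 0.018 / 60, 0.7, 0.0005

def rhs(y, t, gc):
    x, z1, z2 = y
    return [th1 * z1 - gp * x,
            mu1 - eta * z1 * z2 - gc * z1,
            th2 * x - eta * z1 * z2 - gc * z2]

gcs = np.linspace(0.0, 0.1, 101)
eq = np.array([odeint(rhs, [1.0, 1.0, 1.0], [0.0, 2e5], args=(g,))[-1]
               for g in gcs])        # rows are (x_eq, z1_eq, z2_eq)

# Sanity check against the analytical gamma_c = 0 solution, x_eq = mu1/theta2:
print(eq[0, 0], mu1 / th2)
```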



Figure 11. Equilibrium position of x, z1 and z2 at different γc for activation


Figure 12. Equilibrium position of x, z1 and z2 at different γc for repression

3.3.2. Linearisation and state space representation

Systems are often characterised by ODEs, and these ODEs are often non-linear and thus very difficult to solve. To utilise the control systems toolbox, the system has to be linearised about its equilibrium position using the Jacobian matrix. Linearisation is akin to zooming in on the dynamics of the system at the fixed point, assuming that near this fixed point the dynamics are linear. To understand how linearisation works graphically, see Figure 13. Further from the fixed point, however, the linearisation of the system no longer holds; therefore, the system has to operate close to the fixed point for linearisation to be justified. After the ODEs are linearised, the system can be written in state space form, i.e. a system of linear ODEs arranged in a specific format as below.
$$\dot x = Ax + Bu$$ $$y = Cx + Du$$
where x is the state of the system, u is the input and y is the output. A, B, C and D are the appropriate matrices describing the system of ODEs. By taking the Laplace transform, the transfer function of the system can be found.



Figure 13. Schematic of how linearisation works

What are states?

In a dynamical system, the present output y(t1) depends not only on the present input u(t1) but also on past inputs u(t) for t ≤ t1. The states of the system x(t) are a set of variables chosen to represent this memory; the choice of state is not unique. Essentially, x(t0) summarises the effect of the inputs and states prior to t = t0 on the future.
There are certain methods for choosing suitable states for the system. One can choose states intuitively, by considering the independent energy-storage elements of the system. For instance, in electrical circuits, the voltages across capacitors or the currents in inductors are reasonable choices; in mechanics, the positions and velocities of masses can often be chosen. In our system, the numbers of species X, Z1 and Z2 intuitively give the memory elements of the system and are thus chosen as the states.



Setting up the input:

In order to see whether the system can robustly adapt, the input of the system is taken to be a disturbance to x. Let the disturbance be d; here d = 0.75θ1, where θ1 is the production rate of x per z1 molecule. If d is set too high, the dynamics of the system will deviate far from the equilibrium point, and looking at the system response and the stability of the linearised system will no longer be justified.


Activation:


After linearisation about its equilibrium position:
$$\begin{bmatrix} \dot \delta x \\ \dot \delta z_1 \\ \dot \delta z_2 \end{bmatrix} = \begin{bmatrix} -\gamma_p & \theta_1 & 0\\ 0 & -\eta z_2 - \gamma_c & -\eta z_1 \\ \theta_2 & -\eta z_2 & -\eta z_1 - \gamma_c \end{bmatrix} \begin{bmatrix} \delta x \\ \delta z_1 \\ \delta z_2 \end{bmatrix} + \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} d$$ $$y = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix} \begin{bmatrix} \delta x \\ \delta z_1 \\ \delta z_2 \end{bmatrix}$$ where $$A = \begin{bmatrix} -\gamma_p & \theta_1 & 0\\ 0 & -\eta z_2-\gamma_c & -\eta z_1 \\ \theta_2 & -\eta z_2 & -\eta z_1 - \gamma_c \end{bmatrix}$$, $$B = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}$$, $$C = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix}$$

Repression:


After linearisation about its equilibrium position:
$$\begin{bmatrix} \dot \delta x \\ \dot \delta z_1 \\ \dot \delta z_2 \end{bmatrix} = \begin{bmatrix} -\gamma_p & -\frac{\alpha \theta_1}{(\theta_1 z_{1eq} + 1)^2} & 0\\ 0 & -\eta z_2 - \gamma_c & -\eta z_1 \\ -\frac{\mu_2 \theta_2}{(\theta_2 x_{eq}+1)^2} & -\eta z_2 & -\eta z_1 - \gamma_c \end{bmatrix} \begin{bmatrix} \delta x \\ \delta z_1 \\ \delta z_2 \end{bmatrix} + \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} d$$
$$y = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix} \begin{bmatrix} \delta x \\ \delta z_1 \\ \delta z_2 \end{bmatrix}$$
where $$A = \begin{bmatrix} -\gamma_p & -\frac{\alpha \theta_1}{(\theta_1 z_{1eq} + 1)^2} & 0\\ 0 & -\eta z_2 -\gamma_c & -\eta z_1 \\ -\frac{\mu_2 \theta_2}{(\theta_2 x_{eq}+1)^2} & -\eta z_2 & -\eta z_1 - \gamma_c \end{bmatrix}$$, $$B = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}$$, $$C = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix}$$

3.3.3. Laplace Transform

Systems described by ODEs can be solved and visualised in the time domain. However, while the time domain shows the actual response, we do not have many tools for analysing the system there. In the Laplace domain (s domain), however, many tools from the control theory toolbox can be used to analyse the system more simply. In the Laplace domain, instead of time, the complex variable s is used.
Laplace transform is defined as the following:
$$\mathscr{L}\{f(t)\}=\int_{t=0}^{\infty}f(t)e^{-st}dt$$
Although the Laplace transform expression looks complicated, taking the Laplace transform of an ODE is very easy:
$$\mathscr{L}\{\dot x\}=sX(s) - x_0$$
where X(s) is the Laplace transform of x(t) and x0 is the initial condition.
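This derivative rule is what makes transfer functions easy to obtain. As a quick check, it can be reproduced symbolically; the sketch below assumes a recent version of SymPy, whose laplace_transform supports derivatives of undefined functions.

```python
# Sketch: symbolic check of L{dx/dt} = s*X(s) - x(0) with SymPy.
import sympy as sp

t = sp.symbols('t', positive=True)
s = sp.symbols('s')
x = sp.Function('x')

lhs = sp.laplace_transform(sp.Derivative(x(t), t), t, s, noconds=True)
print(lhs)   # s*LaplaceTransform(x(t), t, s) - x(0)
```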

After taking the Laplace transform of the ODEs, we obtain a transfer function that links the input of the system, U(s), to the output of the system, Y(s). A transfer function is the Laplace transform of the impulse response of the system, i.e. how the system responds to an impulse in the time domain. By examining the expression for the transfer function and manipulating it with the control systems toolbox, one can determine the system's stability, and its amplitude and phase in response to inputs at different frequencies.
Let the transfer function be G(s), with U(s) the input and Y(s) the output in the Laplace domain; then Y(s) = G(s)U(s). In a single-input single-output system, the Laplace transform can be taken of the single differential equation to find the transfer function.

For a multi-input multi-output system, a state space representation of the system is required. The transfer function can then be obtained from the state space representation through G(s) = C(sI − A)⁻¹B + D.
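As a concrete illustration of this conversion, the sketch below builds the linearised activation system (γc = 0 case) and converts it to a transfer function with the python-control package, which implements G(s) = C(sI − A)⁻¹B + D. The use of python-control is an assumption of this sketch; any package, or manual algebra, performing the same conversion would do.

```python
# Sketch: state space -> transfer function for the linearised activation
# system about its gamma_c = 0 equilibrium.
import numpy as np
import control

th1, gp, eta, mu1, th2 = 0.005, 0.00038, 0.018 / 60, 0.7, 0.0005
z1 = mu1 * gp / (th1 * th2)        # gamma_c = 0 equilibrium values
z2 = th1 * th2 / (eta * gp)

A = np.array([[-gp, th1, 0.0],
              [0.0, -eta * z2, -eta * z1],
              [th2, -eta * z2, -eta * z1]])
B = np.array([[1.0], [0.0], [0.0]])     # disturbance d enters the x equation
C = np.array([[1.0, 0.0, 0.0]])         # the output we observe is x
D = np.zeros((1, 1))

G = control.ss2tf(control.ss(A, B, C, D))
print(G)                                # disturbance-to-x transfer function
print(np.linalg.eigvals(A))             # poles of G(s) = eigenvalues of A
```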

3.4. What are the different types of systems?

3.4.1. Open loop:

An open loop system is one without any feedback, i.e. the output does not influence the input. Let the open loop transfer function be G(s), directly linking the input U(s) and output Y(s). In principle, it is possible to choose a desired transfer function G(s) and use C(s) = G(s)/P(s) to obtain Y(s) = G(s)U(s). In practice, this will not work, because it requires an exact model of the plant/process without any disturbances or uncertainties. Therefore, a feedback system is needed. The block diagram of an open loop system is shown in Figure 13.



Figure 13. Block diagram of an open loop system

3.4.2. Closed loop:

Negative feedback is important in this case to deal with the effects of uncertainty. The closed loop transfer function for a negative feedback loop is H(s) = G(s)/(1+KG(s)), where K is the feedback gain of the system. In this case, we define the return ratio L(s) = KG(s). As the block diagram shows, negative feedback allows the output at each moment in time to be compared with the reference value, and the error signal at that time is fed back into the system. Looking at the denominator of the closed loop transfer function, we can see that the closed loop characteristic equation is 1+KG(s) = 0; the stability and characteristics of the closed loop system can therefore be very different from those of the open loop system (discussed further below).

The block diagram of a negative feedback system is shown in Figure 14. To see how a negative feedback system rejects disturbances, we have to look at its sensitivity function (explained below). Essentially, the sensitivity function has the form 1/(1+KG(s)), which means that a higher K leads to a lower sensitivity function value, and thus to disturbance rejection. Although negative feedback allows disturbance rejection and can further reduce the steady-state error if the gain is very high, in biological systems the achievable gain is limited, so the steady-state error can never be zero.



Figure 14. Block diagram of negative feedback closed loop system

3.4.3. Integral Controller:

In the Laplace domain, an integrator appears as a 1/s term in the transfer function. The presence of an integrator inside the controller ensures that there will be no steady-state error, provided the closed loop system is asymptotically stable, even in the presence of constant disturbances and demands. An integrator is essentially the memory of the error signal through time.
To look at the integrator of the system, an additional state representing the error signal can be added to the state space representation. The detailed steps are discussed below.
The block diagram of an integrator is shown in Figure 15.



Figure 15. Block diagram of an integral controller

3.4.4. Comparison:

After understanding the different types of systems, it is time to simulate them to see how they respond to perturbation and whether they robustly adapt.


Looking at Figure 16, we can see that the antithetic integral controller adapts robustly when there is no degradation of the controller species. With degradation of the controller species, although the system does not adapt perfectly, it still performs much better than the open loop and the negative feedback loop. Now we can see how the antithetic integral controller motif minimises steady-state error!


Figure 16. Simulation of the open loop, negative feedback and the antithetic integral controller (With and without degradation)

3.5. Analysing systems of ODE with the control theory toolbox


In this section, the control theory toolbox will be utilised to characterise the system; the most important concepts behind this are equilibrium and stability.


3.5.1. Stability

Stability refers to whether the system will converge to a particular value. An unstable system produces an unbounded output even when given a bounded input. There are three ways to check the stability of a closed loop system: checking the eigenvalues of the A matrix of the state space system, checking the poles of the transfer function, and drawing a Nyquist diagram. To analyse the stability of a non-linear system, linearisation about the equilibrium point is required.

What are poles of the transfer function?

By setting the denominator of the transfer function equal to zero, we obtain the characteristic equation. The roots of the characteristic equation are called the poles.

For an unstable system, some species (states) of the system blow up to infinity as time goes on. If the real part of any pole/eigenvalue is positive, the system is unstable; the system is asymptotically stable only if the real parts of all poles/eigenvalues are negative. If the real part of a pole is 0, the system is marginally stable, and oscillations of constant amplitude and frequency occur. If the poles are complex numbers, the imaginary part dictates the oscillation frequency of the system. The response time of the system is determined by the magnitude of the real part of the least negative pole.

Here we analyse the stability of the fixed point by examining the eigenvalues of A using the Routh-Hurwitz criterion.

Activation - without degradation

The characteristic equation for the eigenvalues of A is:


$$\det(sI - A) = s^3 + \left(\gamma_p + \eta (z_{1eq} + z_{2eq})\right) s^2 + \eta \gamma_p (z_{1eq} + z_{2eq})s + \eta \theta_1 \theta_2 z_{1eq} = 0$$

The Routh-Hurwitz criterion states that for the roots of the characteristic equation to have negative real parts, we require $$a_{1}a_{2} > a_{0}a_{3}$$ where $$a_{1}a_{2} = (\gamma_p + \eta (z_{1eq}+z_{2eq}))(\eta \gamma_p (z_{1eq}+z_{2eq}))$$ and $$a_{0}a_{3} = \eta \theta_1 \theta_2 z_{1eq}$$
Substituting $$z_{1eq} = \frac{\mu_1 \gamma_p}{\theta_1 \theta_2}$$ gives $$a_{0}a_{3} = \eta \mu_1 \gamma_p$$, while
$$a_{1}a_{2} = \eta^{2} \gamma_p z_{1eq} z_{2eq} + \ldots = \eta \mu_1 \gamma_p + \ldots$$
where the omitted terms are non-negative, since all the parameters are non-negative. This proves that $$a_{1}a_{2} > a_{0}a_{3}$$ and that the system, linearised about the equilibrium point, is stable. For the parameter set we used for simulation, the poles of the system are
−0.0386 + 0.0000i, − 0.0002 + 0.0014i, − 0.0002 − 0.0014i
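Returning to the Routh-Hurwitz argument above: the cancellation it relies on can be checked symbolically. Below is a small SymPy sketch that substitutes the γc = 0 equilibrium values into a1a2 − a0a3 and expands it, confirming that every surviving term is positive.

```python
# Sketch: symbolic check of the Routh-Hurwitz condition a1*a2 > a0*a3 for the
# gamma_c = 0 activation case (characteristic polynomial s^3 + a2 s^2 + a1 s + a0).
import sympy as sp

gp, eta, th1, th2, mu1 = sp.symbols('gamma_p eta theta_1 theta_2 mu_1',
                                    positive=True)
z1 = mu1 * gp / (th1 * th2)     # equilibrium values at gamma_c = 0
z2 = th1 * th2 / (eta * gp)

a3 = 1
a2 = gp + eta * (z1 + z2)
a1 = eta * gp * (z1 + z2)
a0 = eta * th1 * th2 * z1       # simplifies to eta*mu_1*gamma_p

diff = sp.expand(a1 * a2 - a0 * a3)
print(diff)                     # a sum of positive terms, so a1*a2 > a0*a3
```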


Activation - with degradation

With γc > 0, there is no closed-form analytical solution for the steady state of the ODEs. However, given that γc is very small, and that without degradation $$a_{1}a_{2} > a_{0}a_{3}$$, it is reasonable to deduce that for our range of γc values the equilibrium point remains stable.
For the system we have used for simulation, we can analyse its stability through looking at the poles.
The poles of the system are:
−0.0389 + 0.0000i, − 0.0004 + 0.0014i, − 0.0004 − 0.0014i. As all the poles have negative real parts, the equilibrium point of the system is stable.

Repression - without degradation

The characteristic equation of the repression system leads to a very complicated expression for the eigenvalues.
Here, we instead analyse the stability of the system by looking at the poles of the system we are simulating, to show that for realistic parameter values the equilibrium point is stable.
The poles of the system are:
−0.0436 + 0.0000i, − 0.0002 + 0.0007i, − 0.0002 − 0.0007i. As all the poles have negative real parts, the equilibrium point of the system is stable.

Repression - with degradation

Similarly, with degradation the eigenvalues of the repression system have no closed-form solution. Therefore, as above, we analyse the stability of the system by looking at the numerical values of the poles.
The poles of the system are:
−0.0387 + 0.0000i, − 0.0004 + 0.0008i, − 0.0004 − 0.0008i. As all the poles have negative real parts, the equilibrium point of the system is stable.

3.5.2. System Response

In this section, we simulate the systems of non-linear ODEs with Python's odeint and see how the system responds to perturbations. We then vary the magnitude of the perturbation to show how the system behaves.
The graphs below show the response of the system to small perturbations of varying magnitude for both the activation and repression systems, with and without degradation.

Figure 17. Change of System Response with varying values of small disturbance (Activation, non-degradation)
Figure 18. Change of System Response with varying values of small disturbance (Activation, degradation)
Figure 19. Change of System Response with varying values of small disturbance (Repression, non-degradation)
Figure 20. Change of System Response with varying values of small disturbance (Repression, degradation)
In order to understand the system better, we visualise the response of the system to perturbations of larger magnitude. As we can see, in all cases, as the disturbance increases the system has a higher peak value but becomes less oscillatory.

Figure 21. Change of System Response with different values of disturbance (Activation, non-degradation)
Figure 22. Change of System Response with different values of disturbance (Activation, degradation)
Figure 23. Change of System Response with different values of disturbance (Repression, non-degradation)
Figure 24. Change of System Response with different values of disturbance (Repression, degradation)

Comparing the system with and without degradation, one can see that in the non-degradation case the system is able to adapt robustly, returning to its original steady-state value. In the degradation case, the system does not return exactly to its original value, but it reaches its new steady state faster, given the same parameter values and the same disturbance.

This suggests that the system with degradation is more stable - a behaviour known as a Leaky Integrator.


Let’s look at the activation system first. To explain the faster settling time in the degradation case, we look at the poles of the system. With this set of parameters, the poles of the non-degradation case are:
−0.0386 + 0.0000i, − 0.0002 + 0.0014i, − 0.0002 − 0.0014i
and the poles of the degradation case are:
−0.0389 + 0.0000i, − 0.0004 + 0.0014i, − 0.0004 − 0.0014i

As we can see, the real part of the least negative pole is more negative in the degradation case. This explains why the system reaches the steady state quicker with degradation.

Now let’s move on to the repression system. With this set of parameters, the poles of the non-degradation case are:
−0.0436 + 0.0000i, − 0.0002 + 0.0007i, − 0.0002 − 0.0007i
and the poles of the degradation case are:
−0.0387 + 0.0000i, − 0.0004 + 0.0008i, − 0.0004 − 0.0008i

As we can see, similar to the activation case, the real part of the least negative pole is more negative in the degradation case. This explains why the system reaches the steady state quicker with degradation.

Leaky integrator

An integrator is a controller that ensures zero steady-state error. While zero steady-state error is ideal, an integral controller has slow dynamics. A leaky integrator is a type of integrator that allows for faster dynamics and increased stability, at the expense of some small steady-state error. Essentially, it allows some memory of the error to be lost in the process, just as in our system the controller species are degraded, leading to a leaky integrator instead of a perfect integrator.

3.6 How much uncertainty can the system handle for θ1 and θ2?

Here, we analyse the system in the activation case only, using Nyquist diagrams and sensitivity functions to show how much variability the system can tolerate in the values of θ1 and θ2. To investigate this, we first need to obtain the open loop transfer function of the system, using the parameter θ1 or θ2 as the feedback gain, to investigate how large θ1 and θ2 can be while the system remains stable.


Uncertainty in θ1

To analyse the uncertainty in θ1, we break the loop open and investigate the resulting open loop system. The procedure of breaking the loop open is shown in Figure 25.

Figure 25. The process of breaking the loop open to analyse the uncertainty θ1 of the system

The system of equations characterising the open loop system is:
$$ \dot x = u - \gamma_p x$$ $$ \dot z_{1} = \mu_1 - \eta z_1 z_2 - \gamma_c z_1$$ $$\dot z_{2} = \theta_2 x - \eta z_1 z_2 - \gamma_c z_2$$
When representing it in a state space format after linearisation:
$$\begin{bmatrix} \dot \delta x \\ \dot \delta z_1 \\ \dot \delta z_2 \end{bmatrix} = \begin{bmatrix} -\gamma_p & 0 & 0\\ 0 & -\eta z_2 - \gamma_c & -\eta z_1 \\ \theta_2 & -\eta z_2 & -\eta z_1 - \gamma_c \end{bmatrix} \begin{bmatrix} \delta x \\ \delta z_1 \\ \delta z_2 \end{bmatrix} + \begin{bmatrix} -1 \\ 0 \\ 0 \end{bmatrix} u$$ $$y = \begin{bmatrix} 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} \delta x \\ \delta z_1 \\ \delta z_2 \end{bmatrix}$$
where $$A = \begin{bmatrix} -\gamma_p & 0 & 0\\ 0 & -\eta z_2-\gamma_c & -\eta z_1 \\ \theta_2 & -\eta z_2 & -\eta z_1 - \gamma_c \end{bmatrix}$$, $$B = \begin{bmatrix} -1 \\ 0 \\ 0 \end{bmatrix}$$, $$C = \begin{bmatrix} 0 & 1 & 0 \end{bmatrix}$$
Thus the open loop transfer function of the system with θ1 as the feedback gain is given by H(s) = C(sI − A)⁻¹B.

Uncertainty in θ2

To analyse the uncertainty in θ2, we again break the loop open and investigate the resulting open loop system. The procedure of breaking the loop open is shown in Figure 26.

Figure 26. The process of breaking the loop open to analyse the uncertainty θ2 of the system

The system of equations characterising the open loop system is:
$$ \dot x = \theta_1 z_1 - \gamma_p x$$ $$\dot z_{1} = \mu_1 - \eta z_1 z_2 - \gamma_c z_1$$ $$\dot z_{2} = u - \eta z_1 z_2 - \gamma_c z_2$$
When representing it in a state space format after linearisation:
$$\begin{bmatrix} \dot \delta x \\ \dot \delta z_1 \\ \dot \delta z_2 \end{bmatrix} = \begin{bmatrix} -\gamma_p & \theta_1 & 0\\ 0 & -\eta z_2 - \gamma_c & -\eta z_1 \\ 0 & -\eta z_2 & -\eta z_1 - \gamma_c \end{bmatrix} \begin{bmatrix} \delta x \\ \delta z_1 \\ \delta z_2 \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ -1 \end{bmatrix} u$$ $$y = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix} \begin{bmatrix} \delta x \\ \delta z_1 \\ \delta z_2 \end{bmatrix}$$ where
$$A = \begin{bmatrix} -\gamma_p & \theta_1 & 0\\ 0 & -\eta z_2-\gamma_c & -\eta z_1 \\ 0 & -\eta z_2 & -\eta z_1 - \gamma_c \end{bmatrix}$$, $$B = \begin{bmatrix} 0 \\ 0 \\ -1 \end{bmatrix}$$, $$C = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix}$$
Thus the open loop transfer function of the system with θ2 as the feedback gain is given by H(s) = C(sI − A)⁻¹B.
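The sensitivity functions plotted below can be computed directly from the broken-loop state space above, without a dedicated control package: evaluate H(jω) = C(jωI − A)⁻¹B over a frequency grid and take |S(jω)| = 1/|1 + θ1H(jω)|, with θ1 as the loop gain. A sketch for the θ1 loop in the γc = 0 case follows (set gc to 0.00038 for the degradation case); the θ2 loop is analogous with the matrices above.

```python
# Sketch: sensitivity function of the theta_1 loop, evaluated with NumPy.
import numpy as np

th1, gp, eta, mu1, th2, gc = 0.005, 0.00038, 0.018 / 60, 0.7, 0.0005, 0.0
z1 = mu1 * gp / (th1 * th2)       # gamma_c = 0 equilibrium values
z2 = th1 * th2 / (eta * gp)

A = np.array([[-gp, 0.0, 0.0],                   # theta_1 removed: loop broken
              [0.0, -eta * z2 - gc, -eta * z1],
              [th2, -eta * z2, -eta * z1 - gc]])
B = np.array([[-1.0], [0.0], [0.0]])
C = np.array([[0.0, 1.0, 0.0]])

w = np.logspace(-6, 0, 2000)                     # frequency grid, rad/s
S = np.empty_like(w)
for i, wi in enumerate(w):
    H = (C @ np.linalg.solve(1j * wi * np.eye(3) - A, B))[0, 0]
    S[i] = 1.0 / abs(1.0 + th1 * H)              # |S(jw)| with gain theta_1
print(20 * np.log10(S.max()))                    # peak of |S| in dB
```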

Sensitivity Function of the Open loop system

The sensitivity functions for the parameters θ1 and θ2 are shown in the following graphs. As we can see, the peak of the sensitivity function for θ1 is above 0 dB, at 7.03 dB and 7.3 dB for the non-degradation and degradation cases respectively, meaning that the closed loop system can be quite sensitive to the value of θ1. For θ2, the peaks of the sensitivity function are 1.53 dB and 1.52 dB for the non-degradation and degradation cases respectively, showing that the system would become unstable if θ2 grew beyond a certain value. However, the frequencies at which the sensitivity function rises above 0 dB are at the high-frequency end, meaning that disturbances, which usually have low frequencies, will very likely be rejected.


Figure 27. Sensitivity Function of the open loop system with θ1 as the feedback gain (Activation, non-degradation)
Figure 28. Sensitivity Function of the open loop system with θ1 as the feedback gain (Activation, degradation)
Figure 29. Sensitivity Function of the open loop system with θ2 as the feedback gain (Activation, non-degradation)
Figure 30. Sensitivity Function of the open loop system with θ2 as the feedback gain (Activation, degradation)

Sensitivity function and Nyquist Diagram


The Nyquist diagram provides an easy way to visualise the stability of a feedback system and to assess the maximum feedback gain that can be applied before the system becomes unstable. It allows us to deduce properties of the closed loop system from properties of the open loop system. It essentially plots the value of the open loop transfer function G(s) in the complex plane for all frequencies from ω = 0 to ω = ∞. Assuming the open loop system is stable, the closed loop system will be stable if the Nyquist diagram has no net encirclement of the −1/K point. In the analysis of our system K = 1, so if there is no encirclement of the −1 point, the closed loop system is stable.


Why is the -1 point significant?

Remember that the denominator of a closed loop transfer function is 1+L(s). If L(s) = −1, the denominator of the system is 0 and the output goes to infinity, so the system is unstable at that specific frequency.

Apart from whether the Nyquist diagram encircles the −1 point, there are other properties of the Nyquist diagram that are important for analysing the robustness of the system: the gain margin, the phase margin and the peak of the sensitivity function. The gain margin is the amount of change in the open-loop gain required to make the closed-loop system unstable, i.e. the distance from the −1 point to the point where the Nyquist diagram crosses the real axis. The phase margin is the amount of change in open-loop phase required to make the closed-loop system unstable; phase is strongly related to the time delay of the system.


In control theory, the sensitivity function is a very important concept and is given by S = (1 + L(s))⁻¹. The properties of the sensitivity function can be obtained by looking at the Nyquist diagram of the system. Essentially, the closer the Nyquist plot of L(jω) is to the point −1, the less robust the system will be when the loop is closed. If there is uncertainty in the parameters such that the process or the controller is larger than we model it to be, the Nyquist diagram will magnify; therefore, the closer the Nyquist diagram is to the −1 point, the less it can magnify before crossing the −1 point, leading to instability. Moreover, the Nyquist diagram also contains phase information: time delay rotates the Nyquist diagram, which can bring some parts of it closer to the −1 point.

The peak of the sensitivity function corresponds to the minimum distance from the −1 point to the Nyquist plot. Therefore, when looking at the sensitivity function, the magnitude of the peak gives us information on robustness: since S = (1 + L(s))⁻¹, the higher S is, the closer L is to −1 at that particular frequency and the more likely the system is to be unstable. Apart from the peak of the sensitivity function, it is also good practice to look at its frequency profile. Ideally we want the sensitivity function to have a lower magnitude at low frequencies, as these are closely associated with reference tracking and disturbance rejection.

The interpretation of gain margin, phase margin and the peak of the sensitivity function is shown in the following diagram.


Figure 31. The relationship between Nyquist Diagram, phase margin, gain margin and sensitivity function

3.7. Looking at the Integrator


As the antithetic circuit acts as an integral controller, looking at the integrator of the system is important. The integrator accumulates the error signal of the system, which in this case is governed by the difference between z1 and z2. In order to look at the integrator, new states have to be created to visualise it.


Therefore, we set:
$$\dot q_1 = \dot z_1 - \dot z_2$$ $$\dot q_2 = \dot z_1 + \dot z_2$$ in order to visualise the integrator (q1).

Activation:

Including the integrator as a state, we get:
$$\dot x = \theta_1 \frac{q_1 + q_2}{2} - \gamma_p x$$ $$\dot q_1 = \mu_1 - \theta_2 x + \gamma_c (z_2 - z_1)$$ $$\dot q_2 = \mu_1 + \theta_2 x - 2\eta z_1 z_2 - \gamma_c(z_1 + z_2)$$
Substituting:
$$ z_1 = \frac{q_1 + q_2}{2}$$ $$ z_2 = \frac{q_2 - q_1}{2}$$ We obtain:
$$\dot x = \theta_1 \frac{q_1 + q_2}{2} - \gamma_p x$$ $$\dot q_1 = \mu_1 - \theta_2 x - \gamma_c q_1$$ $$\dot q_2 = \mu_1 + \theta_2 x - \frac{\eta}{2} (q_{2}^{2} - q_{1}^{2}) - \gamma_c q_2$$
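These integrator coordinates are straightforward to simulate. The sketch below integrates the (x, q1, q2) system with odeint from the unperturbed equilibrium, with the step disturbance d = 0.75θ1 added to the x equation, for both the non-degradation and degradation cases; starting the degradation run from the γc = 0 equilibrium is a simplifying assumption of the sketch.

```python
# Sketch: integrator response of the activation system in (x, q1, q2)
# coordinates, where q1 = z1 - z2 is the integrator state.
import numpy as np
from scipy.integrate import odeint

th1, gp, eta, mu1, th2 = 0.005, 0.00038, 0.018 / 60, 0.7, 0.0005

def rhs(y, t, gc, d):
    x, q1, q2 = y
    return [th1 * (q1 + q2) / 2 - gp * x + d,
            mu1 - th2 * x - gc * q1,
            mu1 + th2 * x - (eta / 2) * (q2**2 - q1**2) - gc * q2]

# Start at the gamma_c = 0 equilibrium and switch on the disturbance at t = 0.
x0 = mu1 / th2
z1, z2 = mu1 * gp / (th1 * th2), th1 * th2 / (eta * gp)
y0 = [x0, z1 - z2, z1 + z2]
t = np.linspace(0.0, 5e4, 5000)
sol_nodeg = odeint(rhs, y0, t, args=(0.0, 0.75 * th1))
sol_deg = odeint(rhs, y0, t, args=(0.00038, 0.75 * th1))
# sol[:, 1] is the integrator q1 in each case.
```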
The response of the integrator to small perturbations of varying magnitude is shown in Figure 32 for the non-degradation case and Figure 33 for the degradation case.

Figure 32. Change of Integrator Response with different values of small disturbance (Activation, non-degradation)
Figure 33. Change of Integrator Response with different values of small disturbance (Activation, degradation)
Like above, to understand the system better, we give the system a larger perturbation to study the response of the integrator. Figure 34 shows the integrator response for the system without degradation while Figure 35 shows the integrator response for the system with degradation.
Figure 34. Change of Integrator Response with different values of disturbance (Activation, non-degradation)
Figure 35. Change of Integrator Response with different values of disturbance (Activation, degradation)
Just as we saw in the system response, the integrator reaches steady state much faster in the degradation case than in the non-degradation case.

Repression:

For the repression case, the system of ODEs in the integrator coordinates becomes:
$$\dot x = \frac{\alpha}{\theta_1 \frac{q_1 + q_2}{2} + 1} - \gamma_p x$$ $$\dot q_1 = \mu_1 - \frac{\mu_2}{\theta_2 x + 1} - \gamma_c q_1$$ $$\dot q_2 = \mu_1 + \frac{\mu_2}{\theta_2 x + 1} - \frac{\eta}{2}(q_{2}^{2}-q_{1}^{2}) - \gamma_c q_2$$ where q1 is the integrator.
The response of the integrator to small perturbations of varying magnitude is shown in Figure 36 for the non-degradation case and Figure 37 for the degradation case.

Figure 36. Change of Integrator Response with different values of small disturbance (Repression, non-degradation)
Figure 37. Change of Integrator Response with different values of small disturbance (Repression, degradation)
Like above, to understand the system better, we give the system a larger perturbation to study the response of the integrator. Figure 38 shows the integrator response for the system without degradation while Figure 39 shows the integrator response for the system with degradation.
Figure 38. Change of Integrator Response with different values of disturbance (Repression, non-degradation)
Figure 39. Change of Integrator Response with different values of disturbance (Repression, degradation)
Just as we saw in the system response, the integrator reaches steady state much faster in the degradation case than in the non-degradation case.

3.8. Comparison with non-linearised system


As seen above, we analyse the system through linearisation. Analysing the system response of the linearised system is only justified if the dynamics remain close to the equilibrium point about which we linearised. Here we show how different the responses of the linearised and non-linearised systems can be if the disturbance is set too high, pushing the system away from the equilibrium fixed point.

Let’s first look at the response of the system when the disturbance is d = 0.00375, for both the linearised and the non-linearised system. We can see that the system responses are very similar, and thus the analysis of the system response with the linearised system is justified.


Figure 40. Comparison between the linearised and non-linearised system at small disturbances

Let’s now look at the response of the system when the disturbance is d = 0.525. We can see that the system responses of the linearised and the non-linearised system are very different.

Figure 41. Comparison between the linearised and non-linearised system at high disturbances

From this, we can see how linearisation of the system makes calculations easier and allows us to apply the control theory toolbox to analyse the stability of the system about the point we linearised at. However, we should bear in mind that if the disturbance is big enough to push the system away from the equilibrium, analysing the system response of the linear system is no longer valid, as the linear system’s behaviour may deviate substantially from the non-linear system’s behaviour.

It is also important to note that throughout this control theory analysis we use only 3 ODEs to describe the system, nesting various dynamics into single variables. In real biological systems there are multiple intermediate steps: for instance, the parameter θ1 represents the action of z1 on the promoter, which affects the transcription rate, but the strength of the RBS, and thus the translation rate, also plays a role. To make our model more realistic, we could add extra ODEs to describe these dynamics, but this would make the model more complicated and more difficult to analyse.

3.9. Outlook for the application of control theory in synthetic biology

In this part, we have discussed one of the recent advances in synthetic biology: using an antithetic integral controller to eliminate steady-state error. We have applied the classical control theory toolbox to design robust genetic circuits, making them more reliable in the presence of noise and less sensitive to variability. Over the past years, genetic-circuit-based controllers have evolved from the single-transcription-unit level (e.g. autoregulatory feedback), to the single-cell level (e.g. the incoherent feedforward loop and the antithetic integral controller), to the control of single or multiple cell populations. More recently, in-silico methods have been investigated for building more sophisticated controllers. Given this evolution, we are hopeful that control theory will be applied even more widely in synthetic biology in the future.

4. Difficulties in Parameter Analysis

To determine the parameters of the model, we searched the relevant literature. We first referred to Aoki et al. (2019) for the values used in their model. μ is a collective parameter that includes promoter activity and RBS strength; since we only know the relative strengths of our promoters and RBSs, it is difficult to find appropriate values to put into the model. For θ, not only do the transcription and translation rates matter, but also how readily the species binds its promoter, which requires the Kd of the system. We were able to find translation rates on the iGEM website for the RBSs B0032m (medium) and B0034 (strong), but not for B0033. As for promoters, strengths are always reported either relatively or in fluorescence units; without knowing the conditions under which a promoter was characterised, we cannot convert its activity into a transcription rate. We did, however, come across a paper by Kannan et al. (2018) that shows how to convert fluorescence into transcription rates. To perform this conversion, the translation rate, biomass, time-series fluorescence data, maturation time of the fluorescent protein and degradation rates of all species must be known; due to time constraints, we were unable to carry out this characterisation ourselves.
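To make the idea concrete, the sketch below shows a simplified version of the reporter-deconvolution calculation described by Kannan et al. (2018); the fluorescence trace and all rate constants here are placeholders, and the full method also requires the biomass and translation-rate data that we lacked.

```python
import numpy as np

# Placeholder inputs: a synthetic fluorescence trace and guessed rates.
t = np.linspace(0, 600, 121)            # time (min)
F = 1000 * (1 - np.exp(-t / 120.0))     # fluorescence time series (placeholder)
gamma = 0.01                            # degradation + dilution rate (1/min)
k_mat = 0.05                            # fluorophore maturation rate (1/min)

# One-step model (ignoring maturation): dF/dt = a(t) - gamma * F,
# so promoter activity a(t) = dF/dt + gamma * F.
dFdt = np.gradient(F, t)
a_simple = dFdt + gamma * F

# With maturation: dF/dt = k_mat * P - gamma * F gives the dark-protein
# pool P, and dP/dt = a(t) - (gamma + k_mat) * P then recovers a(t).
P = (dFdt + gamma * F) / k_mat
a_corrected = np.gradient(P, t) + (gamma + k_mat) * P

# a(t) is still in fluorescence units per minute; converting it into
# transcripts per minute needs the translation rate and a fluorescence
# calibration, which is exactly the characterisation data we were missing.
```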

We also attempted to use the Promoter Calculator from the Salis Lab to find the strengths of the inducible promoters without the activator or repressor bound. Rhodius et al. (2013) characterise the fold change in fluorescence upon sigma-factor binding; similarly, for species X, Meyer et al. (2019) clearly characterise the Kd and the relative promoter units of the promoters with and without activators and repressors bound. We tried to use the fold change to obtain the promoter strength with the activator/repressor bound, and then compared the result with the well-characterised Anderson promoters to obtain relative strengths for the promoters we are using. However, this method yielded counter-intuitive results, in which a promoter bound by its activator appears far stronger than a promoter whose repressor is unbound.
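A minimal sketch of the arithmetic we attempted is shown below; every number is a placeholder rather than data from the cited papers.

```python
# Placeholder numbers, illustrating the calculation only.
basal_strength = 0.8   # unbound promoter strength from the Promoter Calculator (a.u.)
fold_change = 50.0     # bound/unbound fluorescence ratio from the literature
anderson_ref = 2.5     # strength of a reference Anderson promoter (same a.u.)

bound_strength = basal_strength * fold_change
relative_strength = bound_strength / anderson_ref
print(f"relative strength vs Anderson reference: {relative_strength:.1f}")
# It was this ratio that came out implausibly large for activated promoters,
# suggesting the two data sources are not on a common scale.
```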

These attempts show how difficult it is to assign parameter values in our model, especially when a part is not well characterised. We believe that in the future, given enough time, we could characterise the parts ourselves and compare them with well-characterised promoters to obtain a more reasonable range of parameters for the specific promoters we are using.

5. References

  1. Briat C, Khammash M. Perfect Adaptation and Optimal Equilibrium Productivity in a Simple Microbial Biofuel Metabolic Pathway Using Dynamic Integral Control. ACS Synth Biol. 2018 Feb 16;7(2):419-431. doi: 10.1021/acssynbio.7b00188. Epub 2018 Jan 30. PMID: 29343065.

  2. Aoki SK, Lillacci G, Gupta A, Baumschlager A, Schweingruber D, Khammash M. A universal biomolecular integral feedback controller for robust perfect adaptation. Nature. 2019 Jun;570(7762):533-537. doi: 10.1038/s41586-019-1321-1. Epub 2019 Jun 19. PMID: 31217585.

  3. Briat C, Gupta A, Khammash M. Antithetic Integral Feedback Ensures Robust Perfect Adaptation in Noisy Biomolecular Networks. Cell Syst. 2016 Jan 27;2(1):15-26. doi: 10.1016/j.cels.2016.01.004. Epub 2016 Jan 27. PMID: 27136686.

  4. Emmert-Streib F, Dehmer M, Haibe-Kains B. Gene regulatory networks and their applications: understanding biological and medical problems in terms of networks. Front Cell Dev Biol. 2014 Aug 19;2:38. doi: 10.3389/fcell.2014.00038. PMID: 25364745; PMCID: PMC4207011.

  5. Yang B, Chen Y. Overview of Gene Regulatory Network Inference Based on Differential Equation Models. Curr Protein Pept Sci. 2020;21(11):1054-1059. doi: 10.2174/1389203721666200213103350. PMID: 32053072.

  6. Milo R, Jorgensen P, Moran U, Weber G, Springer M. BioNumbers: the database of key numbers in molecular and cell biology. Nucleic Acids Res. 2010;38(suppl 1):D750-D753. (BioNumbers Database)

  7. Ehsan Elahi F, Hasan A. A method for estimating Hill function-based dynamic models of gene regulatory networks. R Soc Open Sci. 2018 Feb 21;5(2):171226. doi: 10.1098/rsos.171226. PMID: 29515843; PMCID: PMC5830732.

  8. Stricker, J., Cookson, S., Bennett, M. et al. A fast, robust and tunable synthetic gene oscillator. Nature 456, 516–519 (2008). https://doi.org/10.1038/nature07389

  9. Woodrow P, Ciarmiello LF, Annunziata MG, Pacifico S, Iannuzzi F, Mirto A, D'Amelia L, Dell'Aversana E, Piccolella S, Fuggi A, Carillo P. Durum wheat seedling responses to simultaneous high light and salinity involve a fine reconfiguration of amino acids and carbohydrate metabolism. Physiol Plant. 2017 Mar;159(3):290-312. doi: 10.1111/ppl.12513. Epub 2016 Oct 19. PMID: 27653956.

  10. Alon, U. An introduction to systems biology : design principles of biological circuits, Chapman & Hall/CRC, Boca Raton, FL, 2007

  11. Meyer, A.J., Segall-Shapiro, T.H., Glassey, E. et al. Escherichia coli “Marionette” strains with 12 highly optimized small-molecule sensors. Nat Chem Biol 15, 196–204 (2019). https://doi.org/10.1038/s41589-018-0168-3

  12. Rhodius VA, Segall-Shapiro TH, Sharon BD, Ghodasara A, Orlova E, Tabakh H, Burkhardt DH, Clancy K, Peterson TC, Gross CA, Voigt CA. Design of orthogonal genetic switches based on a crosstalk map of σs, anti-σs, and promoters. Mol Syst Biol. 2013 Oct 29;9:702. doi: 10.1038/msb.2013.58. PMID: 24169405; PMCID: PMC3817407.

  13. MathWorks (2022). SimBiology Documentation. https://uk.mathworks.com/help/simbio/index.html?s_tid=CRUX_lftnav

  14. Ullah M, Schmidt H, Cho KH, Wolkenhauer O. Deterministic modelling and stochastic simulation of biochemical pathways using MATLAB. Syst Biol (Stevenage). 2006 Mar;153(2):53-60. doi: 10.1049/ip-syb:20050064. PMID: 16986253.

  15. A. Saltelli, M. Ratto, T. Andres, F. Campolongo, J. Cariboni, D. Gatelli, M. Saisana, S. Tarantola, Global Sensitivity Analysis: The Primer, 2007, DOI:10.1002/9780470725184

  16. Nguyen Quang M, Rogers T, Hofman J, Lanham AB. Global Sensitivity Analysis of Metabolic Models for Phosphorus Accumulating Organisms in Enhanced Biological Phosphorus Removal. Front Bioeng Biotechnol. 2019 Oct 4;7:234. doi: 10.3389/fbioe.2019.00234. PMID: 31637235; PMCID: PMC6787149.

  17. C. Graham, D. Talay, Stochastic Simulation and Monte Carlo Methods, 2013

  18. Gillespie DT. Exact Stochastic Simulation of Coupled Chemical Reactions. J Phys Chem. 1977;81(25):2340-2361. doi: 10.1021/j100540a008.

  19. Kannan S, Sams T, Maury J, Workman CT. Reconstructing Dynamic Promoter Activity Profiles from Reporter Gene Data. ACS Synth Biol. 2018;7. doi: 10.1021/acssynbio.7b00223.