Team:Freiburg/Project/modeling

From 2013.igem.org

Revision as of 10:56, 30 September 2013 by Lisa schaefer


Modeling uniCas

Introduction

We used a thermodynamic approach to model and characterize our system. It is based on ordinary differential equations (ODEs) that describe the behaviour of our network. Because of the limited measurement possibilities, and the rule of thumb that you should measure at least half of the components of your network, we started with a small network containing a limited number of components.
The components are: the Cas9 protein, the reporter protein (SEAP), and two RNAs (tracrRNA and crRNA).
The two RNAs (tracrRNA and crRNA) are modeled as one complex, so the model cannot distinguish between them. There might be differences in their expression, especially because of the different promoters (crRNA expressed from the U6 promoter; tracrRNA expressed from the H1 promoter), but the model will not resolve them.

The Network

uniCAS-VP16

Our network includes four components: uniCas-VP16, an RNA complex (cr/tracrRNA), an RNA-uniCas-VP16 complex, and the secreted alkaline phosphatase (SEAP). UniCas-VP16 binds the RNA complex, and the resulting complex binds the DNA, which leads to the production of SEAP.

Fig. 1: Transcriptional activation via uniCAS-VP16: The uniCAS-VP16 fusion protein is guided to the desired DNA sequence by a co-expressed crRNA, which binds the tracrRNA. The binding leads to expression of SEAP.

Setting up the ODE

Following the graphical reaction network, the ODEs can be set up.

Cas9 is constitutively expressed from the CBh promoter and degraded proportionally to its current concentration. It is consumed to build the DNA recognition complex and released during complex decay.

d[Cas]/dt = k1 - k2*[Cas] - k5*[Cas]*[tr,crRNA] + k6*[tr,crRNA,Cas]

The RNA complex is produced at a constant rate. The production constant can be seen as the production constant of the less strongly expressed RNA, because this expression limits complex formation. It is assumed that the RNA is degraded after decay of the DNA recognition complex, so complex decay does not release more RNA.

d[tr,crRNA]/dt = k3 - k4*[tr,crRNA] - k5*[tr,crRNA]*[Cas]

The complex is formed when Cas9 and the RNA complex meet, and it is degraded proportionally to the current DNA recognition complex concentration.

d[tr,crRNA,Cas]/dt = k5*[Cas]*[tr,crRNA] - k6*[tr,crRNA,Cas]

There is a leaky SEAP production and a production that depends on the current concentration of the Cas9/RNA complex. This dependency is assumed to follow Michaelis-Menten kinetics. Because of the long half-life of SEAP (t1/2 > 500 h), we can neglect SEAP decay (Müller et al., 2013).

d[SEAP]/dt = k7 + k8*([tr,crRNA,Cas]/(k9+[tr,crRNA,Cas]))

The parameters are:
k1: linear Cas9 production rate
k2: Cas9 degradation rate
k3: cr/trRNA production rate
k4: cr/trRNA degradation rate
k5: cr/trRNA/Cas9 complex formation rate
k6: cr/trRNA/Cas9 complex degradation rate
k7: leaky SEAP production rate
k8: Cas9 complex-dependent SEAP production rate
k9: Michaelis-Menten constant
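The four ODEs above can be integrated numerically. Below is a minimal sketch using SciPy; the parameter values for k1 to k9 are purely illustrative placeholders, not fitted values.

```python
# Sketch: numerically integrating the four-component ODE model.
# k1..k9 are illustrative placeholders, not the fitted parameters.
import numpy as np
from scipy.integrate import solve_ivp

k1, k2, k3, k4, k5, k6, k7, k8, k9 = 1.0, 0.1, 1.0, 0.1, 0.5, 0.2, 0.01, 1.0, 0.5

def rhs(t, y):
    cas, rna, cplx, seap = y
    d_cas = k1 - k2 * cas - k5 * cas * rna + k6 * cplx   # Cas9
    d_rna = k3 - k4 * rna - k5 * cas * rna               # cr/trRNA complex
    d_cplx = k5 * cas * rna - k6 * cplx                  # DNA recognition complex
    d_seap = k7 + k8 * cplx / (k9 + cplx)                # SEAP (no decay term)
    return [d_cas, d_rna, d_cplx, d_seap]

# Start from zero concentrations and simulate 48 hours.
sol = solve_ivp(rhs, (0.0, 48.0), [0.0, 0.0, 0.0, 0.0])
```

With any positive parameter set, SEAP accumulates monotonically, since its derivative is always at least the leak rate k7.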

Finding the parameters

Up to this point, the model is only a theoretical description that fits any four-component system connected in the given way, because the parameter values are not yet defined. There is therefore an n-dimensional hypothesis space (n being the number of parameters) which is assumed to contain the hypothesis that fits the Cas9 system. Finding this hypothesis means finding the right parameters.

To find these parameters we use the maximum likelihood method. The maximum likelihood hypothesis is the one with the highest probability of generating the measured data. It has been shown (Müller et al., 2013) that the maximum likelihood approach, under the assumption of Gaussian noise in the data, leads to a least-squares error minimization problem.
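The equivalence can be illustrated directly: for Gaussian noise with fixed standard deviation sigma, the negative log-likelihood differs from the scaled sum of squared errors only by a constant, so both rank hypotheses identically. A minimal sketch (function names and toy numbers are ours, not from the measurement setup):

```python
import numpy as np

def neg_log_likelihood(model, data, sigma):
    # Gaussian noise: -log L = (n/2) * log(2*pi*sigma^2) + SSE / (2*sigma^2).
    # The first term is constant in the parameters, so minimizing -log L
    # is the same as minimizing the sum of squared errors (SSE).
    n = len(data)
    sse = np.sum((model - data) ** 2)
    return 0.5 * n * np.log(2 * np.pi * sigma ** 2) + sse / (2 * sigma ** 2)

def sum_squared_error(model, data):
    return np.sum((model - data) ** 2)
```

A model curve with a smaller squared error therefore always has a higher likelihood, whatever the (fixed) noise level.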

A minimization problem is an optimization problem: we search for parameters p0 such that the value of the objective function f at p0 is smaller than or equal to its value everywhere else (f(p0) <= f(p)). In three dimensions the function can be pictured as a landscape, and minimization is finding the deepest valley. Depending on the method used, different problems arise; the most common is finding only a local minimum instead of the global one. To avoid this, and to be confident of having found a global minimum, we started our minimization procedure from different start values for the parameters. To sample these start values we used Latin hypercube sampling on a logarithmic scale.
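A minimal sketch of such a multi-start strategy, using SciPy's local minimizer on a toy one-dimensional objective (the function f below is an arbitrary stand-in for the real least-squares error landscape):

```python
import numpy as np
from scipy.optimize import minimize

# Toy objective with several local minima, standing in for the
# least-squares error as a function of the model parameters.
def f(p):
    return np.sin(3 * p[0]) + (p[0] - 0.5) ** 2

# Run a local minimization from many different start values and
# keep the best result; with enough starts, the lowest value found
# is very likely the global minimum.
starts = np.linspace(-3.0, 3.0, 11)
results = [minimize(f, [s]) for s in starts]
best = min(results, key=lambda r: r.fun)
```

Each local run can only end at the valley reachable from its start point; comparing the final errors across all starts is what reveals the global minimum.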

In Latin hypercube sampling, each parameter's range is divided into N equally probable intervals, where N is the number of samples, and exactly one sample is drawn from each interval of each parameter.
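This sampling scheme can be sketched in a few lines; the implementation below is our own illustration (bounds and sample counts are made up), drawing log-uniformly so that each order of magnitude is covered evenly:

```python
import numpy as np

def latin_hypercube_log(n_samples, bounds, seed=None):
    # bounds: list of (low, high) pairs, one per parameter.
    # Each parameter's log-range is split into n_samples equal strata;
    # exactly one point is drawn from each stratum, then the strata are
    # shuffled independently per dimension.
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    # One uniform draw inside each of the n_samples strata of [0, 1).
    u = (rng.random((n_samples, dim)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(dim):
        rng.shuffle(u[:, j])  # decouple the strata across dimensions
    lo = np.log10([b[0] for b in bounds])
    hi = np.log10([b[1] for b in bounds])
    return 10 ** (lo + u * (hi - lo))  # back-transform to linear scale
```

The logarithmic scale matters because rate constants can plausibly span several orders of magnitude; sampling them uniformly on a linear scale would almost never propose small values.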

We plotted the resulting errors in increasing order to check that a global minimum had been found: when many different start values converge to the same lowest error, that error very likely corresponds to the global minimum.