Team:Grenoble-EMSE-LSU/Project/Modelling/Parameters

<li id="titre">
<h1>Finding parameters</h1>
<p> Our model now takes into account the maturation of KillerRed and the accumulation of damage done to the bacteria. It is able to describe the evolution of all three observed quantities: the optical density of the suspension, its fluorescence and the density of living cells. But we still have to find parameter values that allow the model to reproduce the experimental data and to be used for simulations.</p>
<p> There are 6 parameters to find:</p>
<p> $r$: the growth rate of the bacteria, in $min^{-1}$</p>
<p> $a$: the production rate of KillerRed per bacterium, in $RFU.OD^{-1}.min^{-1}$</p>
<p> $b$: the efficiency of photobleaching, in $RFU.UL^{-1}.min^{-1}$</p>
<p> $m$: the maturation rate of KillerRed, in $min^{-1}$</p>
<p> $k$: the toxicity of KillerRed, in $OD.RFU^{-1}.UL^{-1}.min^{-1}$</p>
<p> $l$: the healing rate of the bacteria per time step (unitless)</p>
<br>
<p> With the units:</p>
<p> $OD$ is the Optical Density at $\lambda = 600nm$</p>
<p> $RFU$ stands for Relative Fluorescence Units (with $\lambda_{absorption}=585nm$ and $\lambda_{emission}=610nm$)</p>
<p> $UL$ is an arbitrary Unit of Light, related to the energy received by the bacteria: $1 UL$ is defined as the energy of light received by an Erlenmeyer flask with an MR16 LED on its side at full power.</p>
<br>
<p>The aim is to find the set of parameters that best fits the observed $OD_{600}$ and fluorescence curves. As we cannot determine them separately, because they have opposite effects, we searched for the set of parameters that minimizes the distance between the outputs of the model and the experimental data. The distance chosen is the Euclidean distance: the Sum of Squared Residuals, or SSR (a minimal sketch of this objective function is given after the list below). In our case, the easiest and quickest methods are unusable:</p>
<p>$\diamond$ A regression requires the solutions to be analytic functions, such as polynomials or exponentials, onto which the data points can be projected.</p>
<p>$\diamond$ Gradient or Newton methods require a regularity in the effect of the parameters that our model does not have.</p>
<p>$\diamond$ The technique of experimental design is not usable for the same reason.</p>
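<p>Here is the sketch of the SSR objective function announced above, in Python. The <code>simulate</code> routine, which stands for the numerical integration of the model, and the variable names are assumptions made for this illustration only.</p>
<pre>
import numpy as np

def ssr(params, t, od_data, fluo_data, simulate):
    """Sum of Squared Residuals between the model outputs and one experiment.

    `params` is a candidate set (r, a, b, m, k, l); `simulate(params, t)` is
    assumed to return the modelled OD and fluorescence at the measurement
    times `t`.  In practice the two terms may need rescaling so that neither
    signal dominates the other.
    """
    od_model, fluo_model = simulate(params, t)
    return np.sum((od_model - od_data) ** 2) + np.sum((fluo_model - fluo_data) ** 2)
</pre>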
<br>
<p>We therefore used an alternative method based on genetic algorithms.</p>
                                 <li>
                           <h2 id="AlgoGen">Genetic Algorithms</h2>
<p>At first sight, the only way to find our parameters was to adjust them manually until the model predictions fitted the experimental data. This was not a slow method, since we could anticipate how the output of the calculations would change when we varied each parameter. However, it gave no guarantee that the best solution had been found. That guarantee can be obtained by an exhaustive search, but this is a rather long process: to test 10 values for each parameter, $10^6$ tests are needed, and each test consists of the calculation of 1000 points. For a standard computer, this represents 2 hours of continuous processing. Considering that 10 values are not enough to get a precise answer, this approach would have been difficult to use in practice.</p>
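<p>A quick back-of-the-envelope check of this cost (illustrative Python, simply restating the numbers above):</p>
<pre>
n_values, n_params = 10, 6                  # 10 candidate values per parameter
n_tests = n_values ** n_params              # 1,000,000 parameter sets to test
points_per_test = 1000                      # points computed for each test
total_points = n_tests * points_per_test    # 10^9 points overall
print(n_tests, total_points)
</pre>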
<p>This is the reason why we used genetic algorithms.</p>
<br>
<p> The idea of a genetic algorithm is based on the evolution of a wild population and the natural selection of the phenotypes best adapted to their environment. Here, a phenotype is a set of parameters, and the measure of adaptation is the distance between the predicted kinetics and the observed kinetics.</p>
<p> 1. First we start with a randomly chosen population (not entirely random, in order to accelerate the process).</p>
<p> 2. The best ones, those that minimize the distance between predictions and observations, are selected.</p>
<p> 3. From these best ones, new phenotypes are created by mixing parameter values (crossing-over) and slightly modifying some of them (mutations).</p>
<p> 4. We now have a second-generation population. If all its members are close enough to the solution (i.e. the distance between predictions and observations is small enough), the algorithm is considered 'stabilized', the best phenotype is chosen and the process stops. If not, the algorithm goes back to step 2 with these new phenotypes.</p>
<br>
<p> For 6 parameters, this genetic algorithm works well with a population of 21 phenotypes, of which 6 are selected to breed the next generation. To be sure to explore many sets of parameters, we do not simply keep the 6 best ones: we randomly choose 5 out of the 6 best and 1 out of the 15 others. This makes the process longer but produces better solutions.</p>
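<p>Such an algorithm can be sketched as below (in Python). The population size and the selection scheme are the ones described above; the initial bounds, the mutation size and the stopping threshold are illustrative assumptions, not the exact settings we used.</p>
<pre>
import random

def genetic_fit(fitness, bounds, pop_size=21, n_best=6, tol=1e-3, max_gen=500):
    """Fit the 6 model parameters by minimising `fitness` (e.g. the SSR above).

    `bounds` is a list of (low, high) intervals used to draw the initial
    population.  The selection scheme follows the text: 6 breeders = 5 drawn
    at random among the 6 best + 1 among the 15 others.
    """
    population = [[random.uniform(lo, hi) for lo, hi in bounds]
                  for _ in range(pop_size)]
    for _ in range(max_gen):
        ranked = sorted(population, key=fitness)
        if fitness(ranked[-1]) < tol:        # every phenotype is close enough
            break
        breeders = (random.sample(ranked[:n_best], n_best - 1)
                    + random.sample(ranked[n_best:], 1))
        population = []
        while len(population) < pop_size:    # breed the next generation
            mother, father = random.sample(breeders, 2)
            child = [random.choice(genes) for genes in zip(mother, father)]  # crossing-over
            i = random.randrange(len(child))                                 # mutation
            child[i] *= random.uniform(0.9, 1.1)
            population.append(child)
    return min(population, key=fitness)      # the best phenotype is chosen
</pre>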
<p>As this algorithm is not deterministic, the only way to compare it to the exhaustive search is to sit in front of the computer with a stopwatch. The gain is substantial: in only five minutes, we obtain a much more precise result.</p>
</li>
                                 <li>
                 <h2 id="Results">Results</h2>
<p> We ran 5 experiments to find our parameters, all with the same <a href="/Team:Grenoble-EMSE-LSU/Project/Biology#KRcharac">start procedure</a> but with different kinds of illumination:</p>
<p> $\bullet$ In the first 3, the light is switched off for the first 200 minutes, then switched on at full power.</p>
<p> $\bullet$ In the last 2, the light is switched off for the first 180 minutes, then switched on at full power; the light intensity is then modified several times, with the help of the model predictions, in order to stabilize the bacterial growth. We succeeded in stabilizing the last experiment, but not the other one.</p>
<p> The first 3 experiments also allowed us to check that our procedure was repeatable. The other two, with variable light, allowed us to verify that the parameters are independent of variations in the environment and to show that the model reacts properly to different light intensities.</p>
<br>
<p> For each experiment, we searched for the best parameters.</p>
<br>
 <th> mean </th>
 <th> Standard Deviation </th>
 <th> Coefficient of Variation $\left(\frac{\sigma}{\mu}\text{, in }\%\right)$ </th>
  </tr>
  <tr>
 <td> 95 </td>
 <td> 82.8 </td>
 <td style="text-align:center"> 7.36 </td>
 <td style="background-color:Aquamarine;text-align:center"> 8.89 </td>
  </tr>
  <tr>
 <td> 100 </td>
 <td> 132 </td>
 <td style="text-align:center"> 34.0 </td>
 <td style="text-align:center"> 25.7 </td>
  </tr>
  <tr>
 <td> 0.7 </td>
 <td> 0.86 </td>
 <td style="text-align:center"> 0.439 </td>
 <td style="background-color:Gold;text-align:center"> 51.1 </td>
  </tr>
  <tr>
 <td> 100 </td>
 <td> 108 </td>
 <td style="text-align:center"> 11.0 </td>
 <td style="background-color:Aquamarine;text-align:center"> 10.1 </td>
  </tr>
  <tr>
 <td> 0.5 </td>
 <td> 0.87 </td>
 <td style="text-align:center"> 0.538 </td>
 <td style="background-color:Gold;text-align:center"> 61.8 </td>
  </tr>
  <tr>
 <td> 0.011 </td>
 <td> 0.0166 </td>
 <td style="text-align:center"> 0.00979 </td>
 <td style="background-color:Gold;text-align:center"> 59.0 </td>
  </tr>
</table>
<br>
<p>$M$ and $R$ are not the variables used in the equations, but are directly related to them:</p>
<p> $R$ is the division time of the bacteria, so: $r=\frac{\ln(2)}{R}$.</p>
<p> $M$ is the time at which half of the KillerRed proteins have matured, so: $m=\frac{\ln(2)}{M}$.</p>
<p> $M$ and $R$ are both actual times (in $min$) and make the figures easier to understand.</p>
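<p>Converting such characteristic times into the rates used by the equations is straightforward (the numerical values below are purely illustrative, not the fitted ones):</p>
<pre>
from math import log

def rate_from_half_time(half_time):
    """Convert a division time R or a maturation half-time M (in min)
    into the corresponding rate r or m (in min^-1)."""
    return log(2) / half_time

r = rate_from_half_time(80.0)     # e.g. R = 80 min  ->  r ~ 0.0087 min^-1
m = rate_from_half_time(110.0)    # e.g. M = 110 min ->  m ~ 0.0063 min^-1
</pre>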
<p> In the table, $1-l$ appears rather than $l$, because $l$ has a huge effect on the predictions when $l\rightarrow1$. Writing it this way gives better control over its variations: a variation of 1% of $l$ can make it exceed the value $1$, which is meaningless, whereas even a variation of 10% of $1-l$ does not make it cross any limit value.</p>
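<p>A small numerical illustration of this point, with a hypothetical value of $l$ close to 1:</p>
<pre>
l = 0.995                 # hypothetical healing parameter close to 1
q = 1.0 - l               # the quantity actually varied by the algorithm

print(l * 1.01)           # 1.00495 : a mere +1% variation pushes l past 1
print(1.0 - q * 1.10)     # 0.9945  : a +10% variation of (1 - l) keeps l below 1
</pre>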
<br>
<p>Analyzing the table shows that $M$ and $R$ are the least variable parameters. In contrast, $b$, $k$ and $l$, the parameters that characterize KillerRed photobleaching, phototoxicity and damage accumulation, exhibit large variations. These parameters should therefore be determined in each experiment, by a standard procedure described <a href="https://2013.igem.org/Team:Grenoble-EMSE-LSU/Project/Modelling/Density#MCBP">here</a>.</p>
<p> The mean values are the ones used for the predictions, except for the 3 volatile parameters: their mean values are used to begin with, and they are then adjusted during the experiment whenever the precision of their estimates can be improved.</p>
