Team:Grenoble-EMSE-LSU/Project/Modelling/Parameters

<p> $OD$ is the Optical Density at $\lambda = 600\,nm$.</p>
<p> $UF$ is an arbitrary Unit of Fluorescence (with $\lambda_{absorption}=585\,nm$ and $\lambda_{emission}=610\,nm$).</p>
<p> $UL$ is an arbitrary Unit of Light, related to the energy received by the bacteria. $1\,UL$ shall be the energy of light received by an Erlenmeyer flask with an MR16 LED on its side at full power.</p>
<br>
<p>The aim is to find the set of parameters that best explains the observed OD and fluorescence curves. As the parameters cannot be determined separately, because they have opposite effects, we search for the set that minimizes the distance between the outputs of the model and the experimental data. The distance chosen is the Euclidean one: the Sum of Squared Residuals, or SSR.</p>
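<p>Explicitly, writing $y_i$ for the measured points and $\hat{y}_i(\theta)$ for the model predictions under a parameter set $\theta$ (notation introduced here only for illustration), the quantity minimized is $SSR(\theta)=\sum_i \left(y_i-\hat{y}_i(\theta)\right)^2$, summed over both the OD and the fluorescence curves.</p>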
<p>In our case, the easiest and quickest methods are unusable.</p>
                           <h2>Genetic Algorithms</h2>
<p>At first sight, the only way to find our parameters was to adjust them by hand until the predictions seemed good enough. This was not especially slow, since we could anticipate how the output of the calculations would change when varying each parameter, but it gave no guarantee that the solution found was the best one. Such a guarantee can be obtained by an exhaustive search, but that is a long process: to test 10 values of each of our six parameters, $10^6$ combinations are needed, and each one requires the calculation of 1000 points, i.e. $10^9$ point evaluations in total. For a standard computer, this represents about 2 hours of continuous processing. Since 10 values per parameter are in fact not enough for a precise answer, an exhaustive search would have been impractical.</p>
<p>That's why we used genetic algorithms.</p>
<br>
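<p>The sketch below illustrates the principle in Python; it is a minimal example, not our actual implementation: the exponential model, the synthetic data, the population size and the parameter bounds are placeholders. Candidate parameter sets are scored by their SSR, the best ones are kept, and new candidates are produced from them by crossover and mutation.</p>
<pre>
# Minimal genetic-algorithm sketch: fit model parameters by minimising
# the SSR between model output and data. The exponential "model" and the
# synthetic data below stand in for the real ODE model and measurements.
import math
import random

random.seed(0)

times = [10 * i for i in range(20)]                  # synthetic time points
data = [0.05 * math.exp(0.02 * t) for t in times]    # synthetic measurements

BOUNDS = [(0.0, 1.0), (0.0, 0.1)]                    # assumed range per parameter

def model(params, t):
    y0, k = params
    return y0 * math.exp(k * t)

def ssr(params):
    # Sum of Squared Residuals between model predictions and data.
    return sum((model(params, t) - y) ** 2 for t, y in zip(times, data))

def random_individual():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def crossover(a, b):
    # Each gene is taken at random from one of the two parents.
    return [random.choice(genes) for genes in zip(a, b)]

def mutate(ind, rate=0.3):
    # Each gene is redrawn inside its bounds with probability `rate`.
    return [random.uniform(lo, hi) if random.random() < rate else gene
            for gene, (lo, hi) in zip(ind, BOUNDS)]

population = [random_individual() for _ in range(50)]
for generation in range(200):
    population.sort(key=ssr)                         # lowest SSR first
    parents = population[:10]                        # elitist selection
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(40)]
    population = parents + children

best = min(population, key=ssr)
print("best parameters:", best, "SSR:", ssr(best))
</pre>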
  <th> exp 4 </th>
  <th> exp 5 </th>
  <th> Mean </th>
  <th> Standard Deviation </th>
  <th> Coefficient of Variation (%) </th>
  </tr>
  <tr>
  <td> 81 </td>
  <td> 95 </td>
  <td> 82.8 </td>
  <td> 7.36 </td>
  <td> 8.89 </td>
  </tr>
  <tr>
  <td> 150 </td>
  <td> 170 </td>
  <td> 100 </td>
  <td> 132 </td>
  <td> 34.0 </td>
  <td> 25.7 </td>
  </tr>
  <tr>
  <td> 0.8 </td>
  <td> 0.3 </td>
  <td> 0.7 </td>
  <td> 0.86 </td>
  <td> 0.439 </td>
  <td> 51.1 </td>
  </tr>
  <tr>
  <td> 100 </td>
  <td> 120 </td>
  <td> 100 </td>
  <td> 108 </td>
  <td> 11.0 </td>
  <td> 10.1 </td>
  </tr>
  <tr>
  <td> 1.1 </td>
  <td> 0.15 </td>
  <td> 0.5 </td>
  <td> 0.87 </td>
  <td> 0.538 </td>
  <td> 61.8 </td>
  </tr>
  <tr>
  <th> $1-l$ </th>
  <td> 0.004 </td>
  <td> 0.02 </td>
  <td> 0.03 </td>
  <td> 0.018 </td>
  <td> 0.011 </td>
  <td> 0.0166 </td>
  <td> 0.00979 </td>
  <td> 59.0 </td>
  </tr>
</table>
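<p>The last three columns can be recomputed from the five estimates of each parameter; below is a minimal Python sketch for the $1-l$ row, assuming the sample standard deviation (normalised by $n-1$), which matches the figures above.</p>
<pre>
from statistics import mean, stdev

values = [0.004, 0.02, 0.03, 0.018, 0.011]  # the five estimates of 1-l

m = mean(values)       # 0.0166
sd = stdev(values)     # 0.00979, sample standard deviation (n-1)
cv = 100 * sd / m      # 59.0, coefficient of variation in %

print(m, sd, cv)
</pre>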
<p> $M$ is the time at which half of the KillerRed proteins have matured, and so $m=\frac{\ln(2)}{M}$.</p>
<p> They are both actual times (in $min$) and make the figures easier to understand.</p>
<br>
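<p>As an illustration (the value here is hypothetical), a half-maturation time of $M=30\,min$ would give a maturation rate $m=\frac{\ln(2)}{30}\approx 0.023\,min^{-1}$.</p>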