Team:Valencia Biocampus/Demonstration/Diffusion2

Description of the numerical method

The Crank-Nicolson method is obtained by using a forward difference in time ($t_k$) and central differences in space $(x_i, y_j)$. By evaluating our equation at the discrete time and space points $(t_k, x_i, y_j)$, setting the concentration to zero on the boundary (since we have to define some limit for the plate) and taking a specific initial distribution as the initial condition, we arrived at the matrix recursive formula: $$[\mathcal{C}^{k+1}] = \mathcal{M}\;[\mathcal{C}^k]$$ where $\mathcal{M}$ is a block tridiagonal matrix that depends on the parameters and on the velocities in the $x$ and $y$ directions at the mesh points.

This matrix recursive formula is not obvious, so we derived it carefully through the following steps:

  1.- The Crank-Nicolson method states:

$$\; \frac{u^{k+1}_{i,j} - u^{k}_{i,j}}{q}\;= \frac{1}{2} \; \left(F^{k+1}_{i,j} \; \left( u, \;x, \;y, \;t, \;\frac{\partial u}{\partial x}, \;\frac{\partial u}{\partial y}, \;\frac{\partial^2 u}{\partial x^2}, \;\frac{\partial^2 u}{\partial y^2}\right) + F^{k}_{i,j} \; \left( u, \;x, \;y, \;t, \;\frac{\partial u}{\partial x}, \;\frac{\partial u}{\partial y}, \;\frac{\partial^2 u}{\partial x^2}, \;\frac{\partial^2 u}{\partial y^2}\right)\right)$$
Fitting our differential equation into this definition gives the discretized equation that leads to the final numerical solution. Note that we replaced the time derivative by its approximate forward difference and the space derivatives by central differences, where $\;q\;$ and $\;h\;$ are the time and space steps, respectively:

$$\; 2 \; \frac{C^{k+1}_{i,j} - C^{k}_{i,j}}{q}\;= \\ \;D \; \left(\frac{C^{k+1}_{i+1,j} - 2C^{k+1}_{i,j}+C^{k+1}_{i-1,j}}{h^2} + \frac{C^{k+1}_{i,j+1} - 2C^{k+1}_{i,j}+C^{k+1}_{i,j-1}}{h^2}\right) - \left(\frac{\nu_{x_{i+1,j}} - \nu_{x_{i-1,j}}}{2h} \; C^{k+1}_{i,j} + \nu_{x_{i,j}} \; \frac{C^{k+1}_{i+1,j} - C^{k+1}_{i-1,j}}{2h} + \frac{\nu_{y_{i,j+1}} - \nu_{y_{i,j-1}}}{2h} \; C^{k+1}_{i,j} + \nu_{y_{i,j}} \; \frac{C^{k+1}_{i,j+1} - C^{k+1}_{i,j-1}}{2h}\right) \\ + D \; \left(\frac{C^{k}_{i+1,j} - 2C^{k}_{i,j}+C^{k}_{i-1,j}}{h^2} + \frac{C^{k}_{i,j+1} - 2C^{k}_{i,j}+C^{k}_{i,j-1}}{h^2}\right) - \left(\frac{\nu_{x_{i+1,j}} - \nu_{x_{i-1,j}}}{2h} \; C^{k}_{i,j} + \nu_{x_{i,j}} \; \frac{C^{k}_{i+1,j} - C^{k}_{i-1,j}}{2h} + \frac{\nu_{y_{i,j+1}} - \nu_{y_{i,j-1}}}{2h} \; C^{k}_{i,j} + \nu_{y_{i,j}} \; \frac{C^{k}_{i,j+1} - C^{k}_{i,j-1}}{2h}\right)$$
Since there were so many terms, we reorganised the equation, placing all the unknowns ($C^{k+1}$) on the left-hand side and the knowns ($C^{k}$) on the right-hand side. We also defined the constants $\;\lambda \; = \; \frac{q \; D}{2 \; h^2}\;$ and $\;\mu \; = \; \frac{q}{4 \; h}\;$, and abbreviated the discretized velocity-derivative terms as $\;\nu^{(1,0)}_{{x}_{i,j}} = \mu \left(\nu_{x_{i+1,j}} - \nu_{x_{i-1,j}}\right)\;$ and $\;\nu^{(0,1)}_{{y}_{i,j}} = \mu \left(\nu_{y_{i,j+1}} - \nu_{y_{i,j-1}}\right)$. The result is a more manageable equation:

$$ C^{k+1}_{i+1,j} \; \left(- \lambda + \nu_{x_{i,j}} \; \mu \right) + C^{k+1}_{i,j+1} \; \left(- \lambda + \nu_{y_{i,j}} \; \mu \right) + C^{k+1}_{i,j} \; \left(1 + 4 \; \lambda + \nu^{(1,0)}_{{x}_{i,j}} + \nu^{(0,1)}_{{y}_{i,j}}\right) + C^{k+1}_{i-1,j} \; \left(- \lambda - \nu_{x_{i,j}} \; \mu \right) + C^{k+1}_{i,j-1} \; \left(- \lambda - \nu_{y_{i,j}} \; \mu \right) \; = \\ \; C^{k}_{i+1,j} \; \left(\lambda - \nu_{x_{i,j}} \; \mu \right) + C^{k}_{i,j+1} \; \left(\lambda - \nu_{y_{i,j}} \; \mu \right) + C^{k}_{i,j} \; \left(1 - 4 \; \lambda - \nu^{(1,0)}_{{x}_{i,j}} - \nu^{(0,1)}_{{y}_{i,j}}\right) + C^{k}_{i-1,j} \; \left(\lambda + \nu_{x_{i,j}} \; \mu \right) + C^{k}_{i,j-1} \; \left(\lambda + \nu_{y_{i,j}} \; \mu \right) $$
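To make these coefficients concrete, here is a minimal Python sketch (with purely illustrative values of $D$, $q$, $h$ and a placeholder velocity field, not the ones we actually used) that evaluates the five coefficients multiplying $C^{k+1}$ at one interior grid point:

import numpy as np

# Illustrative parameters (assumed values for the sketch only)
D, q, h = 1e-3, 0.1, 0.05            # diffusion coefficient, time step, space step
lam = q * D / (2 * h**2)             # lambda
mu  = q / (4 * h)                    # mu

n = 5
vx = np.zeros((n + 2, n + 2))        # nu_x on the padded mesh (placeholder: fluid at rest)
vy = np.zeros((n + 2, n + 2))        # nu_y on the padded mesh

def implicit_coeffs(i, j):
    """Coefficients of C^{k+1} at the five stencil points around (i, j)."""
    nu10 = mu * (vx[i + 1, j] - vx[i - 1, j])   # discretized d(nu_x)/dx term
    nu01 = mu * (vy[i, j + 1] - vy[i, j - 1])   # discretized d(nu_y)/dy term
    return {
        "C[i+1,j]": -lam + vx[i, j] * mu,
        "C[i,j+1]": -lam + vy[i, j] * mu,
        "C[i,j]":    1 + 4 * lam + nu10 + nu01,
        "C[i-1,j]": -lam - vx[i, j] * mu,
        "C[i,j-1]": -lam - vy[i, j] * mu,
    }

print(implicit_coeffs(2, 2))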
  2.- The equation was still too cumbersome to cast directly into matrix form, so we first worked it out for a reduced example and then extrapolated the general method from it. For this particular case we meshed the space with three steps in each coordinate, so that $i=1,2,3$ and $j=1,2,3$:

$ i=1,\;j=1 $ $$ C^{k+1}_{2,1} \; \left(- \lambda + \nu_{x_{1,1}} \; \mu \right) + C^{k+1}_{1,2} \; \left(- \lambda + \nu_{y_{1,1}} \; \mu \right) + C^{k+1}_{1,1} \; \left(1 + 4 \; \lambda + \nu^{(1,0)}_{{x}_{1,1}} + \nu^{(0,1)}_{{y}_{1,1}}\right) + C^{k+1}_{0,1} \; \left(- \lambda - \nu_{x_{1,1}} \; \mu \right) + C^{k+1}_{1,0} \; \left(- \lambda - \nu_{y_{1,1}} \; \mu \right) \; = \\ \; C^{k}_{2,1} \; \left(\lambda - \nu_{x_{1,1}} \; \mu \right) + C^{k}_{1,2} \; \left(\lambda - \nu_{y_{1,1}} \; \mu \right) + C^{k}_{1,1} \; \left(1 - 4 \; \lambda - \nu^{(1,0)}_{{x}_{1,1}} - \nu^{(0,1)}_{{y}_{1,1}}\right) + C^{k}_{0,1} \; \left(\lambda + \nu_{x_{1,1}} \; \mu \right) + C^{k}_{1,0} \; \left(\lambda + \nu_{y_{1,1}} \; \mu \right) $$ $ i=1,\;j=2 $ $$ C^{k+1}_{2,2} \; \left(- \lambda + \nu_{x_{1,2}} \; \mu \right) + C^{k+1}_{1,3} \; \left(- \lambda + \nu_{y_{1,2}} \; \mu \right) + C^{k+1}_{1,2} \; \left(1 + 4 \; \lambda + \nu^{(1,0)}_{{x}_{1,2}} + \nu^{(0,1)}_{{y}_{1,2}}\right) + C^{k+1}_{0,2} \; \left(- \lambda - \nu_{x_{1,2}} \; \mu \right) + C^{k+1}_{1,1} \; \left(- \lambda - \nu_{y_{1,2}} \; \mu \right) \; = \\ \; C^{k}_{2,2} \; \left(\lambda - \nu_{x_{1,2}} \; \mu \right) + C^{k}_{1,3} \; \left(\lambda - \nu_{y_{1,2}} \; \mu \right) + C^{k}_{1,2} \; \left(1 - 4 \; \lambda - \nu^{(1,0)}_{{x}_{1,2}} - \nu^{(0,1)}_{{y}_{1,2}}\right) + C^{k}_{0,2} \; \left(\lambda + \nu_{x_{1,2}} \; \mu \right) + C^{k}_{1,1} \; \left(\lambda + \nu_{y_{1,2}} \; \mu \right) $$ $ i=1,\;j=3 $ $$ C^{k+1}_{2,3} \; \left(- \lambda + \nu_{x_{1,3}} \; \mu \right) + C^{k+1}_{1,4} \; \left(- \lambda + \nu_{y_{1,3}} \; \mu \right) + C^{k+1}_{1,3} \; \left(1 + 4 \; \lambda + \nu^{(1,0)}_{{x}_{1,3}} + \nu^{(0,1)}_{{y}_{1,3}}\right) + C^{k+1}_{0,3} \; \left(- \lambda - \nu_{x_{1,3}} \; \mu \right) + C^{k+1}_{1,2} \; \left(- \lambda - \nu_{y_{1,3}} \; \mu \right) \; = \\ \; C^{k}_{2,3} \; \left(\lambda - \nu_{x_{1,3}} \; \mu \right) + C^{k}_{1,4} \; \left(\lambda - \nu_{y_{1,3}} \; \mu \right) + C^{k}_{1,3} \; \left(1 - 4 \; \lambda - \nu^{(1,0)}_{{x}_{1,3}} - \nu^{(0,1)}_{{y}_{1,3}}\right) + C^{k}_{0,3} \; \left(\lambda + \nu_{x_{1,3}} \; \mu \right) + C^{k}_{1,2} \; \left(\lambda + \nu_{y_{1,3}} \; \mu \right) $$ $ i=2,\;j=1 $ $$ C^{k+1}_{3,1} \; \left(- \lambda + \nu_{x_{2,1}} \; \mu \right) + C^{k+1}_{2,2} \; \left(- \lambda + \nu_{y_{2,1}} \; \mu \right) + C^{k+1}_{2,1} \; \left(1 + 4 \; \lambda + \nu^{(1,0)}_{{x}_{2,1}} + \nu^{(0,1)}_{{y}_{2,1}}\right) + C^{k+1}_{1,1} \; \left(- \lambda - \nu_{x_{2,1}} \; \mu \right) + C^{k+1}_{2,0} \; \left(- \lambda - \nu_{y_{2,1}} \; \mu \right) \; = \\ \; C^{k}_{3,1} \; \left(\lambda - \nu_{x_{2,1}} \; \mu \right) + C^{k}_{2,2} \; \left(\lambda - \nu_{y_{2,1}} \; \mu \right) + C^{k}_{2,1} \; \left(1 - 4 \; \lambda - \nu^{(1,0)}_{{x}_{2,1}} - \nu^{(0,1)}_{{y}_{2,1}}\right) + C^{k}_{1,1} \; \left(\lambda + \nu_{x_{2,1}} \; \mu \right) + C^{k}_{2,0} \; \left(\lambda + \nu_{y_{2,1}} \; \mu \right) $$ $ i=2,\;j=2 $ $$ C^{k+1}_{3,2} \; \left(- \lambda + \nu_{x_{2,2}} \; \mu \right) + C^{k+1}_{2,3} \; \left(- \lambda + \nu_{y_{2,2}} \; \mu \right) + C^{k+1}_{2,2} \; \left(1 + 4 \; \lambda + \nu^{(1,0)}_{{x}_{2,2}} + \nu^{(0,1)}_{{y}_{2,2}}\right) + C^{k+1}_{1,2} \; \left(- \lambda - \nu_{x_{2,2}} \; \mu \right) + C^{k+1}_{2,1} \; \left(- \lambda - \nu_{y_{2,2}} \; \mu \right) \; = \\ \; C^{k}_{3,2} \; \left(\lambda - \nu_{x_{2,2}} \; \mu \right) + C^{k}_{2,3} \; \left(\lambda - \nu_{y_{2,2}} \; \mu \right) + C^{k}_{2,2} \; \left(1 - 4 \; \lambda - \nu^{(1,0)}_{{x}_{2,2}} - 
\nu^{(0,1)}_{{y}_{2,2}}\right) + C^{k}_{1,2} \; \left(\lambda + \nu_{x_{2,2}} \; \mu \right) + C^{k}_{2,1} \; \left(\lambda + \nu_{y_{2,2}} \; \mu \right) $$ $ i=2,\;j=3 $ $$ C^{k+1}_{3,3} \; \left(- \lambda + \nu_{x_{2,3}} \; \mu \right) + C^{k+1}_{2,4} \; \left(- \lambda + \nu_{y_{2,3}} \; \mu \right) + C^{k+1}_{2,3} \; \left(1 + 4 \; \lambda + \nu^{(1,0)}_{{x}_{2,3}} + \nu^{(0,1)}_{{y}_{2,3}}\right) + C^{k+1}_{1,3} \; \left(- \lambda - \nu_{x_{2,3}} \; \mu \right) + C^{k+1}_{2,2} \; \left(- \lambda - \nu_{y_{2,3}} \; \mu \right) \; = \\ \; C^{k}_{3,3} \; \left(\lambda - \nu_{x_{2,3}} \; \mu \right) + C^{k}_{2,4} \; \left(\lambda - \nu_{y_{2,3}} \; \mu \right) + C^{k}_{2,3} \; \left(1 - 4 \; \lambda - \nu^{(1,0)}_{{x}_{2,3}} - \nu^{(0,1)}_{{y}_{2,3}}\right) + C^{k}_{1,3} \; \left(\lambda + \nu_{x_{2,3}} \; \mu \right) + C^{k}_{2,2} \; \left(\lambda + \nu_{y_{2,3}} \; \mu \right) $$ $ i=3,\;j=1 $ $$ C^{k+1}_{4,1} \; \left(- \lambda + \nu_{x_{3,1}} \; \mu \right) + C^{k+1}_{3,2} \; \left(- \lambda + \nu_{y_{3,1}} \; \mu \right) + C^{k+1}_{3,1} \; \left(1 + 4 \; \lambda + \nu^{(1,0)}_{{x}_{3,1}} + \nu^{(0,1)}_{{y}_{3,1}}\right) + C^{k+1}_{2,1} \; \left(- \lambda - \nu_{x_{3,1}} \; \mu \right) + C^{k+1}_{3,0} \; \left(- \lambda - \nu_{y_{3,1}} \; \mu \right) \; = \\ \; C^{k}_{4,1} \; \left(\lambda - \nu_{x_{3,1}} \; \mu \right) + C^{k}_{3,2} \; \left(\lambda - \nu_{y_{3,1}} \; \mu \right) + C^{k}_{3,1} \; \left(1 - 4 \; \lambda - \nu^{(1,0)}_{{x}_{3,1}} - \nu^{(0,1)}_{{y}_{3,1}}\right) + C^{k}_{2,1} \; \left(\lambda + \nu_{x_{3,1}} \; \mu \right) + C^{k}_{3,0} \; \left(\lambda + \nu_{y_{3,1}} \; \mu \right) $$ $ i=3,\;j=2 $ $$ C^{k+1}_{4,2} \; \left(- \lambda + \nu_{x_{3,2}} \; \mu \right) + C^{k+1}_{3,3} \; \left(- \lambda + \nu_{y_{3,2}} \; \mu \right) + C^{k+1}_{3,2} \; \left(1 + 4 \; \lambda + \nu^{(1,0)}_{{x}_{3,2}} + \nu^{(0,1)}_{{y}_{3,2}}\right) + C^{k+1}_{2,2} \; \left(- \lambda - \nu_{x_{3,2}} \; \mu \right) + C^{k+1}_{3,1} \; \left(- \lambda - \nu_{y_{3,2}} \; \mu \right) \; = \\ \; C^{k}_{4,2} \; \left(\lambda - \nu_{x_{3,2}} \; \mu \right) + C^{k}_{3,3} \; \left(\lambda - \nu_{y_{3,2}} \; \mu \right) + C^{k}_{3,2} \; \left(1 - 4 \; \lambda - \nu^{(1,0)}_{{x}_{3,2}} - \nu^{(0,1)}_{{y}_{3,2}}\right) + C^{k}_{2,2} \; \left(\lambda + \nu_{x_{3,2}} \; \mu \right) + C^{k}_{3,1} \; \left(\lambda + \nu_{y_{3,2}} \; \mu \right) $$ $ i=3,\;j=3 $ $$ C^{k+1}_{4,3} \; \left(- \lambda + \nu_{x_{3,3}} \; \mu \right) + C^{k+1}_{3,4} \; \left(- \lambda + \nu_{y_{3,3}} \; \mu \right) + C^{k+1}_{3,3} \; \left(1 + 4 \; \lambda + \nu^{(1,0)}_{{x}_{3,3}} + \nu^{(0,1)}_{{y}_{3,3}}\right) + C^{k+1}_{2,3} \; \left(- \lambda - \nu_{x_{3,3}} \; \mu \right) + C^{k+1}_{3,2} \; \left(- \lambda - \nu_{y_{3,3}} \; \mu \right) \; = \\ \; C^{k}_{4,3} \; \left(\lambda - \nu_{x_{3,3}} \; \mu \right) + C^{k}_{3,4} \; \left(\lambda - \nu_{y_{3,3}} \; \mu \right) + C^{k}_{3,3} \; \left(1 - 4 \; \lambda - \nu^{(1,0)}_{{x}_{3,3}} - \nu^{(0,1)}_{{y}_{3,3}}\right) + C^{k}_{2,3} \; \left(\lambda + \nu_{x_{3,3}} \; \mu \right) + C^{k}_{3,2} \; \left(\lambda + \nu_{y_{3,3}} \; \mu \right) $$
  3.- Our goal was to rewrite the system of equations in the form:

$$ \underline{\underline{A}} \; \underline{C^{k+1}} \; = \; \underline{\underline{B}} \; \underline{C^{k}} $$
At that point we could finally put everything into matrix form, with all the $\;C^{k+1}_{i,j}\;$ as unknowns and the $\;C^{k}_{i,j}\;$ (together, of course, with the constant parameters) as knowns.

This was also the moment to impose the boundary conditions. We set the concentration to zero all along the border (homogeneous Dirichlet conditions), which amounts to removing every term containing $i=0$, $i=4$, $j=0$ or $j=4$. Note that, to make the matrices easier to read, some parameters have been renamed: $\;\alpha \; = \; 1 + 4 \; \lambda\;$ and $\;\beta \; = \; 1 - 4 \; \lambda\;$:

$$ \begin{bmatrix} \begin{array}{ccc|ccc|ccc} \alpha + \nu^{(1,0)}_{{x}_{1,1}} + \nu^{(0,1)}_{{y}_{1,1}} & - \lambda + \nu_{y_{1,1}} \; \mu & 0 & - \lambda + \nu_{x_{1,1}} \; \mu & 0 & 0 & 0 & 0 & 0\\ - \lambda - \nu_{y_{1,2}} \; \mu & \alpha + \nu^{(1,0)}_{{x}_{1,2}} + \nu^{(0,1)}_{{y}_{1,2}} & - \lambda + \nu_{y_{1,2}} \; \mu & 0 & - \lambda + \nu_{x_{1,2}} \; \mu & 0 & 0 & 0 & 0\\ 0 & - \lambda - \nu_{y_{1,3}} \; \mu & \alpha + \nu^{(1,0)}_{{x}_{1,3}} + \nu^{(0,1)}_{{y}_{1,3}} & 0 & 0 & - \lambda + \nu_{x_{1,3}} \; \mu & 0 & 0 & 0\\ \hline\\ - \lambda - \nu_{x_{2,1}} \; \mu & 0 & 0 & \alpha + \nu^{(1,0)}_{{x}_{2,1}} + \nu^{(0,1)}_{{y}_{2,1}} & - \lambda + \nu_{y_{2,1}} \; \mu & 0 & - \lambda + \nu_{x_{2,1}} \; \mu & 0 & 0\\ 0 & - \lambda - \nu_{x_{2,2}} \; \mu & 0 & - \lambda - \nu_{y_{2,2}} \; \mu & \alpha + \nu^{(1,0)}_{{x}_{2,2}} + \nu^{(0,1)}_{{y}_{2,2}} & - \lambda + \nu_{y_{2,2}} \; \mu & 0 & - \lambda + \nu_{x_{2,2}} \; \mu & 0\\ 0 & 0 & - \lambda - \nu_{x_{2,3}} \; \mu & 0 & - \lambda - \nu_{y_{2,3}} \; \mu & \alpha + \nu^{(1,0)}_{{x}_{2,3}} + \nu^{(0,1)}_{{y}_{2,3}} & 0 & 0 & - \lambda + \nu_{x_{2,3}} \; \mu\\ \hline\\ 0 & 0 & 0 & - \lambda - \nu_{x_{3,1}} \; \mu & 0 & 0 & \alpha + \nu^{(1,0)}_{{x}_{3,1}} + \nu^{(0,1)}_{{y}_{3,1}} & - \lambda + \nu_{y_{3,1}} \; \mu & 0\\ 0 & 0 & 0 & 0 & - \lambda - \nu_{x_{3,2}} \; \mu & 0 & - \lambda - \nu_{y_{3,2}} \; \mu & \alpha + \nu^{(1,0)}_{{x}_{3,2}} + \nu^{(0,1)}_{{y}_{3,2}} & - \lambda + \nu_{y_{3,2}} \; \mu\\ 0 & 0 & 0 & 0 & 0 & - \lambda - \nu_{x_{3,3}} \; \mu & 0 & - \lambda - \nu_{y_{3,3}} \; \mu & \alpha + \nu^{(1,0)}_{{x}_{3,3}} + \nu^{(0,1)}_{{y}_{3,3}} \end{array} \end{bmatrix}\begin{bmatrix} C^{k+1}_{1,1}\\C^{k+1}_{1,2}\\C^{k+1}_{1,3}\\\hline\\C^{k+1}_{2,1}\\C^{k+1}_{2,2}\\C^{k+1}_{2,3}\\\hline\\C^{k+1}_{3,1}\\C^{k+1}_{3,2}\\C^{k+1}_{3,3}\end{bmatrix}=$$
$$\begin{bmatrix} \begin{array}{ccc|ccc|ccc} \beta - \nu^{(1,0)}_{{x}_{1,1}} - \nu^{(0,1)}_{{y}_{1,1}} & \lambda - \nu_{y_{1,1}} \; \mu & 0 & \lambda - \nu_{x_{1,1}} \; \mu & 0 & 0 & 0 & 0 & 0\\ \lambda + \nu_{y_{1,2}} \; \mu & \beta - \nu^{(1,0)}_{{x}_{1,2}} - \nu^{(0,1)}_{{y}_{1,2}} & \lambda - \nu_{y_{1,2}} \; \mu & 0 & \lambda - \nu_{x_{1,2}} \; \mu & 0 & 0 & 0 & 0\\ 0 & \lambda + \nu_{y_{1,3}} \; \mu & \beta - \nu^{(1,0)}_{{x}_{1,3}} - \nu^{(0,1)}_{{y}_{1,3}} & 0 & 0 & \lambda - \nu_{x_{1,3}} \; \mu & 0 & 0 & 0\\ \hline\\ \lambda + \nu_{x_{2,1}} \; \mu & 0 & 0 & \beta - \nu^{(1,0)}_{{x}_{2,1}} - \nu^{(0,1)}_{{y}_{2,1}} & \lambda - \nu_{y_{2,1}} \; \mu & 0 & \lambda - \nu_{x_{2,1}} \; \mu & 0 & 0\\ 0 & \lambda + \nu_{x_{2,2}} \; \mu & 0 & \lambda + \nu_{y_{2,2}} \; \mu & \beta - \nu^{(1,0)}_{{x}_{2,2}} - \nu^{(0,1)}_{{y}_{2,2}} & \lambda - \nu_{y_{2,2}} \; \mu & 0 & \lambda - \nu_{x_{2,2}} \; \mu & 0\\ 0 & 0 & \lambda + \nu_{x_{2,3}} \; \mu & 0 & \lambda + \nu_{y_{2,3}} \; \mu & \beta - \nu^{(1,0)}_{{x}_{2,3}} - \nu^{(0,1)}_{{y}_{2,3}} & 0 & 0 & \lambda - \nu_{x_{2,3}} \; \mu\\ \hline\\ 0 & 0 & 0 & \lambda + \nu_{x_{3,1}} \; \mu & 0 & 0 & \beta - \nu^{(1,0)}_{{x}_{3,1}} - \nu^{(0,1)}_{{y}_{3,1}} & \lambda - \nu_{y_{3,1}} \; \mu & 0\\ 0 & 0 & 0 & 0 & \lambda + \nu_{x_{3,2}} \; \mu & 0 & \lambda + \nu_{y_{3,2}} \; \mu & \beta - \nu^{(1,0)}_{{x}_{3,2}} - \nu^{(0,1)}_{{y}_{3,2}} & \lambda - \nu_{y_{3,2}} \; \mu\\ 0 & 0 & 0 & 0 & 0 & \lambda + \nu_{x_{3,3}} \; \mu & 0 & \lambda + \nu_{y_{3,3}} \; \mu & \beta -\nu^{(1,0)}_{{x}_{3,3}} - \nu^{(0,1)}_{{y}_{3,3}} \end{array} \end{bmatrix}\begin{bmatrix} C^{k}_{1,1}\\C^{k}_{1,2}\\C^{k}_{1,3}\\\hline\\C^{k}_{2,1}\\C^{k}_{2,2}\\C^{k}_{2,3}\\\hline\\C^{k}_{3,1}\\C^{k}_{3,2}\\C^{k}_{3,3}\end{bmatrix}$$
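In these matrices the grid points are stacked row-block by row-block: first all $j$ for $i=1$, then all $j$ for $i=2$, and so on. A small Python sketch of that ordering (the helper name is ours, purely for illustration):

def vec_index(i, j, n):
    """Position (0-based) of grid point (i, j), with i, j = 1..n, in the
    stacked concentration vector: all j for i = 1, then all j for i = 2, ..."""
    return (i - 1) * n + (j - 1)

# For the 3x3 example: (1,1)->0, (1,2)->1, (1,3)->2, (2,1)->3, ..., (3,3)->8
n = 3
print([(i, j, vec_index(i, j, n)) for i in range(1, n + 1) for j in range(1, n + 1)])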


  4.- It was then time to generalize the last matrix equation. To provide a better global view, we divided the matrices into $\;n^2\;$ blocks of size $\;n \times n\;$, and the vectors into $\;n\;$ blocks of size $\;n\;$:

$$ \underline{\underline{A}} \; = \begin{bmatrix} \underline{\underline{A_{1,1}}} & \underline{\underline{A_{1,2}}} & \cdots & \underline{\underline{A_{1,n}}}\\ \underline{\underline{A_{2,1}}} & \underline{\underline{A_{2,2}}} & \cdots & \underline{\underline{A_{2,n}}}\\ \vdots & \vdots & \ddots & \vdots \\ \underline{\underline{A_{n,1}}} & \underline{\underline{A_{n,2}}} & \cdots & \underline{\underline{A_{n,n}}}\\ \end{bmatrix} \;\;\;\; \underline{C^{k+1}} \; = \begin{bmatrix} \underline{C^{k+1}_{1}} \\ \underline{C^{k+1}_{2}} \\ \vdots \\ \underline{C^{k+1}_{n}} \\ \end{bmatrix} \;\;\;\; \underline{\underline{B}} \; = \begin{bmatrix} \underline{\underline{B_{1,1}}} & \underline{\underline{B_{1,2}}} & \cdots & \underline{\underline{B_{1,n}}}\\ \underline{\underline{B_{2,1}}} & \underline{\underline{B_{2,2}}} & \cdots & \underline{\underline{B_{2,n}}}\\ \vdots & \vdots & \ddots & \vdots \\ \underline{\underline{B_{n,1}}} & \underline{\underline{B_{n,2}}} & \cdots & \underline{\underline{B_{n,n}}}\\ \end{bmatrix} \;\;\;\; \underline{C^{k}} \; = \begin{bmatrix} \underline{C^{k}_{1}} \\ \underline{C^{k}_{2}} \\ \vdots \\ \underline{C^{k}_{n}} \\ \end{bmatrix} $$
Each of these submatrices was easier to study than the global one, so it was not hard to recognise the pattern they follow and to generalize it to $\;n \times n\;$ space steps (i.e. $\;i=1,2,...,n\;$ and $\;j=1,2,...,n$):

$$ \underline{\underline{A_{i,i}}} \; = \begin{bmatrix} 1 + 4 \; \lambda + \nu^{(1,0)}_{{x}_{i,1}} + \nu^{(0,1)}_{{y}_{i,1}} & - \lambda + \nu_{y_{i,1}} \; \mu & 0 & \cdots & 0\\ - \lambda - \nu_{y_{i,2}} \; \mu & 1 + 4 \; \lambda + \nu^{(1,0)}_{{x}_{i,2}} + \nu^{(0,1)}_{{y}_{i,2}} & - \lambda + \nu_{y_{i,2}} \; \mu & \cdots & 0\\ 0 & - \lambda - \nu_{y_{i,3}} \; \mu & 1 + 4 \; \lambda + \nu^{(1,0)}_{{x}_{i,3}} + \nu^{(0,1)}_{{y}_{i,3}} & \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & \cdots & 1 + 4 \; \lambda + \nu^{(1,0)}_{{x}_{i,n}} + \nu^{(0,1)}_{{y}_{i,n}}\\ \end{bmatrix} $$ $$ \underline{\underline{A_{i,i-1}}} \; = \begin{bmatrix} - \lambda - \nu_{x_{i,1}} \; \mu & 0 & 0 & \cdots & 0\\ 0 & - \lambda - \nu_{x_{i,2}} \; \mu & 0 & \cdots & 0\\ 0 & 0 & - \lambda - \nu_{x_{i,3}} \; \mu & \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & \cdots & - \lambda - \nu_{x_{i,n}} \; \mu \\ \end{bmatrix} $$ $$ \underline{\underline{A_{i-1,i}}} \; = \begin{bmatrix} - \lambda + \nu_{x_{i-1,1}} \; \mu & 0 & 0 & \cdots & 0\\ 0 & - \lambda + \nu_{x_{i-1,2}} \; \mu & 0 & \cdots & 0\\ 0 & 0 & - \lambda + \nu_{x_{i-1,3}} \; \mu & \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & \cdots & - \lambda + \nu_{x_{i-1,n}} \; \mu \\ \end{bmatrix} $$
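As a sketch of how these blocks could be assembled in practice, here is a Python/SciPy version (the function name, the padded-velocity-array convention and the sparse format are our own illustrative choices, not part of the original derivation). The $\underline{\underline{B}}$ entries are the corresponding right-hand-side coefficients of the rearranged equation:

import numpy as np
import scipy.sparse as sp

def assemble_AB(n, D, q, h, vx, vy):
    """Assemble the block-tridiagonal Crank-Nicolson matrices A and B.
    vx, vy: (n+2) x (n+2) arrays so that vx[i, j] is nu_x at grid point (i, j)."""
    lam = q * D / (2 * h**2)
    mu  = q / (4 * h)
    N = n * n
    A = sp.lil_matrix((N, N))
    B = sp.lil_matrix((N, N))
    idx = lambda i, j: (i - 1) * n + (j - 1)      # position of (i, j) in the vector

    for i in range(1, n + 1):
        for j in range(1, n + 1):
            r = idx(i, j)
            nu10 = mu * (vx[i + 1, j] - vx[i - 1, j])   # discretized velocity derivatives
            nu01 = mu * (vy[i, j + 1] - vy[i, j - 1])
            A[r, r] = 1 + 4 * lam + nu10 + nu01
            B[r, r] = 1 - 4 * lam - nu10 - nu01
            if j < n:                                   # neighbour (i, j+1)
                A[r, idx(i, j + 1)] = -lam + vy[i, j] * mu
                B[r, idx(i, j + 1)] =  lam - vy[i, j] * mu
            if j > 1:                                   # neighbour (i, j-1)
                A[r, idx(i, j - 1)] = -lam - vy[i, j] * mu
                B[r, idx(i, j - 1)] =  lam + vy[i, j] * mu
            if i < n:                                   # neighbour (i+1, j)
                A[r, idx(i + 1, j)] = -lam + vx[i, j] * mu
                B[r, idx(i + 1, j)] =  lam - vx[i, j] * mu
            if i > 1:                                   # neighbour (i-1, j)
                A[r, idx(i - 1, j)] = -lam - vx[i, j] * mu
                B[r, idx(i - 1, j)] =  lam + vx[i, j] * mu
    return A.tocsr(), B.tocsr()

The neighbour terms that would fall outside the mesh ($i$ or $j$ equal to $0$ or $n+1$) are simply skipped, which is exactly the zero-concentration boundary condition imposed above.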
  5.- The last step is to isolate the vector of unknowns:

$$ \underline{C^{k+1}} \; = \; \underline{\underline{A^{-1}}} \; \underline{\underline{B}} \; \underline{C^{k}} \; = \; \underline{\underline{\mathcal{M}}} \; \underline{C^{k}}$$
where $\;\underline{\underline{\mathcal{M}}} \; = \; \underline{\underline{A^{-1}}} \;  \underline{\underline{B}}$.
This is the system that must be solved at each time iteration. Note that for the first iteration $\;(k\;=\;0)\;$ we get $\;\underline{C^{1}}\;=\;\underline{\underline{\mathcal{M}}} \; \underline{C^{0}}\;$, where the vector $\;\underline{C^{0}}\;$ is given by the initial conditions: in our case, a Gaussian distribution centred in the middle of the plate (in the ideal case a Dirac delta $\delta(x,y)$ is usually chosen as the initial condition, but we are dealing with a real system).
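To close the derivation, here is a minimal sketch of the time iteration in Python (reusing the hypothetical assemble_AB helper sketched above; all parameter values are illustrative). Rather than forming $\underline{\underline{\mathcal{M}}} = \underline{\underline{A^{-1}}}\,\underline{\underline{B}}$ explicitly, it is usually cheaper and numerically safer to factorize $\underline{\underline{A}}$ once and solve $\underline{\underline{A}}\,\underline{C^{k+1}} = \underline{\underline{B}}\,\underline{C^{k}}$ at every step:

import numpy as np
from scipy.sparse.linalg import splu

n, D, q, h, steps = 50, 1e-3, 0.1, 0.05, 200       # illustrative values only
vx = np.zeros((n + 2, n + 2))                      # placeholder velocity fields
vy = np.zeros((n + 2, n + 2))

A, B = assemble_AB(n, D, q, h, vx, vy)             # matrices from the sketch above
lu = splu(A.tocsc())                               # factorize A once, reuse every step

# Gaussian initial condition centred in the middle of the plate
x = y = (np.arange(1, n + 1) - (n + 1) / 2) * h
X, Y = np.meshgrid(x, y, indexing="ij")
C = np.exp(-(X**2 + Y**2) / (2 * (5 * h)**2)).ravel()   # flattened C^0

for k in range(steps):
    C = lu.solve(B @ C)                            # solve A C^{k+1} = B C^k

C_final = C.reshape(n, n)                          # concentration back on the (i, j) grid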