The model is inspired by the idea that one can relate forest fires to neuronal spikes.
The higher temporal resolution makes the data too sparse, so we decreased it to 5 days per time step. We also decreased the spatial resolution to 10 km$^2$/pixel.
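As a rough illustration of this coarsening, the sketch below aggregates a daily, fine-resolution binary fire-detection cube into 5-day time steps and coarser spatial cells. The array layout, the input resolution, the block sizes, and the rule that a coarse cell is active if any fine cell within it burned are all assumptions for illustration.

```python
import numpy as np

def coarsen(fires, t_bin=5, s_bin=3):
    """Aggregate a binary fire cube of shape (time, y, x).

    t_bin: number of original time steps per coarse step (5 days here).
    s_bin: spatial block size; the value needed to reach ~10 km^2/pixel
           depends on the native resolution and is assumed here.
    """
    T, H, W = fires.shape
    # Trim so each dimension divides evenly into blocks.
    fires = fires[:T - T % t_bin, :H - H % s_bin, :W - W % s_bin]
    T, H, W = fires.shape
    blocks = fires.reshape(T // t_bin, t_bin, H // s_bin, s_bin, W // s_bin, s_bin)
    # A coarse cell is "active" if any fine cell in the block detected fire.
    return blocks.max(axis=(1, 3, 5))

# Example: 30 daily frames on a 99x99 grid -> 6 coarse steps on a 33x33 grid.
daily = np.random.binomial(1, 0.001, size=(30, 99, 99))
print(coarsen(daily).shape)  # (6, 33, 33)
```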
Each node in the network is an HMM as described before.
As an example, consider a node $C$ with transition probability matrix $P_0$ at time $t=0$.
If $C$ is adjacent to two active nodes at $t=0$, then at $t=1$ its transition probability matrix becomes $P_1$.
Initial Pre-chosen Values:
$$\pi = \begin{pmatrix}0.005\\0.005\\0.990\end{pmatrix}, \qquad P = \begin{pmatrix}0.50 & 0.01 & 0.09\\0.25 & 0.90 & 0.01\\0.25 & 0.09 & 0.90\end{pmatrix}.$$
We then solve for the model parameters $\lambda$ with the Baum–Welch algorithm.
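A minimal sketch of this fitting step, assuming binary fire/no-fire observations per node and an assumed emission matrix $B$ (only $\pi$ and $P$ come from the values above), using hmmlearn, whose `CategoricalHMM` stores the transition matrix row-stochastically (hence the transpose):

```python
import numpy as np
from hmmlearn import hmm  # hmmlearn >= 0.2.8 for CategoricalHMM

pi = np.array([0.005, 0.005, 0.990])
P = np.array([[0.50, 0.01, 0.09],
              [0.25, 0.90, 0.01],
              [0.25, 0.09, 0.90]])   # column-stochastic, as above
B = np.array([[0.10, 0.90],          # assumed emissions (symbol 0 = no fire,
              [0.95, 0.05],          # symbol 1 = fire detected); the values
              [0.95, 0.05]])         # and state ordering are illustrative only

model = hmm.CategoricalHMM(n_components=3, n_iter=100, init_params="")
model.startprob_ = pi
model.transmat_ = P.T                # hmmlearn expects rows to sum to 1
model.emissionprob_ = B

# obs: one node's observation sequence of integer symbols, shape (T, 1).
obs = np.random.binomial(1, 0.05, size=(200, 1))  # placeholder data
model.fit(obs)                       # Baum-Welch (EM) re-estimates lambda
print(model.transmat_.T)             # fitted P, back in column-stochastic form
```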
However, within the spatial 2D network, we further update the transition probability matrix $P$ for each node (e.g., a patch of land) at each time step, conditional on the states of the neighboring nodes.
These updates are implemented through newly introduced weights $\alpha$ and $\beta$, which are applied to the $a \rightarrow a$ and $q \rightarrow a$ transition probabilities of $P$ for each node at each time step.
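The exact functional form of this update is not spelled out above; as an illustration only, suppose each active neighbor multiplies the $a \rightarrow a$ entry by $\alpha$ and the $q \rightarrow a$ entry by $\beta$, with the affected columns then renormalized. For the node $C$ above, which has two active neighbors at $t=0$, this would give
$$P_1(a \mid a) \propto \alpha^{2}\, P_0(a \mid a), \qquad P_1(a \mid q) \propto \beta^{2}\, P_0(a \mid q),$$
with each column of $P_1$ rescaled to sum to one so that it remains a valid transition matrix.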
We run a Monte Carlo simulation to find the $(\alpha, \beta)$ pair that minimizes the mean squared error (MSE) of the fire density on our training dataset (a sketch of this search follows the list below). For example, here are a few of the 25 candidate pairs and their qualitative effects:
$\alpha = 1, \beta = 10$: adjacent quiescent nodes are affected more.
$\alpha = 2, \beta = 10$: active nodes and adjacent quiescent nodes are affected more.
$\alpha = 2, \beta = 2$: active nodes and adjacent quiescent nodes are affected more.
$\alpha = 1, \beta = 2$: the best multipliers.
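A minimal sketch of the search referenced above, assuming a 5 × 5 grid of candidate $(\alpha, \beta)$ values to match the 25 pairs; `simulate_fire_density` is a hypothetical stand-in for the network simulation described above, and `observed_density` is the fire density computed from the training data.

```python
import numpy as np

def mse(a, b):
    return float(np.mean((np.asarray(a) - np.asarray(b)) ** 2))

def search_alpha_beta(observed_density, simulate_fire_density,
                      n_runs=20, seed=0):
    """Monte Carlo search over candidate (alpha, beta) multipliers."""
    rng = np.random.default_rng(seed)
    candidates = [(a, b) for a in (1, 2, 4, 8, 10) for b in (1, 2, 4, 8, 10)]
    scores = {}
    for alpha, beta in candidates:
        # Average the MSE over several stochastic simulation runs.
        errs = [mse(simulate_fire_density(alpha, beta, rng), observed_density)
                for _ in range(n_runs)]
        scores[(alpha, beta)] = float(np.mean(errs))
    best = min(scores, key=scores.get)
    return best, scores
```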
(left: low-resolution version of the training data; right: the simulation results)
However, we still have a long way to go...
If we can predict fire spread and magnitude from earlier remotely sensed data: