This commit is contained in:
Joshua Coles 2023-03-17 14:25:06 +00:00
parent 78d0088ae7
commit 22ba78d068
5 changed files with 195 additions and 152 deletions


@ -1,8 +1,34 @@
\section*{Appendix}
We investigated these characteristics in turn, as time and computational modelling allowed, through the following process: starting with the provided code and working towards a more bespoke and customisable model.
\subsection*{Generic DLA Model}
The main tool used to generate the data in this report was the generic DLA framework written to support the paper. Here we will briefly discuss the process of creating and verifying this framework.
Innate within the problem of designing an exploratory model is the question of correctness: is the unusual thing you are observing a bug in your model, or an interesting new behaviour to explore?
% TODO Do I want to ref an example of this, it sound fun.
To counter this we operated on a system of repeatedly grounding each alteration of our model against the previous version, treating the initially provided codebase as our root ground once its behaviour had been verified to be roughly in accordance with the literature.
To this end, starting with the initially provided code, we made the aforementioned minimal alterations such that it would run in reasonable time\fnmark{macos-speed} and output the data required for later analysis. This data was then analysed and compared with the literature (refer to the results of this work).
\fntext{macos-speed}{When running on macOS systems the rendering code slows down the model by several orders of magnitude, making it unsuitable for large scale modelling; hence it was removed and replaced with separate image generation as a mitigation, as discussed later.}
Once rough accordance with the literature was obtained (see Figure \ref{nc-fd-convergence}), and, most importantly, consistency between runs (verifying against an ill-behaved system is a fruitless and painful endeavour), we added the sticking probability alteration as the simplest alteration to the DLA algorithm, verifying agreement between the traditional and probabilistic sticking models at $p_{stick} = 1$.
% TODO Rust vs C nc-fd-convergence graph
This then provided sufficient data for us to transition to our new generic framework, verifying that it agreed with this dataset to ensure correctness. In addition, unit tests for a number of key behaviours were written, benefiting greatly from the composability of the system.
%TODO Should we reference git commits here? Or keep them all in one repo. Maybe a combo and have them as submodules in a report branch allowing for a linear history and also concurrent presentation for a report.


@ -0,0 +1,117 @@
\singlecolumnabstract{
Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Aenean commodo ligula eget dolor. Aenean massa. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Donec quam felis, ultricies nec, pellentesque eu, pretium quis, sem. Nulla consequat massa quis enim. Donec pede justo, fringilla vel, aliquet nec, vulputate eget, arcu. In enim justo, rhoncus ut, imperdiet a, venenatis vitae, justo. Nullam dictum felis eu pede mollis pretium. Integer tincidunt. Cras dapibus. Vivamus elementum semper nisi. Aenean vulputate eleifend tellus. Aenean leo ligula, porttitor eu, consequat vitae, eleifend ac, enim. Aliquam lorem ante, dapibus in, viverra quis, feugiat a, tellus. Phasellus viverra nulla ut metus varius laoreet. Quisque rutrum. Aenean imperdiet. Etiam ultricies nisi vel augue.
}
\medskip
\section*{Introduction}
Diffusion-limited aggregation (DLA) models processes where the diffusion of small particles into a larger aggregate is the limiting factor in a system's growth. It is applicable to a wide range of systems such as A, B, and C.
This process gives rise to structures which are fractal in nature (for example, see Figure \ref{dla-eg}), i.e.\ objects which contain detailed structure at arbitrarily small scales. Such objects are associated with a fractal dimension, $df$. This number relates how measures of the object, such as mass, scale when the object itself is scaled. For a non-fractal object this is its traditional dimension: if you double the scale of a square, you quadruple its area, $2^2$; if you double the scale of a sphere, you octuple its volume, $2^3$. A DLA aggregate in a 2D embedding space has a "traditional" dimension of 1, as it is not by nature 2D, but its fractal dimension is higher than that.
% TODO We need to clean up the symbol
% TODO Source the fractal dimension
In this paper we will consider a number of alterations to the standard DLA process and the effect they have on the fractal dimension of the resulting aggregate. This data will be generated by a number of computational models derived initially from the code provided \cite{IPC} but altered and optimised as needed for the specific modelling problem.
% Mention MVA I think so I can reference it in the section on spaces alteration.
% TODO Explain Fractal Dimension
\begin{figure}[t]
\includegraphics[width=\columnwidth]{figures/dla-eg}
\caption{A $5000$ particle aggregate on}
\label{dla-eg}
\end{figure}
\section*{Discussion}
As mentioned, the DLA process models the growth of an aggregate (otherwise known as a cluster) within a medium through which smaller free-moving particles can diffuse. These particles move freely until they "stick" to the aggregate, adding to its extent. A high-level description of the DLA algorithm is given as follows,
\begin{enumerate}
\item An initial seed aggregate is placed into the system, without loss of generality at the origin. This is normally a single particle.
\item A new particle is then released at some sufficient distance from the seeded aggregate.
\item This particle is allowed to then diffuse until it sticks to the aggregate.
\item At this point the new particle stops moving and becomes part of the aggregate, and a new particle is released.
\end{enumerate}
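The loop above can be sketched in a few lines. This is a minimal illustrative Python version, not the paper's actual code, and it makes the usual simplifications (a fixed spawn radius rather than one that grows with the cluster, and nearest-neighbour sticking on a square lattice):

```python
import math
import random

def dla(n_particles, r_add=20, r_kill=60, seed=1):
    """Grow an on-lattice DLA aggregate; a minimal sketch, not the paper's code."""
    rng = random.Random(seed)
    aggregate = {(0, 0)}                       # step 1: a single-particle seed at the origin
    while len(aggregate) < n_particles:
        # step 2: release a walker on a circle of radius r_add
        theta = rng.uniform(0, 2 * math.pi)
        x, y = round(r_add * math.cos(theta)), round(r_add * math.sin(theta))
        while True:
            # step 3: unbiased random walk on the square lattice
            dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            x, y = x + dx, y + dy
            if x * x + y * y > r_kill * r_kill:    # past the kill circle: respawn
                theta = rng.uniform(0, 2 * math.pi)
                x, y = round(r_add * math.cos(theta)), round(r_add * math.sin(theta))
                continue
            # step 4: stick on contact with the aggregate, then release the next walker
            if {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)} & aggregate:
                aggregate.add((x, y))
                break
    return aggregate
```

In practice the spawn radius is grown with the cluster and the walk is optimised (e.g. larger steps far from the aggregate); those details are omitted here.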
An actual implementation of this system will involve a number of computational parameters and simplifications. For example, particles are spawned at a consistent radius from the aggregate, $r_{\mathrm{add}}$, rather than existing uniformly throughout the embedding medium. Further, it is traditional to define a "kill circle" of radius $r_{\mathrm{kill}}$, past which we consider the particle lost and stop simulating it \cite[p.~27]{sanderDiffusionlimitedAggregationKinetic2000} (this is especially important in $d > 2$ dimensional spaces, where random walks are not guaranteed to be recurrent \cite{lawlerIntersectionsRandomWalks2013} and could instead tend off to infinity).
While these are interesting and important to the performant modelling of the system, we aim to choose them so as to maximise fidelity to the original physical system whilst minimising the computational effort required for simulation. From a modelling perspective, however, there are a number of interesting orthogonal behaviours within this loose algorithm description which we can vary to potentially provide interesting results.
The first is the seed which is used to start the aggregation process. The traditional choice of a single seed models the spontaneous growth of a cluster, but the system could be easily extended to diffusion onto a plate under influence of an external force field \cite{tanInfluenceExternalField2000}, or cluster-cluster aggregation where there are multiple aggregate clusters, which are capable of moving themselves \cite[pp.~210-211]{sanderDiffusionlimitedAggregationKinetic2000}.
The next behaviour is the spawning of the active particle. The choice of spawning location is traditionally made according to a uniform distribution, which, bar any physical motivation from a particular system being modelled, seems the intuitive choice. However the choice of releasing a single particle at a time is one which is open to more investigation. This is interesting both in the effect varying it will have on the behaviour of the system, and in whether it can be done in a way that minimises the aforementioned effects, as a speed-up for long-running simulations.
Another characteristic behaviour of the algorithm is the choice of diffusion mechanism. Traditionally this is implemented as a random walk, with each possible neighbour being equally likely. This could be altered for example by the introduction of an external force to the system.
Finally we arrive at the last characteristic we will consider: the space within which the DLA process takes place. Traditionally this is a 2D orthogonal gridded space; however other gridded systems, such as hexagonal, can be used to explore any effect the space's geometry has \cite[pp.~210-211]{sanderDiffusionlimitedAggregationKinetic2000}.
\section*{Method}
%TODO Include a note on long running and exploration simulations in the methodology section?
To this end we designed a generic system such that these different alterations of the traditional DLA model could be written, explored, and composed quickly, whilst generating sufficient data for statistical measurements. This involved separating the various orthogonal behaviours of the DLA algorithm into components which could be combined in a variety of ways, enabling a number of distinct models to exist concurrently within the same codebase.
% TODO Verify stats for said statistical measurements!!!
This code was based off the initially provided code, altered to allow for data extraction and optimised for performance. For large configuration-space-exploring runs the code was run using GNU Parallel \cite{GNUParallel} to allow for substantially improved throughput.
The code was written such that it is reproducible based on a user-provided seed for the random number generator; this provided the needed balance between reproducibility and independent repeated runs. Instructions for building the specific models used in the paper can be found in the appendix.
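The reproducibility property can be illustrated with a toy in Python (the framework's code is not shown in this excerpt; `walk` is a hypothetical stand-in for a model run):

```python
import random

def walk(seed, steps=1000):
    """A toy random walk standing in for a seeded model run."""
    rng = random.Random(seed)   # per-run generator, seeded by the user
    x = y = 0
    path = []
    for _ in range(steps):
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x, y = x + dx, y + dy
        path.append((x, y))
    return path

# identical seeds reproduce the run exactly; distinct seeds give independent runs
assert walk(seed=42) == walk(seed=42)
assert walk(seed=42) != walk(seed=43)
```

Using a dedicated generator per run (rather than the global one) keeps runs independent of each other even when executed in parallel.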
We first took the initially provided code \cite{IPC} and made minimal alterations such that the code ran in reasonable time\footnote{When running on macOS systems the rendering code slows down the model by several orders of magnitude, making it unsuitable for large scale modelling; hence it was removed and visualisation was handled externally.} and output data for analysis.
\subsection*{Statistical Considerations}
% TODO Is this something we need to talk about?
\subsection*{Fractal Dimension Calculation}
There are two methods of determining the fractal dimension. The first notes that the number of particles in an aggregate, $N_c$, grows with the maximum radius $r_\mathrm{max}$ as
\begin{equation*}
N_c(r_{\mathrm{max}}) = (\alpha r_{\mathrm{max}})^{df} + \beta
\end{equation*}
where $\alpha, \beta$ are two unknown constants. In the large $r_\mathrm{max}$ limit we have $(\alpha r_{\mathrm{max}})^{df} \gg \beta$ and hence,
\begin{align*}
N_c(r_{\mathrm{max}}) &= (\alpha r_{\mathrm{max}})^{df} + \beta \\
&\approx (\alpha r_{\mathrm{max}})^{df} \\
\log N_c &\approx df \cdot \log\alpha + df \cdot \log r_{\mathrm{max}}
\end{align*}
from which we can either perform curve fitting on our data, or, taking $\alpha = 1$, obtain
\begin{align*}
\log N_c &= df \cdot \log r_{\mathrm{max}} \\
df &= \frac{\log N_c}{\log r_{\mathrm{max}}}
\end{align*}
This gives us a way to determine an "instantaneous" fractal dimension.
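The $\alpha = 1$ shortcut can be sanity-checked on a non-fractal object (an illustrative Python sketch, not the paper's analysis code): for a filled lattice disc the estimate should sit near its traditional dimension of $2$, with the finite-radius overshoot coming from the neglected $\alpha$ and $\beta$.

```python
import math

def simple_fd(n_particles, r_max):
    """Instantaneous fractal dimension with alpha = 1: df = log N_c / log r_max."""
    return math.log(n_particles) / math.log(r_max)

# a filled disc of radius r contains roughly pi * r^2 lattice sites,
# so the estimate lands slightly above 2 (about 2 + log(pi)/log(r))
r = 40
sites = sum(1 for x in range(-r, r + 1) for y in range(-r, r + 1)
            if x * x + y * y <= r * r)
d = simple_fd(sites, r)
```

This residual bias is one reason to prefer curve fitting, or the convergence of the estimate over a run, rather than a single instantaneous value.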
A second method for determining the fractal dimension is known as box-counting \cite{smithFractalMethodsResults1996a}. This involves placing box-grids of various granularities onto the aggregate and counting the boxes which have at least one particle within them. The number of these boxes, $N$, should grow as,
\begin{equation*}
N \propto w^{-d}
\end{equation*}
where $w$ is the granularity of the box-grid and $d$ is the fractal dimension we wish to find. By a similar process as before we end up with,
\begin{equation*}
\log N = \log N_0 - d \log w
\end{equation*}
where $N_0$ is some proportionality constant. We will expect a plot of $(w, N)$ to exhibit two modes of behaviour,
\begin{enumerate}
\item A linear region from which we can extract fractal dimension data.
\item A saturation region where the box-grid is sufficiently fine that each box contains either one particle or none.
\end{enumerate}
We fit on the linear region, dropping the saturated data for accuracy.
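The box-count estimate can be sketched as follows (illustrative Python, not the paper's analysis code). As a sanity check, a filled square should come out two-dimensional when all the chosen widths lie in the linear region:

```python
import math

def box_count_dimension(points, widths):
    """Box-count estimate: d is minus the slope of log N against log w."""
    logs = []
    for w in widths:
        # assign each point to a box of side w and count occupied boxes
        boxes = {(math.floor(x / w), math.floor(y / w)) for (x, y) in points}
        logs.append((math.log(w), math.log(len(boxes))))
    # least-squares slope of log N vs log w
    n = len(logs)
    mx = sum(lw for lw, _ in logs) / n
    my = sum(ln for _, ln in logs) / n
    slope = (sum((lw - mx) * (ln - my) for lw, ln in logs)
             / sum((lw - mx) ** 2 for lw, _ in logs))
    return -slope

# sanity check: a filled 200 x 200 square is two-dimensional,
# and these widths all divide 200, keeping the data exactly linear
square = [(x, y) for x in range(200) for y in range(200)]
d = box_count_dimension(square, widths=[5, 10, 20, 25, 40])
```

For a real aggregate the saturated (finest) widths would be excluded before fitting, as described above.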
\todo{How much of this is actually in the Fractal Dimension section}


@ -60,6 +60,11 @@
}
\setlength{\marginparwidth}{1.2cm}
\usepackage{refcount}% http://ctan.org/pkg/refcount
\newcounter{fncntr}
\newcommand{\fnmark}[1]{\refstepcounter{fncntr}\label{#1}\footnotemark[\getrefnumber{#1}]}
\newcommand{\fntext}[2]{\footnotetext[\getrefnumber{#1}]{#2}}
%%% END Article customizations


@ -14,156 +14,8 @@
\date{March 21, 2023} % Due Date
\begin{document}
\input{introduction-dicussion-method.tex}
\input{results.tex}
% TODO Formatting of these (for one its in american date formats ughhh)
\printbibliography

results.tex Normal file

@ -0,0 +1,43 @@
\section*{Results}
\subsection*{Preliminary Work: Testing Initial Implementation and Fractal Dimension Calculations}
\label{ii-fdc}
\begin{figure}
\includegraphics[width=\columnwidth]{figures/rmax-n.png}
\caption{The growth of $N$ vs $r_{\mathrm{max}}$ for $20$ runs of the standard DLA model. The first $1000$ points are not included when fitting, to improve accuracy. The first $50$ data points are not displayed as the data contains too much noise to be meaningfully displayed. Also included in the figure is the value from theory, $1.71 \pm 0.01$ from \cite[Table 1, $\langle D(d = 2)\rangle$]{nicolas-carlockUniversalDimensionalityFunction2019}.}
\label{rmax-n}
\end{figure}
\begin{figure}[t!]
\includegraphics[width=\columnwidth]{figures/nc-fd-convergence.png}
\caption{The convergence of the fractal dimension of $20$ runs of the standard DLA model. This uses the simple calculation method. The first $50$ data points are not included as the data contains too much noise to be meaningfully displayed. Also included in the figure is the value from theory, $1.71 \pm 0.01$ from \cite[Table 1, $\langle D(d = 2)\rangle$]{nicolas-carlockUniversalDimensionalityFunction2019}.}
\label{nc-fd-convergence}
\end{figure}
To start we performed $20$ runs, with seeds $1, 2, \dots, 20$, of the standard DLA model using the minimally altered initially provided code. The fractal dimension is calculated using the simple method \todo{do I want to ref this?} and averaged across the $20$ runs. This is shown in Figure \ref{nc-fd-convergence} along with the result from literature, $d = 1.71 \pm 0.01$ \cite[Table 1, $\langle D(d = 2)\rangle$]{nicolas-carlockUniversalDimensionalityFunction2019}.
\todo{Do I need to find a grid value for this I think this might be continuous}
Taking an average of the trailing $5000$ readings we come to a value of $d = 1.73$ \todo{errors and rounding}; while this value diverges slightly from the value reported in the literature, this result provides a reasonable grounding for our model as being roughly correct and a useful point of comparison for future work.
This also allows us to say with reasonable confidence that we can halt our model around $N_c = 5000$ as a reasonable trade-off between computational time and accuracy. This should, however, be verified for particular model variations.
\subsection*{Probabilistic Sticking}
\begin{figure}[t!]
\includegraphics[width=\columnwidth]{figures/sp-fd}
\caption{The fractal dimension for the DLA system on a 2D grid lattice with a sticking probability $p_{stick}$. This value was obtained from $100$ runs with different seeds, computing the fractal dimension using the simple method and taking a mean across the last $100$ measurements on a $2000$ particle cluster.
% TODO These numbers are way too small given the results of Figure 1.
}
\label{sp-fd}
\end{figure}
As discussed, one of the possible alterations of the system is the introduction of a probabilistic component to the sticking behaviour of the DLA system. Here we introduced a probability $p_{stick}$ to the initial grid-based sticking behaviour of the particles, with the particle being given this probability to stick at each occupied neighbouring site (for example, if the particle was adjacent to two cells in the aggregate, then the probabilistic test would apply twice).
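The rule can be sketched as follows (illustrative Python; the names are not the framework's actual API). Note that at $p_{stick} = 1$ it reduces to the traditional always-stick-on-contact rule, which is what makes it usable for grounding:

```python
import random

def try_stick(site, aggregate, p_stick, rng):
    """Apply the sticking test once per occupied neighbouring site.
    Illustrative names, not the framework's actual API."""
    x, y = site
    for n in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
        if n in aggregate and rng.random() < p_stick:
            return True     # the particle freezes at `site`
    return False            # the particle keeps diffusing

rng = random.Random(0)
agg = {(0, 0), (1, 0)}
# at p_stick = 1 contact always sticks, recovering the traditional rule
assert try_stick((0, 1), agg, 1.0, rng) is True
# at p_stick = 0 the particle can never stick
assert try_stick((0, 1), agg, 0.0, rng) is False
```

Because each occupied neighbour is tested independently, a particle adjacent to more cells of the aggregate is more likely to stick on a given step.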
This was also the case used to ground both the minimally altered code and our new generic system, to ensure they were functioning correctly. The data for both is presented in Figure \ref{sp-fd}. As we would expect we see a g
\subsection*{Higher Dimensions}
\subsection*{Hexagonal}