\section*{Appendix}

\subsection*{Generic DLA Model}

The main tool used to generate the data in this report was the generic DLA framework written to support the paper. Here we briefly discuss the process of creating and verifying this framework.

Innate within the problem of designing an exploratory model is the question of correctness: is the unusual thing you are observing a bug in your model, or an interesting new behaviour to explore?
% TODO Do I want to ref an example of this, it sounds fun.
To counter this we operated on a system of repeatedly grounding each alteration of our model against the previous version, treating the initially provided codebase as our root ground once its behaviour had been verified to be roughly in accordance with the literature.

To this end, starting with the initially provided code, we made minimal alterations such that it would run in reasonable time\fnmark{macos-speed} and output the data required for later analysis. This data was then analysed and compared with the literature (refer to the results of this work).

\fntext{macos-speed}{When running on macOS systems the rendering code slows the model down by several orders of magnitude, making it unsuitable for large scale modelling; it was therefore removed and replaced with external image generation, as discussed later.}

Once rough accordance with the literature (see Figure \ref{nc-fd-convergence}) and, most importantly, consistency between runs had been obtained (verifying against an ill-behaved system is a fruitless and painful endeavour), we added the sticking probability alteration, the simplest alteration to the DLA algorithm, verifying agreement between the traditional and probabilistic sticking models at $p_{stick} = 1$.
% TODO Rust vs C nc-fd-convergence graph

This then provided sufficient data for us to transition to our new generic framework, verifying that it agreed with this dataset to ensure correctness. In addition, unit tests for a number of key behaviours were written, benefiting greatly from the composability of the system.
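To make these grounding checks concrete, the following is a minimal sketch in Rust of a probabilistic sticking step together with the $p_{stick} = 1$ agreement check described above. It is illustrative only and not the framework's actual code: the names \texttt{try\_stick} and \texttt{p\_stick}, and the choice to pass the uniform sample in as an argument, are assumptions made here.

\begin{verbatim}
/// Decide whether a particle that has moved adjacent to the cluster
/// sticks. `uniform` is a sample drawn uniformly from [0, 1).
fn try_stick(p_stick: f64, uniform: f64) -> bool {
    // With p_stick = 1.0 this is always true, recovering the
    // traditional always-stick DLA rule; with p_stick < 1.0 the
    // particle may instead pass by and continue its walk.
    uniform < p_stick
}

#[cfg(test)]
mod tests {
    use super::*;

    /// Grounding check: at p_stick = 1 the probabilistic model must
    /// agree with the traditional model for every possible sample.
    #[test]
    fn p_stick_one_always_sticks() {
        for i in 0..1000 {
            let u = i as f64 / 1000.0; // sweep over [0, 1)
            assert!(try_stick(1.0, u));
        }
    }
}
\end{verbatim}

Passing the uniform sample in as an argument, rather than drawing it inside the function, keeps the step deterministic, which is what makes this style of unit test possible.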
In more detail, the process was as follows. We first took the initially provided code \cite{IPC} and made minimal alterations such that it ran in reasonable time and output data for analysis. For large configuration-space-exploring runs the code was run under GNU Parallel \cite{GNUParallel}, allowing for substantially improved throughput.
%TODO Should we reference git commits here? Or keep them all in one repo. Maybe a combo and have them as submodules in a report branch allowing for a linear history and also concurrent presentation for a report.

Once this minimal viable alteration was complete we implemented our first proper change to the system, introducing a sticking probability, $p_{stick}$, such that a particle is no longer guaranteed to stick when moving adjacent to the cluster, but instead has a chance of simply passing by. This represents a change in the first of the model's identified orthogonal behaviours, and the simplest to implement within the framework of the initially provided code. Its behaviour was verified against the minimal viable alteration to ensure correctness, and the resulting data was analysed to identify a quantitative relationship between $p_{stick}$ and the observables previously listed.
\todo{Do we want to show that bouncing has no real effect}
\todo{Do we want to talk about testing, for example that we get a uniform offset, etc.}
\todo{Do we have any theory to link for this? Probably in results but worth bearing in mind}

For further alterations a new codebase was engineered, initially containing only the sticking probability alteration, to allow for more efficient variation of the other two, more systematic, orthogonal characteristics of the system. To ensure fidelity of results we compared the behaviour and observables of this new system against those of the minimal viable alteration, as well as against its sticking probability variant. Once accuracy has been determined, the model will be embedded in spaces of higher dimension, with different values of $p_{stick}$, to observe changes in our desired behaviours, comparing against literature where possible. Finally, a system for more complex particle motion will be developed such that multiple walk modes can be plugged in alongside the standard random walk, for example by introducing an external force, among other variations (see the sketch at the end of this appendix).

\subsection*{Specific Method}

\begin{enumerate}
  \item Choice of \texttt{maxParticles} such that the measured observables (e.g.\ the fractal dimension) converge; see the sketch below.
\end{enumerate}
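On this point, one way such a convergence criterion might be realised is sketched below: repeatedly double the particle budget until the measured observable stabilises. Everything here is hypothetical, in particular the routine name \texttt{converged\_max\_particles} and the mock observable standing in for a real fractal-dimension measurement; the criterion actually used for the report's runs may differ.

\begin{verbatim}
/// Double the particle budget until the measured observable (for us,
/// the fractal dimension) changes by less than `tol` between
/// successive runs. `run` stands in for a full simulation plus
/// measurement.
fn converged_max_particles(
    run: impl Fn(usize) -> f64,
    start: usize,
    tol: f64,
    cap: usize,
) -> usize {
    let mut n = start;
    let mut prev = run(n);
    while n < cap {
        n *= 2;
        let next = run(n);
        if (next - prev).abs() < tol {
            return n; // stable to within tol: call it converged
        }
        prev = next;
    }
    n // cap reached without converging; treat with suspicion
}

fn main() {
    // Mock observable settling towards a fixed value as n grows,
    // standing in for a real fractal-dimension measurement.
    let mock = |n: usize| 1.7 + 10.0 / n as f64;
    let n = converged_max_particles(mock, 1_000, 1e-3, 1 << 24);
    println!("converged at maxParticles = {n}");
}
\end{verbatim}

Doubling rather than incrementing the budget keeps the number of (expensive) simulation runs logarithmic in the final value.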
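As for the plug-in walk modes mentioned earlier, the sketch below shows one possible shape for such an interface. The trait name \texttt{WalkMode}, the two-dimensional lattice representation, and the biased walk standing in for an external force are all assumptions made for illustration, not the framework's actual API.

\begin{verbatim}
/// One possible shape for a plug-in walk mode: given the particle's
/// current lattice position and a uniform sample from [0, 1),
/// produce its next position.
trait WalkMode {
    fn step(&self, pos: (i64, i64), uniform: f64) -> (i64, i64);
}

/// The standard unbiased random walk on a square lattice.
struct RandomWalk;

impl WalkMode for RandomWalk {
    fn step(&self, (x, y): (i64, i64), uniform: f64) -> (i64, i64) {
        // Map the sample onto one of the four lattice directions.
        match (uniform * 4.0) as u8 {
            0 => (x + 1, y),
            1 => (x - 1, y),
            2 => (x, y + 1),
            _ => (x, y - 1),
        }
    }
}

/// A walk under an external force: some fraction of steps are forced
/// in a fixed direction (e.g. drift under gravity).
struct BiasedWalk {
    bias: f64,             // probability of taking the forced step
    direction: (i64, i64), // the forced step itself
}

impl WalkMode for BiasedWalk {
    fn step(&self, pos: (i64, i64), uniform: f64) -> (i64, i64) {
        if uniform < self.bias {
            (pos.0 + self.direction.0, pos.1 + self.direction.1)
        } else {
            // Re-stretch the remaining probability mass over [0, 1)
            // and fall back to the unbiased walk.
            let u = (uniform - self.bias) / (1.0 - self.bias);
            RandomWalk.step(pos, u)
        }
    }
}

fn main() {
    let drift = BiasedWalk { bias: 0.25, direction: (0, -1) };
    println!("{:?}", drift.step((0, 0), 0.9)); // an unbiased step
}
\end{verbatim}

Keeping the rest of the model agnostic to how particles move, by holding the walk mode behind a trait like this, is also what allows each mode to be unit tested in isolation.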