\singlecolumnabstract{
Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Aenean commodo ligula eget dolor. Aenean massa. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Donec quam felis, ultricies nec, pellentesque eu, pretium quis, sem. Nulla consequat massa quis enim. Donec pede justo, fringilla vel, aliquet nec, vulputate eget, arcu. In enim justo, rhoncus ut, imperdiet a, venenatis vitae, justo. Nullam dictum felis eu pede mollis pretium. Integer tincidunt. Cras dapibus. Vivamus elementum semper nisi. Aenean vulputate eleifend tellus. Aenean leo ligula, porttitor eu, consequat vitae, eleifend ac, enim. Aliquam lorem ante, dapibus in, viverra quis, feugiat a, tellus. Phasellus viverra nulla ut metus varius laoreet. Quisque rutrum. Aenean imperdiet. Etiam ultricies nisi vel augue.
}

\medskip

\section*{Introduction}

Diffusion-limited aggregation (DLA) models processes where the diffusion of small particles onto a larger aggregate is the limiting factor in a system's growth. It is applicable to a wide range of systems such as A, B, and C.

\begin{figure}[htb]
\includegraphics[width=\columnwidth]{figures/dla-eg}
\caption{A $5000$ particle aggregate on a 2D square grid.}
\label{dla-eg}
\end{figure}

This process gives rise to structures which are fractal in nature (see, for example, Figure \ref{dla-eg}), i.e.\ objects which contain detailed structure at arbitrarily small scales. These objects are associated with a fractal dimension, $df$. This number describes how measures of the object, such as mass, scale when the object itself is scaled. For a non-fractal object this is its traditional dimension: if you double the scale of a square, you quadruple its area ($2^2$); if you double the scale of a sphere, you octuple its volume ($2^3$). For a DLA aggregate in a 2D embedding space, the \enquote{traditional} dimension would be 1, as the aggregate is not by nature a 2D object, but its fractal dimension is higher than that. Fractals are often associated with scale invariance, i.e.\ they have the same observables at various scales. This can be observed for DLA aggregates in Figure \ref{scale-comparison}, where we have two aggregates of different sizes, scaled so as to fill the same physical space.

In this paper we will consider a number of alterations to the standard DLA process and the effect they have on the fractal dimension of the resulting aggregate. This data will be generated by a number of computational models derived initially from the provided code \cite{IPC}, but altered and optimised as needed for the specific modelling problem.

\begin{figure}[htb]
\includegraphics[width=\columnwidth]{figures/scale-comparison.png}
\caption{$5000$ and $10000$ particle aggregates scaled to fill the same physical space. Note the similar structure and pattern between the two objects.}
\label{scale-comparison}
\end{figure}

\section*{Discussion}

As mentioned, the DLA process models the growth of an aggregate (otherwise known as a cluster) within a medium through which smaller, free-moving particles can diffuse. These particles move freely until they \enquote{stick} to the aggregate, adding to its extent. A high-level description of the DLA algorithm is as follows:

\begin{enumerate}
\item An initial seed aggregate is placed into the system, without loss of generality at the origin. This is normally a single particle.
\item A new particle is then released at some sufficient distance from the seeded aggregate.
\item This particle is then allowed to diffuse until it sticks to the aggregate.
\item At this point the new particle stops moving and becomes part of the aggregate, and a new particle is released.
\end{enumerate}

An actual implementation of this system will involve a number of computational parameters and simplifications. For example, particles are spawned at a consistent radius from the aggregate, $r_{\mathrm{add}}$, rather than existing uniformly throughout the embedding medium. Further, it is traditional to define a \enquote{kill circle} of radius $r_{\mathrm{kill}}$, beyond which we consider the particle lost and stop simulating it \cite[p.~27]{sanderDiffusionlimitedAggregationKinetic2000} (this is especially important in $d > 2$ dimensional spaces, where random walks are not guaranteed to return \cite{lawlerIntersectionsRandomWalks2013} and could instead wander off to infinity).

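For illustration only, a minimal Python sketch of this loop on a 2D square grid is given below; the names and default values, such as \texttt{r\_pad} and \texttt{kill\_factor}, are assumptions for this sketch and are not taken from the provided code.

\begin{verbatim}
import numpy as np

def dla_2d(n_particles, r_pad=5, kill_factor=3, seed=1):
    # Minimal, illustrative DLA sketch on a 2D square grid.
    rng = np.random.default_rng(seed)
    aggregate = {(0, 0)}   # step 1: single-particle seed at origin
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    r_max = 0.0
    while len(aggregate) < n_particles:
        r_add = r_max + r_pad          # spawn radius
        r_kill = kill_factor * r_add   # kill circle
        theta = rng.uniform(0.0, 2.0 * np.pi)
        x = int(r_add * np.cos(theta))  # step 2: release a walker
        y = int(r_add * np.sin(theta))
        while True:
            dx, dy = moves[rng.integers(4)]  # step 3: random walk
            x, y = x + dx, y + dy
            if x * x + y * y > r_kill ** 2:
                break  # walker lost; release a new one
            if any((x + mx, y + my) in aggregate
                   for mx, my in moves):
                aggregate.add((x, y))    # step 4: stick
                r_max = max(r_max, float(np.hypot(x, y)))
                break
    return aggregate
\end{verbatim}
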
While these parameters are interesting and important to the performant modelling of the system, we aim to choose them so as to maximise fidelity to the original physical system whilst minimising the computational effort required for simulation. From a modelling perspective, however, there are a number of interesting orthogonal behaviours within this loose algorithm description which we can vary to potentially provide interesting results.

The first is the seed which is used to start the aggregation process. The traditional choice of a single seed models the spontaneous growth of a cluster, but the system could easily be extended to diffusion onto a plate under the influence of an external force field \cite{tanInfluenceExternalField2000}, or to cluster-cluster aggregation, where there are multiple aggregate clusters which are themselves capable of moving \cite[pp.~210-211]{sanderDiffusionlimitedAggregationKinetic2000}.

The next behaviour is the spawning of the active particle. The choice of spawning location is traditionally made according to a uniform distribution which, barring any physical motivation from a particular system being modelled, seems the intuitive choice. However, the choice of spawning a single particle at a time is one which is open to more investigation. This is interesting both in the effect varying it will have on the behaviour of the system, and in whether it can be done in a way which minimises the aforementioned effects, as a speed-up for long-running simulations.

Another characteristic behaviour of the algorithm is the choice of diffusion mechanism. Traditionally this is implemented as an unbiased random walk, with each possible neighbour being equally likely. This could be altered, for example, by the introduction of an external force to the system.

Finally, we consider the space within which the DLA process takes place. Traditionally this is a 2D orthogonal gridded space; however, other gridded systems, such as hexagonal grids, can be used to explore any effect the choice of space has \cite[pp.~210-211]{sanderDiffusionlimitedAggregationKinetic2000}.

\section*{Method}

To this end we designed a generic system such that these different alterations of the traditional DLA model could be written, explored, and composed quickly, whilst generating sufficient data for statistical measurements. This involved separating the various orthogonal behaviours of the DLA algorithm into components which could be combined in a variety of ways, enabling a number of distinct models to exist concurrently within the same codebase.

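To give a flavour of this decomposition (this is an illustrative sketch only; the component names below are not those of the actual codebase), the orthogonal behaviours can be expressed as small interchangeable strategy objects which a generic driver composes:

\begin{verbatim}
from dataclasses import dataclass

# Illustrative decomposition: each orthogonal behaviour is a
# small strategy object, and a model is a combination of them.
@dataclass
class Model:
    seeder: callable   # builds the initial aggregate
    spawner: callable  # chooses where new walkers are released
    stepper: callable  # moves a walker one step
    sticker: callable  # decides whether a walker sticks

    def run(self, n_particles, rng):
        aggregate = self.seeder()
        while len(aggregate) < n_particles:
            pos = self.spawner(aggregate, rng)
            # A stepper returning None signals a "killed" walker.
            while pos is not None and \
                    not self.sticker(pos, aggregate, rng):
                pos = self.stepper(pos, rng)
            if pos is not None:
                aggregate.add(pos)
        return aggregate
\end{verbatim}
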
This code was based on the initially provided code, altered to allow for data extraction and optimised for performance. For large configuration-space exploration runs the code was run under GNU Parallel \cite{GNUParallel} to substantially improve throughput (as opposed to long-running, high-$N$ simulations, which were simply left to run).

The code was written such that it is reproducible based on a user-provided seed for the random number generator; this provided the needed balance between reproducibility and repeated runs. Instructions for building the specific models used in this paper can be found in the appendix.

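Concretely, reproducibility here means that all randomness is drawn from a single generator constructed from the user-provided seed, so repeating a run with the same seed yields an identical aggregate, while sweeping the seed gives independent runs. A sketch, reusing the illustrative \texttt{dla\_2d} function given earlier:

\begin{verbatim}
# Same seed => identical aggregate.
assert dla_2d(500, seed=7) == dla_2d(500, seed=7)

# Repeated runs for statistics simply sweep the seed.
aggregates = [dla_2d(500, seed=s) for s in range(1, 21)]
\end{verbatim}
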
\subsection*{Fractal Dimension Calculation}

We will use two methods of determining the fractal dimension of our aggregates. The first is the mass method and the second the box-count method \cite{smithFractalMethodsResults1996a}.

For the mass method we note that the number of particles in an aggregate, $N_c$, grows with the maximum radius $r_{\mathrm{max}}$ as

\begin{equation*}
N_c(r_{\mathrm{max}}) = (\alpha r_{\mathrm{max}})^{df} + \beta
\end{equation*}

where $\alpha, \beta$ are two unknown constants. Taking the large $r_{\mathrm{max}}$ limit we can take $(\alpha r_{\mathrm{max}})^{df} \gg \beta$ and hence

\begin{align*}
N_c(r_{\mathrm{max}}) &= (\alpha r_{\mathrm{max}})^{df} + \beta \\
&\approx (\alpha r_{\mathrm{max}})^{df} \\
\log N_c &\approx df \cdot \log\alpha + df \cdot \log r_{\mathrm{max}}
\end{align*}

from which we can perform curve fitting on our data.

In addition, if we take $\alpha = 1$ (as this is an entirely computational model we can set our length scale without loss of generality) we obtain

\begin{align*}
\log N_c &= df \cdot \log r_{\mathrm{max}} \\
df &= \frac{\log N_c}{\log r_{\mathrm{max}}}
\end{align*}

giving us a way to determine an \enquote{instantaneous} fractal dimension at any particular point in the modelling process.

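As a sketch of how these expressions are used in practice, assuming arrays \texttt{n\_c} and \texttt{r\_max} recorded after each particle is added (the names and the number of dropped points are illustrative), the instantaneous value follows directly from the second expression, while $df$ can also be obtained by fitting a straight line to the log-log data as in the first:

\begin{verbatim}
import numpy as np

def instantaneous_df(n_c, r_max):
    # df = log(N_c) / log(r_max) at each recorded point.
    return np.log(n_c) / np.log(r_max)

def fitted_df(n_c, r_max, skip=50):
    # Fit log N_c = df*log(alpha) + df*log(r_max);
    # the slope of the line is df. The first `skip`
    # points are dropped as they are dominated by noise.
    slope, intercept = np.polyfit(np.log(r_max[skip:]),
                                  np.log(n_c[skip:]), 1)
    return slope
\end{verbatim}
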
A second method for determining the fractal dimension is known as box-count \cite{smithFractalMethodsResults1996a}. This involves placing box-grids of various granularities onto the aggregate and observing the number of boxes which have at least one particle within them. The number of these boxes, $N$, should grow as

\begin{equation*}
N \propto w^{-d}
\end{equation*}

where $w$ is the granularity of the box-grid and $d$ is the fractal dimension we wish to find. By a similar process to before we end up with

\begin{equation*}
\log N = \log N_0 - d \log w
\end{equation*}

where $N_0$ is some proportionality constant. We expect a log-log plot of $(w, N)$ to exhibit two modes of behaviour:

\begin{enumerate}
\item a linear region, from which we can extract fractal dimension data;
\item a saturation region, where the box-grid is sufficiently fine that each box contains either one particle or none.
\end{enumerate}

We will fit on the linear region, dropping some data points to improve accuracy.

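A sketch of this procedure for a 2D aggregate stored as a set of integer grid coordinates might look as follows; the particular box widths, chosen here to stay within the linear region, are an assumption for illustration.

\begin{verbatim}
import numpy as np

def box_count_df(aggregate, widths=(2, 4, 8, 16, 32, 64)):
    # Estimate d from N(w) ~ w^(-d) by counting the occupied
    # boxes of width w for each granularity.
    pts = np.array(list(aggregate))
    counts = []
    for w in widths:
        # Which box does each particle fall into?
        boxes = {tuple(p) for p in pts // w}
        counts.append(len(boxes))
    # Fit log N = log N_0 - d log w; d is minus the slope.
    slope, _ = np.polyfit(np.log(widths), np.log(counts), 1)
    return -slope
\end{verbatim}
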
\todo{How much of this is actually in the Fractal Dimension section}
\section*{Results}

\begin{figure}[t]
\includegraphics[width=\columnwidth]{figures/rmax-n.png}
\caption{The growth of $N$ against $r_{\mathrm{max}}$ for $20$ runs of the standard DLA model. Also included is a line of best fit for the data, less the first $50$ points, which are removed to improve accuracy.}
\label{rmax-n}
\end{figure}

\subsection*{Preliminary Work: Testing Initial Implementation and Fractal Dimension Calculations}
\label{ii-fdc}

\begin{figure}[hbt]
\includegraphics[width=\columnwidth]{figures/nc-fd-convergence.png}
\caption{The convergence of the fractal dimension across $20$ runs of the standard DLA model, calculated using the mass method. The first $50$ data points are not included as the data contains too much noise to be meaningfully displayed. Also included in the figure is the value from the literature, $1.71 \pm 0.01$ \cite[Table 1, $\langle D(d = 2)\rangle$]{nicolas-carlockUniversalDimensionalityFunction2019}.}
\label{nc-fd-convergence}
\end{figure}

To start, we perform $20$ runs, with seeds $1, 2, \dots, 20$, of the standard DLA model using the minimally altered, initially provided code. The fractal dimension is calculated using the mass method and averaged across the $20$ runs. This is shown in Figure \ref{nc-fd-convergence} along with the result from the literature, $d = 1.71 \pm 0.01$ \cite[Table 1, $\langle D(d = 2)\rangle$]{nicolas-carlockUniversalDimensionalityFunction2019}.

Taking an average of the trailing $5000$ readings we come to a value of $df = 1.73$. As can be seen in the figure, while this diverges from the literature value (we suspect due to the gridded nature of the embedding space), the result is reasonable and consistent across runs. We consider this, along with the sourcing of the initially provided code, to be sufficient grounding to start our trust chain.

This also allows us to say with reasonable confidence that we can halt our model around $N_c = 5000$ as a trade-off between computational time and accuracy. This should, however, be verified for particular model variations.

\subsection*{Probabilistic Sticking}

\begin{figure}[hbt]
\includegraphics[width=\columnwidth]{figures/eg-across-sp/sp-range.png}
\caption{Here we see the result of three different DLA simulations with $p_{stick} = 0.1,0.5,1.0$ from left to right. Note the thickening of the arms at low probabilities.}
\label{sp-dla-comparison}
\end{figure}

\begin{figure}[hbt]
\includegraphics[width=\columnwidth]{figures/sp-fd}
\caption{The fractal dimension for the DLA system on a 2D grid lattice as a function of the sticking probability $p_{stick}$. This value was obtained from $100$ runs with different seeds, computing the fractal dimension using the mass method and taking a mean across the last $100$ measurements on a $2000$ particle cluster.}
\label{sp-fd}
\end{figure}

The first alteration we shall make to the DLA model is the introduction of a probabilistic component to the sticking behaviour. We parametrise this behaviour by a sticking probability $p_{stick} \in (0, 1]$, with the particle being given this probability to stick at each adjacent aggregate site (for example, if the particle were adjacent to two cells in the aggregate, then the probabilistic check would apply twice).

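In the walk loop this amounts to replacing the unconditional stick with a Bernoulli trial per contact; a minimal sketch (function and parameter names are illustrative):

\begin{verbatim}
def try_stick(pos, aggregate, p_stick, rng, moves):
    # One Bernoulli(p_stick) trial per adjacent aggregate cell.
    x, y = pos
    for dx, dy in moves:
        if (x + dx, y + dy) in aggregate \
                and rng.random() < p_stick:
            return True
    return False
\end{verbatim}
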
Comparing first the clusters for different values of $p_{stick}$, we can see in Figure \ref{sp-dla-comparison} a clear thickening of the arms at lower values of $p_{stick}$. This aligns with the data for the fractal dimension, as seen in Figure \ref{sp-fd}, with thicker arms bringing the cluster closer to a non-fractal two-dimensional object.

As discussed in the Appendix, \nameref{generic-dla}, this also provides the next link in the chain of grounding between the initially provided code and the new generic framework. Further details can be found in the aforementioned appendix.

\subsection*{Higher Dimensions}

The next alteration to explore is changing the embedding space to be higher dimensional. Here we use a k-dimensional tree structure to store the aggregate, as opposed to an array-based grid, allowing us to greatly reduce memory consumption ($O(\text{grid\_size}^D) \to O(n)$, where $n$ is the number of particles in the aggregate) whilst retaining fast lookups: building the tree takes $O(n \log n)$ time and a neighbour search takes $O(\log n)$ on average \cite{bentleyMultidimensionalBinarySearch1975}.

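For example (a sketch only, using SciPy's k-d tree rather than whatever structure the actual code uses), testing for adjacency in $D$ dimensions reduces to a radius query around the walker:

\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

# Aggregate stored as an (n, D) array of coordinates rather
# than a dense grid, so memory grows with n, not grid_size**D.
particles = np.array([[0, 0, 0]])
tree = cKDTree(particles)  # rebuilt as the aggregate grows,
                           # since cKDTree is immutable

def touches_aggregate(pos, max_dist=1.0):
    # True if any aggregate particle is within max_dist of pos.
    return len(tree.query_ball_point(pos, r=max_dist)) > 0
\end{verbatim}
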
To start, we model two styles of random walk: direct, where only those cells directly adjacent to the particle's current location are accessible; and off-axis, where all $26$ cells of the surrounding $3 \times 3 \times 3$ cube (bar the centre position) are available. The $N_c$, $df$ correspondence is shown in Figure \ref{3d-nc-fd-convergence}, where we can see that the choice of walk style has no discernible effect on the resulting fractal dimension.

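The two walk styles differ only in the set of moves offered to the walker at each step; in 3D, a sketch of the two neighbourhoods is:

\begin{verbatim}
from itertools import product

# Direct: the 6 face-adjacent cells (one step along one axis).
direct_moves = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                (0, -1, 0), (0, 0, 1), (0, 0, -1)]

# Off-axis: all 26 cells of the 3x3x3 cube, bar the centre.
off_axis_moves = [m for m in product((-1, 0, 1), repeat=3)
                  if m != (0, 0, 0)]
\end{verbatim}
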
Modelling the system across the range of $p_{stick}$ values, we obtain the results shown in Figure \ref{sp-fd-2d-3d}. These show a similar pattern to that seen in the 2D case.

\begin{figure}
\includegraphics[width=\columnwidth]{figures/sp-fd-2d-3d}
\caption{TODO}
\label{sp-fd-2d-3d}
\end{figure}

\begin{figure}
\includegraphics[width=\columnwidth]{figures/3d-nc-fd-convergence}
\caption{A comparison of direct and off-axis walks in 3 dimensions, using both the new framework (NF) and the initially provided code (IPC). Note a slight divergence between the NF and IPC lines but complete agreement between the direct and off-axis walks for the NF. Errors are not displayed as they are too small to be visible on this graph due to the large sample size.}
\label{3d-nc-fd-convergence}
\end{figure}

\begin{enumerate}
\item The next obvious extension is 3D
\item Try with on-axis and off-axis movement
\item See that off-axis has no effect but is quicker as it traverses space quicker (I mean also validate this)
\item Now try 3D + SP
\end{enumerate}

\subsection*{Continuous Space}

\begin{enumerate}
\item We get a divergence from theory; what happens if we use continuous space?
\end{enumerate}

\subsection*{Hexagonal}

\subsection*{External Force onto Wall}