Stash
This commit is contained in:
parent 17a624cc84
commit 9e0c932109

appendix.tex (32 changed lines)
@@ -1,33 +1 @@
\section*{Appendix}

\subsection*{Generic DLA Model}
\label{generic-dla}

The main tool used to generate the data in this report was the generic DLA framework written to support it. Here we will briefly discuss the process of creating and verifying this framework.

An intrinsic problem of developing computational models for exploratory work is the question of correctness: is some novel result a bug in your model, or exactly the interesting new behaviour you set out to explore?

To mitigate this issue the model for this system was created iteratively, with each step being checked against the last where their domains overlap (naturally the newer model is likely to cover a superset of the domain of the old model, so there will be some areas where they do not), alongside unit testing of specific behaviours to verify expectations\fnmark{unit-test-egs}.

\fntext{unit-test-egs}{Examples that came up in development include: ensuring our uniform random walks are indeed uniform, that they visit all the desired neighbours, etc.}
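The unit tests this footnote refers to live in the (unshown) framework; as an illustrative sketch only, a uniformity check of this kind could look like the following Python, where all names are hypothetical:

```python
import random
from collections import Counter

def uniform_step(pos, rng):
    """Take one step of an unbiased random walk on a 2D square grid."""
    x, y = pos
    return rng.choice([(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)])

def test_walk_is_uniform(trials=40_000, seed=1):
    """Every neighbour should be visited with roughly equal frequency."""
    rng = random.Random(seed)
    counts = Counter(uniform_step((0, 0), rng) for _ in range(trials))
    assert len(counts) == 4    # all four desired neighbours are visited
    for n in counts.values():  # each frequency is close to 1/4
        assert abs(n / trials - 0.25) < 0.02
```

A fixed seed keeps the test deterministic, so it cannot flake while still exercising the random-walk code path.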
This creates a chain of grounding between one model and the next, where our trust in model $N+1$ is grounded in our trust of model $N$ and our unit testing. This trust chain, however, depends on our trust of $N = 0$, the initially provided code. For this we rely on both the extensive history of the code and (rough) agreement with literature (see the results section for this comparison).

To this end, starting with the initially provided code we made the minimal alterations necessary for it to run in reasonable time\fnmark{macos-speed} and output the data required for later analysis. This was done explicitly with the goal of perturbing the initial code's behaviour as little as possible, including not performing relatively obvious performance improvements that might introduce bugs (the alterations we did make were predominantly code removal rather than code change). This allowed us to collect the data we needed and ground the initial model in theory.

\fntext{macos-speed}{When running on macOS systems the rendering code slows the model down by several orders of magnitude, making it unsuitable for large-scale modelling; it is therefore removed and replaced with the separate image generation tool discussed later.}

Once rough accordance with literature was obtained (see Figure \ref{nc-fd-convergence}) and, most importantly, consistency between runs (verifying against an ill-behaved system is a fruitless and painful endeavour), we added probabilistic sticking as the simplest alteration to the DLA algorithm, verifying agreement between the traditional and probabilistic sticking models at $p_{stick} = 1$. See Figure \ref{sp-fd-rust-vs-c} for this comparison.

\begin{figure}[t]
\includegraphics[width=\columnwidth]{figures/sp-fd-rust-vs-c.png}
\caption{A comparison of the reported fractal dimension of the probabilistic sticking extension of the Initially Provided Code (IPC + PS), in blue, and the New Framework with probabilistic sticking enabled (NF), in red. We can clearly see a high degree of agreement, grounding our new framework and the basic functions of the model.}
\label{sp-fd-rust-vs-c}
\end{figure}
This then provided sufficient data for us to transition to our new generic framework, verifying that it agreed with this dataset to ensure correctness.

\subsection*{Auxiliary Programs}

A number of auxiliary programs were also developed to assist in the running and visualisation of the model. Most notable was the image generation tool, which allowed the model to focus on one thing, modelling DLA, by separating out the generation of visualisations of the system. This was used to generate images such as that shown in Figure \ref{dla-eg}, which are useful both for presentation and for visual qualitative assessment of model correctness. Additional tools can be found under the tools executable of the rust-codebase.
@@ -1,5 +1,5 @@
\singlecolumnabstract{
Diffusion-limited aggregation (DLA) is a well known model simulating the growth of complex bodies across a range of disciplines. Modelling the process under a variety of conditions is useful in exploring its behaviours in novel applications. Here we discuss possible altered conditions for the DLA model and the development of a framework to test this behaviour. Under these conditions we determine a fractal dimension for the standard DLA model in 2D of $\mathrm{fd} = 1.735 \pm 0.020$ and in 3D of $\mathrm{fd} = 2.03 \pm 0.06$, as well as exploring the change in the fractal dimension in these two settings when introducing probabilistic sticking behaviour.
{\lipsum[1]}
}

\medskip

@@ -8,106 +8,60 @@ Diffusion-limited aggregation is a well known model simulating the growth of com
\section*{Introduction}

Diffusion-limited aggregation (DLA) models processes where the diffusion of small particles into a larger aggregate is the limiting factor in a system's growth. It is applicable to a wide range of fields and systems, such as the growth of dust particles, modelling dielectric breakdown, and urban growth.

The Ising Model is a simplified model of ferromagnetic materials used to explore behaviour across the magnetised--non-magnetised phase transition. It comprises a grid of spin-sites which can take a value of either $+1$ or $-1$, representing up and down spins respectively. These spins interact in a local manner, with their nearest neighbours in the grid, having an interaction energy equal to,

\begin{figure}[htb]
\includegraphics[width=\columnwidth]{figures/dla-eg}
\caption{A $5000$ particle aggregate on a 2D square grid, the lighter colours being placed later in the process.}
\label{dla-eg}
\end{figure}

$$
E_i = -J \sum_{\ip{ij}} s_i s_j
$$

This process gives rise to structures which are fractal in nature (for example see Figure \ref{dla-eg}), i.e. objects which contain detailed structure at arbitrarily small scales. These objects are associated with a fractal dimension, $\mathrm{fd}$ (occasionally written as $D$ or $d$). This number relates how measures of the object, such as mass, scale when the object itself is scaled. For a non-fractal object this will be its traditional dimension: if you double the scale of a square, you quadruple its area, $2^2$; if you double the scale of a sphere, you octuple its volume, $2^3$. For a DLA aggregate in a 2D embedding space, the \enquote{traditional} dimension would be 1; it is not by nature 2D, but its fractal dimension is higher than that. Fractals are often associated with scale invariance, i.e. they have the same observables at various scales. This can be observed for DLA aggregates in Figure \ref{scale-comparison}, where we have two aggregates of different sizes, scaled so as to fill the same physical space.
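The square-and-sphere scaling argument can be checked numerically for a non-fractal object; a small Python sketch (the disc and radii here are illustrative, not taken from the report's data):

```python
import math

def disc_mass(r):
    """Count integer grid sites inside a disc of radius r (its 'mass')."""
    return sum(1 for x in range(-r, r + 1) for y in range(-r, r + 1)
               if x * x + y * y <= r * r)

# Doubling the radius of a solid 2D object roughly quadruples its mass,
# so the measured exponent log(M(2r)/M(r)) / log(2) comes out near 2.
fd = math.log(disc_mass(128) / disc_mass(64)) / math.log(2)
assert abs(fd - 2.0) < 0.05
```

For a DLA aggregate the same measurement gives a non-integer value between 1 and 2.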
where $J > 0$ is the strength of the \todo{what are the units?} interaction, and $i, j$ are specific sites in the grid\fnmark{sites}. We can see trivially that this energy expression favours same-spin neighbours, which have $(\pm 1)^2 = 1 \implies E_i < 0$.

In this paper we will consider a number of alterations to the standard DLA process and the effect they have on the fractal dimension of the resulting aggregate. This data will be generated by a number of computational models derived initially from the code provided \cite{IPC} but altered and optimised as needed for the specific modelling problem.

\fntext{sites}{When performing a calculation a \emph{site} is mapped to two integers $(x, y) \in \N^2$ representing the location of this site in the overall grid; in the mathematics, however, we abstract this to a single index for conceptual ease.}

\begin{figure}[htb]
\includegraphics[width=\columnwidth]{figures/scale-comparison.png}
\caption{A $5000$ and a $10000$ particle aggregate scaled to fill the same physical space. Note the similar structure and pattern between the two objects.}
\label{scale-comparison}
\end{figure}

---
A phase transition is defined by a singularity in a thermodynamic potential or its derivatives.

When crossing a critical point we use an order parameter, in this case the mean magnetisation $m$, to characterise which phase we are in. In the ordered phase this parameter has a non-zero value; in the disordered phase it has a zero value, up to thermodynamic variance.

A phase transition occurs when there is a singularity in the free energy or one of its derivatives. What is often visible is a sharp change in the properties of a substance. The transitions from liquid to gas, from a normal conductor to a superconductor, or from paramagnet to ferromagnet are common examples. The phase diagram of a typical fluid

In chemistry, thermodynamics, and other related fields, a phase transition (or phase change) is the physical process of transition between one state of a medium and another. Commonly the term is used to refer to changes among the basic states of matter: solid, liquid, and gas, and in rare cases, plasma. A phase of a thermodynamic system and the states of matter have uniform physical properties. During a phase transition of a given medium, certain properties of the medium change as a result of the change of external conditions, such as temperature or pressure. This can be a discontinuous change; for example, a liquid may become gas upon heating to its boiling point, resulting in an abrupt change in volume. The identification of the external conditions at which a transformation occurs defines the phase transition point.

A wide variety of physical systems undergo rearrangements of their internal constituents in response to the thermodynamic conditions to which they are subject. Two classic examples of systems displaying such phase transitions are the ferromagnet and fluid systems. As the temperature of a ferromagnet is increased, its magnetic moment is observed to decrease smoothly, until at a certain temperature, known as the critical temperature, it vanishes altogether.
\begin{enumerate}
\item What does the Ising Model model?
\item Why do we care?
\item What is the Ising Model?
\end{enumerate}
The Ising model (or Lenz--Ising model), named after the physicists Ernst Ising and Wilhelm Lenz, is a mathematical model of ferromagnetism in statistical mechanics. The model consists of discrete variables that represent magnetic dipole moments of atomic \enquote{spins} that can be in one of two states ($+1$ or $-1$). The spins are arranged in a graph, usually a lattice (where the local structure repeats periodically in all directions), allowing each spin to interact with its neighbours. Neighbouring spins that agree have a lower energy than those that disagree; the system tends to the lowest energy, but heat disturbs this tendency, thus creating the possibility of different structural phases. The model allows the identification of phase transitions as a simplified model of reality. The two-dimensional square-lattice Ising model is one of the simplest statistical models to show a phase transition.

The Ising model was invented by the physicist Wilhelm Lenz (1920), who gave it as a problem to his student Ernst Ising. The one-dimensional Ising model was solved by Ising (1925) alone in his 1924 thesis; it has no phase transition. The two-dimensional square-lattice Ising model is much harder and was only given an analytic description much later, by Lars Onsager (1944). It is usually solved by a transfer-matrix method, although there exist different approaches, more related to quantum field theory.

In dimensions greater than four, the phase transition of the Ising model is described by mean-field theory. The Ising model for greater dimensions was also explored with respect to various tree topologies in the late 1970s, culminating in an exact solution of the zero-field, time-independent Barth (1981) model for closed Cayley trees of arbitrary branching ratio, and thereby arbitrarily large dimensionality within tree branches. The solution to this model exhibited a new, unusual phase transition behaviour, along with non-vanishing long-range and nearest-neighbour spin-spin correlations, deemed relevant to large neural networks as one of its possible applications.

The Ising problem without an external field can be equivalently formulated as a graph maximum cut (Max-Cut) problem that can be solved via combinatorial optimisation.
\section*{Discussion}

As mentioned, the DLA process models the growth of an aggregate (otherwise known as a cluster) within a medium through which smaller free-moving particles can diffuse. These particles move freely until they \enquote{stick} to the aggregate, adding to its extent. A high level description of the DLA algorithm is as follows:

\begin{enumerate}
\item An initial seed aggregate is placed into the system, without loss of generality at the origin. This is normally a single particle.
\item A new particle is then released at some sufficient distance from the seeded aggregate.
\item This particle is allowed to diffuse until it sticks to the aggregate.
\item At this point the new particle stops moving and becomes part of the aggregate, and a new particle is released.
\end{enumerate}
An actual implementation of this system will involve a number of computational parameters and simplifications. For example, particles are spawned at a consistent radius from the aggregate, $r_{\mathrm{add}}$, rather than existing uniformly throughout the embedding medium. Further, it is traditional to define a \enquote{kill circle}, of radius $r_{\mathrm{kill}}$, past which we consider the particle lost and stop simulating it \cite[p.~27]{sanderDiffusionlimitedAggregationKinetic2000} (this is especially important in $d > 2$ dimensional spaces where random walks are not guaranteed to be recurrent \cite{lawlerIntersectionsRandomWalks2013} and could instead tend off to infinity).
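Putting the four algorithm steps and the two radii together, the loop can be sketched in a few lines. This is an illustrative Python version, not the report's Rust framework; the padding and kill-factor values are assumptions:

```python
import math
import random

def dla(n_particles, r_add_pad=5.0, r_kill_factor=3.0, seed=0):
    """Grow a 2D on-grid DLA aggregate (illustrative sketch only).

    Walkers spawn on a circle of radius r_add just outside the aggregate
    and are considered lost past the kill circle r_kill_factor * r_add.
    """
    rng = random.Random(seed)
    aggregate = {(0, 0)}                         # 1. single-particle seed
    r_max = 0.0
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    while len(aggregate) < n_particles:
        r_add = r_max + r_add_pad
        theta = rng.uniform(0.0, 2.0 * math.pi)  # 2. spawn on the circle
        x = round(r_add * math.cos(theta))
        y = round(r_add * math.sin(theta))
        while True:
            dx, dy = rng.choice(steps)           # 3. unbiased diffusion
            x, y = x + dx, y + dy
            if math.hypot(x, y) > r_kill_factor * r_add:
                break                            # lost: respawn a walker
            if any((x + ex, y + ey) in aggregate for ex, ey in steps):
                aggregate.add((x, y))            # 4. stick, join aggregate
                r_max = max(r_max, math.hypot(x, y))
                break
    return aggregate
```

Spawning on a circle just outside $r_{\mathrm{max}}$ and killing lost walkers are the computational simplifications described above; both trade a little fidelity for a large reduction in wasted walk steps.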
While these are interesting and important to the performant modelling of the system, we aim to choose them so as to maximise fidelity to the original physical system whilst minimising the computational effort required for simulation. From a modelling perspective, however, there are a number of interesting orthogonal behaviours within this loose algorithm description which we can vary to potentially provide interesting results.

The first is the seed which is used to start the aggregation process. The traditional choice of a single seed models the spontaneous growth of a cluster, but the system could easily be extended to diffusion onto a plate under the influence of an external force field \cite{tanInfluenceExternalField2000}, or to cluster-cluster aggregation, where there are multiple aggregate clusters which are themselves capable of moving \cite[pp.~210-211]{sanderDiffusionlimitedAggregationKinetic2000}.

The next behaviour is the spawning of the active particle. The spawning location is traditionally chosen according to a uniform distribution which, barring any physical motivation from a particular system being modelled, seems the intuitive choice. However, the choice of spawning a single particle at a time is one which is open to more investigation. This is interesting both for the effect varying it will have on the behaviour of the system, and for whether it can be done in a way that minimises the aforementioned effects, as a speed-up for long-running simulations.

Another characteristic behaviour of the algorithm is the choice of diffusion mechanism. Traditionally this is implemented as a random walk, with each possible neighbour being equally likely. This could be altered, for example, by the introduction of an external force to the system.

Finally, we arrive at the last characteristic we will consider: the space that the DLA process takes place within. Traditionally this is a 2D orthogonal gridded space; however, other gridded systems, such as hexagonal grids, can be used to explore any effect the space has \cite[pp.~210-211]{sanderDiffusionlimitedAggregationKinetic2000}.

We will explore a number of these alterations in the report that follows.
\section*{Method}

To this end we designed a generic system such that these different alterations of the traditional DLA model could be written, explored, and composed quickly, whilst generating sufficient data for statistical measurements. This involved separating the various orthogonal behaviours of the DLA algorithm into components which could be combined in a variety of ways, enabling a number of distinct models to coexist within the same codebase.

The Ising Model was implemented on a square cell grid with periodic boundary conditions.

This code was based on the initially provided code (IPC), altered to allow for data extraction and optimised for performance. For large configuration-space exploration runs the code was run under GNU Parallel \nocite{GNUParallel} to allow for substantially improved throughput (as opposed to long-running, high $N$ simulations, which were simply left to run).

The code was written such that it is reproducible based on a user-provided seed for the random number generator; this provided the needed balance between reproducibility and repeated runs. Instructions for building the specific models used in this paper can be found in the appendix.
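The reproducibility property described here can be sketched in Python; the walk is a toy stand-in for the models in the report:

```python
import random

def seeded_walk(n_steps, seed):
    """A toy random walk whose path is fully determined by the seed."""
    rng = random.Random(seed)
    x, y = 0, 0
    path = [(x, y)]
    for _ in range(n_steps):
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x, y = x + dx, y + dy
        path.append((x, y))
    return path

# The same seed reproduces the same trajectory exactly, while distinct
# seeds give statistically independent runs: the balance between
# reproducibility and repeated measurements mentioned above.
assert seeded_walk(100, seed=42) == seeded_walk(100, seed=42)
assert seeded_walk(100, seed=42) != seeded_walk(100, seed=43)
```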
\subsection*{Convergence}

\subsection*{Fractal Dimension Calculation}

\begin{figure}[hbt]
\includegraphics[width=\columnwidth]{./figures/convergence-rate-varying-beta.png}
\caption{Her}
\end{figure}

We will use two methods of determining the fractal dimension of our aggregates: the first is the mass method and the second the box-count method \cite{smithFractalMethodsResults1996a}.
For the mass method we note that the number of particles in an aggregate, $N_c$, grows with the maximum radius $r_\mathrm{max}$ as

\begin{equation*}
N_c(r_{\mathrm{max}}) = (\alpha r_{\mathrm{max}})^{\mathrm{fd}} + \beta,
\end{equation*}

where $\alpha, \beta$ are two unknown constants. Taking the large $r_\mathrm{max}$ limit we have $(\alpha r_{\mathrm{max}})^{\mathrm{fd}} \gg \beta$ and hence,

\begin{align*}
N_c(r_{\mathrm{max}}) &= (\alpha r_{\mathrm{max}})^{\mathrm{fd}} + \beta \\
&\approx (\alpha r_{\mathrm{max}})^{\mathrm{fd}} \\
\log N_c &\approx \mathrm{fd} \cdot \log\alpha + \mathrm{fd} \cdot \log r_{\mathrm{max}}
\end{align*}
from which we can perform curve fitting on our data.

In addition, if we take $\alpha = 1$ (as this is an entirely computational model we can set our length scales without loss of generality), we obtain,

\begin{align*}
\log N_c &= \mathrm{fd} \cdot \log r_{\mathrm{max}} \\
\mathrm{fd} &= \frac{\log N_c}{\log r_{\mathrm{max}}}
\end{align*}

giving us a way to determine an \enquote{instantaneous} fractal dimension at any particular point in the modelling process.
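With $\alpha = 1$ the instantaneous estimate is a one-liner; a Python sketch, where the sample values are illustrative rather than measured data:

```python
import math

def instantaneous_fd(n_c, r_max):
    """Instantaneous fractal dimension, fd = log(N_c) / log(r_max)."""
    return math.log(n_c) / math.log(r_max)

# For a solid disc N_c grows as r^2, so the estimate recovers fd = 2;
# a hypothetical DLA-like pair of values lands between 1 and 2.
assert abs(instantaneous_fd(100, 10.0) - 2.0) < 1e-9
assert 1.0 < instantaneous_fd(5000, 135.0) < 2.0
```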
% TODO If we don't end up using this, bin this section it is just going to be

A second method for determining the fractal dimension is known as box-count \cite{smithFractalMethodsResults1996a}. This involves placing box-grids of various granularities onto the aggregate and observing the number of boxes which have at least one particle within them. The number of these boxes, $N$, should grow as,

\begin{equation*}
N \propto w^{-d}
\end{equation*}

where $w$ is the granularity of the box-grid and $d$ is the fractal dimension we wish to find. By a similar process as before we end up with,
\begin{equation*}
\log N = \log N_0 - d \log w
\end{equation*}

where $N_0$ is some proportionality constant. We expect a plot of $(w, N)$ to exhibit two modes of behaviour:

\begin{enumerate}
\item A linear region, from which we can extract fractal dimension data.
\item A saturation region, where the box-grid is sufficiently fine that each box contains either one particle or none.
\end{enumerate}

We will fit on the linear region, dropping some data for accuracy.
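The box-count fit described above can be sketched as follows; this is an illustrative Python version (the real analysis additionally drops the saturated region before fitting, as just noted):

```python
import math

def box_count(points, w):
    """Number of boxes of side w containing at least one particle."""
    return len({(math.floor(x / w), math.floor(y / w)) for x, y in points})

def box_count_fd(points, widths):
    """Fit log N against log w by least squares; fd is minus the slope."""
    xs = [math.log(w) for w in widths]
    ys = [math.log(box_count(points, w)) for w in widths]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope

# Sanity checks on non-fractal objects: a filled square gives d = 2, a
# straight line d = 1, provided the widths stay in the linear region.
square = [(x, y) for x in range(64) for y in range(64)]
assert abs(box_count_fd(square, [2, 4, 8, 16]) - 2.0) < 0.01
```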
prelude.tex (14 changed lines)
@@ -46,6 +46,7 @@
\renewcommand{\cftsecfont}{\rmfamily\mdseries\upshape}
\renewcommand{\cftsecpagefont}{\rmfamily\mdseries\upshape} % No bold!
\usepackage{authblk}
\usepackage{lipsum}

\usepackage{hyperref}

@@ -71,4 +72,17 @@

%%% END Article customizations

\include{shortcuts}
\newcommand{\nab}{\nabla}
\newcommand{\divrg}{\nab \cdot}
\newcommand{\curl}{\nab \cp}
\newcommand{\lap}{\Delta}
\newcommand{\p}{\partial}
\renewcommand{\d}{\mathrm{d}}
\newcommand{\rd}{~\mathrm{d}}
\newcommand{\ip}[1]{\left\langle#1\right\rangle}
\newcommand{\N}{\mathbb{N}}
\newcommand{\R}{\mathbb{R}}

%%% The "real" document content comes below...

@@ -6,13 +6,15 @@
\addbibresource{static.bib}
\setlength{\marginparwidth}{1.2cm}
\title{\textbf{Modelling Diffusion Limited Aggregation under a Variety of Conditions}}
\title{\textbf{Comparison of Models for Ferromagnetic Systems near the Critical Point}}
\author{Candidate Number: 24829}
\affil{Department of Physics, University of Bath}
\date{March 21, 2023} % Due Date
\date{May 12, 2023} % Due Date

\begin{document}

\input{introduction-dicussion-method.tex}

\input{results.tex}

\printbibliography
results.tex (78 changed lines)
@@ -1,82 +1,4 @@
\section*{Results}
\begin{figure}[t]
\includegraphics[width=\columnwidth]{figures/rmax-n.png}
\caption{The growth of $N$ vs $r_{\mathrm{max}}$ for $20$ runs of the standard DLA model to a maximum value of $N_C = 10000$. Also included is a line of best fit for the data, less the first $50$ points, which are removed to improve accuracy, with form $\log N_C = a_0 + \mathrm{fd} \cdot \log r_{\mathrm{max}}$ and coefficients $\mathrm{fd} = 1.7685 \pm 0.0004$, $a_0 = -0.1815 \pm 0.002$. % TODO Verify rounding
}
\label{rmax-n}
\end{figure}

\subsection*{Preliminary Work: Testing Initial Implementation and Fractal Dimension Calculations}
\label{ii-fdc}
\begin{figure}[hbt]
\includegraphics[width=\columnwidth]{figures/nc-fd-convergence.png}
\caption{The convergence of the fractal dimension over $20$ runs of the standard DLA model, using the mass method. The first $50$ data points are not included as the data contains too much noise to be meaningfully displayed. Also included in the figure is the value from literature, $\mathrm{fd} = 1.71 \pm 0.01$ from \cite[Table 1, $\langle D(d = 2)\rangle$]{nicolas-carlockUniversalDimensionalityFunction2019}.}
\label{nc-fd-convergence}
\end{figure}

To start, we perform $20$ runs, with seeds $1, 2, \dots, 20$, of the standard DLA model using the minimally altered IPC. We use both the instantaneous and line-fitting mass methods, as shown in Figure \ref{nc-fd-convergence} and Figure \ref{rmax-n} respectively. For the instantaneous case, the fractal dimension is calculated using the mass method and averaged across the $20$ runs. This is shown in Figure \ref{nc-fd-convergence}, along with the result from literature, $\mathrm{fd} = 1.71 \pm 0.01$ \cite[Table 1, $\langle D(d = 2)\rangle$]{nicolas-carlockUniversalDimensionalityFunction2019}.
% TODO Errors
Taking an average of the trailing $5000$ readings we come to a value of $\mathrm{fd} = 1.735 \pm 0.020$. As can be seen in the figure, while this diverges from the literature (we suspect due to the gridded nature of the embedding space), the result is reasonable and consistent across runs. We consider this, along with the sourcing of the IPC, to be sufficient grounding for the start of our trust chain.

This also allows us to say with reasonable confidence that we can halt our model around $N_C = 5000$ as a trade-off between computational time and accuracy. However, care must be taken to verify this is appropriate for any particular model variation.
\subsection*{Probabilistic Sticking}

\begin{figure}[hbt]
\includegraphics[width=\columnwidth]{figures/eg-across-sp/sp-range.png}
\caption{The results of three different DLA simulations with $p_{stick} = 0.1, 0.5, 1.0$ from left to right. Note the thickening of the arms at low probabilities.}
\label{sp-dla-comparison}
\end{figure}

\begin{figure}[hbt]
\includegraphics[width=\columnwidth]{figures/sp-fd}
\caption{The fractal dimension for the DLA system on a 2D grid lattice against sticking probability $p_{stick}$. This data was obtained in two batches: in the $p_{stick} \in [0.1, 1]$ range $100$ samples were taken with different seeds at $N_C = 2000$, the fractal dimension being computed by the mass method; for the $p_{stick} \in (0.001, 0.1)$ range $100$ samples of $N_C = 5000$ clusters were used.
}
\label{sp-fd}
\end{figure}

The first alteration we shall make to the DLA model is the introduction of a probabilistic component to the sticking behaviour. We parametrise this behaviour by a sticking probability $p_{stick} \in (0, 1]$, with the particle being given this probability to stick at each adjacent site (for example, if the particle were adjacent to two cells in the aggregate, then the probabilistic aspect would apply twice).
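The per-adjacent-site application of $p_{stick}$ can be sketched as follows (illustrative Python; the framework itself is written in Rust and not shown here):

```python
import random

def sticks(pos, aggregate, p_stick, rng):
    """Roll the sticking probability once per occupied adjacent cell."""
    x, y = pos
    neighbours = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    occupied = sum(1 for n in neighbours if n in aggregate)
    # With k occupied neighbours the particle gets k independent chances,
    # so its effective sticking probability is 1 - (1 - p_stick)**k.
    return any(rng.random() < p_stick for _ in range(occupied))
```

At $p_{stick} = 1$ this reduces to the traditional rule of sticking on first contact, which is the limit used to verify agreement between the two models.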
Comparing first the clusters for different values of $p_{stick}$, we can see in Figure \ref{sp-dla-comparison} a clear thickening of the arms at lower values of $p_{stick}$. This aligns with the data for the fractal dimension, as seen in Figure \ref{sp-fd}, with thicker arms bringing the cluster closer to a non-fractal two-dimensional object.

In the low $p_{stick}$ domain we record values of $\mathrm{fd} > 2$, greater than the 2D space the aggregates are embedded within. This is unexpected and points towards a possible failure of our mass-method fractal dimension calculation. More work and analysis is required to verify these results.

%As discussed in the Appendix, \nameref{generic-dla}, this also provides the next chain of grounding between the initially provided code, and the new generic framework (see the aforementioned appendix for more).

\subsection*{Higher Dimensions}
\begin{figure}[hbt]
\includegraphics[width=\columnwidth]{figures/3d-eg}
\caption{A 3D DLA aggregate on a 3D orthogonal grid, with $N_C = 5000$, coloured by deposition time. Note that the appearance of the particles as spheres is an artifact of the rendering process; they are in fact cubes. Here we can observe the expected tendril structure of a DLA aggregate.}
\label{3d-eg}
\end{figure}

\begin{figure}[hbt]
\includegraphics[width=\columnwidth]{figures/sp-fd-2d-3d}
\caption{A comparison of the fractal dimension of DLA aggregates in 2- and 3-dimensional embedding spaces. The datasets were obtained by averaging $100$ and $200$ runs for 2D and 3D respectively, both with data recorded at increments of $\Delta p_{stick} \approx 0.1$, and an aggregate size of $2000$.}
\label{sp-fd-2d-3d}
\end{figure}
%\begin{figure}[hbt]
%\includegraphics[width=\columnwidth]{figures/3d-nc-fd-convergence}
%\caption{A comparison of direct and off-axis walks in 3 dimensions, using both the new framework (NF) and the initially provided code (IPC). Note a slight divergence between the NF and IPC lines but complete agreement between the direct and off-axis walks for the NF. Errors are not displayed as they are too small to be visible on this graph due to the large sample size. Also included is the result from literature, $\mathrm{fd} = 2.51 \pm 0.01$ \cite[Table 1, $\langle D(d = 3)\rangle$]{nicolas-carlockUniversalDimensionalityFunction2019}.}
%\label{3d-nc-fd-convergence}
%\end{figure}

The next alteration to explore is changing the embedding space to be higher dimensional, an example of which can be seen in Figure \ref{3d-eg}. Here we use a k-dimensional tree structure to store the aggregate, as opposed to an array-based grid, allowing us to greatly reduce memory consumption ($O(\text{grid\_size}^D) \to O(n)$, where $n$ is the number of particles in the aggregate) whilst retaining a strong access and search time complexity of $O(\log n)$ \cite{bentleyMultidimensionalBinarySearch1975}.
To start we model two forms of random walk: direct, where the particle can only access directly adjacent cells, and off-axis, where the full $3 \times 3 \times 3$ cubic neighbourhood (bar the centre position) is available. These behave identically to each other, varying only slightly from a naive implementation in the IPC included to provide assurance of the correct behaviour. These off-axis walks do however offer a speed boost, as the larger range of motion leads to faster movement within the space.

Modelling the system across the range of $p_{stick}$ we obtain results as shown in Figure \ref{sp-fd-2d-3d}. These show a similar pattern to that seen in the 2D case of Figure \ref{sp-fd}. We note that whilst these lines are similar, they are not parallel, showing distinct behaviour.

Note that the divergence from the value expected from the literature is greater here than in the 2D case, with a reported fractal dimension of $\mathrm{fd} = 2.03 \pm 0.06$. This, along with our inability to find a satisfactory analytic form for this behaviour, suggests further analysis on different grids is required.

% Extensions: Do I want to do higher dimensions still, 4d?
% Look at theory to see if I can find a curve for these sp-fd graphs or at the very least note similarities and differences between them. "Given the erroneous behaviour for low sp we are uncertain as to the correctness." Maybe take another crack at boxcount since you've mentioned it and it might be interesting.
\section*{Conclusion}

In this report we have presented findings for the fractal dimension of DLA aggregates in 2D and 3D on orthogonal grids, as well as qualitative assessments of the variation of the fractal dimension across a range of sticking probabilities. In addition, we have validated the framework used to be consistent with previous models, allowing for quick iteration. Future work is required to determine analytic or physical explanations for the data presented, specifically the $(p_{stick}, \mathrm{fd})$ relation, in addition to identifying the cause of the divergence of the reported results, and of previous models, from literature, possibly through exploring different geometries.
shortcuts.tex (new file, 0 lines)