Fix draft

2018-03-21 22:58:54 +00:00
parent b640946dd7
commit 15d0648e23


@@ -58,7 +58,7 @@
\maketitle
\begin{abstract}
In the past few years, data visualisation on the web is becoming more popular. D3, a JavaScript library, has a module that focuses on simulating physical forces on particles for creating force-directed layouts. However, the currently-available algorithms do not scale very well for multidimensional scaling. To solve this problem, the Hybrid Layout algorithm and its pivot-based near neighbour search enhancement were implemented and integrated into the D3 module. The existing D3 implementation and Bartasius' implementation of Chalmers' 1996 algorithm were also optimised for the use case and compared against the Hybrid algorithm. Furthermore, experiments were performed to evaluate the impact of each user-defined parameter. The results show that for larger data sets, the Hybrid Layout consistently produces fairly good layouts in a shorter amount of time. It is also capable of working on larger data sets, compared to D3's algorithm.
\end{abstract}
%\educationalconsent
@@ -80,7 +80,7 @@ In the past few years, data visualisation on the web are becoming more popular.
\section{Motivation}
In the age of Web 2.0, new data is being generated at an overwhelming speed. Raw data made up of numbers, letters, and boolean values is hard for humans to comprehend and to infer any relations from. To make it easier and faster for us, humans, to understand a data set, various techniques were created to map raw data to a visual representation.
Many data sets have a large number of features, while humans perceive 2D illustrations best, leading to the challenge of dimensionality reduction. There are many approaches to this problem, each with its own pros and cons. One of these approaches is multidimensional scaling (MDS), which highlights the similarity and clustering of the data to the audience. The idea is to map each data point to a particle in 2D space and place the particles such that the distance between each pair of particles in 2D space represents their distance in high-dimensional space.
\begin{figure}[h]
\centering
@@ -102,7 +102,7 @@ One of the most popular free open-source data visualisation library is Data Driv
The D3-Force module, part of the D3 library, provides a framework for simulating physical forces on particles. Along with that, a spring-model algorithm was also implemented to allow for the creation of a force-directed layout. While the implementation is fast for several thousand particles, it does not scale well with larger data sets, both in terms of memory and time complexity. By solving these issues, the use cases covered by the module would expand to support more complicated data sets. The motivation of the project is to address these scalability issues with better algorithms from the School of Computing Science.
\section{Project Description}
The University of Glasgow's School of Computing Science has some of the fastest force-directed layout drawing algorithms in the world. Some of these are Chalmers' 1996 Neighbour and Sampling technique\cite{Algo1996}, the 2002 Hybrid Layout algorithm\cite{Algo2002} and its 2003 enhanced variant\cite{Algo2003}. These algorithms provide huge improvements, both in terms of speed and memory complexity. However, they are only implemented in an older version of Java, which limits their practical use. In 2017, Bartasius implemented the 1996 algorithm along with several others, and a visual interface in order to compare each algorithm against the others\cite{LastYear}.
In short, the goal of the project is to
\begin{itemize}
@@ -152,18 +152,18 @@ With the emergence of more complex data, each having many features, the need of
\end{figure}
Unlike the two previously mentioned techniques, multidimensional scaling (MDS) takes another approach, aiming to reduce the data dimensionality while preserving the level of similarity, rather than the values themselves.
Classical MDS (also known as Principal Coordinates Analysis or PCoA)\cite{cMDS} achieves this goal by creating new dimensions for scatter-plotting, each made up of a linear combination of the original dimensions, while minimising a loss function called strain. For simple cases, it can be thought of as finding a camera angle to project the high-dimensional scatterplot onto a 2D image.
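As a brief reminder of the mechanics (standard textbook material rather than anything specific to this project), classical MDS double-centres the matrix of squared dissimilarities and takes the leading eigenvectors as the new coordinates:
\[
B = -\frac{1}{2} J D^{(2)} J, \qquad J = I - \frac{1}{n}\mathbf{1}\mathbf{1}^{T}, \qquad X = V_k \Lambda_k^{1/2}
\]
where $D^{(2)}$ holds the squared dissimilarities and $V_k$, $\Lambda_k$ are the top $k$ eigenvectors and eigenvalues of $B$ (with $k=2$ for a scatterplot).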
Strain assumes Euclidean distances, making it incompatible with other dissimilarity ratings. Metric MDS improves upon classical MDS by generalising the solution to support a variety of loss functions\cite{mcMDS}. However, the disadvantage of $O(N^3)$ time complexity still remains, and a linear combination may not be enough for some data sets.
This project focuses on several non-linear MDS algorithms using force-directed layout. The idea is to attach each pair of data points with a spring whose equilibrium length is proportional to the high-dimensional distance between the two points, although the spring model we know today does not necessarily use Hooke's law to calculate the spring force\cite{Eades}. Several improvements have been introduced to the idea over the past decades. For example, the concept of 'temperature' proposed by Fruchterman and Reingold\cite{SpringTemp} solves the problem where the system is unable to reach an equilibrium state and improves execution time. The project focuses on an iterative spring-model-based algorithm introduced by Chalmers\cite{Algo1996} and the Hybrid approach, which will be detailed in subsequent sections of this chapter.
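Informally, the temperature $t$ caps how far any node may move in a single iteration and is lowered over time, so the layout settles instead of oscillating. A sketch of the idea from \cite{SpringTemp} (not an exact reproduction of their formulation) is
\[
p_i \leftarrow p_i + \frac{\Delta_i}{\|\Delta_i\|}\cdot\min\left(\|\Delta_i\|,\, t\right), \qquad t \leftarrow \mathrm{cool}(t)
\]
where $\Delta_i$ is the net displacement requested by the spring forces acting on node $i$.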
A number of other non-linear MDS algorithms have also been introduced. t-distributed Stochastic Neighbour Embedding (t-SNE)\cite{tSNE}, for example, is very popular in the field of machine learning. It is based on SNE\cite{SNE}, where probability distributions are constructed over each pair of data points such that more similar objects have a higher probability of being picked. The distributions derived from the high-dimensional and low-dimensional distances are compared using the Kullback-Leibler divergence, a measure of the difference between two probability distributions. The 2D position of each data point is then iteratively adjusted to minimise this divergence. The biggest downside is that it has both time and memory complexity of $O(N^2)$ per iteration. In 2017, Bartasius\cite{LastYear} implemented t-SNE in D3 and found that not only is it the slowest algorithm in his test, the produced layout is also many times worse in terms of Stress, a metric which will be introduced in section \ref{sec:bg_metrics}. However, comparing the Stress of a t-SNE layout is unfair, as t-SNE is designed to optimise the Kullback-Leibler divergence and not Stress.
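For reference, the cost function optimised by t-SNE takes the standard form of the Kullback-Leibler divergence between the high-dimensional distribution $P$ and the low-dimensional distribution $Q$:
\[
C = \mathrm{KL}(P \,\|\, Q) = \sum_{i}\sum_{j \neq i} p_{ij} \log \frac{p_{ij}}{q_{ij}}
\]
where $p_{ij}$ and $q_{ij}$ are the pairwise similarities in high- and low-dimensional space respectively; the 2D positions are adjusted by gradient descent on $C$.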
The rest of this chapter describes each of the algorithms and the performance metrics used in this project in detail.
\section{Link Force}
\label{sec:linkbg}
The D3 library, which will be described in section \ref{sec:des_d3}, has several different force models implemented for creating a force-directed graph. One of them is Link Force. In this brute-force method, a force is applied between the two nodes at the ends of each link. The force pushes the nodes together or apart with varying strength, proportional to the error between the desired and current distance on the graph. Essentially, it is the spring model with a custom spring-force calculation formula. An example of a graph produced by the D3 link force is shown in figure \ref{fig:bg_linkForce}. In MDS, where the high-dimensional distance between every pair of nodes can be calculated, a link is created to represent each pair, resulting in a complete graph.
\begin{figure}[h]
\centering
@@ -172,8 +172,8 @@ D3 library, which will be described in section \ref{sec:des_d3}, have several di
\label{fig:bg_linkForce}
\end{figure}
The Link Force algorithm is inefficient. In each time step (iteration), a calculation has to be done for each pair of nodes connected with a link. This means that for MDS with $N$ nodes, the algorithm will have to perform $N(N-1)$ force calculations per iteration, essentially $O(N^2)$. It is also believed that the number of iterations required to create a good layout is proportional to the size of the data set, hence the total time complexity of $O(N^3)$.
The model also caches the desired distance of each link in memory to improve speed across multiple iterations. While this greatly reduces the number of calls to the distance-calculating function, the memory complexity also increases to $O(N^2)$. Because the JavaScript memory heap is limited, it runs out of memory when trying to process a complete graph of more than around three thousand data points, depending on the features of the data.
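To make the cost concrete, the behaviour described above boils down to a per-iteration loop over all cached links, nudging the velocities of both end nodes by an amount proportional to the distance error. The following is a simplified sketch for illustration only, not D3's actual source; \texttt{springTick}, \texttt{desiredDistance} and \texttt{strength} are names chosen here.
\begin{lstlisting}[language=JavaScript,caption={A simplified sketch of a brute-force spring tick (illustrative only).}]
// links are precomputed for every pair of nodes: O(N^2) memory.
// Each link caches the desired (high-dimensional) distance.
function springTick(links, strength) {
  for (const link of links) {              // O(N^2) links per iteration
    const a = link.source, b = link.target;
    const dx = b.x - a.x, dy = b.y - a.y;
    const l = Math.sqrt(dx * dx + dy * dy) || 1e-6;
    // Positive when the nodes are further apart than desired.
    const error = (l - link.desiredDistance) / l;
    const fx = dx * error * strength, fy = dy * error * strength;
    a.vx += fx; a.vy += fy;                // pull the two ends together...
    b.vx -= fx; b.vy -= fy;                // ...or push them apart
  }
}
\end{lstlisting}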
\section{Chalmers' 1996 algorithm}
In 1996, Matthew Chalmers proposed a technique to reduce the time complexity down to $O(N^2)$, which is a massive improvement over Link Force's $O(N^3)$, potentially at the cost of accuracy. This is done by reducing the number of spring force calculations per iteration, using random samples\cite{Algo1996}.
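To make the sampling idea concrete, the core of one iteration can be sketched as follows. This is illustrative JavaScript, not the implementation evaluated in this project: \texttt{applySpringForce} is assumed to be the same per-pair spring routine used by Link Force, \texttt{node.neighbours} is assumed to start as an empty array, and \texttt{neighbourSize} and \texttt{sampleSize} are the two user-defined set sizes.
\begin{lstlisting}[language=JavaScript,caption={A sketch of one iteration of the neighbour-and-sampling scheme (illustrative only).}]
// nodes: array of points; distance(a, b): high-dimensional distance.
function chalmersIteration(nodes, distance, neighbourSize, sampleSize) {
  for (const node of nodes) {
    // Draw a fresh random sample set for this node, this iteration.
    const samples = [];
    while (samples.length < sampleSize) {
      const candidate = nodes[Math.floor(Math.random() * nodes.length)];
      if (candidate !== node && !samples.includes(candidate)) samples.push(candidate);
    }
    // Apply spring forces against neighbours and samples only:
    // O(N * (neighbourSize + sampleSize)) per iteration instead of O(N^2).
    for (const other of node.neighbours.concat(samples)) {
      applySpringForce(node, other, distance(node, other));
    }
    // Promote any sampled point closer than the current worst neighbour.
    for (const candidate of samples) {
      const worst = node.neighbours[node.neighbours.length - 1];
      if (node.neighbours.length < neighbourSize) {
        node.neighbours.push(candidate);
      } else if (distance(node, candidate) < distance(node, worst)) {
        node.neighbours[node.neighbours.length - 1] = candidate;
      }
    }
    // Keep the neighbour list ordered by distance for the next comparison.
    node.neighbours.sort((a, b) => distance(node, a) - distance(node, b));
  }
}
\end{lstlisting}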
@@ -227,7 +227,7 @@ To find a parent of an object, a distance calculation is first performed against
\begin{algorithmic}
\item Pre-processing:
\ForAll{pivot in $k$}
\ForAll{points in $(S-k)$}
\State Perform distance calculation
\EndFor
\EndFor
@@ -249,7 +249,7 @@ With this method, the parent found is not guaranteed to be the closest point. Pr
\section{Performance Metrics}
\label{sec:bg_metrics}
To compare different algorithms, they have to be tested against the same set of performance metrics. During the development, a number of metrics were used to objectively judge the produced layout and computation requirements. The evaluation process in chapter \ref{ch:eval} will focus on the following metrics.
\begin{itemize}
\item \textbf{Execution time} is a broadly used metric to evaluate any algorithm that requires significant computational power. Some applications aim to be interactive, and the algorithm has to finish the calculations within the time constraints for the program to stay responsive. This project, however, focuses on large data sets with minimal user interaction. Hence, the execution time in this project simply measures the time an algorithm takes to produce its ``final'' result. The criteria to consider a layout ``finished'' will be discussed in detail in section \ref{ssec:eval_termCriteria}.
@@ -258,11 +258,11 @@ To compare different algorithms, they have to be tested against the same set of
While Stress is a good metric to evaluate a layout, its calculation is an expensive operation ($O(N^2)$). At the same time, it is not part of the operation of any algorithm. Thus, by adding this optional measurement step between iterations, every algorithm would take a lot longer to complete, invalidating the measured execution time of the run.
\item \textbf{Memory usage:} With growing interest in the field of machine learning, the number of data points in a data set is getting bigger. It is common to encounter data sets with millions of instances, each with possibly hundreds of attributes. Therefore, memory usage shows how an algorithm scales to larger data sets and how many data points a computer system can handle.
\end{itemize}
\section{Summary}
In this chapter, several techniques for visualising multidimensional data have been explored. As the focus of the project is on three spring-model-based algorithms, the principles of each method have been discussed. Finally, in order to measure the performance of each algorithm, different metrics were introduced and will be used for the evaluation process.
%==============================================================================
%%%%%%%%%%%%%%%%
@@ -277,7 +277,7 @@ This chapter discusses decisions for selecting technologies and libraries during
\section{Technologies}
With the goal of reaching as wide an audience as possible, the project advisor set a requirement that the application must run on a modern web browser. This section briefly introduces the web technologies used to develop the project.
%============================
\subsection{HTML, CSS, and SVG}
@@ -333,7 +333,7 @@ Due to the sheer amount of experiments to run, manually changing functions and f
Figure \ref{fig:des_gui} shows the modified GUI used in this project. At the top is the canvas to draw the produced layout. The controls below are divided into 3 columns. The left column controls data set input, rendering, and the iteration limit. The middle column is a set of radio buttons and sliders for selecting the algorithm and parameters to use. The right column contains a list of distance functions to choose from.
\begin{figure}[h]
\centering
\includegraphics[height=10cm]{images/GUI.png}
\caption{The graphical interface.}
@@ -341,6 +341,9 @@ Figure \ref{fig:des_gui} shows the modified GUI used in this project. At the top
\end{figure}
%============================
\section{Summary}
In this chapter, several technologies and alternatives were discussed. In the end, the project was set to build on Bartasius' repository, using D3.js with standard JavaScript, HTML, CSS and SVG.
%==============================================================================
%%%%%%%%%%%%%%%%
@@ -361,14 +364,14 @@ Depending on the distance functions, per-dimension mean, variance, and other att
Several values used for evaluation, such as execution time, total force applied per iteration, and stress, may also be computed. However, these values are printed out to the JavaScript console instead.
Due to the growing number of algorithms and variables, the main JavaScript code has been refactored. Functions for controlling each algorithm have been extracted to their own files, and some unused code was removed or commented out.
\section{Algorithms}
This section discusses implementation decisions for each algorithm, some of which are already implemented in the D3 force module and the d3-neighbour-sampling plugin. Adjustments made to third-party-implemented algorithms are also discussed.
\subsection{Link force}
\label{sec:imp_linkForce}
The D3-force module has an algorithm implemented to produce a force-directed layout. The main idea is to change the velocity vector of each pair of nodes connected via a link at every time step, simulating force application. For example, if two nodes are further apart than the desired distance, a force is applied to both nodes to pull them together. The implementation also supports incomplete graphs, thus the links have to be specified. The force is also, by default, scaled on each node depending on how many springs it is attached to, in order to balance the force applied to heavily and lightly connected nodes, improving overall stability. Without such scaling, the graph would expand in every direction.
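One common way to express this balancing is to weaken each link in proportion to how well connected its endpoints are. The snippet below is a sketch of that idea only, not a verbatim copy of D3's code; \texttt{degree()} is assumed to count the links attached to a node.
\begin{lstlisting}[language=JavaScript,caption={Sketch of degree-based spring strength scaling (illustrative only).}]
// Heavily connected nodes receive many competing forces, so each
// individual link is made weaker to keep the total force comparable.
function linkStrength(link) {
  return 1 / Math.min(degree(link.source), degree(link.target));
}
\end{lstlisting}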
In the early stages of the project, when assessing the library, it was observed that many of the available features are unused for multidimensional scaling. In order to reduce the computation time and memory usage, I created a modified version of Force Link as part of the plug-in. The following are the improved aspects.
@@ -428,11 +431,11 @@ After optimisation, the execution time decreases marginally while memory consump
Next, the \texttt{jiggle()} function was assessed. As shown in lines 5-7 of code \ref{lst:impl_LinkD3}, in cases where two nodes are projected to be on the exact same location, \texttt{x}, \texttt{y} and, in turn, \texttt{l}, could be 0. This would cause a divide-by-zero error in line 8. Rather than throwing an error, JavaScript would return the result as \texttt{Infinity} or \texttt{-Infinity}. Any subsequent arithmetic operations, except for modulus, with other numbers will also result in \texttt{$\pm{}$Infinity}, effectively deleting the coordinate and velocity values from the entire system. To prevent such an error, when \texttt{x} or \texttt{y} is calculated to be zero, D3 replaces the values with a very small random number generated by \texttt{jiggle()}. While extremely unlikely, there is still a chance that \texttt{jiggle()} will return a random value of 0. This case can rarely be observed when every node is initially placed at the exact same position. To counter this, I modified \texttt{jiggle()} to re-generate a number until a non-zero value is found.
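A minimal sketch of the modification described above follows; the retry loop is the addition, while the body of the draw mirrors the small random offset D3 normally returns.
\begin{lstlisting}[language=JavaScript,caption={Sketch of the re-rolling \texttt{jiggle()} (illustrative only).}]
// Return a tiny non-zero offset used to separate coincident nodes.
// Re-draws in the (extremely unlikely) case that Math.random() === 0.5.
function jiggle() {
  let value = 0;
  while (value === 0) {
    value = (Math.random() - 0.5) * 1e-6;
  }
  return value;
}
\end{lstlisting}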
Finally, a feature was added to track the average force applied to the system in each iteration. A threshold value is set so that once the average force falls below the threshold, a user-defined function is called. In this case, a handler was added to Bartasius' application to stop the simulation. This feature will be heavily used in the evaluation process (section \ref{ssec:eval_termCriteria}).
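The mechanism can be sketched as below. The names (\texttt{checkForceThreshold}, \texttt{onBelowThreshold}) are illustrative rather than the exact API added to the plug-in, and the node's per-tick velocity is used as a proxy for the force applied in that tick.
\begin{lstlisting}[language=JavaScript,caption={Sketch of the average-force threshold check (illustrative only).}]
// Called at the end of each iteration.
function checkForceThreshold(nodes, threshold, onBelowThreshold) {
  let total = 0;
  for (const node of nodes) {
    // Velocity magnitude as a proxy for the force applied this tick.
    total += Math.sqrt(node.vx * node.vx + node.vy * node.vy);
  }
  const average = total / nodes.length;
  if (average < threshold) {
    onBelowThreshold(average); // e.g. stop the simulation in the application
  }
}
\end{lstlisting}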
%============================
\subsection{Chalmers' 1996}
\label{sec:imp_neigbour}
Bartasius' d3-neighbour-sampling plug-in focuses mainly on Chalmers' 1996 algorithm. The idea is to use the exact same force calculation function as D3 Force Link for a fair comparison. The algorithm was also implemented as a Force object to be used by a Simulation. As part of the project, I refactored the code base to ease the development process and improved a shortcoming.
Aside from formatting the code, Bartasius' implementation does not have spring force scaling, making the graph explode in every direction. Originally, the example implementation used decaying $alpha$, a variable controlled by the Simulation used for artificially scaling down the force applied to the system over time, to make the system contract back. A constant \texttt{dataSizeFactor}, similar to that in the custom Link Force, has been added to remove the need for decaying alpha.
@@ -479,7 +482,7 @@ The D3 API extensively with the Method Chaining design pattern. The main idea is
}
\end{lstlisting}
As shown in code \ref{lst:impl_HybridUsage}, the algorithm-specific parameters for each Chalmers' force object are set in advance by the user. Since the Hybrid object interacts with the Simulation and force-calculation objects via general interfaces, other force calculators could potentially be used without having to modify the Hybrid object. In fact, D3's original implementation of Force Link also works with the Hybrid object. To terminate the force calculations in the first and last phase, the Hybrid object has an internal iteration counter to stop the calculations after a predefined number of time steps. In addition, the applied force threshold events are also supported as an alternative termination criterion.
For interpolation, two separate functions were created for each method. After the parent is found, both functions call the same third function to handle the rest of the process (steps 2 to 8 in section \ref{sec:bg_hybrid}).
@@ -517,7 +520,7 @@ For interpolation, two separate functions were created for each method. After th
\end{algorithmic}
\end{multicols}
Since the original paper did not specify the ``satisfactory'' metric for placing a node $n$ on the circle around its parent (steps 3 to 4), Matthew Chalmers, the project advisor who also took part in developing the algorithm, was contacted for clarification. Unfortunately, the knowledge was lost. Instead, the sum of distance errors between $n$ and every member of $s$ was proposed as an alternative. Preliminary testing shows that it works well, and it is used for this implementation.
With that decision, the high-dimensional distances between $n$ and each member of $s$ are used multiple times for binary searching and placement refinement (steps 7 and 8). To reduce the number of distance function calls, a distance cache has been created. For brute-force parent finding, the cache can be filled while the parent is being selected, as $s\subset{S}$. On the other hand, pivot-based searching might not cover every member of $s$. Thus, the missing cache entries are filled after parent searching.
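A possible shape for such a cache is a simple map keyed on the member's index, filled lazily. This is a sketch under the assumption that each node object carries a numeric \texttt{index}; the actual implementation may differ.
\begin{lstlisting}[language=JavaScript,caption={Sketch of a high-dimensional distance cache for interpolation (illustrative only).}]
// Cache distances between the node being placed (n) and members of s,
// so binary searching and placement refinement reuse them without
// calling the (potentially slow) distance function again.
function makeDistanceCache(n, distance) {
  const cache = new Map(); // member index -> high-dimensional distance
  return function (member) {
    if (!cache.has(member.index)) {
      cache.set(member.index, distance(n, member));
    }
    return cache.get(member.index);
  };
}
\end{lstlisting}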
%============================
@@ -532,7 +535,7 @@ p2 = performance.now();
console.log("Execution time", p2-p1); console.log("Execution time", p2-p1);
\end{lstlisting} \end{lstlisting}
Stress calculation is done as defined by the formula in section \ref{sec:bg_metrics}. The calculation is independent of the algorithm. In fact, it does not depend on D3 at all. Only an array of node objects and a distance function are required. Due to its very long calculation time, this function is only called on demand when the value has to be recorded. The exact implementation is shown in code \ref{lst:impl_Stress}.
\begin{lstlisting}[language=JavaScript,caption={Stress calculation function.},label={lst:impl_Stress}]
export function getStress(nodes, distance) {
@@ -558,14 +561,14 @@ export function getStress(nodes, distance) {
%%%%%%%%%%%%%%%%
\chapter{Evaluation}
\label{ch:eval}
This chapter presents comparisons between each of the implemented algorithms. First, the data sets used will be described. The experiment setup is then introduced, along with the decisions behind each test design. Lastly, the results are shown and briefly interpreted.
\section{Data Sets}
\label{sec:EvalDataSet}
The data sets utilized during the development are the Iris, Poker Hands\cite{UCL_Data}, and Antarctic data set\cite{Antartica_Data}.
The Iris is one of the most popular data sets to get started with in Machine Learning. It contains 150 measurements from flowers of the Iris Setosa, Iris Versicolour and Iris Virginica species, each with four parameters: petal and sepal width and length in centimetres. It was chosen as a starting point for development because it is a classification data set where the parameters can be used by the distance function and the label can be used to colour each instance. Each species is also clustered quite clearly, making it easier to see if the algorithm is working as intended.
The Poker Hands is another classification data set containing hands of 5 playing cards drawn from a standard deck, each described by rank (Ace, 2, 3,...) and suit (Hearts, Spades, etc.). Each hand is labelled as a poker hand (Nothing, Flush, Full house, etc.). This data set was selected for the experiment because it contains over a million records. In each test, only subsets of the data are used due to size limitations.
\begin{figure}
\centering
@@ -590,7 +593,7 @@ The web page is also refreshed after every run to make sure that everything, inc
Both Link Force and the Chalmers' 1996 algorithm create a layout that stabilises over time. In D3, calculations are performed for a predefined number of iterations. This has the drawback of having to select an appropriate value. Choosing the number too high means that execution time is wasted calculating minute details with no visible change to the layout, while the opposite can result in a bad layout.
Determining the constant number can be problematic, considering that each algorithm may stabilise after a different number of iterations, especially when considering that the interpolation result from the Hybrid algorithm can vary greatly from run to run (section \ref{ssec:eval_selectParams}).
An alternative method is to stop when a condition is met. One such condition proposed is the difference in velocity ($\Delta{v}$) of the system between iterations\cite{Algo2002}. In other words, once the amount of force applied in that iteration is lower than a scalar threshold, the calculation may stop. Taking note of stress and average force applied over multiple iterations, as illustrated in figure \ref{fig:eval_stressVeloOverTime}, it is clear that Link Force converges to a complete stillness while the Chalmers algorithm reaches and fluctuates around a constant, as stated in section \ref{sec:imp_neigbour}. It can also be seen that the stress of each layout converges to a value as the average force converges to a constant, indicating that the best layout each algorithm can create can be obtained once the system stabilises.
\begin{figure}
\centering
@@ -612,7 +615,7 @@ Some of the algorithms have variables that are predefined constant numbers. Care
The Chalmers' algorithm has two adjustable parameters: $Neighbours_{size}$ and $Samples_{size}$.
According to previous evaluations\cite{LastYear}\cite{Algo2002}, a favourable layout could be achieved with values as low as $10$ for both variables. Preliminary testing seems to confirm the findings, and these values were selected for the experiments. On the other hand, Link Force has no adjustable parameters whatsoever, so no attention is required.
The Hybrid layout has multiple parameters during the interpolation phase. For the parent-finding stage, there is a choice of whether to use the brute-force or pivot-based searching method. In the case of pivot-based searching, the number of pivots ($k$) also has to be chosen. Experiments have been run to find the accuracy of pivot-based searching, starting from 1 pivot, to determine reasonable numbers to use in subsequent experiments. As shown in figure \ref{fig:eval_interpVariations}, the randomly selected $S$ set (the $\sqrt{N}$ samples used in the first stage) can greatly affect the interpolation result, especially with smaller data sets with many small clusters. Therefore, each test has to be run multiple times to generalise the result. From figure \ref{fig:eval_pivotHits}, it can be seen that the more pivots used, the higher the accuracy and consistency. Diminishing returns can be observed at around 6 to 10 pivots, depending on the number of data points. Hence, higher numbers of pivots are not considered for the experiment.
Finally, the last step of interpolation is to refine the placement a constant number of times. Preliminary testing shows that this step helps clean up a lot of interpolation artifacts. For example, a clear radial pattern and straight lines can be seen in figure \ref{sfig:eval_refineCompareA}, especially in the lower right corner. While these artifacts are no longer visible in figure \ref{sfig:eval_refineCompareB}, it is still impossible to obtain a desirable layout, even after more refinement steps were added. Thus, running the Chalmers' algorithm over the entire data set after the interpolation phase is unavoidable. For the rest of the experiment, only two values, 0 and 20, were selected, representing runs without and with interpolation artifact cleaning respectively.
@@ -658,7 +661,7 @@ Finally, the last step of interpolation is to refine the placement for a constan
%============================
\subsection{Performance metrics}
As discussed in section \ref{sec:bg_metrics}, there are three main criteria to evaluate each algorithm: execution time, memory consumption, and the produced layout. Although stress is a good metric to judge the quality of a layout, it does not necessarily mean that layouts of the same stress are equally good for data exploration. Thus, the look of the produced layout itself also has to be compared. Since Bartasius found that Link Force provides a layout with the least stress in all cases\cite{LastYear}, its layout will be used as a baseline for comparison (recall figure \ref{fig:eval_idealSample}).
It should also be noted that, for ease of comparison, the visualisations shown in this chapter may be uniformly scaled and rotated. This manipulation should not affect the evaluation, as the only concern of a spring model is the relative distance between data points.
%============================
@@ -687,9 +690,9 @@ The hybrid layout has multiple phases, each with different theoretical memory co
\label{fig:eval_ram}
\end{figure}
The comparison has been made between the 3 algorithms, with the Hybrid layout running 10 pivots to represent the worst case scenario for interpolation. Rendering is also turned off to minimise the impact of DOM element manipulation\cite{LastYear}. The results are displayed in figure \ref{fig:eval_ram}. The modified Link Force, which uses less memory than D3's implementation (section \ref{sec:imp_linkForce}), scales badly compared to all others, even with automatic garbage collection. The difference in the base memory usage between the 1996 algorithm and the final stage of the Hybrid layout is also within the margin of error, confirming that they both have the same memory requirement. If the final phase of the Hybrid layout is skipped, the memory requirement will grow at a slightly lower rate.
While none of the original researchers, Chalmers, Morrison, or Ross, have explored this memory aspect before, Bartasius experimented with the maximum data size the application could handle before an Out of Memory exception occurred\cite{LastYear}. A similar test was re-performed to find if there have been any changes.
Due to JavaScript limitations, Link Force crashes the browser tab at 50,000 data points before any spring force is calculated, failing the test entirely. Similar behaviour can also be observed with D3's implementation. In contrast, the Chalmers' and Hybrid algorithms can process as many as 470,000 data points. Interestingly, while the Chalmers' algorithm can also handle 600,000 data points with rendering, the 8GB of memory is all used up, causing heavy thrashing and slowing down the entire machine. Considering that paging does not occur when Link Force crashes the browser tab, memory requirements may not be the only limiting factor in play.
All in all, since a desirable result cannot be obtained from the Hybrid algorithm if the final stage is skipped, there is no benefit in terms of memory usage from using the Hybrid layout compared to Chalmers' algorithm. Both of them have a much smaller memory footprint than Link Force and can work with a lot more data points under the same hardware constraints.
@@ -710,7 +713,7 @@ It should also be noted that while the original researchers had a similar experi
\label{fig:eval_hybridParams10k}
\end{figure}
Figures \ref{fig:eval_hybridParams10k} and \ref{fig:eval_hybridParams100k} show that most of the execution time is spent in the final phase, making the number of iterations very important. While refining the interpolation result takes an insignificant amount of time, it both reduces the stress of the final layout and helps the last phase reach stability much faster across the board. Figure \ref{fig:eval_pivotToggleRefine} also suggests that the produced layout is much more accurate. Without refining, it can be seen that a lot of ``One pair'' (orange) and ``Two pair'' (green) data points circle around ``Nothing'' (blue) when they should not. Thus, there is no compelling reason to disable this refinement step.
Surprisingly, despite the lower time complexity, selecting a higher number of pivots on the smaller data set can result in a higher execution time than brute-force, negating any benefit of using it. At 10,000 data points, using 3 pivots takes approximately as much time as brute-force, marking the highest sensible number of pivots to use. At the lower end, the time saved by using 1 pivot is not reflected in the total time used by the layout. At 100,000 points, however, a significant speed-up can be observed and is reflected in the total execution time. This suggests that pivot-based searching could shine with even larger data sets and slower distance functions.
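To make the trade-off concrete, the listing below is a minimal sketch (in plain JavaScript, not the plug-in's actual code) of how pivot-based pruning can replace the brute-force nearest-sample search performed during interpolation. All identifiers are illustrative, and \texttt{distance} stands for the user-supplied high-dimensional distance function.
\begin{verbatim}
// Precompute each sample's distance to every pivot (done once).
function buildPivotTable(samples, pivots, distance) {
  return samples.map(s => pivots.map(p => distance(s, p)));
}

// Find the sample nearest to `query', skipping samples whose
// triangle-inequality lower bound already exceeds the current best.
function nearestSample(query, samples, pivots, table, distance) {
  const queryToPivots = pivots.map(p => distance(query, p));
  let best = -1, bestDist = Infinity;
  for (let i = 0; i < samples.length; i++) {
    let lowerBound = 0;
    for (let j = 0; j < pivots.length; j++) {
      const bound = Math.abs(queryToPivots[j] - table[i][j]);
      if (bound > lowerBound) lowerBound = bound;
    }
    if (lowerBound >= bestDist) continue;   // cannot beat the current best
    const d = distance(query, samples[i]);  // full distance only when needed
    if (d < bestDist) { bestDist = d; best = i; }
  }
  return best;
}
\end{verbatim}
Because each query still costs one full distance evaluation per pivot plus a bound check per sample, the pruning only pays off when the distance function is expensive or the data set is large, which matches the behaviour observed above.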
@@ -744,7 +747,7 @@ Surprisingly, despite lower time complexity, selecting higher number of pivots o
Between brute-force and 1 pivot, there is no visual difference aside from run-to-run variation. The stress measurement seems to support this subjective opinion. On the other hand, brute-force seems to result in a more consistent total execution time. Considering that refinement is stronger with a bigger data set, as there are more points to compare against, it makes sense that the effect of low accuracy is easily corrected in larger data sets.
In summary, to obtain a quality layout, the refining step of the interpolation phase cannot be ignored. Pivot-based searching only provides a significant benefit with very large data sets and/or slow distance functions. Otherwise, the brute-force method consistently yields a better layout in less time.
%============================
\subsection{The 3-way comparison}
@@ -789,7 +792,7 @@ Figure \ref{sfig:eval_multiAlgoTime} shows the execution time and stress of the
As for the stress, a relative value is used for comparison. Figure \ref{sfig:eval_multiAlgoStress} shows that the Hybrid algorithm results in a layout of lower stress overall. The trend also implies that the more data points are available, the better the Chalmers' and Hybrid algorithms perform. In all cases, Link Force always has the lowest stress.
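For reference, the listing below is a minimal sketch of the kind of normalised stress measure this comparison relies on; the exact formulation and normalisation used by the evaluation page may differ, and \texttt{distance} again stands for the high-dimensional distance function.
\begin{verbatim}
// Normalised stress: how well the 2D layout distances reproduce the
// high-dimensional ones (0 = perfect, larger = worse).
function normalisedStress(nodes, data, distance) {
  let num = 0, den = 0;
  for (let i = 0; i < nodes.length; i++) {
    for (let j = i + 1; j < nodes.length; j++) {
      const high = distance(data[i], data[j]);          // target distance
      const low = Math.hypot(nodes[i].x - nodes[j].x,   // layout distance
                             nodes[i].y - nodes[j].y);
      num += (high - low) * (high - low);
      den += high * high;
    }
  }
  return num / den;
}
\end{verbatim}
Evaluating every pair is quadratic in the number of points, so for the larger data sets a measure like this is typically computed over a random sample of pairs rather than the full set.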
Comparing the produced layouts, at 10,000 data points (figure \ref{fig:eval_Poker10k}), Hybrid better reproduces the space between large clusters seen in Link Force's layout. For example, ``Nothing'' (blue) and ``One pair'' (orange) have a clearer gap; ``Two pairs'' (green) and ``Three of a kind'' (red) overlap less; ``Three of a kind'' and ``Straight'' (brown) mix together in Chalmers' layout but are more separated in the Hybrid layout. However, for other classes with fewer data points (coloured brown, purple, pink, ...), the Hybrid layout fails to form clusters, causing them to spread out even more. The same phenomenon can be observed at 100,000 data points (figure \ref{fig:eval_Poker100k}).
\begin{figure}[h] % Poker 100k
\centering
@@ -806,7 +809,7 @@ Comparing the produced layout, at 10,000 data points (figure \ref{fig:eval_Poker
\label{fig:eval_Poker100k}
\end{figure}
The area where the 1996 and Hybrid algorithms fall short is the consistency of the layout quality on smaller data sets. Sometimes, both algorithms stop at a local minimum of stress, instead of the global one, producing an inaccurate result. Figures \ref{fig:eval_IrisBad} and \ref{fig:eval_Poker100Bad} show examples of such an occurrence. If the 1996 algorithm were allowed to continue the calculation, the layout would eventually reach the true stable position, depending on when the right combination of the $Samples$ set is randomised to trip the system out of its locally stable position.
\begin{figure}[h] % Iris BAD
\centering
@@ -872,7 +875,7 @@ Moving to the Antartica data set with a more complicated pattern, all three algo
%============================
\section{Summary}
Each algorithm demonstrates its own strengths and weaknesses in different tests. For smaller data sets with a few thousand data points, Link Force works well and performs consistently. Most information visualisations on a web page will not hit the algorithm's limitations. In addition, it allows real-time object interaction and produces smooth animations, which might be more important to most users. However, for a fully-connected spring model with over 1,000 data points, the startup time spent on distance caching starts to become noticeable, and each iteration can take longer than the 17ms time limit, dropping the animation below 60fps and causing visible stuttering and slowdown. Its memory-hungry nature also limits its ability to run on the lower-end computers that a significant share of Internet users possess.
When bigger data sets are loaded and interactivity is not a concern, performing the Hybrid layout's interpolation strategy before running the 1996 algorithm results in a better layout in a shorter amount of time. It should be noted that this method does not work consistently with smaller data sets, making Link Force a better option. As for interpolation, the simple brute-force method is the better choice in general. Pivot-based searching does not significantly decrease the computation time unless a very large data set is involved, and its result is less predictable.
@@ -953,6 +956,10 @@ Most of the settings are available on the web interface. However, the cut-off va
\end{verbatim}
and edit the parameter of the \texttt{stableVelocity()} method.
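As an illustration only (the actual call site and default value are in the source file referred to above), the edited parameter might look as follows; the receiver name and numeric value are hypothetical.
\begin{verbatim}
// Hypothetical example -- a smaller cut-off velocity means the layout
// keeps iterating for longer before it is considered stable.
layout.stableVelocity(0.001);
\end{verbatim}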
Aside from the Poker Hands data set, all tests use the ``General'' distance function, which is similar to the Euclidean distance but scales distances per feature and supports strings and dates. It also ignores the \texttt{index} and \texttt{type} fields in the CSV file so that the label of the Iris data set is not taken into account.
The Euclidean distance function ignores the \texttt{class}, \texttt{app}, \texttt{user}, \texttt{weekday} and \texttt{type} fields, which belong to other data sets not used in this project. It will also crash when any other field contains non-numeric values; the error can be seen in the JavaScript console.
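The sketch below illustrates the general shape of such a per-feature-scaled distance function. It is not the plug-in's exact code: the skip list and scaling are illustrative assumptions, and string and date handling is omitted for brevity.
\begin{verbatim}
// Illustrative sketch of a per-feature-scaled distance. Each numeric
// feature is divided by its observed range so that no single feature
// dominates; listed fields are skipped entirely.
const SKIP = new Set(['index', 'type']);   // illustrative skip list

function makeGeneralDistance(rows) {
  // Find each numeric feature's range so distances can be scaled.
  const ranges = {};
  for (const key of Object.keys(rows[0])) {
    if (SKIP.has(key)) continue;
    let min = Infinity, max = -Infinity;
    for (const row of rows) {
      const v = +row[key];
      if (isNaN(v)) continue;
      if (v < min) min = v;
      if (v > max) max = v;
    }
    if (isFinite(min)) ranges[key] = (max - min) || 1;  // avoid divide-by-zero
  }
  return (a, b) => {
    let sum = 0;
    for (const key of Object.keys(ranges)) {
      const diff = (+a[key] - +b[key]) / ranges[key];   // scale per feature
      sum += diff * diff;
    }
    return Math.sqrt(sum);
  };
}
\end{verbatim}
A distance built this way would be passed to the layout in place of the default; the plug-in's actual implementation additionally handles strings and dates as described above.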
\chapter{Setting up development environment}
The API reference and instructions for building the plug-in are available in the README.md file. Please note that the build scripts are written for Ubuntu and may have to be adapted for other distributions or operating systems. A built JavaScript file for the plug-in is already included with the submission, so re-building is unnecessary.
@@ -971,6 +978,8 @@ The output files will be located in the \texttt{build} directory.
The evaluation web page is self-contained and can be edited with any text editor without Node.js. It loads the plug-in from the \texttt{build} directory. When a new build of the plug-in is compiled, simply refreshing the web page will load it.
The code is currently hosted on a personal, publicly-accessible Git service at \url{https://git.win32exe.tech/brian/d3-spring-model}. Since this is a personal bare-metal server, it will be maintained on a best-effort basis without guarantee.
\end{appendices}