We present an algorithm to reduce the number of slices in 2D contour cross-section data. The main aim of the algorithm is to filter out less significant slices while preserving an acceptable level of output quality and keeping the computational cost of surface reconstruction to a minimum. This research is motivated mainly by two factors. First, 2D cross-section data is often huge in size and high in precision, and the computational cost of reconstructing surfaces from it is closely related to its size and complexity. Second, we can trade visual fidelity for computational speed if we can remove visually insignificant data from the original dataset, which may contain redundant information. In our algorithm we use the number of contour points on a pair of slices to calculate the distance between them. The decision to retain or reject a slice is based on comparing this distance against a threshold value. An optimal threshold value is derived to produce a set of slices that collectively represents the features of the dataset. We tested our algorithm on six datasets varying in complexity and size. The results show that the slice reduction rate depends on the complexity of the dataset; the highest reduction percentage is achieved for objects with many constant local variations. Our derived optimal thresholds appear able to produce the right set of slices, with the potential to create surfaces that trade off the accuracy and speed requirements.
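The retain/reject step described above can be sketched as follows. This is a minimal illustration, assuming a greedy pass that compares each slice against the last retained one, with the contour-point-count difference as the distance; the function names, the exact distance formula, and the fixed threshold are assumptions for illustration, not the paper's derived optimal threshold.

```python
def slice_distance(slice_a, slice_b):
    """Distance between two slices based on their contour point counts
    (the exact formula used in the paper may differ)."""
    return abs(len(slice_a) - len(slice_b))

def filter_slices(slices, threshold):
    """Keep the first slice, then retain a subsequent slice only when its
    distance to the most recently retained slice exceeds the threshold."""
    if not slices:
        return []
    kept = [slices[0]]
    for s in slices[1:]:
        if slice_distance(kept[-1], s) > threshold:
            kept.append(s)
    return kept
```

With this greedy scheme, runs of near-identical slices collapse onto a single representative, which is why datasets with many small local variations yield the highest reduction rates.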
Landmarks, also known as feature points, are important geometric primitives that describe the predominant characteristics of a surface. In this study we propose a self-contained framework to generate landmarks on surfaces extracted from volumetric data. The framework is designed as a three-stage pipeline comprising surface construction, crest line extraction and landmark identification. Taking volumetric data as input and landmarks as output, the pipeline turns 3D raw data into 0D geometric features. In each phase we investigate existing methods, then extend and tailor them to fit the pipeline design. The pipeline is modular, with a dedicated function in each phase. We extended the implicit surface polygonizer for surface construction in the first phase, developed an alternative way to compute the gradient of maximal curvature for crest line extraction in the second phase, and finally combined curvature information with K-means clustering to identify the landmarks in the third phase. The implementation was first carried out in a controlled environment, i.e. on synthetic data, as a proof of concept. The method was then tested on a small-scale dataset and subsequently on a large dataset. Issues and justifications are addressed for each phase.
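The third pipeline phase, combining curvature information with K-means clustering, can be sketched roughly as below. The function names, the curvature-threshold filtering step, and the deterministic centroid initialisation are illustrative assumptions; the paper's exact combination of curvature and clustering may differ.

```python
def _dist2(p, q):
    """Squared Euclidean distance between two points of equal dimension."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans(points, k, iters=50):
    """Plain Lloyd's K-means, initialised deterministically with the
    first k points so the sketch is reproducible."""
    centroids = [tuple(p) for p in points[:k]]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: _dist2(p, centroids[j]))
            clusters[nearest].append(p)
        new_centroids = []
        for i, cl in enumerate(clusters):
            if cl:
                new_centroids.append(tuple(sum(c) / len(cl) for c in zip(*cl)))
            else:
                new_centroids.append(centroids[i])  # keep centroid of an empty cluster
        if new_centroids == centroids:
            break  # converged
        centroids = new_centroids
    return centroids

def identify_landmarks(points, curvatures, curvature_threshold, k):
    """Keep only high-curvature crest-line points, cluster them, and
    take the cluster centroids as landmark estimates."""
    candidates = [p for p, c in zip(points, curvatures)
                  if c >= curvature_threshold]
    return kmeans(candidates, k)
```

Clustering the high-curvature candidates reduces many nearby crest-line samples to a small, fixed number of representative landmark positions, which matches the pipeline's goal of producing a 0D feature from 3D data.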