Displaying all 8 publications

  1. Al-Ameen Z, Sulong G
    Interdiscip Sci, 2015 Sep;7(3):319-25.
    PMID: 26199211 DOI: 10.1007/s12539-015-0022-1
    In computed tomography (CT), blurring occurs due to different hardware or software errors and hides certain medical details that are present in an image. Image blur is difficult to avoid in many circumstances and can frequently ruin an image. To address this, many methods have been developed to reduce the blurring artifact in CT images. The problems with these methods are the high implementation time, noise amplification and boundary artifacts. Hence, this article presents an amended version of the iterative Landweber algorithm that attains artifact-free boundaries and less noise amplification within a shorter application time. In this study, both synthetic and real blurred CT images are used to validate the proposed method properly. The quality of the processed synthetic images is measured using the feature similarity index, structural similarity and visual information fidelity in pixel domain metrics. Finally, the results obtained from intensive experiments and performance evaluations show the efficiency of the proposed algorithm, which has potential as a new approach in medical image processing.
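    As background for this entry, the sketch below shows the classical (unamended) iterative Landweber deconvolution that the paper builds on, assuming a known, normalized point-spread function; the update rule is x_{k+1} = x_k + relax * H^T(b - H x_k), and the relaxation value shown is only illustrative, not a parameter reported by the authors.

    import numpy as np
    from scipy.signal import fftconvolve

    def landweber_deblur(blurred, psf, n_iter=50, relax=1.0):
        # Textbook iterative Landweber deconvolution, not the amended variant
        # proposed in the paper. H is convolution with the PSF; its adjoint is
        # convolution with the flipped PSF.
        psf_flip = psf[::-1, ::-1]
        x = blurred.astype(float)
        for _ in range(n_iter):
            residual = blurred - fftconvolve(x, psf, mode="same")
            x = x + relax * fftconvolve(residual, psf_flip, mode="same")
            x = np.clip(x, 0.0, None)   # projected step: keep intensities non-negative
        return x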
  2. Al-Ameen Z, Sulong G
    Scanning, 2015 Mar-Apr;37(2):116-25.
    PMID: 25663630 DOI: 10.1002/sca.21187
    Contrast is a distinctive visual attribute that indicates the quality of an image. Computed Tomography (CT) images are often characterized as poor quality due to their low-contrast nature. Although many innovative ideas have been proposed to overcome this problem, the outcomes, especially in terms of accuracy, visual quality and speed, fall short, and there remains considerable room for improvement. Therefore, an improved version of the single-scale Retinex algorithm is proposed to enhance the contrast of CT images while preserving the standard brightness and natural appearance, with low implementation time and without accentuating the noise. The novelties of the proposed algorithm consist of tuning the standard single-scale Retinex, adding a normalized-ameliorated sigmoid function and adapting some parameters to improve its enhancement ability. The proposed algorithm is tested with synthetically and naturally degraded low-contrast CT images, and its performance is also verified against contemporary enhancement techniques using two prevalent quality evaluation metrics, SSIM and UIQI. The results obtained from intensive experiments exhibited significant improvement not only in enhancing the contrast but also in increasing the visual quality of the processed images. Finally, the proposed low-complexity algorithm provided satisfactory results with no apparent errors and outperformed all the comparative methods.
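    For orientation, a minimal single-scale Retinex sketch with a generic sigmoid remap is shown below; it only illustrates the standard building blocks named in the abstract, and the sigma and gain values are placeholder assumptions rather than the tuned parameters of the proposed algorithm.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def ssr_with_sigmoid(image, sigma=30.0, gain=5.0):
        # Standard single-scale Retinex: log(image) - log(Gaussian surround),
        # followed by a normalized sigmoid remap of the result.
        img = image.astype(float) + 1.0                      # avoid log(0)
        retinex = np.log(img) - np.log(gaussian_filter(img, sigma) + 1.0)
        rng = retinex.max() - retinex.min()
        r = (retinex - retinex.min()) / (rng + 1e-12)        # rescale to [0, 1]
        s = 1.0 / (1.0 + np.exp(-gain * (r - 0.5)))          # sigmoid centred at 0.5
        s = (s - s.min()) / (s.max() - s.min() + 1e-12)      # renormalize
        return (s * 255).astype(np.uint8)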
  3. Al-Ameen Z, Sulong G
    Interdiscip Sci, 2015 Feb 06.
    PMID: 25663110
    In computed tomography (CT), blurring occurs due to different hardware or software errors and hides certain medical details that are present in an image. Image blur is difficult to avoid in many circumstances and can frequently ruin an image. To address this, many methods have been developed to reduce the blurring artifact in CT images. The problems with these methods are the high implementation time, noise amplification and boundary artifacts. Hence, this article presents an amended version of the iterative Landweber algorithm that attains artifact-free boundaries and less noise amplification within a shorter application time. In this study, both synthetic and real blurred CT images are used to validate the proposed method properly. The quality of the processed synthetic images is measured using the Feature Similarity Index (FSIM), Structural Similarity (SSIM) and Visual Information Fidelity in Pixel Domain (VIFP) metrics. Finally, the results obtained from intensive experiments and performance evaluations show the efficiency of the proposed algorithm, which has potential as a new approach in medical image processing.
  4. Tan GJ, Sulong G, Rahim MSM
    Forensic Sci Int, 2017 Oct;279:41-52.
    PMID: 28843097 DOI: 10.1016/j.forsciint.2017.07.034
    This paper presents a review of the state of the art in offline, text-independent writer identification methods for three major languages, namely English, Chinese and Arabic, published in the literature from 2011 to 2016. For ease of discussion, we grouped the techniques into three categories: texture-, structure-, and allograph-based. Results are analysed, compared and tabulated along with the datasets used, for fair and just comparisons. It is observed that during this period significant progress was achieved on English and Arabic; however, growth on Chinese has been rather slow and far from satisfactory in comparison to its wide usage. This is due to its complex writing structure. Meanwhile, issues with the datasets used by previous studies are also highlighted, because size matters: the accuracy of writer identification deteriorates as the database size increases.
  5. Hamoud Al-Tamimi MS, Sulong G, Shuaib IL
    Magn Reson Imaging, 2015 Jul;33(6):787-803.
    PMID: 25865822 DOI: 10.1016/j.mri.2015.03.008
    Resection of brain tumors is a tricky task in surgery due to its direct influence on the patients' survival rate. Determining the extent of tumor resection, and obtaining complete information on its volume and dimensions in pre- and post-operative Magnetic Resonance Images (MRI), requires accurate estimation and comparison. The active contour segmentation technique is used to segment brain tumors on pre-operative MR images using self-developed software. Tumor volume is acquired from its contours via alpha shape theory. A graphical user interface is developed for rendering, visualizing and estimating the volume of a brain tumor. The Internet Brain Segmentation Repository (IBSR) dataset is employed to analyze and determine the repeatability and reproducibility of the tumor volume. The accuracy of the method is validated by comparing the volume estimated using the proposed method with that of the gold standard. Segmentation by the active contour technique is found to be capable of detecting the brain tumor boundaries. Furthermore, the volume description and visualization enable an interactive examination of the tumor tissue and its surroundings. The results demonstrate that alpha shape theory is superior to other existing standard methods for precise volumetric measurement of the tumor.
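    As a rough illustration of the segmentation-to-volume pipeline described above, the sketch below segments each MR slice with a Chan-Vese active contour and estimates the volume by voxel counting; the voxel dimensions are placeholder assumptions, and the paper itself derives the volume from the segmented contours via alpha-shape theory rather than by this simpler count.

    import numpy as np
    from skimage.segmentation import morphological_chan_vese

    def tumor_volume_mm3(slices, voxel_size_mm=(1.0, 1.0, 1.5), n_iter=100):
        # Segment every 2-D MR slice with a morphological Chan-Vese active
        # contour, stack the binary masks and convert the voxel count into mm^3.
        masks = []
        for sl in slices:
            sl = (sl - sl.min()) / (sl.max() - sl.min() + 1e-12)   # normalize slice
            masks.append(morphological_chan_vese(sl, n_iter, init_level_set="checkerboard"))
        voxel_count = np.sum(np.stack(masks))
        return voxel_count * np.prod(voxel_size_mm)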
  6. Ismael Al-Sanjary O, Ahmed AA, Sulong G
    Forensic Sci Int, 2016 Sep;266:565-572.
    PMID: 27574113 DOI: 10.1016/j.forsciint.2016.07.013
    Forgery is an act of modifying a document, product, image or video, among other media. Video tampering detection research requires an inclusive database of video modifications. This paper discusses a comprehensive proposal to create a dataset composed of modified videos for forensic investigation, in order to standardize existing techniques for detecting video tampering. The primary purpose of developing and designing this new video library is for use in video forensics, which can be consciously associated with reliable verification using dynamic and static camera recognition. To the best of the authors' knowledge, no similar library exists among the research community. Videos were sourced from YouTube and from extensive exploration of social networking sites, by observing posted videos and their feedback ratings. The video tampering dataset (VTD) comprises a total of 33 videos, divided among three categories of video tampering: (1) copy-move, (2) splicing, and (3) swapping frames. Compared to existing datasets, this is a larger number of tampered videos, with longer durations. The duration of every video is 16 s, with a 1280×720 resolution and a frame rate of 30 frames per second. Moreover, all videos possess the same format and quality (720p(HD).avi). Both temporal and spatial video features were considered carefully during selection of the videos, and complete information about the doctored regions is provided for every modified video in the VTD dataset. The database has been made publicly available for research on splicing, swapping frames and copy-move tampering and, as such, provides ground truth for various video tampering detection problems. It has been utilised by many international researchers and research groups.
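    The following snippet is a small, hedged illustration of how a downloaded VTD clip could be checked against the stated specifications (1280×720, 30 frames per second, about 16 s per clip) using OpenCV; the function name and usage are assumptions for illustration, not part of the dataset's own tooling.

    import cv2

    def check_vtd_clip(path):
        # Read the basic properties of a video file and compare them with the
        # specifications reported for the VTD dataset.
        cap = cv2.VideoCapture(path)
        fps = cap.get(cv2.CAP_PROP_FPS)
        width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
        height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
        frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
        cap.release()
        duration = frames / fps if fps else 0.0
        return {
            "resolution_ok": (width, height) == (1280, 720),
            "fps_ok": round(fps) == 30,
            "duration_s": round(duration, 2),
        }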
  7. Kolivand H, Fern BM, Rahim MSM, Sulong G, Baker T, Tully D
    PLoS One, 2018;13(2):e0191447.
    PMID: 29420568 DOI: 10.1371/journal.pone.0191447
    In this paper, we present a new method to recognise the leaf type and identify plant species using phenetic parts of the leaf: lobe, apex and base detection. Most research in this area focuses on popular features such as shape, colour, vein and texture, which consume large amounts of computational processing and are not efficient, especially on the Acer database with its highly complex leaf structure. This paper focuses on phenetic parts of the leaf, which increases accuracy. Local maxima and minima are detected based on the Centroid Contour Distance of every boundary point, using the north and south regions to recognise the apex and base. Digital morphology is used to measure the leaf shape and the leaf margin. The Centroid Contour Gradient is presented to extract the curvature of the leaf apex and base. We analyse 32 leaf images of tropical plants and evaluate the method on two different datasets, Flavia and Acer. The best accuracies obtained are 94.76% and 82.6%, respectively. Experimental results show the effectiveness of the proposed technique without considering the commonly used features with high computational cost.
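    A minimal sketch of the Centroid Contour Distance step described above is given below; it computes the distance from every boundary point to the shape centroid and returns candidate local maxima and minima, while the north/south filtering of the apex and base and the Centroid Contour Gradient are left out and would follow the paper's own procedure.

    import numpy as np

    def centroid_contour_distance(contour):
        # Centroid Contour Distance (CCD) for every boundary point: Euclidean
        # distance from each (x, y) contour point to the shape centroid.
        centroid = contour.mean(axis=0)
        return np.linalg.norm(contour - centroid, axis=1)

    def local_extrema(ccd):
        # Candidate apex/base locations: indices where the CCD signal is a
        # local maximum or minimum relative to its circular neighbours.
        prev_, next_ = np.roll(ccd, 1), np.roll(ccd, -1)
        maxima = np.where((ccd > prev_) & (ccd > next_))[0]
        minima = np.where((ccd < prev_) & (ccd < next_))[0]
        return maxima, minima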
  8. Man MY, Ong MS, Mohamad MS, Deris S, Sulong G, Yunus J, et al.
    Malays J Med Sci, 2015 Dec;22(Spec Issue):9-19.
    PMID: 27006633 MyJurnal
    Neuroimaging is a new technique used to create images of the structure and function of the nervous system in the human brain, and it is currently crucial in many scientific fields. Neuroimaging data are attracting growing interest among neuroimaging experts; therefore, it is necessary to develop a large number of neuroimaging tools. This paper gives an overview of the tools that have been used to image the structure and function of the nervous system. This information can help developers, experts and users gain insight and a better understanding of the available neuroimaging tools, enabling better decisions when choosing tools for a particular research interest. Sources, links and descriptions of the application of each tool are provided in this paper as well. Lastly, this paper presents the implementation language, system requirements, strengths and weaknesses of the tools that have been widely used to image the structure and function of the nervous system.