Displaying publications 1 - 20 of 113 in total

  1. Chong JWR, Khoo KS, Chew KW, Ting HY, Show PL
    Biotechnol Adv, 2023;63:108095.
    PMID: 36608745 DOI: 10.1016/j.biotechadv.2023.108095
    Identification of microalgae species is important owing to the rise of harmful algal blooms, which affect both aquatic habitats and human health. Despite this, microalgae have been identified as a green biomass and alternative resource because of their promising accumulation of bioactive compounds that play a significant role in many industrial applications. Recently, microalgae species identification has been conducted through DNA analysis and various microscopy techniques such as light, scanning electron, transmission electron, and atomic force microscopy. These procedures have encouraged researchers to consider alternatives because of limitations such as costly validation, the need for skilled taxonomists, prolonged analysis, and low accuracy. This review highlights potential innovations in digital microscopy, incorporating both hardware and software, that can produce reliable recognition, detection, enumeration, and real-time acquisition of microalgae species. Several steps, such as image acquisition, processing, feature extraction, and selection, are discussed for the purpose of generating high image quality by removing unwanted artifacts and noise from the background. Identification of microalgae species is then performed by reliable image classification through machine learning and deep learning algorithms such as artificial neural networks, support vector machines, and convolutional neural networks. Overall, this review provides comprehensive insights into the possibilities of microalgae image identification, image pre-processing, and machine learning techniques to address the challenges in developing a robust digital classification tool for the future.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
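The pipeline that review describes (denoise, segment, extract features, classify) can be sketched minimally as below. The filter, features, and classifier are illustrative stand-ins chosen for brevity, not the specific methods the review surveys:

```python
import numpy as np

def median_denoise(img):
    """3x3 median filter: a simple noise-removal step (edges padded by reflection)."""
    p = np.pad(img, 1, mode="reflect")
    shifted = [p[r:r + img.shape[0], c:c + img.shape[1]]
               for r in range(3) for c in range(3)]
    return np.median(np.stack(shifted), axis=0)

def extract_features(img, thresh=0.5):
    """Segment by a global threshold, then return a tiny feature vector:
    foreground area fraction and mean foreground intensity."""
    mask = img > thresh
    area = mask.mean()
    mean_fg = img[mask].mean() if mask.any() else 0.0
    return np.array([area, mean_fg])

def nearest_centroid(feat, centroids):
    """Assign the closest class centroid (a stand-in for the SVM/CNN classifiers)."""
    d = np.linalg.norm(centroids - feat, axis=1)
    return int(np.argmin(d))
```

A nearest-centroid rule stands in for the heavier classifiers; the point is only the acquisition-to-classification flow of the pipeline.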
  2. Al-Khatib AR, Rajion ZA, Masudi SM, Hassan R, Anderson PJ, Townsend GC
    Orthod Craniofac Res, 2011 Nov;14(4):243-53.
    PMID: 22008304 DOI: 10.1111/j.1601-6343.2011.01529.x
    To investigate tooth size and dental arch dimensions in Malays using a stereophotogrammetric system.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
  3. Said MA, Musarudin M, Zulkaffli NF
    Ann Nucl Med, 2020 Dec;34(12):884-891.
    PMID: 33141408 DOI: 10.1007/s12149-020-01543-x
    OBJECTIVE: 18F is the most extensively used radioisotope in current clinical PET imaging practice. This choice reflects the criteria of a pure PET radioisotope with an optimum half-life and low positron energy, which contributes to a smaller positron range. In addition to 18F, other radioisotopes such as 68Ga and 124I have recently gained much attention with the growing interest in new PET tracers entering clinical trials. This study aims to determine the minimal scan time per bed position (Tmin) for 124I and 68Ga based on the quantitative differences in PET imaging of 68Ga and 124I relative to 18F.

    METHODS: The European Association of Nuclear Medicine (EANM) procedure guidelines version 2.0 for FDG-PET tumor imaging were adhered to for this purpose. A NEMA2012/IEC2008 phantom was filled with a tumor-to-background ratio of 10:1, with activity concentrations of 30 kBq/ml ± 10% and 3 kBq/ml ± 10% for each radioisotope. The phantom was scanned using different acquisition times per bed position (1, 5, 7, 10 and 15 min) to determine Tmin, which was defined using an image coefficient of variation (COV) of 15%.

    RESULTS: Tmin obtained for 18F, 68Ga and 124I were 3.08, 3.24 and 32.93 min, respectively. Quantitative analyses among 18F, 68Ga and 124I images were performed. Signal-to-noise ratio (SNR), contrast recovery coefficients (CRC), and visibility (VH) are the image quality parameters analysed in this study. Generally, 68Ga and 18F gave better image quality as compared to 124I for all the parameters studied.

    CONCLUSION: We have defined Tmin for 18F, 68Ga and 124I PET/CT imaging based on NEMA2012/IEC2008 phantom imaging. Despite the long scanning time suggested by Tmin, an improvement in image quality is achieved, especially for 124I. In clinical practice, however, the long acquisition time may cause patient discomfort and motion artifacts.

    Matched MeSH terms: Image Processing, Computer-Assisted/methods
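The COV criterion used above to define Tmin can be sketched as follows. The ROI handling and the scan-time search are simplified assumptions for illustration, not the EANM procedure itself:

```python
import numpy as np

def image_cov(roi):
    """Coefficient of variation (%) over a uniform background ROI: 100 * std / mean."""
    roi = np.asarray(roi, float)
    return 100.0 * roi.std() / roi.mean()

def minimal_scan_time(times, rois, limit=15.0):
    """Return the shortest acquisition time whose image COV is within the limit,
    or None if no acquisition meets it."""
    for t, roi in sorted(zip(times, rois), key=lambda p: p[0]):
        if image_cov(roi) <= limit:
            return t
    return None
```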
  4. Majeed A, Mt Piah AR, Ridzuan Yahya Z
    PLoS One, 2016;11(3):e0149921.
    PMID: 26967643 DOI: 10.1371/journal.pone.0149921
    Maxillofacial trauma is common, typically secondary to road traffic accidents, sports injuries and falls, and requires sophisticated radiological imaging for precise diagnosis. Direct surgical reconstruction is complex and requires clinical expertise. Bio-modelling helps in reconstructing a surface model from 2D contours. In this manuscript we construct a 3D surface from 2D Computerized Tomography (CT) scan contours. The fractured part of the cranial vault is reconstructed using a GC1 rational cubic Ball curve with three free parameters, and the 2D contours are then lifted into 3D with an equidistant z component. The constructed surface is represented by a contour-blending interpolant. The manuscript closes with a case report of a parietal bone fracture treated with this method, illustrated through a Graphical User Interface (GUI).
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
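The cubic Ball basis underlying that curve reconstruction can be sketched as below. The rational form with one weight per control point as the free shape parameters is an assumed simplification of the paper's GC1 rational cubic Ball curve, which it does not reproduce exactly:

```python
import numpy as np

def ball_basis(t):
    """Cubic Ball basis functions; they sum to 1 for every t in [0, 1]."""
    return np.array([(1 - t) ** 2,
                     2 * t * (1 - t) ** 2,
                     2 * t ** 2 * (1 - t),
                     t ** 2])

def rational_ball(t, pts, w):
    """Rational cubic Ball curve point: weighted basis combination of 4 control
    points, normalised by the weighted basis sum. The weights w act as shape
    parameters; the curve interpolates the first and last control points."""
    b = ball_basis(t) * np.asarray(w, float)
    return (b[:, None] * np.asarray(pts, float)).sum(axis=0) / b.sum()
```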
  5. Islam MS, Hannan MA, Basri H, Hussain A, Arebey M
    Waste Manag, 2014 Feb;34(2):281-90.
    PMID: 24238802 DOI: 10.1016/j.wasman.2013.10.030
    The increasing requirement for Solid Waste Management (SWM) has become a significant challenge for municipal authorities. A number of integrated systems and methods have been introduced to overcome this challenge. Many researchers have aimed to develop an ideal SWM system, including approaches involving software-based routing, Geographic Information Systems (GIS), Radio-frequency Identification (RFID), or sensor-equipped intelligent bins. Image processing solutions for Solid Waste (SW) collection have also been developed; however, when capturing the bin image it is challenging to position the camera so that the bin area is centred in the frame. As yet, there is no ideal system which can correctly estimate the amount of SW. This paper briefly discusses an efficient image processing solution to overcome these problems. Dynamic Time Warping (DTW) was used for detecting and cropping the bin area, and a Gabor wavelet (GW) was introduced for feature extraction from the waste bin image. The image features were used to train the classifier. A Multi-Layer Perceptron (MLP) classifier was used to classify the waste bin level and estimate the amount of waste inside the bin. The area under the Receiver Operating Characteristic (ROC) curve was used to statistically evaluate classifier performance. The results of the developed system are comparable to previous image processing based systems. The system demonstration using DTW with GW for feature extraction and an MLP classifier led to promising results with respect to the accuracy of waste level estimation (98.50%). The application can be used to optimize the routing of waste collection based on the estimated bin level.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
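The DTW step used there to locate the bin area rests on the classic cumulative-cost recurrence, sketched here for 1-D sequences (the paper applies it to image data, which this sketch does not cover):

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D sequences.

    D[i, j] holds the minimal cumulative cost of aligning a[:i] with b[:j];
    each cell extends the cheapest of the three admissible predecessor moves
    (match, insertion, deletion)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Because warping lets equal values align regardless of local tempo, a sequence and its element-wise doubled copy are at distance zero.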
  6. Sim KS, Lai MA, Tso CP, Teo CC
    J Med Syst, 2011 Feb;35(1):39-48.
    PMID: 20703587 DOI: 10.1007/s10916-009-9339-9
    A novel technique to quantify the signal-to-noise ratio (SNR) of magnetic resonance images is developed. The image SNR is quantified by estimating the amplitude of the signal spectrum using the autocorrelation function of just one single magnetic resonance image. To test the performance of the quantification, SNR measurement data are fitted to theoretically expected curves. It is shown that the technique can be implemented in a highly efficient way for the magnetic resonance imaging system.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
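The idea of reading SNR off the autocorrelation of a single image can be sketched as follows. The linear extrapolation of lags 1 and 2 back to lag 0 is an assumption for illustration; the paper's exact estimator may differ:

```python
import numpy as np

def snr_from_autocorrelation(img):
    """Estimate SNR (dB) from one image: the autocorrelation at lag 0 holds
    signal-plus-noise power, while nearby lags hold (almost) only signal power,
    so extrapolating the near-zero lags back to lag 0 separates the two."""
    x = img.ravel().astype(float)
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[x.size - 1:] / x.size
    # linear extrapolation of lags 1 and 2 back to lag 0 -> noise-free power
    signal_power = 2 * acf[1] - acf[2]
    noise_power = acf[0] - signal_power
    return 10 * np.log10(signal_power / noise_power)
```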
  7. Sim KS, Kiani MA, Nia ME, Tso CP
    J Microsc, 2014 Jan;253(1):1-11.
    PMID: 24164248 DOI: 10.1111/jmi.12089
    A new technique based on cubic spline interpolation with Savitzky-Golay noise reduction filtering is designed to estimate the signal-to-noise ratio of scanning electron microscopy (SEM) images. This approach is found to give better results than two existing techniques: nearest neighbourhood and first-order interpolation. When applied to evaluate the quality of SEM images, noise can be eliminated efficiently with an optimal choice of scan rate from real-time SEM images, without introducing corruption or increasing scanning time.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
  8. Pathan RK, Biswas M, Yasmin S, Khandaker MU, Salman M, Youssef AAF
    Sci Rep, 2023 Oct 09;13(1):16975.
    PMID: 37813932 DOI: 10.1038/s41598-023-43852-x
    Sign Language Recognition is a breakthrough for communication in the deaf-mute community and has been a critical research topic for years. Although some previous studies have successfully recognized sign language, this has required many costly instruments including sensors, devices, and high-end processing power. However, such drawbacks can be easily overcome by employing artificial intelligence-based techniques. Since, in this modern era of advanced mobile technology, using a camera to take video or images is much easier, this study demonstrates a cost-effective technique to detect American Sign Language (ASL) using an image dataset. Here, the "Finger Spelling, A" dataset has been used, with 24 letters (excluding j and z, as they contain motion). The main reason for using this dataset is that these images have complex backgrounds with different environments and scene colors. Two layers of image processing have been used: in the first layer, images are processed as a whole for training, and in the second layer, the hand landmarks are extracted. A multi-headed convolutional neural network (CNN) model has been proposed and tested with 30% of the dataset to train these two layers. To avoid overfitting, data augmentation and dynamic learning rate reduction have been used. With the proposed model, 98.981% test accuracy has been achieved. It is expected that this study may help to develop an efficient human-machine communication system for the deaf-mute community.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
  9. Abu A, Susan LL, Sidhu AS, Dhillon SK
    BMC Bioinformatics, 2013;14:48.
    PMID: 23398696 DOI: 10.1186/1471-2105-14-48
    Digitised monogenean images are usually stored in file system directories in an unstructured manner. In this paper we propose a semantic representation of these images in the form of a Monogenean Haptoral Bar Image (MHBI) ontology, which is annotated with taxonomic classification, diagnostic hard parts and image properties. The data we used are of monogenean species found in fish, so we built a simple Fish ontology to demonstrate how the host (fish) ontology can be linked to the MHBI ontology. This enables linking of information from the monogenean ontology to the host species found in the fish ontology without changing the underlying schema of either ontology.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
  10. Sheikh Abdullah SN, Bohani FA, Nayef BH, Sahran S, Al Akash O, Iqbal Hussain R, et al.
    Comput Math Methods Med, 2016;2016:8603609.
    PMID: 27516807 DOI: 10.1155/2016/8603609
    Brain magnetic resonance imaging (MRI) classification into normal and abnormal is a critical and challenging task. Owing to that, several medical imaging classification techniques have been devised, among which Learning Vector Quantization (LVQ) is one of the most promising. The main goal of this paper is to enhance the performance of the LVQ technique in order to achieve higher-accuracy detection of brain tumors in MRIs. The classical way of selecting the winner code vector in LVQ is to measure the distance between the input vector and the codebook vectors using the Euclidean distance function. To improve the winner selection technique, a round-off function is employed along with the Euclidean distance function. Moreover, in competitive learning classifiers, the fitted model is highly dependent on the class distribution. Therefore this paper proposes a multiresampling technique through which a better class distribution can be achieved. This multiresampling is executed by using random selection via preclassification. The test data samples used are brain tumor magnetic resonance images collected from Universiti Kebangsaan Malaysia Medical Center and UCI benchmark data sets. Comparative studies with LVQ1, Multipass LVQ, Hierarchical LVQ, Multilayer Perceptron, and Radial Basis Function showed that the proposed methods give promising results.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
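The winner selection and update at the heart of LVQ can be sketched as a single LVQ1 training step (the paper's round-off modification and multiresampling are omitted here):

```python
import numpy as np

def lvq1_step(codebook, labels, x, y, lr=0.1):
    """One LVQ1 update: pick the winner code vector by Euclidean distance,
    then pull it toward x if the labels match, push it away otherwise.
    Returns the updated codebook and the winner index."""
    d = np.linalg.norm(codebook - x, axis=1)
    w = int(np.argmin(d))
    sign = 1.0 if labels[w] == y else -1.0
    codebook = codebook.copy()
    codebook[w] += sign * lr * (x - codebook[w])
    return codebook, w
```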
  11. Aly CA, Abas FS, Ann GH
    Sci Prog, 2021;104(2):368504211005480.
    PMID: 33913378 DOI: 10.1177/00368504211005480
    INTRODUCTION: Action recognition is a challenging time series classification task that has received much attention in the recent past due to its importance in critical applications, such as surveillance, visual behavior study, topic discovery, security, and content retrieval.

    OBJECTIVES: The main objective of the research is to develop robust, high-performance human action recognition techniques. A combination of local and holistic feature extraction methods is used, with analysis of the most effective features to extract, followed by simple, high-performance machine learning algorithms.

    METHODS: This paper presents three robust action recognition techniques based on a series of image analysis methods to detect activities in different scenes. The general scheme architecture consists of shot boundary detection, shot frame rate re-sampling, and compact feature vector extraction. This process is achieved by emphasizing variations and extracting strong patterns in feature vectors before classification.

    RESULTS: The proposed schemes are tested on datasets with cluttered backgrounds, low- or high-resolution videos, different viewpoints, and different camera motion conditions, namely, the Hollywood-2, KTH, UCF11 (YouTube actions), and Weizmann datasets. The proposed schemes resulted in highly accurate video analysis results compared to those of other works based on four widely used datasets. The First, Second, and Third Schemes provide recognition accuracies of 57.8%, 73.6%, and 52.0% on Hollywood2; 94.5%, 97.0%, and 59.3% on KTH; 94.5%, 95.6%, and 94.2% on UCF11; and 98.9%, 97.8%, and 100% on Weizmann.

    CONCLUSION: Each of the proposed schemes provides high recognition accuracy compared to other state-of-the-art methods; the Second Scheme in particular gives results comparable to other benchmarked approaches.

    Matched MeSH terms: Image Processing, Computer-Assisted/methods
  12. Ninomiya K, Arimura H, Chan WY, Tanaka K, Mizuno S, Muhammad Gowdh NF, et al.
    PLoS One, 2021;16(1):e0244354.
    PMID: 33428651 DOI: 10.1371/journal.pone.0244354
    OBJECTIVES: To propose a novel robust radiogenomics approach to the identification of epidermal growth factor receptor (EGFR) mutations among patients with non-small cell lung cancer (NSCLC) using Betti numbers (BNs).

    MATERIALS AND METHODS: Contrast-enhanced computed tomography (CT) images of 194 multi-racial NSCLC patients (79 EGFR mutants and 115 wildtypes) were collected from three different countries using five manufacturers' scanners with a variety of scanning parameters. Ninety-nine cases obtained from the University of Malaya Medical Centre (UMMC) in Malaysia were used for training and validation. Forty-one cases collected from the Kyushu University Hospital (KUH) in Japan and fifty-four cases obtained from The Cancer Imaging Archive (TCIA) in America were used for testing. Radiomic features were obtained from BN maps, which represent topologically invariant heterogeneous characteristics of lung cancer on CT images, by applying histogram- and texture-based feature computations. A BN-based signature was determined using support vector machine (SVM) models with the best combination of features that maximized a robustness index (RI), defined to favour a higher total area under the receiver operating characteristic curve (AUC) and a lower difference in AUC between training and validation. The SVM model was built using the signature and optimized in a five-fold cross validation. The BN-based model was compared to conventional original image (OI)- and wavelet-decomposition (WD)-based models with respect to the RI between validation and test.

    RESULTS: The BN-based model showed a higher RI of 1.51 compared with the models based on the OI (RI: 1.33) and the WD (RI: 1.29).

    CONCLUSION: The proposed model showed higher robustness than the conventional models in the identification of EGFR mutations among NSCLC patients. The results suggested the robustness of the BN-based approach against variations in image scanner/scanning parameters.

    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
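As a minimal illustration of Betti numbers, b0 of a binarised image is simply its count of connected foreground components. A sketch follows; the paper computes full BN maps over CT images, which this does not reproduce:

```python
import numpy as np
from collections import deque

def betti_0(mask):
    """Betti number b0 = number of 4-connected foreground components of a
    binary image, found by breadth-first flood fill."""
    mask = np.asarray(mask, bool)
    seen = np.zeros_like(mask)
    count = 0
    for i, j in zip(*np.nonzero(mask)):
        if seen[i, j]:
            continue
        count += 1                      # new component discovered
        q = deque([(i, j)])
        seen[i, j] = True
        while q:
            r, c = q.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if (0 <= rr < mask.shape[0] and 0 <= cc < mask.shape[1]
                        and mask[rr, cc] and not seen[rr, cc]):
                    seen[rr, cc] = True
                    q.append((rr, cc))
    return count
```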
  13. Ab Hamid F, Che Azemin MZ, Salam A, Aminuddin A, Mohd Daud N, Zahari I
    Curr Eye Res, 2016 Jun;41(6):823-31.
    PMID: 26268475 DOI: 10.3109/02713683.2015.1056375
    PURPOSE: The goal of this study was to provide the empirical evidence of fractal dimension as an indirect measure of retinal vasculature density.

    MATERIALS AND METHODS: Two hundred retinal samples of the right eye [57.0% females (n = 114) and 43.0% males (n = 86)] were selected from the baseline visit. Custom-written software was used for vessel segmentation, the process of transforming two-dimensional color images into binary images (i.e. black and white pixels). A circular area of approximately 2.6 optic disc radii surrounding the center of the optic disc was cropped, and non-vessel fragments were removed. FracLac was used to measure the fractal dimension and vessel density of the retinal vessels.

    RESULTS: This study suggested that 14.1% of the region of interest (i.e. approximately 2.6 optic disc radii) comprised retinal vessel structure. Using correlation analysis, vessel density measurement and fractal dimension estimation are linearly and strongly correlated (R = 0.942, R² = 0.89, p

    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
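Fractal dimension of a segmented vessel map is commonly estimated by box counting, the approach tools like FracLac implement. A minimal sketch (the exact FracLac algorithm, with grid offsets and lacunarity, is richer than this):

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Box-counting estimate of fractal dimension: count occupied boxes at
    several scales and fit log(count) against log(1/size); the slope is the
    dimension estimate."""
    mask = np.asarray(mask, bool)
    counts = []
    for s in sizes:
        h = mask.shape[0] // s * s      # trim so the grid tiles exactly
        w = mask.shape[1] // s * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```

A filled region scales like an area (dimension 2) and a single line like a curve (dimension 1); vessel trees land in between.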
  14. Othman SA, Ahmad R, Mericant AF, Jamaludin M
    Aust Orthod J, 2013 May;29(1):58-65.
    PMID: 23785939
    Fast, non-invasive three-dimensional (3D) systems are a recent trend in orthodontics. The reproducibility of facial landmarks is important so that 3D facial measurements are accurate and may be applied clinically. The aim of this study is to evaluate the reproducibility of facial soft tissue landmarks using a non-invasive stereo-photogrammetric 3D camera.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
  15. Isa ZM, Abdulhadi LM
    J Oral Sci, 2012;54(2):159-63.
    PMID: 22790408
    We investigated the relationship of the maxillary central incisors to the incisive papilla in wearers of complete dentures. First, image analyzer software was used to examine the relationship of the midpoint of the incisive papilla to the labial surface of the maxillary central incisors on occlusal photographs of 120 maxillary casts from dentate Malaysian adults. Then, an Alma denture gauge was used to identify the position of the labial surface of the maxillary central incisors in relation to the midpoint of the incisive papilla in complete dentures from 51 patients who requested replacement dentures at the Faculty of Dentistry, University of Malaya. The mean incisor distance to the incisive papilla in dentate adults was 9.59 ± 1.00 mm, while the mean incisor distance to the incisive papilla in complete dentures was 6.34 ± 1.87 mm. Thus, in our sample of edentulous patients, the anterior teeth in complete dentures were positioned approximately 3 mm closer to the incisive papilla, as compared with the position of the central incisors in natural dentition, and did not duplicate the position of the natural anterior teeth.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
  16. Chew KM, Sudirman R, Seman N, Yong CY
    Biomed Mater Eng, 2014;24(1):199-207.
    PMID: 24211899 DOI: 10.3233/BME-130800
    The study was conducted with two objectives as its framework: first, to determine the point of microwave signal reflection while penetrating the simulation models and, second, to analyze the reflection pattern when the signal penetrates layers with different relative permittivity, εr. Several microwave models were developed to closely approximate the in vivo human brain. The study proposed two layers on two models with different characteristics, the factors being the radius of the second layer and the corresponding antenna position. The radius for model 1 is 60 mm with the antenna positioned 10 mm away; model 2, in contrast, is 10 mm larger with a closely fitted antenna without any gap. The layers of the models were built from different combinations of materials such as Oil, Sandy Soil, Brain, Glycerin and Water. Results show that the combinations Glycerin + Brain and Brain + Sandy Soil are the closest approximations of in vivo human brain grey and white matter. These results could benefit subsequent studies in further enhancing and developing the models.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
  17. Bilal M, Anis H, Khan N, Qureshi I, Shah J, Kadir KA
    Biomed Res Int, 2019;2019:6139785.
    PMID: 31119178 DOI: 10.1155/2019/6139785
    Background: Motion is a major source of blurring and ghosting in recovered MR images. It is even more challenging in Dynamic Contrast Enhancement (DCE) MRI because motion effects and the rapid intensity changes due to the contrast agent are difficult to distinguish from each other.

    Material and Methods: In this study, we have introduced a new technique to reduce motion artifacts, based on data binning and the low rank plus sparse (L+S) reconstruction method for DCE MRI. For data binning, radial k-space data are acquired continuously using the golden-angle radial sampling pattern and grouped into various motion states or bins. The respiratory signal for binning is extracted directly from the radially acquired k-space data. A compressed sensing- (CS-) based L+S matrix decomposition model is then used to reconstruct motion-sorted DCE MR images. Undersampled free-breathing 3D liver and abdominal DCE MR data sets are used to validate the proposed technique.

    Results: The performance of the technique is compared with conventional L+S decomposition qualitatively along with the image sharpness and structural similarity index. Recovered images are visually sharper and have better similarity with reference images.

    Conclusion: L+S decomposition provides improved MR images with data binning as a preprocessing step in the free-breathing scenario. Data binning resolves the respiratory motion by dividing different respiratory positions into multiple bins. It also separates respiratory motion from contrast agent (CA) variations. MR images recovered for each bin are better compared with the method without data binning.

    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
  18. Sim KS, Tan YY, Lai MA, Tso CP, Lim WK
    J Microsc, 2010 Apr 1;238(1):44-56.
    PMID: 20384837 DOI: 10.1111/j.1365-2818.2009.03328.x
    An exponential contrast stretching (ECS) technique is developed to reduce the charging effects on scanning electron microscope images. Compared to some of the conventional histogram equalization methods, such as bi-histogram equalization and recursive mean-separate histogram equalization, the proposed ECS method yields better image compensation. Diode sample chips with insulating and conductive surfaces are used as test samples to evaluate the efficiency of the developed algorithm. The algorithm is implemented in software with a frame grabber card, forming the front-end video capture element.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
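One plausible form of an exponential grey-level mapping is sketched below. The normalised transfer function and its parameter k are assumptions for illustration, not necessarily the paper's exact ECS formula:

```python
import numpy as np

def exponential_contrast_stretch(img, k=4.0):
    """Map 8-bit grey levels through a normalised exponential curve.
    k > 0 controls the stretch strength; the mapping is monotone and fixes
    0 -> 0 and 255 -> 255 while redistributing the levels in between."""
    x = img.astype(float) / 255.0
    y = (np.exp(k * x) - 1.0) / (np.exp(k) - 1.0)
    return (255.0 * y).round().astype(np.uint8)
```

With this convex mapping, darker levels are compressed and bright levels are spread out; other ECS variants invert or reparametrise the curve.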
  19. AlDahoul N, Md Sabri AQ, Mansoor AM
    Comput Intell Neurosci, 2018;2018:1639561.
    PMID: 29623089 DOI: 10.1155/2018/1639561
    Human detection in videos plays an important role in various real-life applications. Most traditional approaches depend on utilizing handcrafted features, which are problem-dependent and optimal only for specific tasks. Moreover, they are highly susceptible to dynamical events such as illumination changes, camera jitter, and variations in object size. Feature learning approaches, on the other hand, are cheaper and easier because highly abstract and discriminative features can be produced automatically without the need for expert knowledge. In this paper, we utilize automatic feature learning methods that combine optical flow and three different deep models (i.e., a supervised convolutional neural network (S-CNN), a pretrained CNN feature extractor, and a hierarchical extreme learning machine (H-ELM)) for human detection in videos captured using a non-static camera on an aerial platform with varying altitudes. The models are trained and tested on the publicly available and highly challenging UCF-ARG aerial dataset. The comparison between these models in terms of training, testing accuracy, and learning speed is analyzed. The performance evaluation considers five human actions (digging, waving, throwing, walking, and running). Experimental results demonstrate that the proposed methods are successful for the human detection task. The pretrained CNN produces an average accuracy of 98.09%. The S-CNN produces an average accuracy of 95.6% with soft-max and 91.7% with Support Vector Machines (SVM). The H-ELM has an average accuracy of 95.9%. Using a normal Central Processing Unit (CPU), H-ELM training takes 445 seconds, whereas learning in the S-CNN takes 770 seconds on a high-performance Graphical Processing Unit (GPU).
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
  20. Abdullah KA, McEntee MF, Reed W, Kench PL
    J Med Imaging Radiat Oncol, 2016 Aug;60(4):459-68.
    PMID: 27241506 DOI: 10.1111/1754-9485.12473
    The aim of this systematic review is to evaluate the radiation dose reduction achieved using iterative reconstruction (IR) compared to filtered back projection (FBP) in coronary CT angiography (CCTA) and to assess the impact on diagnostic image quality. A systematic search of seven electronic databases was performed to identify all relevant studies using a developed keyword strategy. A total of 14 studies met the criteria and were included in the review analysis. The results showed that there was a significant reduction in radiation dose when using IR compared to FBP (P < 0.05). The mean ± SD differences in image noise, signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were 1.05 ± 1.29 HU, 0.88 ± 0.56 and 0.63 ± 1.83, respectively. The mean ± SD percentages of overall image quality scores were 71.79 ± 12.29% (FBP) and 67.31 ± 22.96% (IR). The mean ± SD percentages of coronary segment analysis were 95.43 ± 2.57% (FBP) and 97.19 ± 2.62% (IR). In conclusion, this review shows that CCTA with IR leads to a significant reduction in radiation dose compared with FBP, and that the diagnostic image quality of IR at reduced dose (30-41%) is comparable to FBP at standard dose in the diagnosis of CAD.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*