Displaying publications 1 - 20 of 247 in total

  1. Abdul Jalil N, Abdul Rahim N, Md Shalleh N, Rossetti C
    Singapore Med J, 2008 Jul;49(7):e178-82.
    PMID: 18695852
    A majority of the clinical use of positron emission tomography (PET)-computed tomography (CT) is related to cancer management. Its application in evaluating inflammatory diseases and pyrexia of unknown origin is becoming popular. We reviewed the fluorine-18-fluorodeoxyglucose PET-CT findings of an 80-year-old woman with nonspecific clinical presentation consisting of generalised malaise, moderately high fever and weight loss. Prior CT and magnetic resonance imaging were not helpful in providing a clinical diagnosis. The diagnosis was Horton's arteritis, and the patient responded well to high-dose steroids.
    Matched MeSH terms: Image Processing, Computer-Assisted
  2. Ahmad Fadzil MH, Prakasa E, Asirvadam VS, Nugroho H, Affandi AM, Hussein SH
    Comput Biol Med, 2013 Nov;43(11):1987-2000.
    PMID: 24054912 DOI: 10.1016/j.compbiomed.2013.08.009
    Psoriasis is an incurable skin disorder affecting 2-3% of the world population. The scaliness of psoriasis is a key assessment parameter of the Psoriasis Area and Severity Index (PASI). Dermatologists typically use visual and tactile senses in PASI scaliness assessment. However, the assessment can be subjective, resulting in inter- and intra-rater variability in the scores. This paper proposes an assessment method that incorporates 3D surface roughness with standard clustering techniques to objectively determine the PASI scaliness score for psoriasis lesions. A surface roughness algorithm using structured light projection has been applied to 1999 3D psoriasis lesion surfaces. The algorithm has been validated with an accuracy of 94.12%. Clustering algorithms were used to classify the surface roughness measured using the proposed assessment method for PASI scaliness scoring. The reliability of the developed PASI scaliness algorithm was high, with kappa coefficients > 0.84 (almost perfect agreement).
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
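The entry above combines a 3D surface roughness measure with standard clustering to score scaliness. The minimal Python sketch below illustrates that general idea only: an arithmetic-mean roughness (Sa) computed from synthetic height maps, then clustered into five groups with k-means as a stand-in for PASI scaliness scores 0-4. The data, parameters, and function names are hypothetical assumptions, not the authors' implementation.
```python
# Illustrative sketch: roughness of lesion height maps, then k-means grouping.
import numpy as np
from sklearn.cluster import KMeans

def surface_roughness_sa(height_map: np.ndarray) -> float:
    """Arithmetic mean deviation (Sa) from the mean surface height."""
    return float(np.mean(np.abs(height_map - height_map.mean())))

rng = np.random.default_rng(0)
# Hypothetical data: 200 lesion surfaces, each a 64x64 height map.
roughness = np.array([
    surface_roughness_sa(rng.normal(scale=s, size=(64, 64)))
    for s in rng.uniform(0.01, 0.5, size=200)
]).reshape(-1, 1)

# Cluster roughness into 5 groups as a stand-in for PASI scaliness scores 0-4.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(roughness)
# Order clusters by mean roughness so labels increase with scaliness.
order = np.argsort(kmeans.cluster_centers_.ravel())
score = {cluster: rank for rank, cluster in enumerate(order)}
pasi_scaliness = np.array([score[c] for c in kmeans.labels_])
print(pasi_scaliness[:10])
```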
  3. Kamarudin ND, Ooi CY, Kawanabe T, Odaguchi H, Kobayashi F
    J Healthc Eng, 2017;2017:7460168.
    PMID: 29065640 DOI: 10.1155/2017/7460168
    In tongue diagnosis, the colour of the tongue body carries valuable information about the state of disease and its correlation with the internal organs. In practice, practitioners may find the judgement difficult because of unstable lighting conditions and the limited ability of the naked eye to capture the exact colour distribution on the tongue, especially a tongue with a multicoloured substance. To overcome this ambiguity, this paper presents a two-stage tongue multicolour classification based on a support vector machine (SVM) whose support vectors are reduced by the proposed k-means clustering identifiers and a red colour range for precise tongue colour diagnosis. In the first stage, k-means clustering is used to cluster a tongue image into four clusters: image background (black), deep red region, red/light red region, and transitional region. In the second-stage classification, red/light red tongue images are further classified into red tongue or light red tongue based on the red colour range derived in this work. Overall, the true classification accuracy of the proposed two-stage approach in diagnosing red, light red, and deep red tongue colours is 94%. The number of support vectors in the SVM is reduced by 41.2%, and the execution time for one image is 48 seconds.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
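As a rough illustration of the first-stage clustering described in the entry above, the sketch below applies k-means with four clusters to the pixels of a tongue image in CIELAB space. The file name, colour space choice, and cluster-ordering rule are assumptions made for the example; the paper's red colour ranges and SVM stage are not reproduced.
```python
# Illustrative sketch: k-means clustering of a tongue image into four colour clusters.
import numpy as np
from sklearn.cluster import KMeans
from skimage import io, color

image = io.imread("tongue.png")[:, :, :3]   # assumed RGB input file
lab = color.rgb2lab(image)                  # CIELAB is a common choice for colour work
pixels = lab.reshape(-1, 3)

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(pixels)
labels = kmeans.labels_.reshape(image.shape[:2])

# Sort clusters by lightness (L*) so that cluster 0 is the darkest (background-like)
# and cluster 3 the lightest; a real system would need domain-specific rules.
order = np.argsort(kmeans.cluster_centers_[:, 0])
remap = np.zeros(4, dtype=int)
remap[order] = np.arange(4)
labels = remap[labels]
print(np.bincount(labels.ravel()))          # pixel count per cluster
```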
  4. Tang JR, Mat Isa NA, Ch'ng ES
    PLoS One, 2015;10(11):e0142830.
    PMID: 26560331 DOI: 10.1371/journal.pone.0142830
    Despite the effectiveness of the Pap-smear test in reducing the mortality rate due to cervical cancer, the criteria of the reporting standard of the Pap-smear test are mostly qualitative in nature. This study addresses the issue of how to define the criteria in more quantitative and definite terms. A negative Pap-smear test result, i.e. negative for intraepithelial lesion or malignancy (NILM), is qualitatively defined to have evenly distributed, finely granular chromatin in the nuclei of cervical squamous cells. To quantify this chromatin pattern, this study employed Fuzzy C-Means clustering as the segmentation technique, enabling different degrees of chromatin segmentation to be performed on sample images of non-neoplastic squamous cells. From the simulation results, a model representing the chromatin distribution of non-neoplastic cervical squamous cells is constructed with the following quantitative characteristics: at the best representative sensitivity level 4, based on statistical analysis and human experts' feedback, a nucleus of a non-neoplastic squamous cell has an average of 67 chromatins with a total area of 10.827 μm²; the average distance between the nearest chromatin pair is 0.508 μm and the average eccentricity of the chromatin is 0.47.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
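The study above uses Fuzzy C-Means (FCM) to segment chromatin at different sensitivity levels. Below is a generic, from-scratch FCM sketch on synthetic grey-level data, with a membership threshold standing in for a chosen segmentation level; it is illustrative only and does not reproduce the paper's sensitivity levels or measurements.
```python
# Minimal Fuzzy C-Means sketch: soft clustering of grey-level pixel intensities.
import numpy as np

def fuzzy_c_means(x, c=2, m=2.0, n_iter=100, seed=0):
    """x: (n_samples, n_features); returns cluster centres and memberships."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), c))
    u /= u.sum(axis=1, keepdims=True)                  # memberships sum to 1 per sample
    for _ in range(n_iter):
        um = u ** m
        centres = (um.T @ x) / um.sum(axis=0)[:, None]  # fuzzy-weighted centroids
        d = np.linalg.norm(x[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        u = 1.0 / (d ** (2.0 / (m - 1.0)))              # standard FCM membership update
        u /= u.sum(axis=1, keepdims=True)
    return centres, u

rng = np.random.default_rng(1)
pixels = np.concatenate([rng.normal(60, 10, 500),       # dark, chromatin-like intensities
                         rng.normal(180, 15, 1500)])    # bright background intensities
centres, u = fuzzy_c_means(pixels.reshape(-1, 1), c=2)
chromatin_mask = u[:, np.argmin(centres.ravel())] > 0.5  # membership threshold as "degree"
print(centres.ravel(), chromatin_mask.sum())
```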
  5. Mousavi SM, Naghsh A, Abu-Bakar SA
    J Digit Imaging, 2015 Aug;28(4):417-27.
    PMID: 25736857 DOI: 10.1007/s10278-015-9770-z
    This paper presents an automatic region of interest (ROI) segmentation method for application of watermarking in medical images. The advantage of using this scheme is that the proposed method is robust against different attacks such as median, Wiener, Gaussian, and sharpening filters. In other words, this technique can produce the same result for the ROI before and after these attacks. The proposed algorithm consists of three main parts: suggesting an automatic ROI detection system, evaluating the robustness of the proposed system against numerous attacks, and finally recommending an enhancement part to increase the strength of the composed system against different attacks. Results obtained from the proposed method demonstrate its promising performance.
    Matched MeSH terms: Image Processing, Computer-Assisted
  6. Liew TS, Schilthuizen M
    PLoS One, 2016;11(6):e0157069.
    PMID: 27280463 DOI: 10.1371/journal.pone.0157069
    Quantitative analysis of organismal form is an important component for almost every branch of biology. Although generally considered an easily-measurable structure, the quantification of gastropod shell form is still a challenge because many shells lack homologous structures and have a spiral form that is difficult to capture with linear measurements. In view of this, we adopt the idea of theoretical modelling of shell form, in which the shell form is the product of aperture ontogeny profiles in terms of aperture growth trajectory that is quantified as curvature and torsion, and of aperture form that is represented by size and shape. We develop a workflow for the analysis of shell forms based on the aperture ontogeny profile, starting from the procedure of data preparation (retopologising the shell model), via data acquisition (calculation of aperture growth trajectory, aperture form and ontogeny axis), and data presentation (qualitative comparison between shell forms) and ending with data analysis (quantitative comparison between shell forms). We evaluate our methods on representative shells of the genera Opisthostoma and Plectostoma, which exhibit great variability in shell form. The outcome suggests that our method is a robust, reproducible, and versatile approach for the analysis of shell form. Finally, we propose several potential applications of our methods in functional morphology, theoretical modelling, taxonomy, and evolutionary biology.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
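The aperture growth trajectory in the entry above is quantified as curvature and torsion. The short sketch below shows the standard discrete computation of both quantities for a sampled 3D curve, checked against a helix whose curvature and torsion are known analytically; it is generic differential geometry, not the authors' retopologising workflow.
```python
# Curvature and torsion of a discretely sampled 3D curve (here: a synthetic helix).
import numpy as np

t = np.linspace(0, 4 * np.pi, 2000)
a, b = 2.0, 0.5                                 # helix radius and pitch (arbitrary)
r = np.stack([a * np.cos(t), a * np.sin(t), b * t], axis=1)

# Numerical derivatives with respect to the parameter t.
r1 = np.gradient(r, t, axis=0)
r2 = np.gradient(r1, t, axis=0)
r3 = np.gradient(r2, t, axis=0)

cross = np.cross(r1, r2)
speed = np.linalg.norm(r1, axis=1)
curvature = np.linalg.norm(cross, axis=1) / speed ** 3
torsion = np.einsum("ij,ij->i", cross, r3) / np.linalg.norm(cross, axis=1) ** 2

# Analytic values for a helix: kappa = a/(a^2+b^2), tau = b/(a^2+b^2).
print(curvature[1000], a / (a ** 2 + b ** 2))
print(torsion[1000], b / (a ** 2 + b ** 2))
```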
  7. Hossain A, Islam MT, Islam MT, Chowdhury MEH, Rmili H, Samsuzzaman M
    Materials (Basel), 2020 Nov 02;13(21).
    PMID: 33147702 DOI: 10.3390/ma13214918
    In this paper, a compact planar ultrawideband (UWB) antenna and an antenna array setup for microwave breast imaging are presented. The proposed antenna is constructed with a slotted semicircular-shaped patch and a partial trapezoidal ground. It is compact in dimension: 0.30λ × 0.31λ × 0.011λ, where λ is the wavelength of the lowest operating frequency. For design purposes, several parameters are assumed and optimized to achieve better performance. The prototype is applied in the breast imaging scheme over the UWB frequency range of 3.10-10.60 GHz. The antenna achieves an operating bandwidth of 8.70 GHz (2.30-11.00 GHz) for a reflection coefficient below -10 dB with good impedance matching, and a maximum gain of 5.80 dBi with a stable radiation pattern. The antenna provides a fidelity factor (FF) of 82% and 81% for face-to-face and side-by-side setups, respectively, which indicates good directionality and minor variation of the received pulses. The antenna is fabricated and measured to evaluate its characteristics. A 16-antenna array configuration is used to measure the backscattering signal of the breast phantom, where one antenna acts as the transmitter and the other 15 receive the scattered signals. Data are collected from the phantom both with and without a tumor inside. The Iteratively Corrected Delay and Sum (IC-DAS) image reconstruction algorithm is then used to identify the tumor in the breast phantom. Finally, the images reconstructed from the analysis and processing of the backscattering signal by the algorithm are presented to verify the imaging performance.
    Matched MeSH terms: Image Processing, Computer-Assisted
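The imaging step in the entry above uses an Iteratively Corrected Delay and Sum (IC-DAS) algorithm. The sketch below shows only the plain delay-and-sum core on synthetic data: each pixel accumulates the received traces sampled at the transmitter-pixel-receiver round-trip delay. The antenna ring geometry, wave speed, sampling rate, and signals are invented placeholders, and the iterative correction step is omitted.
```python
# Minimal delay-and-sum (DAS) imaging sketch on synthetic signals.
import numpy as np

c = 2.0e8                     # assumed average wave speed in tissue (m/s)
fs = 20e9                     # assumed sampling rate of the time traces (Hz)
n_ant = 16
angles = np.linspace(0, 2 * np.pi, n_ant, endpoint=False)
antennas = 0.08 * np.stack([np.cos(angles), np.sin(angles)], axis=1)  # 8 cm ring

# Hypothetical received signals: signals[tx, rx, :] is one time trace.
n_samples = 4096
signals = np.random.randn(n_ant, n_ant, n_samples) * 1e-3

# Image grid over an 8 cm x 8 cm region around the phantom centre.
xs = np.linspace(-0.04, 0.04, 64)
ys = np.linspace(-0.04, 0.04, 64)
image = np.zeros((len(ys), len(xs)))

tx = 0                                            # one transmitter, 15 receivers
for iy, y in enumerate(ys):
    for ix, x in enumerate(xs):
        pixel = np.array([x, y])
        d_tx = np.linalg.norm(antennas[tx] - pixel)
        total = 0.0
        for rx in range(n_ant):
            if rx == tx:
                continue
            delay = (d_tx + np.linalg.norm(antennas[rx] - pixel)) / c
            idx = min(int(round(delay * fs)), n_samples - 1)
            total += signals[tx, rx, idx]         # coherent sum of delayed samples
        image[iy, ix] = total ** 2                # pixel energy
print(image.max())
```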
  8. Kipli K, Hoque ME, Lim LT, Mahmood MH, Sahari SK, Sapawi R, et al.
    Comput Math Methods Med, 2018;2018:4019538.
    PMID: 30065780 DOI: 10.1155/2018/4019538
    Digital image processing is one of the most widely used computer vision technologies in biomedical engineering. In modern ophthalmological practice, biomarker analysis through digital fundus image processing contributes greatly to vision science. This further facilitates developments in medical imaging and extends the scope of this robust technology within the biomedical engineering platform. Various diagnostic techniques are used to analyze retinal microvasculature images and measure geometric features such as vessel tortuosity, branching angles, branching coefficient, vessel diameter, and fractal dimension. These extracted markers, or characterized fundus digital image features, provide insights and relate quantitative retinal vascular topography abnormalities to various pathologies such as diabetic retinopathy, macular degeneration, hypertensive retinopathy, transient ischemic attack, neovascular glaucoma, and cardiovascular diseases. In addition, this noninvasive research tool is automated, allowing it to be used in large-scale screening programs, all of which are described in this review paper. The paper also reviews recent research on image processing-based extraction techniques for quantitative retinal microvascular features, focusing mainly on features associated with the early symptoms of transient ischemic attack or sharp stroke.
    Matched MeSH terms: Image Processing, Computer-Assisted*
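One of the geometric features listed in the review above, vessel tortuosity, is commonly computed as the ratio of a vessel centreline's arc length to its chord length. A minimal sketch with a synthetic centreline follows; the function and data are hypothetical, not taken from the review.
```python
# Vessel tortuosity: arc length of the centreline divided by the endpoint chord length.
import numpy as np

def tortuosity(points: np.ndarray) -> float:
    """points: (n, 2) ordered centreline coordinates in pixels."""
    arc = np.sum(np.linalg.norm(np.diff(points, axis=0), axis=1))
    chord = np.linalg.norm(points[-1] - points[0])
    return float(arc / chord)

t = np.linspace(0, np.pi, 200)
wavy_vessel = np.stack([t * 100, 10 * np.sin(4 * t)], axis=1)   # sinuous synthetic path
print(round(tortuosity(wavy_vessel), 3))   # > 1; a perfectly straight vessel gives 1.0
```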
  9. Rahman H, Khan AR, Sadiq T, Farooqi AH, Khan IU, Lim WH
    Tomography, 2023 Dec 05;9(6):2158-2189.
    PMID: 38133073 DOI: 10.3390/tomography9060169
    Computed tomography (CT) is used in a wide range of medical imaging diagnoses. However, the reconstruction of CT images from raw projection data is inherently complex and is subject to artifacts and noise, which compromises image quality and accuracy. In order to address these challenges, deep learning developments have the potential to improve the reconstruction of computed tomography images. In this regard, our research aim is to determine the techniques that are used for 3D deep learning in CT reconstruction and to identify the training and validation datasets that are accessible. This research was performed on five databases. After a careful assessment of each record based on the objective and scope of the study, we selected 60 research articles for this review. This systematic literature review revealed that convolutional neural networks (CNNs), 3D convolutional neural networks (3D CNNs), and deep learning reconstruction (DLR) were the most suitable deep learning algorithms for CT reconstruction. Additionally, two major datasets appropriate for training and developing deep learning systems were identified: 2016 NIH-AAPM-Mayo and MSCT. These datasets are important resources for the creation and assessment of CT reconstruction models. According to the results, 3D deep learning may increase the effectiveness of CT image reconstruction, boost image quality, and lower radiation exposure. By using these deep learning approaches, CT image reconstruction may be made more precise and effective, improving patient outcomes, diagnostic accuracy, and healthcare system productivity.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
  10. Yazdani S, Yusof R, Karimian A, Riazi AH, Bennamoun M
    Comput Math Methods Med, 2015;2015:829893.
    PMID: 26089978 DOI: 10.1155/2015/829893
    Brain MRI segmentation is an important issue for discovering brain structure and diagnosing subtle anatomical changes in different brain diseases. However, due to several artifacts, brain tissue segmentation remains a challenging task. The aim of this paper is to improve the automatic segmentation of the brain into gray matter, white matter, and cerebrospinal fluid in magnetic resonance images (MRI). We propose an automatic hybrid image segmentation method that integrates a modified statistical expectation-maximization (EM) method and spatial information combined with a support vector machine (SVM). The combined method yields more accurate results than either of its individual techniques, as demonstrated through experiments carried out on both synthetic and real MRI. The results of the proposed technique are evaluated against manual segmentations and other methods based on real T1-weighted scans from the Internet Brain Segmentation Repository (IBSR) and simulated images from BrainWeb. The Kappa index is calculated to assess the performance of the proposed framework relative to the ground truth and expert segmentations. The results demonstrate that the proposed combined method performs satisfactorily on both simulated MRI and real brain datasets.
    Matched MeSH terms: Image Processing, Computer-Assisted/statistics & numerical data
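The entry above reports agreement with the Kappa index. As a small illustration, the sketch below computes Cohen's kappa between a synthetic reference labelling and a corrupted automatic labelling using scikit-learn; the label maps are placeholders, not the study's data.
```python
# Cohen's kappa between an "automatic" and a "reference" tissue labelling.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
reference = rng.integers(0, 3, size=10000)          # hypothetical 0=CSF, 1=GM, 2=WM labels
automatic = reference.copy()
flip = rng.random(reference.shape) < 0.1             # corrupt 10% of the voxels
automatic[flip] = rng.integers(0, 3, size=flip.sum())

print(cohen_kappa_score(reference, automatic))       # close to 0.9, "almost perfect"
```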
  11. Soleymani A, Nordin MJ, Sundararajan E
    ScientificWorldJournal, 2014;2014:536930.
    PMID: 25258724 DOI: 10.1155/2014/536930
    The rapid evolution of imaging and communication technologies has transformed images into a widespread data type. Different types of data, such as personal medical information, official correspondence, or governmental and military documents, are saved and transmitted in the form of images over public networks. Hence, a fast and secure cryptosystem is needed for high-resolution images. In this paper, a novel encryption scheme is presented for securing images based on the Arnold cat and Henon chaotic maps. The scheme uses the Arnold cat map for bit- and pixel-level permutations on plain and secret images, while the Henon map creates secret images and specific parameters for the permutations. Both the encryption and decryption processes are explained, formulated, and graphically presented. The results of security analysis of five different images demonstrate the strength of the proposed cryptosystem against statistical, brute-force, and differential attacks. The evaluated running times for both the encryption and decryption processes guarantee that the cryptosystem can work effectively in real-time applications.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*; Image Processing, Computer-Assisted/standards
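The scheme above permutes pixels with the Arnold cat map. The sketch below shows a minimal, parameter-free version of that permutation on a square test image; the key-dependent parameters and the Henon-map key stream used in the paper are omitted.
```python
# Classic Arnold cat map pixel permutation on an N x N image.
import numpy as np

def arnold_cat(img: np.ndarray, iterations: int = 1) -> np.ndarray:
    """Scramble a square image with (x, y) -> ((x + y) mod N, (x + 2y) mod N)."""
    n = img.shape[0]
    assert img.shape[0] == img.shape[1], "the cat map needs a square image"
    x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    out = img.copy()
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        scrambled[(x + y) % n, (x + 2 * y) % n] = out   # move every pixel to its new position
        out = scrambled
    return out

img = (np.arange(64 * 64) % 256).astype(np.uint8).reshape(64, 64)
scrambled = arnold_cat(img, iterations=5)
print((scrambled != img).mean())   # most pixels move; the map is periodic, so applying
                                   # enough further iterations restores the original image
```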
  12. Tai MW, Chong ZF, Asif MK, Rahmat RA, Nambiar P
    Leg Med (Tokyo), 2016 Sep;22:42-8.
    PMID: 27591538 DOI: 10.1016/j.legalmed.2016.07.009
    This study compared the suitability and precision of xerographic and computer-assisted methods for bite mark investigations. Eleven subjects were asked to bite on their forearm, and the bite marks were photographically recorded. Alginate impressions of the subjects' dentition were taken, and casts were made using dental stone. The overlays generated by the xerographic method were obtained by photocopying the subjects' casts and transferring the incisal edge outlines onto a transparent sheet. The bite mark images were imported into Adobe Photoshop® software and printed to life-size. Bite mark analyses using xerographically generated overlays were done by manually comparing an overlay with the corresponding printed bite mark images. In the computer-assisted method, the subjects' casts were scanned into Adobe Photoshop®, and analyses using computer-assisted overlay generation were done by digitally matching an overlay with the corresponding bite mark images in Adobe Photoshop®. A further comparison method superimposed the cast images onto the corresponding bite mark images using Adobe Photoshop® CS6 and GIF-Animator©. During analysis, each precision-determining criterion was given a score in the range 0-3, with higher scores indicating better matching. The Kruskal-Wallis H test showed a significant difference between the three sets of data (H=18.761, p<0.05). In conclusion, bite mark analysis using the computer-assisted animated-superimposition method was the most accurate, followed by computer-assisted overlay generation and lastly the xerographic method. The superior precision contributed by the digital methods is discernible despite human skin being a poor recording medium for bite marks.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
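The comparison above relies on the Kruskal-Wallis H test. The sketch below runs the same test with SciPy on three invented vectors of 0-3 matching scores, purely to show the call; the numbers are not the study's data.
```python
# Kruskal-Wallis H test across three independent groups of matching scores.
from scipy import stats

# Hypothetical 0-3 precision scores for eleven cases per method.
xerographic = [1, 1, 2, 1, 0, 2, 1, 1, 2, 1, 1]
overlay_generation = [2, 2, 2, 1, 2, 3, 2, 2, 1, 2, 2]
animated_superimposition = [3, 2, 3, 3, 2, 3, 3, 2, 3, 3, 3]

h, p = stats.kruskal(xerographic, overlay_generation, animated_superimposition)
print(f"H = {h:.3f}, p = {p:.4f}")   # p < 0.05 would indicate a difference between methods
```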
  13. Jain S, Seal A, Ojha A, Yazidi A, Bures J, Tacheci I, et al.
    Comput Biol Med, 2021 10;137:104789.
    PMID: 34455302 DOI: 10.1016/j.compbiomed.2021.104789
    Wireless capsule endoscopy (WCE) is one of the most efficient methods for the examination of gastrointestinal tracts. Computer-aided intelligent diagnostic tools alleviate the challenges faced during manual inspection of long WCE videos. Several approaches have been proposed in the literature for the automatic detection and localization of anomalies in WCE images. Some of them focus on specific anomalies such as bleeding, polyps, or lesions. However, relatively few generic methods have been proposed to detect all of these common anomalies simultaneously. In this paper, a deep convolutional neural network (CNN) based model, 'WCENet', is proposed for anomaly detection and localization in WCE images. The model works in two phases. In the first phase, a simple and efficient attention-based CNN classifies an image into one of four categories: polyp, vascular, inflammatory, or normal. If the image is classified into one of the abnormal categories, it is processed in the second phase for anomaly localization. A fusion of Grad-CAM++ and a custom SegNet is used for anomalous region segmentation in the abnormal image. The WCENet classifier attains an accuracy of 98% and an area under the receiver operating characteristic curve of 99%. The WCENet segmentation model obtains a frequency-weighted intersection over union of 81% and an average Dice score of 56% on the KID dataset. WCENet outperforms nine different state-of-the-art conventional machine learning and deep learning models on the KID dataset. The proposed model demonstrates potential for clinical applications.
    Matched MeSH terms: Image Processing, Computer-Assisted
  14. Liu H, Huang J, Li Q, Guan X, Tseng M
    Artif Intell Med, 2024 Feb;148:102776.
    PMID: 38325925 DOI: 10.1016/j.artmed.2024.102776
    This study proposes a deep convolutional neural network for the automatic segmentation of glioblastoma brain tumors, aiming at replacing the manual segmentation method that is both time-consuming and labor-intensive. Finely segmenting sub-regions from multi-sequence magnetic resonance images poses many challenges for automatic segmentation because of the complexity and variability of glioblastomas, such as the loss of boundary information, misclassified regions, and variable subregion size. To overcome these challenges, this study introduces a spatial pyramid module and an attention mechanism into the automatic segmentation algorithm, which focuses on multi-scale spatial details and context information. The proposed method has been tested on the public benchmark BraTS 2018, BraTS 2019, BraTS 2020, and BraTS 2021 datasets. The Dice scores on the enhanced tumor, whole tumor, and tumor core were 79.90%, 89.63%, and 85.89% on BraTS 2018; 77.14%, 89.58%, and 83.33% on BraTS 2019; 77.80%, 90.04%, and 83.18% on BraTS 2020; and 83.48%, 90.70%, and 88.94% on BraTS 2021, offering performance on par with that of state-of-the-art methods with only 1.90 M parameters. In addition, our approach significantly reduces the requirements for experimental equipment, and the average time taken to segment one case is only 1.48 s; these two benefits make the proposed network highly competitive for clinical practice.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
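The Dice scores quoted above measure the overlap between predicted and reference tumour masks. A minimal sketch of that metric on synthetic 3D masks follows; it is the standard definition, not the paper's evaluation code.
```python
# Dice score between a predicted binary mask and a ground-truth mask.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return float((2 * inter + eps) / (pred.sum() + truth.sum() + eps))

rng = np.random.default_rng(0)
truth = rng.random((128, 128, 128)) > 0.7                      # synthetic reference mask
pred = np.logical_xor(truth, rng.random(truth.shape) > 0.95)   # reference with ~5% voxels flipped
print(round(dice(pred, truth), 4))
```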
  15. Noor NM, Rijal OM, Yunus A, Abu-Bakar SA
    Comput Med Imaging Graph, 2010 Mar;34(2):160-6.
    PMID: 19758785 DOI: 10.1016/j.compmedimag.2009.08.005
    This paper presents a statistical method for the detection of lobar pneumonia using digitized chest X-ray films. Each region of interest was represented by a vector of wavelet texture measures, which was then multiplied by the orthogonal matrix Q(2). The first two elements of the transformed vectors were shown to have a bivariate normal distribution. Misclassification probabilities were estimated using probability ellipsoids and discriminant functions. The results of this study recommend detecting pneumonia by constructing probability ellipsoids or discriminant functions using the maximum energy and maximum column sum energy texture measures, for which the misclassification probabilities were less than 0.15.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
  16. Oyelade ON, Ezugwu AE, Almutairi MS, Saha AK, Abualigah L, Chiroma H
    Sci Rep, 2022 Apr 13;12(1):6166.
    PMID: 35418566 DOI: 10.1038/s41598-022-09929-9
    Deep learning (DL) models are becoming pervasive and applicable to computer vision, image processing, and synthesis problems. The performance of these models is often improved through architectural configuration, tweaks, the use of enormous training data, and skillful selection of hyperparameters. The application of deep learning models to medical image processing has yielded interesting performance, with models capable of correctly detecting abnormalities in medical digital images and even surpassing human physicians. However, advancing research in this domain largely relies on the availability of training datasets. These datasets are sometimes not publicly accessible, insufficient for training, and may also be characterized by class imbalance among samples. As a result, inadequate training samples and difficulty in accessing new datasets for training deep learning models limit performance and research into new domains. Hence, generative adversarial networks (GANs) have been proposed to bridge this gap by synthesizing data similar to real sample images. However, we observed that benchmark datasets with regions of interest (ROIs) for characterizing abnormalities in breast cancer using digital mammography do not contain sufficient data with a fair distribution of all cases of abnormalities. For instance, architectural distortion and breast asymmetry are sparsely distributed across most publicly available digital mammogram datasets. This paper proposes a GAN model, named ROImammoGAN, which synthesizes ROI-based digital mammograms. Our approach involves the design of a GAN model consisting of both a generator and a discriminator to learn a hierarchy of representations for abnormalities in digital mammograms. Attention is given to architectural distortion, asymmetry, mass, and microcalcification abnormalities so that training distinctively learns the features of each abnormality and generates sufficient images for each category. The proposed GAN model was applied to the MIAS datasets, and the performance evaluation yielded a competitive accuracy for the synthesized samples. In addition, the quality of the generated images was evaluated using PSNR, SSIM, FSIM, BRISQUE, PQUE, NIQUE, FID, and geometry scores. The results showed that ROImammoGAN performed competitively with state-of-the-art GANs. The outcome of this study is a model for augmenting CNN models with ROI-centric image samples for the characterization of abnormalities in breast images.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
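Two of the image-quality metrics listed above, PSNR and SSIM, can be computed with scikit-image as in the sketch below; the images are synthetic placeholders, and the remaining metrics (FID, BRISQUE, and so on) require dedicated packages and are not shown.
```python
# PSNR and SSIM between a reference image and a degraded copy.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
reference = rng.random((256, 256))                                   # synthetic reference image
degraded = np.clip(reference + rng.normal(scale=0.05, size=reference.shape), 0, 1)

print("PSNR:", peak_signal_noise_ratio(reference, degraded, data_range=1.0))
print("SSIM:", structural_similarity(reference, degraded, data_range=1.0))
```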
  17. Al-Masni MA, Lee S, Al-Shamiri AK, Gho SM, Choi YH, Kim DH
    Comput Biol Med, 2023 Feb;153:106553.
    PMID: 36641933 DOI: 10.1016/j.compbiomed.2023.106553
    Patient movement during a Magnetic Resonance Imaging (MRI) scan can cause severe degradation of image quality. In Susceptibility Weighted Imaging (SWI), several echoes are typically measured during a single repetition period, where the earliest echoes show less contrast between various tissues, while the later echoes are more susceptible to artifacts and signal dropout. In this paper, we propose a knowledge interaction paradigm that jointly learns feature details from multiple distorted echoes by sharing their knowledge with unified training parameters, thereby simultaneously reducing the motion artifacts of all echoes. This is accomplished by developing a new scheme that boosts a Single Encoder with Multiple Decoders (SEMD), which ensures that the generated features are not only fused but also learned together. We call the proposed method Knowledge Interaction Learning between Multi-Echo data (KIL-ME-based SEMD). The proposed KIL-ME-based SEMD allows information to be shared and the correlations between the multiple echoes to be understood. The main purpose of this work is to correct the motion artifacts and maintain the image quality and structural details of all motion-corrupted echoes towards generating high-resolution, susceptibility-enhanced contrast images, i.e., SWI, using a weighted average of multi-echo motion-corrected acquisitions. We also compare various potential strategies that might be used to address the problem of reducing artifacts in multi-echo data. The experimental results demonstrate the feasibility and effectiveness of the proposed method, reducing the severity of motion artifacts and improving the overall clinical image quality of all echoes and their associated SWI maps. Significant improvement in image quality is observed using both motion-simulated test data and actual volunteer data with various motion severity strengths. Eventually, by enhancing the overall image quality, the proposed network can increase the effectiveness of physicians' capability to evaluate and correctly diagnose brain MR images.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
  18. Chen Z, Rajamanickam L, Cao J, Zhao A, Hu X
    PLoS One, 2021;16(12):e0260758.
    PMID: 34879097 DOI: 10.1371/journal.pone.0260758
    This study aims to solve the overfitting problem caused by insufficient labeled images in the automatic image annotation field. We propose a transfer learning model called CNN-2L that incorporates the label localization strategy described in this study. The model consists of an InceptionV3 network pretrained on the ImageNet dataset and a label localization algorithm. First, the pretrained InceptionV3 network extracts features from the target dataset that are used to train a specific classifier and fine-tune the entire network to obtain an optimal model. Then, the obtained model is used to derive the probabilities of the predicted labels. For this purpose, we introduce a squeeze and excitation (SE) module into the network architecture that augments the useful feature information, inhibits useless feature information, and conducts feature reweighting. Next, we perform label localization to obtain the label probabilities and determine the final label set for each image. During this process, the number of labels must be determined. The optimal K value is obtained experimentally and used to determine the number of predicted labels, thereby solving the empty label set problem that occurs when the predicted label values of images are below a fixed threshold. Experiments on the Corel5k multilabel image dataset verify that CNN-2L improves the labeling precision by 18% and 15% compared with the traditional multiple-Bernoulli relevance model (MBRM) and joint equal contribution (JEC) algorithms, respectively, and it improves the recall by 6% compared with JEC. Additionally, it improves the precision by 20% and 11% compared with the deep learning methods Weight-KNN and adaptive hypergraph learning (AHL), respectively. Although CNN-2L fails to improve the recall compared with the semantic extension model (SEM), it improves the comprehensive index of the F1 value by 1%. The experimental results reveal that the proposed transfer learning model based on a label localization strategy is effective for automatic image annotation and substantially boosts the multilabel image annotation performance.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
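The entry above inserts a squeeze and excitation (SE) module to reweight channel features. The PyTorch sketch below shows the generic SE block design (squeeze by global average pooling, excitation by a small bottleneck MLP, then channel-wise scaling); the reduction ratio and placement are assumptions, not the paper's exact configuration.
```python
# Generic squeeze-and-excitation (SE) block for channel reweighting.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = x.mean(dim=(2, 3))           # squeeze: global average pool -> (B, C)
        w = self.fc(w).view(b, c, 1, 1)  # excitation: per-channel weights in (0, 1)
        return x * w                     # reweight the feature map channel-wise

features = torch.randn(2, 64, 32, 32)    # e.g. an intermediate backbone feature map
print(SEBlock(64)(features).shape)       # torch.Size([2, 64, 32, 32])
```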
  19. Zourmand A, Mirhassani SM, Ting HN, Bux SI, Ng KH, Bilgen M, et al.
    Biomed Eng Online, 2014;13:103.
    PMID: 25060583 DOI: 10.1186/1475-925X-13-103
    The phonetic properties of six Malay vowels are investigated using magnetic resonance imaging (MRI) to visualize the vocal tract in order to obtain dynamic articulatory parameters during speech production. To resolve image blurring due to tongue movement during the scanning process, a method based on active contour extraction is used to track tongue contours. The proposed method efficiently tracks tongue contours despite the partial blurring of MRI images. Consequently, the articulatory parameters are effectively measured as the tongue movement is observed, and the specific shape and position of the tongue are determined for all six uttered Malay vowels. Speech rehabilitation procedures demand some kind of visually perceivable prototype of speech articulation. To investigate the validity of the measured articulatory parameters based on the acoustic theory of speech production, an acoustic analysis of the vowels uttered by the subjects was performed. As the acoustic and articulatory parameters of the uttered speech were examined, a correlation between formant frequencies and articulatory parameters was observed. The experiments reported a positive correlation between the constriction location of the tongue body and the first formant frequency, as well as a negative correlation between the constriction location of the tongue tip and the second formant frequency. The results demonstrate that the proposed method is an effective tool for the dynamic study of speech production.
    Matched MeSH terms: Image Processing, Computer-Assisted
  20. Al-Ameen Z, Sulong G
    Scanning, 2015 Mar-Apr;37(2):116-25.
    PMID: 25663630 DOI: 10.1002/sca.21187
    Contrast is a distinctive visual attribute that indicates the quality of an image. Computed Tomography (CT) images are often characterized as poor quality due to their low-contrast nature. Although many innovative ideas have been proposed to overcome this problem, the outcomes, especially in terms of accuracy, visual quality and speed, are falling short and there remains considerable room for improvement. Therefore, an improved version of the single-scale Retinex algorithm is proposed to enhance the contrast while preserving the standard brightness and natural appearance, with low implementation time and without accentuating the noise for CT images. The novelties of the proposed algorithm consist of tuning the standard single-scale Retinex, adding a normalized-ameliorated Sigmoid function and adapting some parameters to improve its enhancement ability. The proposed algorithm is tested with synthetically and naturally degraded low-contrast CT images, and its performance is also verified with contemporary enhancement techniques using two prevalent quality evaluation metrics: SSIM and UIQI. The results obtained from intensive experiments exhibited significant improvement not only in enhancing the contrast but also in increasing the visual quality of the processed images. Finally, the proposed low-complexity algorithm provided satisfactory results with no apparent errors and outperformed all the comparative methods.
    Matched MeSH terms: Image Processing, Computer-Assisted
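The enhancement above builds on single-scale Retinex (SSR) with a sigmoid-based adjustment. The sketch below is a minimal, generic SSR with a normalised sigmoid remapping applied to a synthetic low-contrast slice; the Gaussian scale and sigmoid gain are arbitrary choices, not the paper's tuned parameters.
```python
# Minimal single-scale Retinex (SSR) with a sigmoid remapping of the result.
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(img: np.ndarray, sigma: float = 30.0, gain: float = 5.0) -> np.ndarray:
    """img: 2-D array scaled to [0, 1]; returns a contrast-enhanced image in (0, 1)."""
    img = img.astype(np.float64) + 1e-6
    retinex = np.log(img) - np.log(gaussian_filter(img, sigma) + 1e-6)  # SSR core
    z = (retinex - retinex.mean()) / (retinex.std() + 1e-6)             # normalise
    return 1.0 / (1.0 + np.exp(-gain * z / 2))                          # sigmoid remap

rng = np.random.default_rng(0)
low_contrast_ct = 0.4 + 0.05 * rng.random((256, 256))       # synthetic flat, low-contrast slice
enhanced = single_scale_retinex(low_contrast_ct)
print(low_contrast_ct.std(), enhanced.std())                 # contrast (std) increases
```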