Displaying all 4 publications

  1. Dai L, Md Johar MG, Alkawaz MH
    Sci Rep, 2024 Nov 21;14(1):28885.
    PMID: 39572780 DOI: 10.1038/s41598-024-80441-y
    This work investigates the diagnostic value of a deep learning-based magnetic resonance imaging (MRI) image segmentation (IS) technique for shoulder joint injuries (SJIs) in swimmers. A novel multi-scale feature fusion network (MSFFN) is developed by optimizing and integrating the AlexNet and U-Net algorithms for the segmentation of MRI images of the shoulder joint. The model is evaluated using metrics such as the Dice similarity coefficient (DSC), positive predictive value (PPV), and sensitivity (SE). A cohort of 52 swimmers with SJIs from Guangzhou Hospital serves as the subjects of this study, in which the accuracy of the developed shoulder joint MRI IS model in diagnosing swimmers' SJIs is analyzed. The results reveal that the DSC for segmenting joint bones in MRI images with the MSFFN algorithm is 92.65%, with a PPV of 95.83% and an SE of 96.30%. Similarly, the DSC for segmenting humerus bones in MRI images is 92.93%, with a PPV of 95.56% and an SE of 92.78%. The MRI IS algorithm achieves an accuracy of 86.54% in diagnosing the type of SJI in swimmers, surpassing the conventional diagnostic accuracy of 71.15%. The agreement between the diagnoses of complete tear, superior surface tear, inferior surface tear, and intratendinous tear of SJIs in swimmers and the arthroscopic diagnoses yields a Kappa value of 0.785 and an accuracy of 87.89%. These findings underscore the significant diagnostic value and potential of the MRI IS technique based on the MSFFN algorithm in diagnosing SJIs in swimmers.
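    The paper itself does not publish code; as a rough illustration of the evaluation metrics it reports (DSC, PPV, and SE), the sketch below computes them from binary segmentation masks with NumPy. The function name and the epsilon smoothing term are assumptions for this sketch, not part of the paper.

      import numpy as np

      def segmentation_metrics(pred, truth, eps=1e-8):
          """DSC, PPV, and sensitivity for binary segmentation masks of equal shape."""
          pred = np.asarray(pred).astype(bool)
          truth = np.asarray(truth).astype(bool)
          tp = np.logical_and(pred, truth).sum()      # true positives
          fp = np.logical_and(pred, ~truth).sum()     # false positives
          fn = np.logical_and(~pred, truth).sum()     # false negatives
          dsc = 2 * tp / (2 * tp + fp + fn + eps)     # Dice similarity coefficient
          ppv = tp / (tp + fp + eps)                  # positive predictive value
          se = tp / (tp + fn + eps)                   # sensitivity (recall)
          return dsc, ppv, se

      # Toy example with 2 x 3 masks
      pred = np.array([[1, 1, 0], [0, 1, 0]])
      truth = np.array([[1, 0, 0], [0, 1, 1]])
      print(segmentation_metrics(pred, truth))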
  2. Alkawaz MH, Basori AH, Mohamad D, Mohamed F
    ScientificWorldJournal, 2014;2014:367013.
    PMID: 25136663 DOI: 10.1155/2014/367013
    Generating extreme appearances such as sweating when scared, tears when crying, and blushing (from anger or happiness) is the key issue in achieving high-quality facial animation. The effects of sweat, tears, and skin color are integrated into a single animation model to create realistic facial expressions for a 3D avatar. The physical properties of muscles, emotions, and fluid properties with sweating and tear initiators are incorporated. The action units (AUs) of the Facial Action Coding System are merged with autonomous AUs to create expressions including sadness, anger with blushing, happiness with blushing, and fear. Fluid effects such as sweat and tears are simulated using the particle system and smoothed-particle hydrodynamics (SPH) methods, which are combined with the facial animation technique to produce complex facial expressions. The effect of oxygenation on facial skin color appearance is measured using a pulse oximeter system and a 3D skin analyzer. The results show that virtual human facial expression is enhanced by mimicking actual sweating and tearing for all extreme expressions. The proposed method contributes to the development of the facial animation and game industries as well as computer graphics.
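    The abstract describes sweat and tear effects simulated with particle systems and SPH coupled to facial animation; the minimal sketch below shows only the basic particle emission and Euler integration step that such droplet effects build on. All names and parameters are hypothetical, and the SPH forces, face-mesh coupling, and FACS action units from the paper are omitted.

      import numpy as np

      GRAVITY = np.array([0.0, -9.81, 0.0])

      class DropletSystem:
          """Hypothetical sweat/tear droplet emitter; explicit Euler integration only."""
          def __init__(self, emit_pos, n=100):
              self.pos = np.tile(np.asarray(emit_pos, float), (n, 1))
              self.vel = np.random.uniform(-0.05, 0.05, (n, 3))   # small random spread
              self.life = np.random.uniform(1.0, 2.0, n)          # seconds until removal

          def step(self, dt=1.0 / 60.0):
              self.vel += GRAVITY * dt      # gravity accelerates droplets downward
              self.pos += self.vel * dt     # explicit Euler position update
              self.life -= dt
              alive = self.life > 0         # discard expired particles
              self.pos, self.vel, self.life = self.pos[alive], self.vel[alive], self.life[alive]

      # Emit tears just below a (made-up) eye landmark and advance one frame
      tears = DropletSystem(emit_pos=[0.03, 1.62, 0.08], n=50)
      tears.step()
      print(tears.pos.shape)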
  3. Al-Dabbagh MM, Salim N, Rehman A, Alkawaz MH, Saba T, Al-Rodhaan M, et al.
    ScientificWorldJournal, 2014;2014:612787.
    PMID: 25309952 DOI: 10.1155/2014/612787
    This paper presents a novel feature-mining approach for documents that cannot be mined via optical character recognition (OCR). By identifying the close relationship between the text and graphical components, the proposed technique extracts the Start, End, and Exact values for each bar. Furthermore, word 2-gram and Euclidean distance methods are used to accurately detect and determine plagiarism in bar charts.
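    The abstract names word 2-grams and Euclidean distance but not how they are combined; the sketch below is one plausible reading, scoring textual reuse by Jaccard overlap of word bigrams and comparing extracted bar values by Euclidean distance. All function names and the Jaccard scoring are assumptions for this sketch.

      import math

      def word_bigrams(text):
          """Consecutive word pairs (word 2-grams) of a caption or label."""
          words = text.lower().split()
          return {(words[i], words[i + 1]) for i in range(len(words) - 1)}

      def bigram_similarity(a, b):
          """Jaccard overlap of word 2-grams; one plausible textual-reuse score."""
          ga, gb = word_bigrams(a), word_bigrams(b)
          return len(ga & gb) / len(ga | gb) if ga | gb else 0.0

      def bar_distance(bars_a, bars_b):
          """Euclidean distance between the extracted bar values of two charts."""
          return math.sqrt(sum((x - y) ** 2 for x, y in zip(bars_a, bars_b)))

      # Near-identical captions and bar heights suggest possible reuse
      print(bigram_similarity("annual sales by region 2013", "annual sales by region 2014"))
      print(bar_distance([12.0, 8.5, 3.2], [12.0, 8.4, 3.2]))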
  4. Waheed SR, Alkawaz MH, Rehman A, Almazyad AS, Saba T
    Microsc Res Tech, 2016 May;79(5):431-7.
    PMID: 26918523 DOI: 10.1002/jemt.22646
    The image fusion process consolidates data and information from several images of the same scene into a single image. Each of the source images may represent only a partial view of the scene and contains both "pertinent" and "immaterial" information. In this study, a new image fusion method is proposed that uses the Discrete Cosine Transform (DCT) to combine the source images into a single compact image that depicts the scene more accurately than any of the individual source images. In addition, the fused image achieves the best possible quality without distorted appearance or loss of data. The DCT algorithm is considered efficient for image fusion. The proposed scheme is performed in five steps: (1) the RGB colour image (input image) is split into its three channels R, G, and B for each source image; (2) the DCT algorithm is applied to each channel (R, G, and B); (3) the variance values are computed for the corresponding 8 × 8 blocks of each channel; (4) each block of the R channel of one source image is compared with the corresponding block of the other source image based on the variance value, and the block with the maximum variance is selected as the block in the new image, with this process repeated for all channels of the source images; and (5) the inverse discrete cosine transform is applied to each fused channel to convert coefficient values back to pixel values, and all the channels are then combined to generate the fused image. The proposed technique can potentially avoid unwanted side effects, such as blurring or blocking artifacts, which reduce the quality of the fused image. The proposed approach is evaluated using three measures: the average of Q(abf), the standard deviation, and the peak signal-to-noise ratio. The experimental results of the proposed technique are good compared with older techniques. Microsc. Res. Tech. 79:431-437, 2016. © 2016 Wiley Periodicals, Inc.
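    The five steps above map naturally to code; the sketch below is a minimal re-implementation of block-wise DCT fusion under stated assumptions (the variance is taken over the DCT coefficients of each 8 × 8 block, and edge pixels that do not fill a complete block are left at zero), not the authors' implementation.

      import numpy as np
      from scipy.fft import dctn, idctn

      def fuse_channel(a, b, bs=8):
          """Fuse two single-channel images block-wise in the DCT domain.

          For each corresponding bs x bs block, keep the block whose DCT
          coefficients have the larger variance, then inverse-transform
          back to pixel values. Edge regions smaller than a block are skipped.
          """
          h, w = a.shape
          out = np.zeros((h, w), dtype=float)
          for y in range(0, h - h % bs, bs):
              for x in range(0, w - w % bs, bs):
                  da = dctn(a[y:y+bs, x:x+bs].astype(float), norm="ortho")
                  db = dctn(b[y:y+bs, x:x+bs].astype(float), norm="ortho")
                  chosen = da if da.var() > db.var() else db   # higher variance wins
                  out[y:y+bs, x:x+bs] = idctn(chosen, norm="ortho")
          return out

      def fuse_rgb(img_a, img_b):
          """Apply the block-wise DCT fusion to each of the R, G, and B channels."""
          fused = np.stack([fuse_channel(img_a[..., c], img_b[..., c]) for c in range(3)], axis=-1)
          return np.clip(fused, 0, 255).astype(np.uint8)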