Displaying all 17 publications

  1. Idbeaa T, Abdul Samad S, Husain H
    PLoS One, 2016;11(3):e0150732.
    PMID: 26963093 DOI: 10.1371/journal.pone.0150732
    This paper presents a novel secure and robust steganographic technique in the compressed video domain, namely embedding-based byte differencing (EBBD). Unlike most current video steganographic techniques, which consider only the intra frames for data embedding, the proposed EBBD technique aims to hide information in both intra and inter frames. The information is embedded into a compressed video by simultaneously manipulating the quantized AC coefficients (AC-QTCs) of the luminance components of the frames during the MPEG-2 encoding process. Later, during the decoding process, the embedded information can be detected and extracted completely. Furthermore, EBBD addresses two security concepts: data encryption and data concealment. During the embedding process, the secret data is encrypted using the simplified data encryption standard (S-DES) algorithm to provide better security to the implemented system. The security of the method lies in selecting candidate AC-QTCs within each non-overlapping 8 × 8 sub-block using a pseudo-random key. The basic performance of this steganographic technique was verified through experiments on various existing MPEG-2 encoded videos over a wide range of embedded payload rates. Overall, the experimental results verify the excellent performance of the proposed EBBD, with a better trade-off between imperceptibility and payload than previous techniques, while ensuring a minimal bitrate increase and negligible degradation of PSNR values.
    Matched MeSH terms: Data Compression/methods*
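
    The abstract above does not spell out the byte-differencing rule itself, so the following is only a loose illustrative sketch of the surrounding mechanics, assuming a parity-based embedding: candidate quantized AC coefficients in a block are chosen with a keyed pseudo-random generator and each carries one bit of the (already encrypted) secret data.

```python
# Illustrative sketch only (not the authors' EBBD rule): select candidate
# quantized AC coefficients (AC-QTCs) of an 8x8 block with a keyed PRNG and
# embed one secret bit per selected coefficient by forcing its parity.
# The secret bits would already be S-DES-encrypted; that step is omitted here.
import random

def embed_bits_in_block(ac_qtcs, bits, key, n_candidates=4):
    """ac_qtcs: list of quantized AC coefficients of one 8x8 luminance block."""
    rng = random.Random(key)                        # pseudo-random key drives selection
    positions = rng.sample(range(len(ac_qtcs)), n_candidates)
    out = list(ac_qtcs)
    for pos, bit in zip(positions, bits):
        c = out[pos]
        if c % 2 != bit:                            # force the coefficient's parity to the bit
            out[pos] = c + 1 if c >= 0 else c - 1
    return out

def extract_bits_from_block(ac_qtcs, key, n_candidates=4):
    rng = random.Random(key)                        # same key reproduces the same positions
    positions = rng.sample(range(len(ac_qtcs)), n_candidates)
    return [ac_qtcs[pos] % 2 for pos in positions]
```
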
  2. Rahmat RF, Andreas TSM, Fahmi F, Pasha MF, Alzahrani MY, Budiarto R
    J Healthc Eng, 2019;2019:5810540.
    PMID: 31316743 DOI: 10.1155/2019/5810540
    Compression, in general, aims to reduce file size, with or without decreasing the data quality of the original file. Digital Imaging and Communication in Medicine (DICOM) is a medical imaging file standard used to store multiple kinds of information such as patient data, imaging procedures, and the image itself. With the rising usage of medical imaging in clinical diagnosis, there is a need for a fast and secure method to share a large number of medical images between healthcare practitioners, and compression has always been an option. This work analyses the Huffman coding compression method, one of the lossless compression techniques, as an alternative method to compress a DICOM file in open PACS settings. The idea of Huffman coding is to assign codewords with fewer bits to the symbols that occur more frequently in the byte frequency distribution. Experiments using different types of DICOM images are conducted, and an analysis of performance in terms of compression ratio, compression/decompression time, and security is provided. The experimental results show that the Huffman coding technique can compress a DICOM file at a ratio of up to 1:3.7010, corresponding to space savings of up to 72.98%.
    Matched MeSH terms: Data Compression/methods*
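
    A minimal sketch of the Huffman coding idea applied in the entry above: codeword lengths are derived from byte frequencies, so the most frequent bytes receive the shortest codes. The sample input is hypothetical and stands in for the byte stream of a DICOM file.

```python
# Build a Huffman code from byte frequencies: more frequent bytes get shorter
# codewords, which is the principle the paper applies to DICOM byte streams.
import heapq
from collections import Counter

def huffman_code(data: bytes) -> dict:
    freq = Counter(data)
    # Heap items: (frequency, tie-breaker, {symbol: codeword-so-far}).
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}     # left branch gets a leading 0
        merged.update({s: "1" + w for s, w in c2.items()})  # right branch a leading 1
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2] if heap else {}

sample = b"ABRACADABRA"                       # hypothetical stand-in for DICOM bytes
code = huffman_code(sample)
encoded_bits = sum(len(code[b]) for b in sample)
print(code, encoded_bits, "bits instead of", 8 * len(sample))
```
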
  3. Arif AS, Mansor S, Logeswaran R, Karim HA
    J Med Syst, 2015 Feb;39(2):5.
    PMID: 25628161 DOI: 10.1007/s10916-015-0200-z
    The massive number of medical images produced by fluoroscopic and other conventional diagnostic imaging devices demands a considerable amount of storage space. This paper proposes an effective method for the lossless compression of fluoroscopic images. The main contribution of this paper is the extraction of the regions of interest (ROI) in fluoroscopic images using appropriate shapes. The extracted ROI is then effectively compressed using customized correlation and a combination of run-length and Huffman coding to increase the compression ratio. The experimental results show that the proposed method improves the compression ratio by 400% compared with traditional methods.
    Matched MeSH terms: Data Compression/methods*
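
    One of the two coding stages named above is run-length coding; a minimal sketch follows. The uniform background outside the extracted ROI collapses into a few (value, count) pairs, which is what makes the combination effective; the ROI extraction, customized correlation, and Huffman stages are not shown.

```python
# Run-length encoding/decoding sketch: runs of identical pixel values, such as
# the background outside the ROI of a fluoroscopic image, become (value, count) pairs.
def rle_encode(pixels):
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1                  # extend the current run
        else:
            runs.append([p, 1])               # start a new run
    return [(value, count) for value, count in runs]

def rle_decode(runs):
    out = []
    for value, count in runs:
        out.extend([value] * count)
    return out

row = [0, 0, 0, 0, 12, 13, 13, 0, 0]          # hypothetical image row
assert rle_decode(rle_encode(row)) == row
```
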
  4. Siddiqui MF, Reza AW, Kanesan J, Ramiah H
    ScientificWorldJournal, 2014;2014:620868.
    PMID: 25133249 DOI: 10.1155/2014/620868
    There is wide interest in finding a low-power, area-efficient hardware design of the discrete cosine transform (DCT) algorithm. This work proposes a novel Common Subexpression Elimination (CSE)-based pipelined architecture for the DCT, aimed at reducing the cost metrics of power and area while maintaining high speed and accuracy in DCT applications. The proposed design combines the techniques of Canonical Signed Digit (CSD) representation and CSE to implement a multiplier-less method for the fixed-constant multiplication of DCT coefficients. Furthermore, symmetry in the DCT coefficient matrix is exploited with CSE to further decrease the number of arithmetic operations. This architecture needs a single-port memory to feed the inputs instead of a multiport memory, which reduces hardware cost and area. From the analysis of experimental results and performance comparisons, it is observed that the proposed scheme uses minimal logic, requiring a mere 340 slices and 22 adders. Moreover, this design meets the real-time constraints of different video/image coders and peak signal-to-noise ratio (PSNR) requirements. Furthermore, the proposed technique offers significant advantages over recent well-known methods, while preserving accuracy, in terms of power reduction, silicon area usage, and maximum operating frequency, by 41%, 15%, and 15%, respectively.
    Matched MeSH terms: Data Compression/methods*
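
    A small sketch of the Canonical Signed Digit idea mentioned above, for positive integer constants: the constant is rewritten with digits in {-1, 0, +1} so that a fixed-constant multiplication reduces to shifts, additions, and subtractions. The sharing of common subexpressions (CSE) across the DCT coefficient matrix is not shown.

```python
# Canonical Signed Digit (CSD) sketch: rewrite a positive integer constant with
# digits in {-1, 0, +1} (no two adjacent non-zeros), then multiply by it using
# only shifts, adds and subtracts, with no hardware multiplier required.
def to_csd(n: int):
    digits = []                      # little-endian: one signed digit per bit position
    while n != 0:
        if n & 1:
            d = 2 - (n & 3)          # n % 4 == 1 -> +1, n % 4 == 3 -> -1
            digits.append(d)
            n -= d
        else:
            digits.append(0)
        n >>= 1
    return digits

def csd_multiply(x: int, csd_digits):
    # Shift-and-add realisation of x * constant.
    return sum(d * (x << i) for i, d in enumerate(csd_digits) if d)

constant = 23                        # hypothetical fixed-point DCT coefficient
assert csd_multiply(7, to_csd(constant)) == 7 * constant
```
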
  5. Shyamsunder R, Eswaran C, Sriraam N
    J Med Syst, 2007 Apr;31(2):109-16.
    PMID: 17489503
    The volume of patient-monitoring video acquired in hospitals is very large, and hence better compression is needed for effective storage and transmission. This paper presents a new motion segmentation technique that improves the compression of patient-monitoring video. The proposed motion segmentation technique makes use of a binary mask obtained by thresholding the standard deviation values of the pixels along the temporal axis. Two compression methods that make use of the proposed motion segmentation technique are presented. The first method uses an MPEG-4 coder and the 9/7 biorthogonal wavelet for compressing the moving and stationary portions of the video, respectively. The second method uses the 5/3 biorthogonal wavelet for compressing both the moving and the stationary portions of the video. The performance of these compression algorithms is evaluated in terms of PSNR and bitrate. From the experimental results, it is found that the proposed motion segmentation technique improves the performance of the MPEG-4 coder. Of the two compression methods presented, the MPEG-4 based method performs better for bitrates below 767 kbps, whereas above 767 kbps the wavelet-based method is found superior.
    Matched MeSH terms: Data Compression/methods*
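
    A minimal sketch of the binary motion mask described above: the standard deviation of each pixel along the temporal axis is thresholded, so high-variation pixels are marked as moving and the rest as stationary background. The threshold value and array sizes are illustrative.

```python
# Motion mask sketch: threshold the per-pixel standard deviation computed along
# the temporal axis; True marks moving regions, False the stationary background.
import numpy as np

def motion_mask(frames: np.ndarray, threshold: float) -> np.ndarray:
    """frames: grayscale video of shape (num_frames, height, width)."""
    temporal_std = frames.std(axis=0)        # per-pixel variation over time
    return temporal_std > threshold

video = np.random.rand(30, 64, 64)           # hypothetical stand-in for a monitoring clip
mask = motion_mask(video, threshold=0.25)
print(f"{mask.mean():.1%} of pixels flagged as moving")
```
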
  6. Choong MK, Logeswaran R, Bister M
    J Med Syst, 2006 Jun;30(3):139-43.
    PMID: 16848126
    This paper attempts to improve the diagnostic quality of magnetic resonance (MR) images through the application of lossy compression as a noise-reducing filter. The amount of imaging noise present in MR images is compared with the amount of noise introduced by the compression, with particular attention given to the situation where the compression noise is a fraction of the imaging noise. A popular wavelet-based algorithm with good performance, Set Partitioning in Hierarchical Trees (SPIHT), was employed for the lossy compression. Tests were conducted with a number of MR patient images and corresponding phantom images. Different plausible ratios between imaging noise and compression noise (ICR) were considered, and the achievable compression gain through the controlled lossy compression was evaluated. Preliminary results show that at certain ICRs, it becomes virtually impossible to distinguish between the original and the compressed-decompressed image. Radiologists presented with a blind test in certain cases showed a preference for the compressed images over the original uncompressed ones, indicating that under controlled circumstances, lossy image compression can be used to improve the diagnostic quality of MR images.
    Matched MeSH terms: Data Compression/methods*
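
    The abstract does not define ICR precisely, so the following sketch assumes one plausible formulation: the ratio between the known imaging-noise level and the RMS error introduced by the compress-decompress cycle.

```python
# Hedged sketch of an imaging-to-compression-noise ratio (ICR). The paper's
# exact definition is not given in the abstract; this RMS-based formulation is
# an assumption. A large ICR means the compression loss is only a small
# fraction of the noise already present in the MR acquisition.
import numpy as np

def compression_noise_rms(original: np.ndarray, reconstructed: np.ndarray) -> float:
    diff = original.astype(float) - reconstructed.astype(float)
    return float(np.sqrt(np.mean(diff ** 2)))

def icr(imaging_noise_sigma: float, original: np.ndarray, reconstructed: np.ndarray) -> float:
    return imaging_noise_sigma / compression_noise_rms(original, reconstructed)
```
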
  7. Tayan O, Kabir MN, Alginahi YM
    ScientificWorldJournal, 2014;2014:514652.
    PMID: 25254247 DOI: 10.1155/2014/514652
    This paper addresses the problems and threats associated with verification of integrity, proof of authenticity, tamper detection, and copyright protection for digital-text content. Such issues have largely been addressed in the literature for images, audio, and video, with only a few papers addressing the challenge of sensitive plain-text media under known constraints. Specifically, with text as the predominant online communication medium, it becomes crucial that techniques are deployed to protect such information. A number of digital-signature, hashing, and watermarking schemes have been proposed that essentially bind source data or embed invisible data in a cover medium to achieve their goal. While many such complex schemes with resource redundancies are sufficient for offline and less-sensitive texts, this paper proposes a hybrid approach based on zero-watermarking and digital-signature-like manipulations for sensitive text documents, in order to achieve content originality and integrity verification without physically modifying the cover text in any way. The proposed algorithm was implemented and shown to be robust against undetected content modifications, and is capable of confirming proof of originality whilst detecting and locating deliberate/non-deliberate tampering. Additionally, enhancements in resource utilisation and reduced redundancies were achieved in comparison to traditional encryption-based approaches. Finally, analysis and remarks are made about the current state of the art, and future research issues are discussed under the given constraints.
    Matched MeSH terms: Data Compression/methods*
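
    A minimal sketch of the zero-watermarking principle used above: nothing is embedded in the cover text; instead, a verification record is derived from it and stored separately, so later modifications can be detected and localised. The per-paragraph hash features below are illustrative, not the paper's actual construction.

```python
# Zero-watermark sketch: derive a keyed, per-paragraph verification record from
# the cover text without modifying it; mismatching entries later reveal where
# tampering occurred. The feature choice here is illustrative only.
import hashlib

def make_zero_watermark(text: str, secret_key: str):
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    return [hashlib.sha256((secret_key + p).encode("utf-8")).hexdigest()
            for p in paragraphs]

def locate_tampering(text: str, secret_key: str, watermark):
    current = make_zero_watermark(text, secret_key)
    return [i for i, (a, b) in enumerate(zip(current, watermark)) if a != b]
```
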
  8. Sriraam N, Eswaran C
    J Med Syst, 2006 Dec;30(6):439-48.
    PMID: 17233156
    Two-stage lossless data compression methods involving predictors and encoders are well known. This paper discusses the application of context-based error modeling techniques to neural network predictors used for the compression of EEG signals. Error modeling improves the performance of a compression algorithm by removing the statistical redundancy that exists among the error signals after the prediction stage. In this paper, experiments are carried out using human EEG signals recorded under various physiological conditions to evaluate the effect of context-based error modeling on EEG compression. It is found that the compression efficiency of the neural network based predictive techniques is significantly improved by using the error modeling schemes. It is shown that the bits per sample required for EEG compression with error modeling and entropy coding lie in the range of 2.92 to 6.62, indicating a saving of 0.3 to 0.7 bits compared with the compression scheme without error modeling.
    Matched MeSH terms: Data Compression/methods*
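
    A sketch of the bits-per-sample figure discussed above: predict each sample, then entropy-code the prediction residuals. A trivial previous-sample predictor stands in for the paper's neural network, and first-order residual entropy stands in for the context-modelled entropy coder.

```python
# Bits-per-sample sketch: the first-order entropy of prediction residuals
# approximates what an entropy coder would need. A previous-sample predictor
# replaces the paper's neural network predictor for illustration.
import numpy as np

def residual_bits_per_sample(signal: np.ndarray) -> float:
    residuals = np.diff(np.round(signal).astype(int))    # previous-sample prediction errors
    _, counts = np.unique(residuals, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())                # entropy in bits per sample

eeg = np.cumsum(np.random.randn(10_000))                 # synthetic stand-in for an EEG trace
print(round(residual_bits_per_sample(eeg), 2), "bits per sample")
```
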
  9. Gandam A, Sidhu JS, Verma S, Jhanjhi NZ, Nayyar A, Abouhawwash M, et al.
    PLoS One, 2021;16(5):e0250959.
    PMID: 33970949 DOI: 10.1371/journal.pone.0250959
    Compression at a very low bit rate (≤0.5 bpp) causes degradation in video frames under standard coding algorithms such as H.261, H.262, H.264, MPEG-1, and MPEG-4, producing numerous artifacts. This paper focuses on an efficient pre- and post-processing technique (PP-AFT) to address and rectify the problems of quantization error, ringing, blocking artifacts, and flickering effects, which significantly degrade the visual quality of video frames. The PP-AFT method uses an activity function to differentiate blocked images or frames into different regions and applies adaptive filters designed for each classified region. The designed process also introduces an adaptive flicker extraction and removal method and a 2-D filter to remove ringing effects in edge regions. The PP-AFT technique is implemented on various videos, and the results are compared with different existing techniques using performance metrics such as PSNR-B, MSSIM, and GBIM. Simulation results show a significant improvement in the subjective quality of different video frames. The proposed method outperforms state-of-the-art deblocking methods in terms of PSNR-B, with average gains of 0.7-1.9 dB, while reducing average GBIM by 35.83-47.7% and keeping MSSIM values very close to those of the original sequence (approximately 0.978).
    Matched MeSH terms: Data Compression/methods*
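
    For reference, a plain PSNR computation is sketched below. The PSNR-B metric quoted above additionally penalises blocking at block boundaries, and GBIM is a separate blockiness measure; neither extension is reproduced here.

```python
# Plain PSNR between a reference frame and a processed frame. PSNR-B adds a
# blocking-effect penalty and GBIM measures blockiness; both are omitted here.
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, peak: float = 255.0) -> float:
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")                   # identical frames
    return float(10.0 * np.log10(peak ** 2 / mse))
```
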
  10. Soleymani A, Nordin MJ, Sundararajan E
    ScientificWorldJournal, 2014;2014:536930.
    PMID: 25258724 DOI: 10.1155/2014/536930
    The rapid evolution of imaging and communication technologies has transformed images into a widespread data type. Different types of data, such as personal medical information, official correspondence, or governmental and military documents, are saved and transmitted in the form of images over public networks. Hence, a fast and secure cryptosystem is needed for high-resolution images. In this paper, a novel encryption scheme is presented for securing images based on the Arnold cat and Henon chaotic maps. The scheme uses the Arnold cat map for bit- and pixel-level permutations on plain and secret images, while the Henon map creates the secret images and specific parameters for the permutations. Both the encryption and decryption processes are explained, formulated, and graphically presented. The results of a security analysis of five different images demonstrate the strength of the proposed cryptosystem against statistical, brute-force, and differential attacks. The evaluated running times for both the encryption and decryption processes guarantee that the cryptosystem can work effectively in real-time applications.
    Matched MeSH terms: Data Compression/methods
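
    A minimal sketch of the Arnold cat map permutation used in the scheme above: each iteration rearranges the pixels of a square N × N image with the invertible map (x, y) → (x + y, x + 2y) mod N. The Henon-map key stream and the bit-level permutations are not shown.

```python
# Arnold cat map sketch: repeatedly permute the pixels of a square image with
# an invertible modular map. Because the map is a bijection on the pixel grid,
# the scrambling can be undone by applying the inverse map.
import numpy as np

def arnold_cat(image: np.ndarray, iterations: int = 1) -> np.ndarray:
    n = image.shape[0]
    assert image.shape[0] == image.shape[1], "square image expected"
    x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    out = image.copy()
    for _ in range(iterations):
        out = out[(x + y) % n, (x + 2 * y) % n]   # gather pixels through the cat map
    return out

img = np.arange(64 * 64).reshape(64, 64)          # hypothetical test image
scrambled = arnold_cat(img, iterations=5)
```
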
  11. Liew SC, Liew SW, Zain JM
    J Digit Imaging, 2013 Apr;26(2):316-25.
    PMID: 22555905 DOI: 10.1007/s10278-012-9484-4
    Tamper localization and recovery watermarking schemes can be used to detect manipulation and recover tampered images. In this paper, a tamper localization and lossless recovery scheme using region of interest (ROI) segmentation and multilevel authentication is proposed. The watermarked images had a high average peak signal-to-noise ratio of 48.7 dB, and the results showed that tampering was successfully localized and the tampered areas were exactly recovered. The use of ROI segmentation and multilevel authentication reduced the time taken for tamper localization and recovery processing by approximately 50%.
    Matched MeSH terms: Data Compression/methods
  12. Abdul Rahim R, Pang JF, Chan KS, Leong LC, Sulaiman S, Abdul Manaf MS
    ISA Trans, 2007 Apr;46(2):131-45.
    PMID: 17367791
    The data distribution system of this project is divided into two types: a Two-PC Image Reconstruction System and a Two-PC Velocity Measurement System. Each data distribution system is investigated to see whether the refresh rate of the corresponding measurement results can exceed the rate obtained by using a single computer in the same measurement system for each application. Each system has its own flow control protocol for controlling how data is distributed within the system in order to speed up data processing; this can be achieved when two PCs work in parallel. The challenge of this project is to define the data flow process and the critical timing during data packaging, transfer, and extraction between the PCs. If a single computer is used as the data processing unit, a longer time is needed to produce a measurement result; this lack of real-time performance causes problems in feedback control processes when the system is applied in industrial plants. To increase the refresh rate of the measurement results, a data distribution system is investigated as a replacement for the existing data processing unit.
    Matched MeSH terms: Data Compression/methods*
  13. Teoh AB, Goh A, Ngo DC
    IEEE Trans Pattern Anal Mach Intell, 2006 Dec;28(12):1892-901.
    PMID: 17108365
    Biometric analysis for identity verification is becoming a widespread reality. Such implementations necessitate large-scale capture and storage of biometric data, which raises serious issues in terms of data privacy and (if such data is compromised) identity theft. These problems stem from the essential permanence of biometric data, which (unlike secret passwords or physical tokens) cannot be refreshed or reissued if compromised. Our previously presented biometric-hash framework prescribes the integration of external (password or token-derived) randomness with user-specific biometrics, resulting in bitstring outputs with security characteristics (i.e., noninvertibility) comparable to cryptographic ciphers or hashes. The resultant BioHashes are hence cancellable, i.e., straightforwardly revoked and reissued (via refreshed password or reissued token) if compromised. BioHashing furthermore enhances recognition effectiveness, which is explained in this paper as arising from the Random Multispace Quantization (RMQ) of biometric and external random inputs.
    Matched MeSH terms: Data Compression/methods*
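
    A rough sketch of the BioHash idea summarised above: the biometric feature vector is projected onto random directions derived from a user token, and each projection is thresholded to one bit, so a compromised BioHash can be revoked simply by reissuing the token (seed). The dimensions and threshold are illustrative; the paper's Random Multispace Quantization additionally orthonormalises the projection basis.

```python
# BioHash-style sketch: token-seeded random projections of a biometric feature
# vector, thresholded to a revocable bit string. Sizes below are illustrative.
import numpy as np

def biohash(features: np.ndarray, token_seed: int, n_bits: int = 64) -> np.ndarray:
    rng = np.random.default_rng(token_seed)
    basis = rng.standard_normal((n_bits, features.size))    # token-derived random basis
    projections = basis @ features
    return (projections > 0).astype(np.uint8)                # quantise each projection to a bit

features = np.random.rand(128)                               # hypothetical biometric feature vector
code = biohash(features, token_seed=42)                      # revocation = issue a new token seed
```
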
  14. Choong MK, Logeswaran R, Bister M
    Int J Med Inform, 2007 Sep;76(9):646-54.
    PMID: 16769242
    This paper concentrates on strategies for less costly handling of medical images. Aspects of digitization using conventional digital cameras, lossy compression with good diagnostic quality, and visualization through less costly monitors are discussed.
    Matched MeSH terms: Data Compression/methods*
  15. Shyam Sunder R, Eswaran C, Sriraam N
    Comput Biol Med, 2006 Sep;36(9):958-73.
    PMID: 16026779
    In this paper, the 3-D discrete Hartley transform is applied to the compression of two medical modalities, namely magnetic resonance images and X-ray angiograms, and its performance is compared with that of the 3-D discrete cosine and Fourier transforms using parameters such as PSNR and bit rate. It is shown that the 3-D discrete Hartley transform is better than the other two transforms for magnetic resonance brain images, whereas for X-ray angiograms the 3-D discrete cosine transform is found to be superior.
    Matched MeSH terms: Data Compression/methods*
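
    A minimal sketch of the separable 3-D discrete Hartley transform compared above, computed by applying the 1-D DHT (the real minus imaginary part of the FFT) along each axis in turn; the transform is its own inverse up to a scale factor, which keeps the decompressor simple.

```python
# Separable 3-D discrete Hartley transform: apply the 1-D DHT along each axis,
# where DHT(x) = Re(FFT(x)) - Im(FFT(x)) for real input. Applying the transform
# twice returns the original volume scaled by its number of samples.
import numpy as np

def dht_1d(a: np.ndarray, axis: int) -> np.ndarray:
    f = np.fft.fft(a, axis=axis)
    return f.real - f.imag

def dht_3d(volume: np.ndarray) -> np.ndarray:
    out = volume.astype(float)
    for axis in range(3):
        out = dht_1d(out, axis)
    return out

volume = np.random.rand(8, 8, 8)                            # hypothetical image stack
coeffs = dht_3d(volume)
assert np.allclose(dht_3d(coeffs) / volume.size, volume)    # involution up to 1/N
```
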
  16. Khuan LY, Bister M, Blanchfield P, Salleh YM, Ali RA, Chan TH
    Australas Phys Eng Sci Med, 2006 Jun;29(2):216-28.
    PMID: 16845928
    Increased inter-equipment connectivity coupled with advances in Web technology allows ever-escalating amounts of physiological data to be produced, far too much to be displayed adequately on a single computer screen. The consequence is that large quantities of insignificant data will be transmitted and reviewed, which carries an increased risk of overlooking vitally important transients. This paper describes a technique that provides an integrated solution, based on a single algorithm, for the efficient analysis, compression, and remote display of long-term physiological signals containing infrequent, short-duration yet vital events, in order to reduce data transmission and display clutter and to facilitate reliable data interpretation. The algorithm analyses data at the server end and flags significant events. It produces a compressed version of the signal at a lower resolution that can be satisfactorily viewed in a single screen width. This reduced set of data is initially transmitted together with a set of 'flags' indicating where significant events occur. Subsequent transmissions need only include the flagged data segments of interest at the required resolution. Efficient processing and code protection using decomposition alone is novel. The fixed transmission length ensures a clutter-free display, irrespective of the data length. The flagging of annotated events in arterial oxygen saturation, electroencephalogram, and electrocardiogram signals illustrates the generic nature of the algorithm. Data reduction of 87% to 99% and improved displays are demonstrated.
    Matched MeSH terms: Data Compression/methods*
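
    A rough sketch of the server-side idea described above, under assumed details: the long record is decimated to a fixed screen width for the initial display, and segments whose local range exceeds a threshold are flagged so only those need to be fetched later at full resolution. The paper's single decomposition-based algorithm is not reproduced.

```python
# Decimate-and-flag sketch: produce a fixed-width overview of a long record and
# flag segments with large local excursions for later full-resolution transfer.
# The segment statistics and threshold below are assumptions for illustration.
import numpy as np

def decimate_and_flag(signal: np.ndarray, screen_width: int, flag_threshold: float):
    seg_len = max(1, len(signal) // screen_width)
    segments = signal[: seg_len * screen_width].reshape(screen_width, seg_len)
    overview = segments.mean(axis=1)                                   # low-resolution trace
    flags = (segments.max(axis=1) - segments.min(axis=1)) > flag_threshold
    return overview, flags

spo2 = 97 + 0.2 * np.random.randn(100_000)     # synthetic stand-in for an SpO2 record
spo2[40_000:40_200] -= 10                      # a brief but clinically important desaturation
overview, flags = decimate_and_flag(spo2, screen_width=1000, flag_threshold=3.0)
print(int(flags.sum()), "segments flagged for full-resolution transmission")
```
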
  17. Yildirim O, Baloglu UB, Tan RS, Ciaccio EJ, Acharya UR
    Comput Methods Programs Biomed, 2019 Jul;176:121-133.
    PMID: 31200900 DOI: 10.1016/j.cmpb.2019.05.004
    BACKGROUND AND OBJECTIVE: For diagnosis of arrhythmic heart problems, electrocardiogram (ECG) signals should be recorded and monitored. The long-term signal records obtained are analyzed by expert cardiologists. Devices such as the Holter monitor have limited hardware capabilities. For improved diagnostic capacity, it would be helpful to detect arrhythmic signals automatically. In this study, a novel approach is presented as a candidate solution for these issues.

    METHODS: A convolutional auto-encoder (CAE) based nonlinear compression structure is implemented to reduce the signal size of arrhythmic beats. Long short-term memory (LSTM) classifiers are employed to automatically recognize arrhythmias using ECG features, which are deeply coded with the CAE network.

    RESULTS: Based upon the coded ECG signals, both the storage requirement and the classification time were considerably reduced. In experimental studies conducted with the MIT-BIH arrhythmia database, ECG signals were compressed with an average percentage root mean square difference (PRD) of 0.70%, and an accuracy of over 99.0% was observed.

    CONCLUSIONS: One of the significant contributions of this study is that the proposed approach can significantly reduce the time required when using LSTM networks for data analysis. Thus, a novel and effective approach is proposed for both ECG signal compression and its high-performance automatic recognition, with very low computational cost.

    Matched MeSH terms: Data Compression/methods*
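
    A short sketch of the PRD figure quoted above, using one common definition of the percentage root-mean-square difference; the CAE and LSTM networks themselves are not reproduced here.

```python
# Percentage root-mean-square difference (PRD) between an original and a
# reconstructed ECG segment, using one common (non-mean-subtracted) definition.
import numpy as np

def prd(original: np.ndarray, reconstructed: np.ndarray) -> float:
    original = original.astype(float)
    reconstructed = reconstructed.astype(float)
    return float(100.0 * np.sqrt(np.sum((original - reconstructed) ** 2)
                                 / np.sum(original ** 2)))

beat = np.sin(np.linspace(0, 2 * np.pi, 360))              # hypothetical stand-in for an ECG beat
reconstructed = beat + 0.005 * np.random.randn(beat.size)  # simulated reconstruction error
print(f"PRD = {prd(beat, reconstructed):.2f}%")
```
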