Refrigeration systems are complex, non-linear, multi-modal, and multi-dimensional. Traditional methods optimize these systems by trial and error and cannot guarantee a global optimum operating point. Therefore, this work studies a two-stage vapor compression refrigeration system (VCRS) through a novel and robust hybrid multi-objective grey wolf optimizer (HMOGWO) algorithm. The system is modeled using response surface methods (RSM) to investigate the impacts of the design variables on the selected responses. First, the interaction between the system components and their cycle behavior is analyzed by building four surrogate models using RSM. The model fit statistics indicate that the models are statistically significant and agree with the design data. Three conflicting bi-objective optimization scenarios are built for the overall system, with the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) and the Linear Programming Technique for Multidimensional Analysis of Preference (LINMAP) used as decision-making methods. The optimal solutions indicate that in the first scenario the exergetic efficiency (EE) and capital expenditure (CAPEX) are improved by 33.4% and 7.5%, in the second the EE and operational expenditure (OPEX) are improved by 27.4% and 19.0%, and in the third the EE and global warming potential (GWP) are improved by 27.2% and 19.1%, with the proposed HMOGWO outperforming MOGWO and NSGA-II. Finally, the K-means clustering technique is applied to characterize the Pareto front. Based on these outcomes, the combined RSM and HMOGWO techniques have proved to be an excellent solution for simulating and optimizing two-stage VCRS.
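As an illustration of the TOPSIS step used above, the sketch below ranks points on a hypothetical two-objective Pareto front; the decision matrix, weights, and objective senses are illustrative assumptions, not values from the study.

```python
# Minimal TOPSIS sketch for ranking Pareto solutions; all inputs are hypothetical.
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives: rows = solutions, cols = objectives.
    benefit[j] is True if objective j should be maximized."""
    m = matrix / np.linalg.norm(matrix, axis=0)      # vector normalization
    v = m * weights                                  # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti  = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)        # distance to ideal point
    d_neg = np.linalg.norm(v - anti, axis=1)         # distance to anti-ideal point
    return d_neg / (d_pos + d_neg)                   # closeness: higher is better

# Hypothetical Pareto front: columns are exergetic efficiency (max) and CAPEX (min).
front = np.array([[0.55, 120.0], [0.60, 135.0], [0.64, 160.0]])
scores = topsis(front, weights=np.array([0.5, 0.5]), benefit=np.array([True, False]))
print("best trade-off:", front[np.argmax(scores)])
```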
Sensing the application environment is the main purpose of a wireless sensor network. Most existing energy management strategies and compression techniques assume that sensing consumes significantly less energy than radio transmission and reception. This assumption does not hold in a number of practical applications, where sensing energy consumption may be comparable to, or even greater than, that of the radio. In this work, we support this claim with a quantitative analysis of the main operational energy costs of popular sensors, radios, and sensor motes. In light of the importance of sensing-level energy costs, especially for power-hungry sensors, we consider compressed sensing and distributed compressed sensing as potential approaches to energy-efficient sensing in wireless sensor networks. Numerical experiments on real datasets show the potential of compressed sensing and distributed compressed sensing for efficient utilization of sensing and overall energy costs in wireless sensor networks. It is shown that, for some applications, they can provide greater energy efficiency than transform coding and model-based adaptive sensing.
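To make the compressed-sensing idea concrete, here is a minimal sketch of the sense-then-decode pipeline under simplified assumptions: a synthetic K-sparse signal, a random Gaussian measurement matrix, and orthogonal matching pursuit as the decoder. None of this reflects the paper's specific datasets or energy models.

```python
# Compressed sensing sketch: take m << n random measurements at the node,
# recover the sparse signal at the sink. Dimensions are illustrative.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, m, k = 256, 64, 8                       # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)  # K-sparse signal

phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
y = phi @ x                                      # measurements transmitted by the node

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k)   # decoder at the sink
omp.fit(phi, y)
x_hat = omp.coef_
print("relative recovery error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```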
Digital images and videos have become the main carriers of information, but the relative ease of tampering with them makes their authenticity difficult to trust. Digital image forensics addresses the authentication of images and their origins. One main branch of image forensics is passive image forgery detection. Images can be forged using different techniques; the most common forgery is copy-move, in which a region of an image is duplicated and placed elsewhere in the same image. Active techniques, such as watermarking, have been proposed to solve the image authenticity problem, but they have limitations because they require human intervention or specially equipped cameras. To overcome these limitations, several passive authentication methods have been proposed. In contrast to active methods, passive methods do not require any prior information about the image; they exploit the specific detectable changes that forgeries introduce into the image. In this paper, we describe the current state of the art of passive copy-move forgery detection methods. The key open issues in developing a robust copy-move forgery detector are then identified, and the trends in tackling those issues are addressed.
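As a toy illustration of the copy-move idea, the sketch below finds exactly duplicated blocks by hashing; practical detectors use robust features (e.g., DCT coefficients or keypoint descriptors) to survive post-processing, so this conveys only the core matching principle.

```python
# Naive copy-move detection sketch: group byte-identical 8 x 8 blocks.
import numpy as np
from collections import defaultdict

def find_duplicate_blocks(img, b=8):
    """Group top-left coordinates of byte-identical b x b blocks."""
    seen = defaultdict(list)
    h, w = img.shape
    for i in range(h - b + 1):
        for j in range(w - b + 1):
            seen[img[i:i+b, j:j+b].tobytes()].append((i, j))
    return [locs for locs in seen.values() if len(locs) > 1]

rng = np.random.default_rng(1)
img = rng.integers(0, 256, (64, 64), dtype=np.uint8)   # textured test image
img[40:52, 30:42] = img[4:16, 4:16]                    # simulate a copy-move forgery
groups = find_duplicate_blocks(img)
print(len(groups), "duplicated block groups, e.g.", groups[0])
```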
This paper presents a two-level scheduling scheme for video transmission over downlink orthogonal frequency-division multiple access (OFDMA) networks. It aims to maximize the aggregate quality of the video users subject to playback delay and resource constraints by exploiting multiuser diversity and the video characteristics. The upper level schedules the transmission of video packets among multiple users based on an overall target bit-error rate (BER), the importance level of each packet, and a resource consumption efficiency factor. The lower level, in turn, renders unequal error protection (UEP) in terms of target BER among the scheduled packets by solving a weighted sum distortion minimization problem, where each user's weight reflects the total importance level of the packets scheduled for that user. Frequency-selective power is then water-filled over all the assigned subcarriers to leverage the potential channel coding gain. Realistic simulation results demonstrate that the proposed scheme outperforms the state-of-the-art scheduling scheme by up to 6.8 dB in terms of peak signal-to-noise ratio (PSNR). A further test evaluates the suitability of equal power allocation, which is the common assumption in the literature.
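The water-filling step can be sketched as follows; the subcarrier gains and total power budget are illustrative, and bisection on the water level stands in for whatever solver the paper uses.

```python
# Water-filling sketch: pour power onto subcarriers above their noise "floors".
import numpy as np

def water_fill(gains, p_total, eps=1e-9):
    """Allocate p_total across channels with noise-normalized gains."""
    inv = 1.0 / gains                      # per-channel floor (noise / gain)
    lo, hi = inv.min(), inv.max() + p_total
    while hi - lo > eps:                   # bisect on the water level mu
        mu = 0.5 * (lo + hi)
        p = np.maximum(mu - inv, 0.0)
        if p.sum() > p_total:
            hi = mu
        else:
            lo = mu
    return np.maximum(lo - inv, 0.0)

gains = np.array([2.0, 1.0, 0.25, 0.1])    # hypothetical subcarrier gains
p = water_fill(gains, p_total=4.0)
print(p, "total:", p.sum())                # weak subcarriers may get zero power
```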
The open accessibility of Internet-based medical images in teleradiology faces security threats due to nonsecured communication media. This paper discusses spatial-domain watermarking of ultrasound medical images for content authentication, tamper detection, and lossless recovery. For this purpose, the image is divided into two main parts: the region of interest (ROI) and the region of noninterest (RONI). The defined ROI and its hash value are combined as a watermark, losslessly compressed, and embedded into the RONI part of the image at the pixels' least significant bits (LSBs). The lossless compression of the watermark and its embedding at pixel LSBs preserve the image's diagnostic and perceptual qualities. Different lossless compression techniques, including Lempel-Ziv-Welch (LZW), were tested for watermark compression, and their performance was compared in terms of bit reduction and compression ratio. LZW performed best and was used to develop the tamper detection and recovery watermarking of medical images (TDARWMI) scheme for ROI authentication, tamper detection, localization, and lossless recovery. TDARWMI was compared with and found to outperform other watermarking schemes.
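A minimal sketch of the ROI-hash watermarking idea follows. Python's standard library has no LZW codec, so zlib stands in for the lossless compressor, and the test image, ROI layout, and RONI strip are all illustrative.

```python
# Sketch: hash the ROI, compress ROI + hash, embed the bits in RONI pixel LSBs.
import hashlib, zlib
import numpy as np

img = np.tile(np.arange(128, dtype=np.uint8), (128, 1))   # smooth test image
roi = img[48:80, 48:80]                                    # region of interest
roi_bytes = roi.tobytes()
payload = zlib.compress(roi_bytes + hashlib.sha256(roi_bytes).digest())

bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
roni = img[:32].reshape(-1)                 # an illustrative RONI strip above the ROI
assert bits.size <= roni.size, "RONI too small for the watermark"
roni[:bits.size] = (roni[:bits.size] & 0xFE) | bits        # write pixel LSBs in place

# Verification: re-read the LSBs, decompress, and re-check the ROI hash.
rec = np.packbits(img[:32].reshape(-1)[:bits.size] & 1).tobytes()
data = zlib.decompress(rec)
print("ROI authentic:", hashlib.sha256(data[:-32]).digest() == data[-32:])
```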
Devices in a visual sensor network (VSN) are mostly battery-powered, so energy consumption and bandwidth utilization are the most critical issues to be taken into consideration. The most suitable solution to these issues is to compress the captured visual data before transmission. Compressive sensing (CS) has emerged as an efficient sampling mechanism for VSNs: it reduces the total amount of data to be processed by reconstructing the signal from far fewer samples than the Nyquist rate requires. However, there are a few open issues related to the reconstruction quality and practical implementation of CS. Current studies of CS concentrate more on theoretical characteristics and simulated results than on understanding the potential issues in its practical implementation and on computational validation. In this paper, a low-power, low-cost visual sensor platform is developed using an Arduino Due microcontroller board, an XBee transmitter, and a uCAM-II camera. Block compressive sensing (BCS) is implemented on the developed platform to validate the characteristics of compressive sensing in a real-world scenario. Reconstruction is performed using the joint multi-phase decoding (JMD) framework. To the best of our knowledge, no such practical implementation of CS using off-the-shelf components has yet been conducted.
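A sketch of the block-wise measurement step of BCS is shown below; reusing one small random matrix across blocks is what keeps the encoder light enough for a microcontroller. The block size and measurement ratio are illustrative, and JMD reconstruction is not reproduced here.

```python
# Block compressive sensing (BCS) encoder sketch: measure each B x B block
# with the same random matrix, so only a small matrix must be stored.
import numpy as np

rng = np.random.default_rng(0)
B, ratio = 8, 0.25                          # block size and measurement ratio
m = int(ratio * B * B)                      # measurements per block
phi = rng.standard_normal((m, B * B))       # one matrix reused for all blocks

img = rng.integers(0, 256, (32, 32)).astype(float)   # toy captured frame
blocks = [img[i:i+B, j:j+B].reshape(-1)              # raster-scan the blocks
          for i in range(0, 32, B) for j in range(0, 32, B)]
y = np.stack([phi @ b for b in blocks])     # per-block measurements to transmit
print("samples sent:", y.size, "of", img.size)       # 256 of 1024 at ratio 0.25
```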
This paper presents a novel secure and robust steganographic technique in the compressed video domain, namely embedding-based byte differencing (EBBD). Unlike most current video steganographic techniques, which consider only intra frames for data embedding, the proposed EBBD technique hides information in both intra and inter frames. The information is embedded into a compressed video by manipulating the quantized AC coefficients (AC-QTCs) of the luminance components of the frames during the MPEG-2 encoding process. Later, during the decoding process, the embedded information can be detected and extracted completely. Furthermore, EBBD addresses two security concepts: data encryption and data concealment. During the embedding process, the secret data is encrypted using the simplified data encryption standard (S-DES) algorithm to provide better security. The security of the method lies in selecting candidate AC-QTCs within each non-overlapping 8 × 8 sub-block using a pseudo-random key. The basic performance of the technique was verified through experiments on various MPEG-2 encoded videos over a wide range of embedded payload rates. Overall, the experimental results confirm the excellent performance of the proposed EBBD, with a better trade-off between imperceptibility and payload than previous techniques, while ensuring a minimal bitrate increase and negligible degradation of PSNR values.
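The key-driven coefficient selection can be sketched as below: a shared pseudo-random key seeds the choice of AC positions in each 8 × 8 block, and payload bits are written into the chosen coefficients. Writing LSBs is a simplification of the paper's byte-differencing rule, and S-DES encryption of the payload is omitted.

```python
# Key-driven embedding sketch: the same seed selects the same AC positions
# at the embedder and the extractor. Block, key, and payload are illustrative.
import numpy as np

def embed_block(qtc, bits, key):
    """Hide len(bits) payload bits in pseudo-randomly chosen AC positions."""
    rng = np.random.default_rng(key)                 # shared secret seed
    flat = qtc.reshape(-1)
    pos = rng.choice(np.arange(1, 64), size=len(bits), replace=False)  # skip DC
    flat[pos] = (flat[pos] & ~1) | np.asarray(bits)  # write coefficient LSBs
    return qtc

key, bits = 42, [1, 0, 1, 1]
block = np.random.default_rng(7).integers(-16, 16, (8, 8))   # toy AC-QTC block
stego = embed_block(block.copy(), bits, key)

# Extraction repeats the same pseudo-random draw with the shared key.
rng = np.random.default_rng(key)
pos = rng.choice(np.arange(1, 64), size=len(bits), replace=False)
print("recovered:", list(stego.reshape(-1)[pos] & 1))
```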
Compression, in general, aims to reduce file size, with or without decreasing the data quality of the original file. Digital Imaging and Communications in Medicine (DICOM) is a medical imaging file standard used to store information such as patient data, imaging procedures, and the image itself. With the rising use of medical imaging in clinical diagnosis, there is a need for a fast and secure method to share large numbers of medical images between healthcare practitioners, and compression has always been an option. This work analyses the Huffman coding method, a lossless compression technique, as an alternative for compressing DICOM files in open PACS settings. The idea of Huffman coding is to assign codewords with fewer bits to symbols with higher byte-frequency values. Experiments using different types of DICOM images are conducted, and the performance in terms of compression ratio, compression/decompression time, and security is analysed. The experimental results show that Huffman coding can compress a DICOM file at up to a 1:3.7010 ratio, with up to 72.98% space savings.
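A minimal Huffman coder over byte frequencies, illustrating the shorter-codes-for-frequent-bytes principle on a toy byte string (not DICOM data):

```python
# Huffman coding sketch: build a prefix code from byte frequencies.
import heapq
from collections import Counter

def huffman_codes(data):
    """Return {byte: bitstring}; frequent bytes receive fewer bits."""
    # Heap entry: [weight, tiebreaker, [symbol, code], [symbol, code], ...]
    heap = [[w, i, [sym, ""]] for i, (sym, w) in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        for pair in lo[2:]:
            pair[1] = "0" + pair[1]          # left branch prepends 0
        for pair in hi[2:]:
            pair[1] = "1" + pair[1]          # right branch prepends 1
        heapq.heappush(heap, [lo[0] + hi[0], count] + lo[2:] + hi[2:])
        count += 1
    return dict(heap[0][2:])

data = b"aaaaabbbccd"                        # toy input
codes = huffman_codes(data)
encoded = "".join(codes[b] for b in data)
print(codes, "| compressed bits:", len(encoded), "vs raw:", 8 * len(data))
```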
The massive number of medical images produced by fluoroscopic and other conventional diagnostic imaging devices demands a considerable amount of storage space. This paper proposes an effective method for the lossless compression of fluoroscopic images. The main contribution is the extraction of the regions of interest (ROI) in fluoroscopic images using appropriate shapes. The extracted ROI is then compressed using customized correlation and a combination of Run-Length and Huffman coding to increase the compression ratio. The experimental results show that the proposed method improves the compression ratio by 400% compared to traditional methods.
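The run-length stage can be sketched as below on an illustrative pixel row; in the paper, the RLE output is further entropy-coded with Huffman coding.

```python
# Run-length encoding sketch: collapse runs of equal values into
# (value, run_length) pairs. The pixel row is illustrative.
import numpy as np

def rle_encode(seq):
    out, run = [], 1
    for prev, cur in zip(seq, seq[1:]):
        if cur == prev:
            run += 1
        else:
            out.append((prev, run))
            run = 1
    out.append((seq[-1], run))
    return out

row = np.array([0, 0, 0, 0, 255, 255, 17, 17, 17, 0, 0], dtype=np.uint8)
print(rle_encode(row.tolist()))    # [(0, 4), (255, 2), (17, 3), (0, 2)]
```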
Wide interest has been observed in finding low-power and area-efficient hardware designs for the discrete cosine transform (DCT) algorithm. This work proposes a novel Common Subexpression Elimination (CSE) based pipelined architecture for the DCT, aimed at reducing the cost metrics of power and area while maintaining high speed and accuracy in DCT applications. The proposed design combines Canonical Signed Digit (CSD) representation and CSE to implement multiplier-less fixed-constant multiplication of the DCT coefficients. Furthermore, the symmetry of the DCT coefficient matrix is exploited together with CSE to further decrease the number of arithmetic operations. The architecture needs a single-port memory to feed the inputs instead of a multiport memory, which reduces hardware cost and area. Analysis of the experimental results and performance comparisons shows that the proposed scheme uses minimal logic, occupying a mere 340 slices and 22 adders. Moreover, the design meets the real-time constraints of different video/image coders and peak signal-to-noise ratio (PSNR) requirements. The proposed technique also improves on recent well-known methods in power reduction, silicon area usage, and maximum operating frequency by 41%, 15%, and 15%, respectively, while preserving accuracy.
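To illustrate the multiplier-less principle, the sketch below recodes a constant into canonical signed digits and multiplies by shifts and adds; the constant is illustrative, and the actual DCT coefficient values and pipelining are not modeled.

```python
# CSD-based multiplier-less constant multiplication sketch.
def to_csd(n):
    """Canonical signed-digit recoding (LSB first, digits in {-1, 0, 1})."""
    digits = []
    while n:
        if n & 1:
            d = 2 - (n & 3)        # +1 if n % 4 == 1, -1 if n % 4 == 3
            n -= d
        else:
            d = 0
        digits.append(d)
        n >>= 1
    return digits

def csd_multiply(x, const):
    """Multiply via a shift-add chain over the CSD digits of const."""
    return sum(d * (x << k) for k, d in enumerate(to_csd(const)))

# 23 = 10111b (four nonzero bits) -> CSD 1 0 -1 0 0 -1 (three): 32 - 8 - 1 = 23.
print(to_csd(23), csd_multiply(5, 23), 5 * 23)
```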
The volume of patient monitoring video acquired in hospitals is huge, and hence there is a need for better compression for effective storage and transmission. This paper presents a new motion segmentation technique that improves the compression of patient monitoring video. The proposed technique uses a binary mask obtained by thresholding the standard deviation of each pixel along the temporal axis. Two compression methods based on this motion segmentation are presented. The first uses an MPEG-4 coder and the 9/7 biorthogonal wavelet to compress the moving and stationary portions of the video, respectively. The second uses the 5/3 biorthogonal wavelet for both portions. The performance of these compression algorithms is evaluated in terms of PSNR and bitrate. The experimental results show that the proposed motion segmentation technique improves the performance of the MPEG-4 coder. Of the two compression methods, the MPEG-4 based method performs better at bitrates below 767 kbps, whereas above 767 kbps the wavelet-based method is superior.
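The proposed mask can be sketched in a few lines: per-pixel standard deviation along the temporal axis, thresholded into a binary moving/static map. The synthetic clip and threshold value are illustrative.

```python
# Motion segmentation sketch: temporal std-dev per pixel, then a binary mask.
import numpy as np

rng = np.random.default_rng(0)
frames = rng.normal(100, 2, (30, 48, 64))                 # 30 mostly static frames
frames[:, 10:20, 10:20] += 40 * rng.random((30, 10, 10))  # a moving patch

sigma = frames.std(axis=0)          # per-pixel standard deviation over time
mask = sigma > 3.0                  # binary motion mask (threshold is illustrative)
print("moving pixels:", mask.sum(), "of", mask.size)
# Moving pixels would go to the video coder; static pixels are coded once.
```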
This paper attempts to improve the diagnostic quality of magnetic resonance (MR) images by applying lossy compression as a noise-reducing filter. The amount of imaging noise present in MR images is compared with the amount of noise introduced by compression, with particular attention to the situation where the compression noise is a fraction of the imaging noise. A popular wavelet-based algorithm with good performance, Set Partitioning in Hierarchical Trees (SPIHT), was employed for the lossy compression. Tests were conducted with a number of MR patient images and corresponding phantom images. Different plausible ratios between imaging noise and compression noise (ICR) were considered, and the achievable compression gain through controlled lossy compression was evaluated. Preliminary results show that at certain ICRs it becomes virtually impossible to distinguish between the original and the compressed-decompressed image. In a blind test, radiologists in certain cases preferred the compressed images over the original uncompressed ones, indicating that under controlled circumstances lossy image compression can improve the diagnostic quality of MR images.
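The controlled-distortion idea can be sketched as follows. Since SPIHT is not available in common Python libraries, simple wavelet-coefficient thresholding stands in for the coder; the image, noise estimate, and ICR value are illustrative.

```python
# Sketch: estimate imaging noise, then search for the strongest compression
# whose error stays at a chosen fraction (1/ICR) of that noise.
import numpy as np
import pywt

def compress_to_budget(img, sigma_target, wavelet="bior4.4", level=3):
    """Binary-search a wavelet threshold so compression RMSE <= sigma_target."""
    arr, slices = pywt.coeffs_to_array(pywt.wavedec2(img, wavelet, level=level))
    lo, hi = 0.0, np.abs(arr).max()
    for _ in range(40):
        t = 0.5 * (lo + hi)
        coeffs = pywt.array_to_coeffs(pywt.threshold(arr, t, mode="hard"),
                                      slices, output_format="wavedec2")
        rec = pywt.waverec2(coeffs, wavelet)[:img.shape[0], :img.shape[1]]
        rmse = np.sqrt(np.mean((rec - img) ** 2))
        lo, hi = (t, hi) if rmse <= sigma_target else (lo, t)
    return lo

rng = np.random.default_rng(0)
img = rng.normal(100.0, 5.0, (128, 128))   # synthetic image, imaging noise sigma = 5
sigma_img = img[:16, :16].std()            # noise estimated from a background patch
icr = 5.0                                  # imaging-to-compression noise ratio
print("threshold meeting RMSE <= sigma/ICR:", compress_to_budget(img, sigma_img / icr))
```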
The use of wearable gadgets in cloud-based health monitoring systems is growing, and signal compression and computational and power efficiency play an imperative part in this scenario. In this context, we propose an efficient method for the diagnosis of cardiovascular diseases from electrocardiogram (ECG) signals. The method combines multirate processing, wavelet decomposition, frequency-content-based subband coefficient selection, and machine learning. Multirate processing and feature selection reduce the amount of information processed, and thus the computational complexity of the proposed system relative to equivalent fixed-rate solutions. Frequency-content-dependent subband coefficient selection enhances the compression gain and reduces the transmission activity and the computational cost of the subsequent cloud-based classification. We used the MIT-BIH dataset for our experiments. To avoid overfitting and bias, the performance of the considered classifiers is studied using five-fold cross-validation (5CV) and a novel partial blind protocol. The designed method achieves more than a 12-fold computational gain while ensuring appropriate signal reconstruction. The compression gain is 13 times that of fixed-rate counterparts, and the highest classification accuracies are 97.06% and 92.08% for the 5CV and partial blind cases, respectively. The results suggest the feasibility of detecting cardiac arrhythmias using the proposed approach.
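The subband selection step can be sketched as below on a toy ECG-like signal; the wavelet, decomposition level, and 5% energy rule are illustrative assumptions rather than the paper's settings.

```python
# Frequency-content-based subband selection sketch: decompose with a DWT and
# keep only the subbands carrying a meaningful share of the signal energy.
import numpy as np
import pywt

rng = np.random.default_rng(0)
t = np.linspace(0, 2, 720)                                  # ~360 Hz, as in MIT-BIH
ecg = np.sin(2 * np.pi * 1.2 * t) + 0.1 * rng.standard_normal(t.size)  # toy signal

coeffs = pywt.wavedec(ecg, "db4", level=4)                  # [cA4, cD4, cD3, cD2, cD1]
energy = np.array([np.sum(c ** 2) for c in coeffs])
keep = energy / energy.sum() >= 0.05                        # drop low-energy subbands
sent = [c if k else np.zeros_like(c) for c, k in zip(coeffs, keep)]
print("coefficients kept:", sum(c.size for c, k in zip(coeffs, keep) if k),
      "of", sum(c.size for c in coeffs))

rec = pywt.waverec(sent, "db4")[:ecg.size]                  # cloud-side reconstruction
print("reconstruction SNR (dB):",
      10 * np.log10(np.sum(ecg ** 2) / np.sum((ecg - rec) ** 2)))
```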
Due to scattering, absorption, and complex light interactions in the medium, capturing objects in an underwater environment has always been difficult. Single-pixel imaging (SPI) is an efficient imaging approach that can obtain spatial object information under low-light conditions. In this paper, we propose a single-pixel object inspection system for the underwater environment based on a compressive sensing super-resolution convolutional neural network (CS-SRCNN). With the CS-SRCNN algorithm, images can be reconstructed from 30% of the total pixels. We also investigate the impact of the compression ratio on underwater SPI reconstruction performance, and we use the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) to assess the quality of the reconstructed images. Our work is compared with a standard SPI system and the SRCNN method to demonstrate its efficiency in capturing objects in an underwater environment; the PSNR and SSIM of the proposed method are improved by 35.44% and 73.07%, respectively. This work provides new insight into SPI applications and offers a better alternative for achieving good-quality underwater optical imaging of objects.
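For reference, the two quality metrics can be computed with scikit-image as below; the image pair here is synthetic and illustrative.

```python
# PSNR/SSIM sketch on a synthetic reference/reconstruction pair.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
ref = rng.random((64, 64))                                        # ground truth
rec = np.clip(ref + 0.05 * rng.standard_normal(ref.shape), 0, 1)  # reconstruction

print("PSNR (dB):", peak_signal_noise_ratio(ref, rec, data_range=1.0))
print("SSIM     :", structural_similarity(ref, rec, data_range=1.0))
```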
The compressive behaviour of column members can be considerably affected by local buckling, material yielding, and local end conditions. In this paper, the effects of the loading conditions at the ends of plain channel section columns subjected to uniform compressive loading, with the column ends fixed against global rotations, were examined. Finite element simulation was employed to study the post-buckled response of thin-walled plain channel section columns, covering the complete loading history of the compression columns from the onset of elastic local buckling through the nonlinear elastic and elastoplastic post-buckling phases to final collapse and unloading. Two loading conditions were considered: in the first, used commonly in tests, one end is loaded by a moving top platen while the other end is fixed at the lower platen; in the second, both ends are loaded by equally moving top and lower platens. These two conditions were shown to lead to quite different characteristic interactive responses of the columns, owing to mode jumping in the buckling mode for the locally rotationally constrained case.
One of the major issues in time-critical medical applications using wireless technology is the size of the payload packet, which is generally designed to be very small to improve the transmission process. Using small packets to transmit continuous ECG data is still costly, so data compression is commonly used to reduce the huge amount of ECG data transmitted through telecardiology devices. In this paper, a new ECG compression scheme is introduced to ensure that the compressed ECG segments fit into the available limited payload packets while maintaining a fixed compression ratio (CR) to preserve the diagnostic information. The scheme automatically divides the ECG block into segments while keeping the other compression parameters fixed. It adopts the discrete wavelet transform (DWT) to decompose the ECG data, a bit-field preserving (BFP) method to preserve the quality of the DWT coefficients, and a modified run-length encoding (RLE) scheme to encode the coefficients. The proposed dynamic compression scheme showed promising results, with a percentage packet reduction (PR) of about 85.39% at percentage root-mean-square difference (PRD) values below 1%. ECG records from the MIT-BIH Arrhythmia Database were used to test the method. The simulation results satisfy the needs of portable telecardiology systems, such as limited payload size and low power consumption.
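The dynamic segmentation idea can be sketched as below, with zlib standing in for the paper's DWT + BFP + RLE pipeline; the payload size and test signal are illustrative.

```python
# Dynamic segmentation sketch: grow each ECG segment until its compressed
# form no longer fits the packet payload, then emit it and start a new one.
import zlib
import numpy as np

def segment_to_payload(samples, payload_bytes=96):
    """Split samples so each compressed segment fits one packet."""
    packets, start, end = [], 0, 1
    while start < samples.size:
        while end <= samples.size:
            blob = zlib.compress(samples[start:end].tobytes())
            if len(blob) > payload_bytes:   # one sample too many: back off
                end -= 1
                break
            end += 1
        end = max(end, start + 1)           # always make progress
        packets.append(zlib.compress(samples[start:end].tobytes()))
        start, end = end, end + 1
    return packets

ecg = (500 * np.sin(np.linspace(0, 20, 2000))).astype(np.int16)   # toy ECG block
pkts = segment_to_payload(ecg)
print(len(pkts), "packets, largest payload:", max(len(p) for p in pkts), "bytes")
```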
The effects of annealing temperature on the composition and strain of SixGe1-x obtained by rapid melting growth of electrodeposited Ge on a Si (100) substrate were investigated. The rapid melting process was performed at 1000, 1050, and 1100 °C for 1 s. All annealed samples show a single-crystalline structure in the (100) orientation. The marked appearance of the Si-Ge vibration mode peak at ~400 cm-1 confirms Si-Ge intermixing due to out-diffusion of Si into the Ge region. During rapid melting, Ge melts and reaches thermal equilibrium in a short time; Si at the Ge/Si interface begins to dissolve on contact with the molten Ge, producing Si-Ge intermixing. The Si fraction in the intermixed region was calculated from the intensity ratio of the Ge-Ge and Si-Ge vibration mode peaks and was found to increase with annealing temperature. The strain is found to turn from tensile to compressive as the annealing temperature increases; the Si-fraction-dependent thermal expansion coefficient of SixGe1-x is a possible cause of this strain behavior. Understanding these compositional and strain characteristics is important for Ge/Si heterostructures, as these properties significantly affect device performance.
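The composition estimate can be sketched as below, assuming the commonly used Raman relation I(Si-Ge)/I(Ge-Ge) = B·x/(1-x); the calibration constant B and the intensity values are illustrative assumptions, not values from this study.

```python
# Si fraction from Raman peak intensities, assuming I_SiGe/I_GeGe = B*x/(1-x).
def si_fraction(i_sige, i_gege, B=3.2):   # B is an assumed calibration constant
    r = i_sige / i_gege
    return r / (B + r)                    # solving r = B*x/(1-x) for x

for i_sige in (0.15, 0.30, 0.55):         # illustrative intensity ratios
    print(f"I(Si-Ge)/I(Ge-Ge) = {i_sige:.2f} -> x(Si) = {si_fraction(i_sige, 1.0):.3f}")
```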
Pre-stressing is a concept used in many engineering structures. In this study, prestressing in the form of axial compressive stress is proposed for the blade structure of an H-Darrieus wind turbine. The study draws a structural comparison between the reference and prestressed configurations of the turbine rotor with respect to their dynamic vibrational response. Rotordynamics calculations in ANSYS Mechanical are used to investigate the effects of turbine rotation on the dynamic response of the system. Rotation speeds ranging from 0 to 150 rad/s were examined to cover the whole operating range of commercial turbines. The modal analysis yields the first six mode shapes of both rotor configurations. As a result, the displacement of the proposed configuration is effectively reduced. Apparent variations in the Campbell diagrams of the two cases indicate that the prestressed configuration has its resonant frequencies far from the turbine operating speeds and thus a remarkably higher safety factor against whirling and the failures that may follow.
This paper addresses the problems and threats associated with verification of integrity, proof of authenticity, tamper detection, and copyright protection for digital-text content. Such issues have been largely addressed in the literature for images, audio, and video, with only a few papers addressing the challenge of sensitive plain-text media under known constraints. With text as the predominant online communication medium, it is crucial that techniques be deployed to protect such information. A number of digital-signature, hashing, and watermarking schemes have been proposed that bind source data or embed invisible data in a cover media to achieve this goal. While many such complex schemes with resource redundancies suffice for offline and less-sensitive texts, this paper proposes a hybrid approach based on zero-watermarking and digital-signature-like manipulations for sensitive text documents, achieving content originality and integrity verification without physically modifying the cover text in any way. The proposed algorithm was implemented and shown to be robust against undetected content modifications, confirming proof of originality whilst detecting and locating deliberate and nondeliberate tampering. Additionally, improvements in resource utilisation and reduced redundancy were achieved in comparison with traditional encryption-based approaches. Finally, remarks are made about the current state of the art, and future research issues are discussed under the given constraints.
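The zero-watermarking idea can be sketched as below: a verification record is derived from the text itself and bound with an HMAC standing in for the digital signature, so the cover text is never modified. The per-sentence granularity and the shared key are illustrative assumptions.

```python
# Zero-watermark sketch: derive a record from the text, store it externally,
# and later re-derive it to detect and locate tampering.
import hashlib, hmac

KEY = b"shared-secret"                     # held by the verifying authority

def register(text):
    units = text.split(". ")               # sentence granularity (illustrative)
    digests = [hashlib.sha256(u.encode()).hexdigest() for u in units]
    tag = hmac.new(KEY, "".join(digests).encode(), hashlib.sha256).hexdigest()
    return digests, tag                    # the cover text itself is untouched

def verify(text, digests, tag):
    new_digests, new_tag = register(text)
    tampered = [i for i, (a, b) in enumerate(zip(digests, new_digests)) if a != b]
    return hmac.compare_digest(tag, new_tag), tampered

original = "Pay the bearer 100 dollars. Signed by the issuer. Valid in 2024."
record = register(original)
forged = original.replace("100", "900")
ok, where = verify(forged, *record)
print("authentic:", ok, "| tampered sentence index:", where)
```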
Two-stage lossless data compression methods involving predictors and encoders are well known. This paper discusses the application of context-based error modeling techniques to neural network predictors used for the compression of EEG signals. Error modeling improves the performance of a compression algorithm by removing the statistical redundancy that remains among the error signals after the prediction stage. Experiments are carried out using human EEG signals recorded under various physiological conditions to evaluate the effect of context-based error modeling on EEG compression. It is found that the compression efficiency of the neural network based predictive techniques is significantly improved by the error modeling schemes: the bits per sample required for EEG compression with error modeling and entropy coding lie in the range of 2.92 to 6.62, a saving of 0.3 to 0.7 bits compared to the scheme without error modeling.
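Context-based error modeling can be sketched as below, with a simple linear predictor standing in for the paper's neural network; the synthetic EEG-like signal and the context buckets are illustrative, and the printed figures come from this toy setup, not from the paper.

```python
# Sketch: predict each sample, then estimate the entropy of the residuals
# with a separate model per context (here, the previous residual's magnitude).
import numpy as np

def entropy(symbols):
    """Empirical entropy in bits per symbol."""
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

rng = np.random.default_rng(0)
t = np.arange(4096)
eeg = np.round(50 * np.sin(0.07 * t) + 5 * rng.standard_normal(t.size)).astype(int)

pred = 2 * eeg[1:-1] - eeg[:-2]            # 2nd-order linear predictor (stand-in)
err = eeg[2:] - pred                       # residual signal to be entropy coded

ctx = np.digitize(np.abs(err[:-1]), [2, 6, 15])   # context = last |error| bucket
global_bits = entropy(err[1:])
ctx_bits = sum(entropy(err[1:][ctx == c]) * np.mean(ctx == c) for c in np.unique(ctx))
print(f"bits/sample: {global_bits:.2f} global vs {ctx_bits:.2f} with contexts")
```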