Displaying publications 41 - 60 of 1459 in total

  1. Karimi A, Afsharfarnia A, Zarafshan F, Al-Haddad SA
    ScientificWorldJournal, 2014;2014:432952.
    PMID: 25114965 DOI: 10.1155/2014/432952
    The stability of clusters is a serious issue in mobile ad hoc networks. Low cluster stability may lead to rapid cluster failure, high energy consumption for reclustering, and a decrease in overall network stability. To improve cluster stability, weight-based clustering algorithms are used. However, these algorithms use only a limited set of node features, which reduces the accuracy of the weights in determining a node's competency and leads to incorrect selection of cluster heads. The new weight-based algorithm presented in this paper determines a node's weight not only from its own features but also from the direct effect of the features of adjacent nodes. It determines the weights of the virtual links between nodes and the effect of those weights on a node's final weight. With this strategy, the highest weights are assigned to the best candidates for cluster head, and the accuracy of node selection increases. The performance of the new algorithm is analyzed by computer simulation. The results show that the resulting clusters have longer lifetimes and higher stability. Mathematical simulation shows that the algorithm has high availability in case of failure.
    Matched MeSH terms: Algorithms*
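As a toy illustration of the neighbor-aware weighting idea summarized in the abstract of entry 1: each node's final weight blends its own feature score with the weights of virtual links to its neighbors, and the highest-weight node is elected cluster head. All feature scores, the topology, and the 0.6/0.4 blend below are invented for illustration, not taken from the paper.

```python
# Toy neighbor-aware weight-based cluster-head election (illustrative values).
own = {"A": 0.9, "B": 0.6, "C": 0.7, "D": 0.4}          # per-node feature score
links = {("A", "B"): 0.8, ("B", "C"): 0.5, ("C", "D"): 0.9}  # virtual link weights

def neighbors(n):
    return [v if u == n else u for u, v in links if n in (u, v)]

def link_w(u, v):
    return links.get((u, v), links.get((v, u), 0.0))

def final_weight(n):
    # Blend a node's own score with its neighbors' scores, each scaled
    # by the weight of the virtual link joining them.
    nbrs = neighbors(n)
    if not nbrs:
        return own[n]
    link_term = sum(link_w(n, m) * own[m] for m in nbrs) / len(nbrs)
    return 0.6 * own[n] + 0.4 * link_term

head = max(own, key=final_weight)
print(head)  # node A: strong own score reinforced by a strong neighbor
```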
  2. Akbari M, Manesh MR, El-Saleh AA, Reza AW
    ScientificWorldJournal, 2014;2014:128195.
    PMID: 25045725 DOI: 10.1155/2014/128195
    In diversity combining at the receiver, the output signal-to-noise ratio (SNR) is maximized by maximal ratio combining (MRC), provided that the channel is perfectly estimated at the receiver. In practice, however, channel estimation is rarely perfect, which degrades system performance. In this paper, an imperialist competitive algorithm (ICA) is proposed and compared with two other evolutionary algorithms, namely particle swarm optimization (PSO) and the genetic algorithm (GA), for diversity combining of signals travelling across imperfect channels. The proposed algorithm adjusts the combiner weights of the received signal components so as to maximize the SNR and minimize the bit error rate (BER). The results indicate that the proposed method eliminates the need for channel estimation and can outperform conventional diversity combining methods.
    Matched MeSH terms: Algorithms*
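For context on the MRC baseline that entry 2's evolutionary combiners are measured against, here is a minimal sketch: with perfect channel knowledge, MRC weights each branch by its conjugate channel gain, and the combined SNR is the sum of the per-branch SNRs. The channel gains and noise power below are illustrative values, not data from the paper.

```python
import numpy as np

h = np.array([0.9 + 0.3j, 0.4 - 0.2j, 0.7 + 0.1j])  # branch channel gains
noise_var = 0.1                                      # per-branch noise power

# MRC: weights are the conjugate gains; output SNR = sum of branch SNRs.
w_mrc = np.conj(h)
snr_mrc = (np.abs(h) ** 2 / noise_var).sum()

# Equal-gain combining for comparison: co-phase only, unit magnitudes.
w_egc = np.conj(h) / np.abs(h)
snr_egc = np.abs(w_egc @ h) ** 2 / (noise_var * np.sum(np.abs(w_egc) ** 2))

print(snr_mrc >= snr_egc)  # MRC is optimal when the channel is known
```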
  3. Ali SS, Moinuddin M, Raza K, Adil SH
    ScientificWorldJournal, 2014;2014:850189.
    PMID: 24987745 DOI: 10.1155/2014/850189
    Radial basis function neural networks (RBFNNs) are used in a variety of applications such as pattern recognition, nonlinear identification, control, and time series prediction. In this paper, the learning algorithm of radial basis function neural networks is analyzed in a feedback structure. The robustness of the learning algorithm is discussed in the presence of uncertainties that might be due to noisy perturbations at the input or to modeling mismatch. An intelligent adaptation rule is developed for the learning rate of the RBFNN that gives faster convergence via an estimate of the error energy, while guaranteeing l2 stability via an upper bound obtained through the small gain theorem. Simulation results are presented to support the theoretical development.
    Matched MeSH terms: Algorithms
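A minimal sketch of a Gaussian RBF network trained with an error-driven, input-energy-normalized step size, in the spirit of the adaptive learning rate entry 3 analyzes. The centers, width, target function, and normalization constant are assumptions for illustration, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

centers = np.linspace(-1, 1, 5)   # fixed Gaussian centers (assumed)
width = 0.5
w = np.zeros(5)                   # output-layer weights to be learned

def phi(x):
    # Gaussian radial basis activations for scalar input x.
    return np.exp(-((x - centers) ** 2) / (2 * width ** 2))

target = lambda x: np.sin(np.pi * x)   # toy function to approximate

for _ in range(2000):
    x = rng.uniform(-1, 1)
    g = phi(x)
    e = target(x) - w @ g             # instantaneous error
    mu = 1.0 / (1e-3 + g @ g)         # step size shrinks with input energy
    w += mu * e * g                   # normalized LMS-style update

xs = np.linspace(-1, 1, 50)
mse = np.mean([(target(x) - w @ phi(x)) ** 2 for x in xs])
print(f"test MSE: {mse:.4f}")
```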
  4. Tayan O, Kabir MN, Alginahi YM
    ScientificWorldJournal, 2014;2014:514652.
    PMID: 25254247 DOI: 10.1155/2014/514652
    This paper addresses the problems and threats associated with verification of integrity, proof of authenticity, tamper detection, and copyright protection for digital-text content. Such issues have been largely addressed in the literature for images, audio, and video, with only a few papers addressing the challenge of sensitive plain-text media under known constraints. Specifically, with text as the predominant online communication medium, it becomes crucial to deploy techniques that protect such information. A number of digital-signature, hashing, and watermarking schemes have been proposed that essentially bind source data or embed invisible data in a cover medium to achieve this goal. While many such complex schemes with resource redundancies are sufficient for offline and less sensitive texts, this paper proposes a hybrid approach based on zero-watermarking and digital-signature-like manipulations for sensitive text documents that achieves content-originality and integrity verification without physically modifying the cover text in any way. The proposed algorithm was implemented and shown to be robust against undetected content modifications, and it is capable of confirming proof of originality whilst detecting and locating deliberate and nondeliberate tampering. Additionally, enhancements in resource utilisation and reduced redundancies were achieved in comparison to traditional encryption-based approaches. Finally, analysis and remarks are made about the current state of the art, and future research issues are discussed under the given constraints.
    Matched MeSH terms: Algorithms*
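A toy zero-watermark-style integrity check in the spirit of entry 4: a signature is derived from features of the cover text itself, so nothing is embedded and the text is left unmodified, yet tampering can be detected and localized. The windowed-hash construction below is invented for illustration and is not the paper's actual algorithm.

```python
import hashlib

def signature(text, window=3):
    # Hash overlapping word windows of the cover text; the text itself
    # is never altered (a "zero-watermark" style signature).
    words = text.split()
    return [hashlib.sha256(" ".join(words[i:i + window]).encode()).hexdigest()
            for i in range(max(1, len(words) - window + 1))]

def locate_tampering(sig, text, window=3):
    # Indices of word windows whose hashes no longer match the signature.
    return [i for i, (a, b) in enumerate(zip(sig, signature(text, window)))
            if a != b]

original = "the quick brown fox jumps over the lazy dog"
sig = signature(original)

assert locate_tampering(sig, original) == []   # untouched text passes
tampered = "the quick brown cat jumps over the lazy dog"
print(locate_tampering(sig, tampered))          # windows covering the altered word
```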
  5. Yarmand H, Gharehkhani S, Kazi SN, Sadeghinezhad E, Safaei MR
    ScientificWorldJournal, 2014;2014:369593.
    PMID: 25254236 DOI: 10.1155/2014/369593
    Thermal characteristics of turbulent nanofluid flow in a rectangular pipe have been investigated numerically. The continuity, momentum, and energy equations were solved by means of a finite volume method (FVM). The symmetrical rectangular channel is heated at the top and bottom at a constant heat flux while the side walls are insulated. Four different types of nanoparticles (Al2O3, ZnO, CuO, and SiO2) at nanofluid volume fractions in the range of 1% to 5% are considered in the present investigation. The effect of Reynolds numbers in the range 5000 < Re < 25000 on the heat transfer characteristics of nanofluids flowing through the channel is investigated. The numerical results indicate that SiO2-water has the highest Nusselt number compared to the other nanofluids, while it has the lowest heat transfer coefficient due to its low thermal conductivity. The Nusselt number increases with the Reynolds number and with the volume fraction of nanoparticles. The simulation results show good agreement with existing experimental correlations.
    Matched MeSH terms: Algorithms*
  6. Siddiqui MF, Reza AW, Kanesan J, Ramiah H
    ScientificWorldJournal, 2014;2014:620868.
    PMID: 25133249 DOI: 10.1155/2014/620868
    A wide interest has been observed in finding a low-power and area-efficient hardware design of the discrete cosine transform (DCT) algorithm. This work proposes a novel Common Subexpression Elimination (CSE) based pipelined architecture for the DCT, aimed at reducing the cost metrics of power and area while maintaining high speed and accuracy in DCT applications. The proposed design combines Canonical Signed Digit (CSD) representation and CSE to implement multiplierless fixed-constant multiplication of the DCT coefficients. Furthermore, symmetry in the DCT coefficient matrix is exploited with CSE to further decrease the number of arithmetic operations. The architecture needs a single-port memory to feed the inputs instead of a multiport memory, which reduces the hardware cost and area. Analysis of the experimental results and performance comparisons shows that the proposed scheme uses minimal logic, utilizing a mere 340 slices and 22 adders. Moreover, the design meets the real-time constraints of different video/image coders and peak signal-to-noise ratio (PSNR) requirements. Furthermore, the proposed technique improves on recent well-known methods in power reduction, silicon area usage, and maximum operating frequency by 41%, 15%, and 15%, respectively, while preserving accuracy.
    Matched MeSH terms: Algorithms*
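A small sketch of the Canonical Signed Digit recoding that entry 6 builds on: CSD represents a constant with digits in {-1, 0, +1} such that no two adjacent digits are nonzero, so a fixed-constant multiplication costs one adder or subtractor per nonzero digit, usually fewer than plain binary.

```python
def csd(n):
    # Canonical signed-digit encoding of a positive integer,
    # least-significant digit first; digits are -1, 0, or +1.
    digits = []
    while n != 0:
        if n % 2 == 0:
            digits.append(0)
            n //= 2
        else:
            r = 2 - (n % 4)      # +1 if n ≡ 1 (mod 4), else -1
            digits.append(r)
            n = (n - r) // 2
    return digits

# 7 = 0b111 needs two adders in binary; in CSD, 7 = 8 - 1 needs one subtractor.
print(csd(7))  # [-1, 0, 0, 1]
```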
  7. Iranmanesh V, Ahmad SM, Adnan WA, Yussof S, Arigbabu OA, Malallah FL
    ScientificWorldJournal, 2014;2014:381469.
    PMID: 25133227 DOI: 10.1155/2014/381469
    One of the main difficulties in designing an online signature verification (OSV) system is finding the most distinctive features with high discriminating capability, particularly given the high variability inherent in genuine handwritten signatures, coupled with the possibility of skilled forgeries that closely resemble the original counterparts. In this paper, we propose a systematic approach to online signature verification using a multilayer perceptron (MLP) on a subset of principal component analysis (PCA) features. The proposed approach illustrates a feature selection technique applied to the usually discarded information from the PCA computation, which can be significant in attaining reduced error rates. The experiment was performed using 4000 signature samples from the SIGMA database and yielded a false acceptance rate (FAR) of 7.4% and a false rejection rate (FRR) of 6.4%.
    Matched MeSH terms: Algorithms*
  8. Islam MJ, Reza AW, Kausar AS, Ramiah H
    ScientificWorldJournal, 2014;2014:306270.
    PMID: 25133220 DOI: 10.1155/2014/306270
    The advent of technology and the increasing use of wireless networks have led to the development of the Wireless Body Area Network (WBAN) to continuously monitor changes in physiological data in a cost-efficient manner. As numerous studies on wave propagation characterization have been conducted for intrabody communication, this study emphasizes wave propagation characterization between the control units (CUs) and the wireless access point (AP) in a hospital scenario. Ray tracing is a tool for predicting the rays that characterize wave propagation, but it requires substantial simulation time, especially when multiple transmitters are involved in transmitting physiological data in a realistic hospital environment. Therefore, this study develops an accelerated ray tracing method based on nearest-neighbor cells and prior knowledge of intersection techniques. Besides this, a red-black tree is used to store the objects in the hospital environment and provide a faster retrieval mechanism. To demonstrate the superiority of the method, a detailed complexity analysis and calculations of the reflection and transmission coefficients are also presented. The results show that the proposed method is about 1.51, 2.1, and 2.9 times faster than the Object Distribution Technique (ODT), Space Volumetric Partitioning (SVP), and Angular Z-Buffer (AZB) methods, respectively. To show the various effects on received power at 60 GHz, a few comparisons are made, and it is found that, on average, received-power attenuations of -9.44 dBm, -8.23 dBm, and -9.27 dBm should be considered when a human, the AP, and a CU move in the given hospital scenario.
    Matched MeSH terms: Algorithms*
  9. Ahmed AU, Islam MT, Ismail M, Kibria S, Arshad H
    ScientificWorldJournal, 2014;2014:253787.
    PMID: 25133214 DOI: 10.1155/2014/253787
    An artificial neural network (ANN) and affinity propagation (AP) algorithm based user categorization technique is presented. The proposed algorithm is designed for closed-access femtocell networks. The ANN is used for the user classification process, and the AP algorithm is used to optimize the ANN training process: AP selects the best possible training samples for a faster ANN training cycle. Users are distinguished by the difference in received signal strength at a multielement femtocell device. A previously developed directive microstrip antenna is used to configure the femtocell device. Simulation results show that, for a particular house pattern, the categorization technique without the AP algorithm requires 5 indoor users and 10 outdoor users to attain error-free operation. When the AP algorithm is integrated with the ANN, the system needs 60% fewer training samples, reducing the training time by up to 50%. This makes the femtocell more effective for closed-access operation.
    Matched MeSH terms: Algorithms*
  10. Yahaya Rashid AS, Ramli R, Mohamed Haris S, Alias A
    ScientificWorldJournal, 2014;2014:190214.
    PMID: 25101312 DOI: 10.1155/2014/190214
    The dynamic behavior of a body-in-white (BIW) structure has a significant influence on the noise, vibration, and harshness (NVH) and crashworthiness of a car. Therefore, by improving the dynamic characteristics of the BIW, problems and failures associated with resonance and fatigue can be prevented. The design objective is to improve the existing torsion and bending modes through structural optimization subjected to dynamic load, without compromising other factors such as the mass and stiffness of the structure. The natural frequencies of the design were modified by identifying and reinforcing the structure at critical locations. These crucial points are first identified by topology optimization using mass and natural frequencies as the design variables. The individual components obtained from this analysis then go through a size optimization step to find their target thicknesses, and the thicknesses of the affected regions of the components are modified accordingly. The results of both optimization steps suggest several design modifications that achieve the target vibration specifications without compromising the stiffness of the structure. A method combining both optimization approaches is proposed to improve the design modification process.
    Matched MeSH terms: Algorithms
  11. Dabbagh M, Lee SP
    ScientificWorldJournal, 2014;2014:737626.
    PMID: 24982987 DOI: 10.1155/2014/737626
    Due to budgetary deadlines and time-to-market constraints, it is essential to prioritize software requirements. The outcome of requirements prioritization is an ordering of the requirements that need to be considered first during the software development process. To achieve a high-quality software system, both functional and nonfunctional requirements must be taken into consideration during prioritization. Although several requirements prioritization methods have been proposed, no particular method or approach considers both functional and nonfunctional requirements during the prioritization stage. In this paper, we propose an approach that integrates the prioritization of functional and nonfunctional requirements; applying it produces two separate prioritized lists of functional and nonfunctional requirements. The effectiveness of the proposed approach has been evaluated through an empirical experiment comparing it with two state-of-the-art approaches, the analytic hierarchy process (AHP) and the hybrid assessment method (HAM). Results show that the proposed approach outperforms AHP and HAM in actual time consumption while preserving the quality of its results at a high level of agreement with the results produced by the other two approaches.
    Matched MeSH terms: Algorithms
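For context on the AHP baseline that entry 11 compares against, here is a sketch of the priority-vector step using the common geometric-mean approximation: pairwise judgments of relative importance are reduced to normalized priorities. The three hypothetical requirements and the judgment values below are invented for illustration.

```python
import numpy as np

# Pairwise comparison matrix over three hypothetical requirements R1-R3:
# A[i, j] = how much more important Ri is than Rj (invented judgments).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

gm = A.prod(axis=1) ** (1 / A.shape[0])   # row geometric means
priorities = gm / gm.sum()                # normalized priority vector

ranking = np.argsort(-priorities)          # most important first
print(ranking)  # [0 1 2]: R1 > R2 > R3 for these judgments
```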
  12. Aghabozorgi S, Ying Wah T, Herawan T, Jalab HA, Shaygan MA, Jalali A
    ScientificWorldJournal, 2014;2014:562194.
    PMID: 24982966 DOI: 10.1155/2014/562194
    Time series clustering is an important solution to various problems in numerous fields of research, including business, medical science, and finance. However, conventional clustering algorithms are not practical for time series data because they are essentially designed for static data, which results in poor clustering accuracy in several systems. In this paper, a new hybrid clustering algorithm is proposed based on similarity in shape of time series data. Time series data are first grouped into subclusters based on similarity in time. The subclusters are then merged using the k-medoids algorithm based on similarity in shape. This model makes two contributions: (1) it is more accurate than other conventional and hybrid approaches, and (2) it determines the similarity in shape among time series data with low complexity. To evaluate the accuracy of the proposed model, it is tested extensively using synthetic and real-world time series datasets.
    Matched MeSH terms: Algorithms*
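A minimal k-medoids pass like the merge step described in entry 12: points are assigned to the nearest medoid, then each cluster's medoid is re-picked as the member minimizing the total within-cluster distance. The toy "series", the Euclidean shape distance, and the deterministic initialization are illustrative, not the paper's configuration.

```python
import numpy as np

def k_medoids(D, k, iters=20):
    # D: symmetric pairwise distance matrix. Alternate assignment and
    # medoid update until the medoids stop changing.
    medoids = list(range(k))                     # simple deterministic init
    labels = np.argmin(D[:, medoids], axis=1)
    for _ in range(iters):
        new_medoids = []
        for c in range(k):
            members = np.where(labels == c)[0]
            costs = D[np.ix_(members, members)].sum(axis=1)
            new_medoids.append(int(members[np.argmin(costs)]))
        if new_medoids == medoids:
            break
        medoids = new_medoids
        labels = np.argmin(D[:, medoids], axis=1)
    return labels, medoids

# Two obvious groups of short "time series"; Euclidean distance matrix.
series = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.1],
                   [5.0, 5.0, 5.0], [5.1, 4.9, 5.0]])
D = np.linalg.norm(series[:, None, :] - series[None, :, :], axis=2)

labels, medoids = k_medoids(D, k=2)
print(labels)  # first two series in one cluster, last two in the other
```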
  13. Tahriri F, Dawal SZ, Taha Z
    ScientificWorldJournal, 2014;2014:505207.
    PMID: 24982962 DOI: 10.1155/2014/505207
    A new multiobjective dynamic fuzzy genetic algorithm is applied to solve a fuzzy mixed-model assembly line sequencing problem in which the primary goals are to simultaneously minimize the total makespan and the number of setups. Trapezoidal fuzzy numbers are used for variables such as operation and travelling time in order to generate results with higher accuracy that are representative of real-case data. An improved genetic algorithm called the fuzzy adaptive genetic algorithm (FAGA) is proposed to solve this optimization model. In establishing the FAGA, five dynamic fuzzy parameter controllers are devised, in which a fuzzy expert experience controller (FEEC) is integrated with an automatic learning dynamic fuzzy controller (ALDFC) technique. The enhanced algorithm dynamically adjusts the population size, number of generations, tournament candidates, crossover rate, and mutation rate, rather than using fixed control parameters. The main idea is to improve the performance and effectiveness of existing GAs by dynamic adjustment and control of these five parameters. Verification and validation of the dynamic fuzzy GA are carried out by developing test beds and testing on a multiobjective fuzzy mixed-model production assembly line sequencing optimization problem. The simulation results highlight that the proposed optimization algorithm is more efficient than the standard genetic algorithm on the mixed-model assembly line sequencing model.
    Matched MeSH terms: Algorithms*
  14. Kamangar S, Kalimuthu G, Badruddin IA, Badarudin A, Ahmed NJ, Khan TM
    ScientificWorldJournal, 2014;2014:354946.
    PMID: 25258722 DOI: 10.1155/2014/354946
    The present study deals with the functional severity of coronary artery stenosis assessed by the fractional flow reserve (FFR). The effects of different geometrical shapes of lesion on the diagnostic parameters are unknown. In this study, 3D computational simulation of blood flow through three different geometrical shapes of stenosis (triangular, elliptical, and trapezium) is carried out under steady and transient conditions for 70% (moderate), 80% (intermediate), and 90% (severe) area stenosis (AS). For a given percentage AS, the diagnostic parameters, which are derived from the pressure drop across the stenosis, vary with the geometrical shape: FFR is highest for the triangular shape and lowest for the trapezium shape. The pressure drop coefficient (CDP) is highest for the trapezium shape and lowest for the triangular model, whereas the LFC shows the opposite trend. From the clinical perspective, the relationship between percentage AS and FFR is linear and inverse in all three models. The FFR cut-off value of 0.75 was reached at 76.5% AS in the trapezium model, 79.5% AS in the elliptical model, and 82.7% AS in the triangular model. Thus, the functional severity of the stenosis may be misinterpreted in the region of 76.5%-82.7% AS across the different stenosis shapes.
    Matched MeSH terms: Algorithms*
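The FFR index used in entry 14 is measured clinically as the ratio of mean pressure distal to the stenosis to the proximal (aortic) pressure under hyperemia, with values below the 0.75 cut-off flagging a functionally severe lesion. A trivial sketch with illustrative pressures:

```python
def ffr(p_distal, p_aortic):
    # Fractional flow reserve as the distal-to-proximal pressure ratio.
    return p_distal / p_aortic

print(ffr(70.0, 100.0))         # 0.7: below the 0.75 cut-off, severe
print(ffr(88.0, 100.0) > 0.75)  # True: not functionally severe
```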
  15. Asghar A, Abdul Raman AA, Daud WM
    ScientificWorldJournal, 2014;2014:869120.
    PMID: 25258741 DOI: 10.1155/2014/869120
    In the present study, central composite design (CCD) and the Taguchi method were compared for Fenton oxidation. [Dye]ini, Dye:Fe(+2), H2O2:Fe(+2), and pH were identified as control variables, while COD and decolorization efficiency were selected as responses. An L9 orthogonal array and a face-centered CCD were used for the experimental design. Maximum decolorization of 99% and COD removal efficiency of 80% were obtained under optimum conditions. R-squared values of 0.97 and 0.95 for the CCD and Taguchi methods, respectively, indicate that both models are statistically significant and in good agreement with each other. Furthermore, Prob > F values of less than 0.0500 and the ANOVA results indicate a good fit of the selected model to the experimental results. Nevertheless, the possibility of ranking the input variables by percent contribution to the response value makes the Taguchi method a suitable approach for scrutinizing the operating parameters. In the present case, pH, with percent contributions of 87.62% and 66.2%, was ranked as the most significant factor. This finding of the Taguchi method was also verified by the 3D contour plots of the CCD. Therefore, from this comparative study, it is concluded that the Taguchi method, with 9 experimental runs and simple interaction plots, is a suitable alternative to CCD for several chemical engineering applications.
    Matched MeSH terms: Algorithms*
  16. Soleymani A, Nordin MJ, Sundararajan E
    ScientificWorldJournal, 2014;2014:536930.
    PMID: 25258724 DOI: 10.1155/2014/536930
    The rapid evolution of imaging and communication technologies has transformed images into a widespread data type. Different types of data, such as personal medical information, official correspondence, or governmental and military documents, are saved and transmitted in the form of images over public networks. Hence, a fast and secure cryptosystem is needed for high-resolution images. In this paper, a novel encryption scheme is presented for securing images based on the Arnold cat and Henon chaotic maps. The scheme uses the Arnold cat map for bit- and pixel-level permutations on plain and secret images, while the Henon map creates the secret images and specific parameters for the permutations. Both the encryption and decryption processes are explained, formulated, and graphically presented. The results of security analysis of five different images demonstrate the strength of the proposed cryptosystem against statistical, brute-force, and differential attacks. The evaluated running times for both the encryption and decryption processes guarantee that the cryptosystem can work effectively in real-time applications.
    Matched MeSH terms: Algorithms*
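One round of the Arnold cat map permutation used by the scheme in entry 16: each pixel (x, y) of an N x N image moves to ((x + y) mod N, (x + 2y) mod N). The map is invertible, and iterating it eventually returns the original image; the period depends on N. The 4 x 4 integer "image" below is illustrative.

```python
import numpy as np

def arnold_cat(img):
    # One application of the Arnold cat map to a square array.
    n = img.shape[0]
    out = np.empty_like(img)
    for x in range(n):
        for y in range(n):
            out[(x + y) % n, (x + 2 * y) % n] = img[x, y]
    return out

img = np.arange(16).reshape(4, 4)
scrambled = arnold_cat(img)

# For N = 4 the map has period 3: three applications restore the image.
assert np.array_equal(arnold_cat(arnold_cat(scrambled)), img)
```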
  17. Chin JJ, Tan SY, Heng SH, Phan RC
    ScientificWorldJournal, 2014;2014:170906.
    PMID: 25207333 DOI: 10.1155/2014/170906
    Security-mediated cryptography was first introduced by Boneh et al. in 2001. The main motivation behind security-mediated cryptography was the capability to allow instant revocation of a user's secret key by necessitating the cooperation of a security mediator in any given transaction. Subsequently, in 2003, Boneh et al. showed how to convert an RSA-based security-mediated encryption scheme from a traditional public key setting to an identity-based one, where certificates would no longer be required. Following these two pioneering papers, other cryptographic primitives that utilize a security-mediated approach began to surface. However, the security-mediated identity-based identification scheme (SM-IBI) was not introduced until Chin et al. in 2013, with a scheme built on bilinear pairings. In this paper, we improve on the efficiency results for SM-IBI schemes by proposing two schemes that are pairing-free and are based on well-studied complexity assumptions: the RSA and discrete logarithm assumptions.
    Matched MeSH terms: Algorithms*
  18. Ng H, Tan WH, Abdullah J, Tong HL
    ScientificWorldJournal, 2014;2014:376569.
    PMID: 25143972 DOI: 10.1155/2014/376569
    This paper describes the acquisition setup and development of a new gait database, MMUGait. The database consists of 82 subjects walking under normal conditions and 19 subjects walking with 11 covariate factors, captured under two views. The paper also proposes a multiview model-based gait recognition system with a joint-detection approach that performs well under different walking trajectories and covariate factors, including self-occluded or externally occluded silhouettes. In the proposed system, the process begins by enhancing the human silhouette to remove artifacts. Next, the width and height of the body are obtained. Subsequently, the joint angular trajectories are determined once the body joints are automatically detected. Lastly, the crotch height and step size of the walking subject are determined. The extracted features are smoothed with a Gaussian filter to eliminate the effect of outliers, normalized with linear scaling, and subjected to feature selection prior to classification. The classification experiments carried out on the MMUGait database were benchmarked against the SOTON Small DB from the University of Southampton. The results showed correct classification rates above 90% for all the databases, and the proposed approach outperforms other approaches on the SOTON Small DB in most cases.
    Matched MeSH terms: Algorithms
  19. Darzi S, Kiong TS, Islam MT, Ismail M, Kibria S, Salem B
    ScientificWorldJournal, 2014;2014:724639.
    PMID: 25147859 DOI: 10.1155/2014/724639
    Linear constraint minimum variance (LCMV) is one of the adaptive beamforming techniques commonly applied to cancel interfering signals and to steer a strong beam toward the desired signal through its computed weight vectors. However, the weights computed by LCMV are usually unable to form the radiation beam toward the target user precisely and are not good enough to reduce the interference by placing nulls at the interference sources. It is difficult to improve and optimize the LCMV beamforming technique through conventional empirical approaches. To solve this problem, artificial intelligence (AI) techniques are explored to enhance the LCMV beamforming ability. In this paper, particle swarm optimization (PSO), dynamic mutated artificial immune system (DM-AIS), and the gravitational search algorithm (GSA) are incorporated into the existing LCMV technique in order to improve its weights. The simulation results demonstrate that the received signal-to-interference-plus-noise ratio (SINR) of the target user can be significantly improved by integrating PSO, DM-AIS, and GSA into LCMV through suppression of interference in undesired directions. Furthermore, GSA proves a more effective technique for LCMV beamforming optimization than PSO. The algorithms were implemented in MATLAB.
    Matched MeSH terms: Algorithms*
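The closed-form LCMV weights that entry 19's PSO/DM-AIS/GSA variants aim to improve can be sketched for a uniform linear array: w = R^-1 C (C^H R^-1 C)^-1 f keeps unit gain toward the desired angle while forcing a null at the interferer. The array size, element spacing, angles, and powers below are assumed values for illustration.

```python
import numpy as np

def steering(theta_deg, n=8, d=0.5):
    # Steering vector of an n-element ULA with half-wavelength spacing.
    k = np.arange(n)
    return np.exp(-2j * np.pi * d * k * np.sin(np.radians(theta_deg)))

n = 8
a_sig = steering(10.0, n)      # desired user at 10 degrees
a_int = steering(-40.0, n)     # interferer at -40 degrees

# Covariance of interference + noise (unit-power interferer, -20 dB noise).
R = np.outer(a_int, a_int.conj()) + 0.01 * np.eye(n)

C = np.column_stack([a_sig, a_int])   # constraint matrix
f = np.array([1.0, 0.0])              # unit gain on signal, null on interferer

Ri_C = np.linalg.solve(R, C)
w = Ri_C @ np.linalg.solve(C.conj().T @ Ri_C, f)

print(abs(w.conj() @ a_sig))  # ~1: distortionless response to the user
print(abs(w.conj() @ a_int))  # ~0: interferer suppressed
```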
  20. Marto A, Hajihassani M, Armaghani DJ, Mohamad ET, Makhtar AM
    ScientificWorldJournal, 2014;2014:643715.
    PMID: 25147856 DOI: 10.1155/2014/643715
    Flyrock is one of the major disturbances induced by blasting and may cause severe damage to nearby structures. This phenomenon has to be precisely predicted and subsequently controlled through changes in the blast design to minimize the potential risk of blasting. The scope of this study is to predict blast-induced flyrock through a novel approach based on the combination of the imperialist competitive algorithm (ICA) and an artificial neural network (ANN). For this purpose, the parameters of 113 blasting operations were accurately recorded, and flyrock distances were measured for each operation. Sensitivity analysis determined the maximum charge per delay and the powder factor to be the most influential parameters on flyrock. In light of this analysis, two new empirical predictors were developed to predict flyrock distance. For comparison, a backpropagation (BP) ANN was also developed, and its results were compared with those of the proposed ICA-ANN model and the empirical predictors. The results clearly showed the superiority of the proposed ICA-ANN model over the BP-ANN model and the empirical approaches.
    Matched MeSH terms: Algorithms*