Composite service design modeling is an essential process in the service-oriented software development life cycle, in which candidate services, composite services, operations and their dependencies must be identified and specified before they are designed. However, systematic service-oriented design modeling for composite services is still in its infancy, as most existing approaches model atomic services only. For these reasons, a new method (ComSDM) is proposed in this work for modeling service-oriented design so as to increase reusability and decrease system complexity while keeping service composition considerations in mind. Furthermore, the ComSDM method provides a mathematical representation of the components of service-oriented design using graph theory to facilitate design quality measurement. To demonstrate that the ComSDM method is suitable for composite service design modeling of distributed embedded real-time systems as well as enterprise software development, it is applied in a smart home case study. The results of the case study not only confirm the applicability of ComSDM but also validate its complexity and reusability. This also guides future research on design quality measurement, such as using the ComSDM method to measure the quality of composite service design in service-oriented software systems.
Diagnostic radiology is a core and integral part of modern medicine, paving the way for primary care physicians in disease diagnosis, treatment and therapy management. All recent standard healthcare procedures have benefitted immensely from contemporary information technology, which has revolutionized the approaches to acquiring, storing and sharing diagnostic data for efficient and timely diagnosis of disease. The connected health network was introduced as an alternative to the ageing traditional healthcare system, improving hospital-physician connectivity and clinical collaboration. Undoubtedly, this modern approach to medicine has drastically improved healthcare, but at the expense of high computational cost and possible breaches of diagnostic privacy. Consequently, a number of cryptographic techniques have recently been applied to clinical applications, but the challenge of successfully encrypting both image and textual data persists. Furthermore, keeping the encryption-decryption time of medical datasets at a considerably lower computational cost without jeopardizing the required security strength of the encryption algorithm remains an outstanding issue. This study proposes a secured radiology-diagnostic data framework for the connected health network using high-performance GPU-accelerated Advanced Encryption Standard (AES). The study was evaluated with radiology image datasets consisting of brain MR and CT datasets obtained from the Department of Surgery, University of North Carolina, USA, and the Swedish National Infrastructure for Computing. Sample patients' notes from the University of North Carolina School of Medicine at Chapel Hill were also used to evaluate the framework's strength in encrypting and decrypting textual data in the form of medical reports.
Significantly, the framework not only accurately encrypts and decrypts medical image datasets, but also successfully encrypts and decrypts textual data in Microsoft Word, Microsoft Excel and Portable Document formats, the conventional formats for documenting medical records. Interestingly, the entire encryption and decryption procedure was achieved at a lower computational cost using regular hardware and software resources, without compromising either the quality of the decrypted data or the security level of the algorithms.
Developing a software program to manage data in a general practice setting is complicated. The Vision Integrated Medical System is an example of an integrated management system that was developed by general practitioners, within a general practice, to offer a user-friendly system with multitasking capabilities. The present report highlights the reasons behind the development of this system and how it can assist day-to-day practice.
The parallelisation of big data is emerging as an important framework for large-scale parallel data applications such as seismic data processing. Seismic datasets are so large and complex that traditional data processing software is incapable of dealing with them. For example, implementing parallel processing in seismic applications to improve processing speed is complex in nature. To overcome this issue, a simple technique that helps provide parallel processing for big data applications such as seismic algorithms is needed. In our framework, we used Apache Hadoop with its MapReduce function. All experiments were conducted on the RedHat CentOS platform. Finally, we studied the bottlenecks and improved the overall performance of the system for seismic algorithms (stochastic inversion).
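The map/shuffle/reduce pattern the abstract relies on can be mimicked in plain Python. The sketch below is an illustration of the MapReduce programming model only, not the paper's Hadoop pipeline or its stochastic inversion; trace stacking (averaging amplitudes at the same sample index) is chosen as a simple seismic-flavoured operation, and all data are invented:

```python
from collections import defaultdict

# Toy seismic "traces": (trace_id, list of amplitude samples).
traces = [
    ("t1", [0.2, -0.5, 0.9]),
    ("t2", [0.1, 0.4, -0.3]),
    ("t3", [0.8, -0.2, 0.6]),
]

def mapper(trace):
    """Emit (sample_index, amplitude) pairs, as a Hadoop mapper would."""
    trace_id, samples = trace
    return [(i, a) for i, a in enumerate(samples)]

def shuffle(mapped):
    """Group values by key, mimicking Hadoop's shuffle phase."""
    groups = defaultdict(list)
    for pairs in mapped:
        for key, value in pairs:
            groups[key].append(value)
    return groups

def reducer(key, values):
    """Stack (average) all amplitudes that share a sample index."""
    return key, sum(values) / len(values)

stacked = dict(reducer(k, v) for k, v in shuffle(map(mapper, traces)).items())
```

In a real Hadoop job the mapper and reducer run as separate distributed tasks and the shuffle is performed by the framework; this single-process version only shows the data flow.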
Due to budgetary deadlines and time-to-market constraints, it is essential to prioritize software requirements. The outcome of requirements prioritization is an ordering of the requirements to be considered first during the software development process. To achieve a high-quality software system, both functional and nonfunctional requirements must be taken into consideration during the prioritization process. Although several requirements prioritization methods have been proposed so far, no particular method or approach considers both functional and nonfunctional requirements during the prioritization stage. In this paper, we propose an approach that integrates the prioritization of functional and nonfunctional requirements. Applying the proposed approach produces two separate prioritized lists, one of functional and one of nonfunctional requirements. The effectiveness of the proposed approach has been evaluated through an empirical experiment comparing it with two state-of-the-art approaches, the analytic hierarchy process (AHP) and the hybrid assessment method (HAM). Results show that our proposed approach outperforms AHP and HAM in terms of actual time consumption while preserving the quality of its results, which show a high level of agreement with those produced by the other two approaches.
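The AHP baseline mentioned above derives priorities from pairwise comparisons. A minimal sketch, using the standard geometric-mean approximation of AHP's principal eigenvector (the comparison matrix values are hypothetical and the paper's own procedure is not shown here):

```python
import math

# Hypothetical Saaty-scale pairwise comparison matrix for three
# requirements R1..R3: A[i][j] = how much more important R_i is than R_j.
A = [
    [1.0,   3.0, 5.0],
    [1/3.0, 1.0, 2.0],
    [1/5.0, 1/2.0, 1.0],
]

def ahp_weights(matrix):
    """Approximate AHP priorities via the row geometric-mean method,
    normalised so the weights sum to 1."""
    n = len(matrix)
    gm = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gm)
    return [g / total for g in gm]

weights = ahp_weights(A)                      # priority of each requirement
ranking = sorted(range(len(weights)), key=lambda i: -weights[i])
```

The geometric-mean method is a common stand-in for the exact eigenvector computation when the matrix is nearly consistent.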
The vector evaluated particle swarm optimisation (VEPSO) algorithm was previously improved by incorporating nondominated solutions for solving multiobjective optimisation problems. However, the obtained solutions neither converged close to the Pareto front nor distributed evenly over it. Therefore, in this study, the concept of multiple nondominated leaders is incorporated to further improve the VEPSO algorithm: multiple nondominated solutions that are best at their respective objective functions are used to guide particles in finding optimal solutions. The improved VEPSO is evaluated by the number of nondominated solutions found, generational distance, spread, and hypervolume. The results of the conducted experiments show that the proposed VEPSO significantly improves on the existing VEPSO algorithms.
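Of the quality indicators listed above, generational distance is the simplest to state in code. The sketch below implements the common p = 2 formulation (average distance from each found solution to its nearest true Pareto-front point; lower is better). It is a generic metric implementation, not the paper's evaluation harness:

```python
import math

def generational_distance(found, pareto):
    """Generational distance (p = 2): sqrt(sum of squared nearest-point
    distances) divided by the number of found solutions."""
    squared = []
    for f in found:
        nearest = min(math.dist(f, p) for p in pareto)
        squared.append(nearest ** 2)
    return math.sqrt(sum(squared)) / len(found)
```

A result of 0 means every obtained solution lies exactly on the reference Pareto front.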
Voting is an important operation in the multichannel computation paradigm and in the realization of ultrareliable, real-time control systems, where it arbitrates among the results of N redundant variants. These systems include N-modular redundant (NMR) hardware systems and diversely designed software systems based on N-version programming (NVP). Depending on the characteristics of the application and the type of voter selected, voting algorithms can be implemented in either hardware or software. In this paper, a novel voting algorithm is introduced for real-time fault-tolerant control systems, appropriate for applications in which N is large. Its behavior was then implemented in software under different error-injection scenarios on the system inputs. The evaluation results, analyzed through plots and statistical computations, demonstrate that this novel algorithm does not share the limitations of some popular voting algorithms, such as the median and weighted voters; moreover, it increases the reliability and availability of the system significantly, by 2489.7% and 626.74%, respectively, in the best case, and by 3.84% and 1.55%, respectively, in the worst case.
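For context, the classical median voter that the abstract compares against can be sketched in a few lines. This is the textbook baseline, not the paper's novel algorithm:

```python
def median_voter(readings):
    """Fault-masking median voter for an NMR system: sort the N redundant
    module outputs and return the middle value. With fewer than half the
    modules faulty, the median is bounded by correct outputs."""
    ordered = sorted(readings)
    n = len(ordered)
    if n % 2 == 1:
        return ordered[n // 2]
    # Even N: average the two middle values.
    return (ordered[n // 2 - 1] + ordered[n // 2]) / 2
```

Note how a single wildly faulty reading (e.g. 99.9 among values near 10) is masked rather than averaged in.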
The Unified Modeling Language (UML) is the most popular and widely used object-oriented modelling language in the IT industry. This study investigates the extent to which UML can be extended to model crosscutting concerns (aspects) in support of AspectJ. Through a comprehensive literature review, we identify and extensively examine the available Aspect-Oriented UML modelling approaches, finding that the existing Aspect-Oriented Design Modelling approaches using UML do not provide a framework for a comprehensive aspectual UML modelling approach, and that adequate Aspect-Oriented tool support is lacking. This study also proposes a set of aspectual UML semantic rules and attempts to generate AspectJ pseudocode from UML diagrams. The proposed aspectual UML modelling approach is formally evaluated using a focus group to test six hypotheses regarding performance; a "good design" criteria-based evaluation to assess the quality of the design; and an AspectJ-based evaluation as a reference measurement-based evaluation. The results of the focus group evaluation confirm all the hypotheses put forward regarding the proposed approach. The proposed approach provides a comprehensive set of aspectual UML structural and behavioral diagrams, designed and implemented based on a comprehensive and detailed set of AspectJ programming constructs.
New techniques are presented for Delaunay triangular mesh generation and element optimisation. Sample points for triangulation are generated through mapping, a new approach, and are then triangulated by the conventional Delaunay method. The resulting triangular elements are optimised by addition, removal and relocation of the mapped sample points (element nodes). The proposed techniques (generation of sample points through mapping for Delaunay triangulation, and mesh optimisation) are demonstrated using Mathematica software. Simulation results show that the proposed techniques are able to form meshes consisting of triangular elements with an aspect ratio of less than 2 and a minimum skewness of more than 45°.
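The quality target quoted above depends on how aspect ratio is defined. One common definition is the circumradius over twice the inradius, which equals 1 for an equilateral triangle and grows as an element degenerates; the sketch below uses that definition, which may differ from the paper's exact measure:

```python
import math

def aspect_ratio(p1, p2, p3):
    """Triangle aspect ratio as circumradius / (2 * inradius).
    Equals 1.0 for an equilateral triangle; larger means worse shape."""
    a = math.dist(p2, p3)
    b = math.dist(p1, p3)
    c = math.dist(p1, p2)
    s = (a + b + c) / 2.0                               # semi-perimeter
    area = math.sqrt(s * (s - a) * (s - b) * (s - c))   # Heron's formula
    circumradius = a * b * c / (4.0 * area)
    inradius = area / s
    return circumradius / (2.0 * inradius)
```

A mesh optimiser would evaluate this for every element and relocate or remove nodes of elements exceeding the threshold.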
In this paper, we examine the effectiveness of the quarter-sweep iteration concept on the conjugate gradient normal residual (CGNR) iterative method, using composite Simpson's (CS) and finite difference (FD) discretization schemes, in solving Fredholm integro-differential equations. For comparison purposes, the Gauss-Seidel (GS) method and the standard (full-sweep) and half-sweep CGNR methods, namely FSCGNR and HSCGNR, are also presented. To validate the efficacy of the proposed method, several analyses were carried out, such as computational complexity and percentage reduction, on the proposed and existing methods.
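The composite Simpson scheme underlying the CS discretization is standard quadrature; a minimal implementation (generic, not the paper's quarter-sweep formulation) is:

```python
def composite_simpson(f, a, b, n):
    """Composite Simpson's rule over [a, b] with n (even) subintervals.
    Interior points alternate weights 4, 2, ...; endpoints weight 1."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3.0
```

Simpson's rule is exact for polynomials up to degree 3, so even a coarse grid reproduces the integral of x² on [0, 1] to machine precision.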
A description is given of the numerical integration method for the calculation of the mean kidney dose for a Co-57 external radiation source. Based on this theory, a computer program was written. Initial calculation of the kidney volume shows that the method has a good accuracy. For the mean kidney dose, this method gives a satisfactory result, since the calculated value lies within the acceptable range of the central axis depth dose.
For medical applications, the efficiency and transmission distance of wireless power transfer (WPT) are always the main concerns. Research has shown that impedance matching is one of the critical factors in dealing with this problem; however, little work has been done that takes both the source and load sides into consideration. Matching both sides is crucial in achieving optimum overall performance, and the present work proposes a circuit model analysis for design and implementation. The proposed technique was validated against experiment and software simulation. Results showed an improvement in transmission distance of up to 6 times, and efficiency at this transmission distance improved by up to 7 times compared with the impedance-mismatched system. The system demonstrated near-constant transfer efficiency over an operating range of 2 cm to 12 cm.
In recent years, many research works have been published using speech-related features for speech emotion recognition; recent studies, however, show that there is a strong correlation between emotional states and glottal features. In this work, Mel-frequency cepstral coefficients (MFCCs), linear predictive cepstral coefficients (LPCCs), perceptual linear predictive (PLP) features, gammatone filter outputs, timbral texture features, stationary wavelet transform based timbral texture features, and relative wavelet packet energy and entropy features were extracted from emotional speech (ES) signals and their glottal waveforms (GW). Particle swarm optimization based clustering (PSOC) and wrapper-based particle swarm optimization (WPSO) were proposed to enhance the discerning ability of the features and to select the discriminating features, respectively. Three different emotional speech databases were used to gauge the proposed method, and an extreme learning machine (ELM) was employed to classify the different types of emotions. Different experiments were conducted, and the results show that the proposed method significantly improves speech emotion recognition performance compared to previous works published in the literature.
This paper deals with the interface-relevant activity of a vehicle integrated intelligent safety system (ISS) that includes an airbag deployment decision system (ADDS) and a tire pressure monitoring system (TPMS). A program is developed in LabWindows/CVI, using C for prototype implementation. The prototype is primarily concerned with the interconnection between hardware objects such as a load cell, web camera, accelerometer, TPM tire module and receiver module, DAQ card, CPU card and a touch screen. Several safety subsystems, including image processing, weight sensing and crash detection systems, are integrated, and their outputs are combined to yield intelligent decisions regarding airbag deployment. The integrated safety system also monitors tire pressure and temperature. Testing and experimentation with this ISS suggests that the system is unique, robust, intelligent, and appropriate for in-vehicle applications.
Complex engineering systems are usually designed to last for many years and will face many uncertainties in the future. Hence the design and deployment of these systems should not be based on a single scenario, but should incorporate flexibility. Flexibility can be incorporated in system architectures in the form of options that can be exercised in the future when new information is available. Incorporating flexibility comes, however, at a cost. To evaluate whether this cost is worth the investment, a real options analysis can be carried out. This approach is demonstrated through analysis of a case study of a previously developed static system-of-systems for maritime domain protection in the Straits of Malacca. This article presents a framework for dynamic strategic planning of engineering systems using real options analysis and demonstrates that flexibility adds considerable value over a static design. In addition, it is shown that Monte Carlo analysis and genetic algorithms can be successfully combined to find solutions in a case with a very large number of possible futures and system designs.
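The core real-options idea, valuing the right (not the obligation) to expand a system under uncertain demand by Monte Carlo simulation, can be sketched as follows. Every number here (capacities, costs, demand distribution) is hypothetical and chosen only for illustration; the paper's maritime case is far richer:

```python
import random
import statistics

random.seed(42)

CAPACITY_STATIC = 100      # static design sized for mean demand
OPTION_PREMIUM = 5         # paid up front for the right to expand later
EXPANSION_COST = 20        # paid only if the option is exercised
EXPANDED_CAPACITY = 160
UNIT_MARGIN = 1.0

def payoff_static(demand):
    return UNIT_MARGIN * min(demand, CAPACITY_STATIC)

def payoff_flexible(demand):
    base = UNIT_MARGIN * min(demand, CAPACITY_STATIC)
    expanded = UNIT_MARGIN * min(demand, EXPANDED_CAPACITY) - EXPANSION_COST
    # Exercise the expansion option only in scenarios where it pays off.
    return max(base, expanded) - OPTION_PREMIUM

# Monte Carlo over uncertain future demand.
demands = [max(random.gauss(100, 40), 0.0) for _ in range(10_000)]
v_static = statistics.mean(payoff_static(d) for d in demands)
v_flex = statistics.mean(payoff_flexible(d) for d in demands)
option_value = v_flex - v_static   # value added by flexibility, net of premium
```

If `option_value` is positive, the premium for flexibility is worth paying under the assumed uncertainty; a genetic algorithm could then search over many such candidate architectures.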
Nowadays, applications of moving-object tracking are commonly used in various areas, especially in computer vision. Many tracking algorithms have been introduced, and they are divided into three groups: generative trackers, discriminative trackers and hybrid trackers. One such method is the Tracking-Learning-Detection (TLD) framework, an example of a hybrid tracker in which the generative and the discriminative trackers are combined. In TLD, the detector consists of three stages: patch variance, an ensemble classifier and a K-Nearest Neighbor classifier. In the second stage, the ensemble classifier depends on simple pixel comparisons and is hence likely to fail to offer a good generalization of the appearances of the target object in the detection process. In this paper, an Online Sequential Extreme Learning Machine (OS-ELM) was used to replace the ensemble classifier in the TLD framework. In addition, different types of Haar-like features were used for the feature extraction process instead of using raw pixel values as the features. The objectives of this study are to improve the classifier in the second stage of the detector in the TLD framework by using Haar-like features as input to the classifier, and to obtain a more generalized detector in the TLD framework by using an OS-ELM-based detector. The results showed that the proposed method performs better on Pedestrian 1 in terms of F-measure and also offers good performance in terms of precision in four out of six
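Haar-like features of the kind the abstract substitutes for raw pixels are conventionally computed with an integral image (summed-area table), so any rectangle sum costs four lookups. The sketch below shows that standard trick and a simple two-rectangle feature; it is a generic illustration, not the paper's feature set:

```python
def integral_image(img):
    """Summed-area table with a zero border:
    ii[y][x] = sum of img over rows < y and columns < x."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the w-by-h rectangle with top-left corner (x, y),
    using four integral-image lookups."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def haar_two_rect_horizontal(ii, x, y, w, h):
    """Two-rectangle Haar-like feature: left-half sum minus right-half sum."""
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)
```

Responses like these, computed at many positions and scales, would form the input vector fed to the OS-ELM classifier.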
The Space Vector Pulse Width Modulation (SVPWM) method is widely used as a modulation technique to drive a three-phase inverter. It is an advanced, computationally intensive pulse width modulation (PWM) algorithm for the three-phase voltage source inverter. Compared with other PWM techniques, SVPWM is easier to implement and is therefore the most preferred technique. A mathematical model for SVPWM was developed using MATLAB/Simulink software. In this paper, an interface between MATLAB/Simulink and the three-phase inverter using an Arduino Uno microcontroller is proposed, in which the Arduino Uno generates the SVPWM signals for a Permanent Magnet Synchronous Motor (PMSM). This work consists of software and hardware implementations. Simulation was done in MATLAB/Simulink to verify the effectiveness of the system and to measure the percentage of Total Harmonic Distortion (THD). The results show that the SVPWM technique is able to drive the three-phase inverter with the Arduino Uno.
A new method to construct the distinct Hamiltonian circuits in complete graphs, called the Half Butterfly Method, is presented. The Half Butterfly Method uses the concept of isomorphism in developing the distinct Hamiltonian circuits. Some theoretical results developed in the course of constructing this method are also presented.
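For reference, the count being constructed is the standard combinatorial fact that K_n has (n-1)!/2 distinct Hamiltonian circuits when rotations and reflections are identified. The brute-force enumeration below verifies that count; it is an illustrative check, not the Half Butterfly Method itself:

```python
from itertools import permutations

def distinct_hamiltonian_circuits(n):
    """Enumerate the distinct Hamiltonian circuits of K_n, treating
    rotations and reflections of a circuit as the same circuit.
    Fixing vertex 0 first removes rotations; taking the lexicographic
    minimum of a tour and its reversal removes reflections."""
    seen = set()
    for perm in permutations(range(1, n)):
        forward = (0,) + perm
        backward = (0,) + tuple(reversed(perm))
        seen.add(min(forward, backward))
    return seen
```

For n = 4 this yields 3 circuits and for n = 5 it yields 12, matching (n-1)!/2.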
Accounting for as much as 25% to 35% of the development cost, software testing is an integral part of the software development lifecycle. Despite its importance, current software testing practice is still based on highly manual processes, from the generation of test cases (i.e. from specifications) up to the actual execution of the tests. These manually generated tests are sometimes executed using ad hoc approaches, typically requiring the construction of a test driver for the particular application under test. In addition, test engineers are under pressure to test ever-increasing amounts of code in order to meet market demand for more software functionality. While helpful testing tools and research prototypes have proliferated, many of them do not provide the level of abstraction and automation required by test engineers. To address some of the aforementioned issues, an automated testing tool called SFIT was developed based on Java® technology. This paper describes the development, implementation and evaluation of SFIT. Two case studies, involving the robustness assessment of an adder module and of a Linda-based distributed shared memory implementation, are described to demonstrate the applicability of SFIT as a helpful automated testing tool.
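The robustness-assessment idea behind the adder case study, feeding a module invalid or boundary inputs and observing whether it survives, can be sketched generically. Everything here (`adder`, the fault set, the failure-rate metric) is invented for illustration and is not SFIT's actual API or methodology:

```python
import random

def adder(a, b):
    """Hypothetical module under test: plain integer addition."""
    return a + b

def inject_faults(func, valid_cases, trials=100, seed=1):
    """Minimal robustness probe: randomly replace one valid argument
    with a boundary or invalid value and record the fraction of calls
    that raise an exception."""
    random.seed(seed)
    faulty_values = [None, "x", float("nan"), 2 ** 63, -(2 ** 63)]
    failures = 0
    for _ in range(trials):
        a, b = random.choice(valid_cases)
        if random.random() < 0.5:
            a = random.choice(faulty_values)
        else:
            b = random.choice(faulty_values)
        try:
            func(a, b)
        except Exception:
            failures += 1
    return failures / trials

failure_rate = inject_faults(adder, [(1, 2), (3, 4)])
```

A real tool would additionally check result correctness and classify failure modes, not just exceptions, but the fault-injection loop is the common core.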