This paper describes the development of a two-point implicit code in the form of fifth-order Block Backward Differentiation Formulas (BBDF(5)) for solving first-order stiff Ordinary Differential Equations (ODEs). The method computes approximate solutions at two points simultaneously within an equidistant block. Numerical results are presented to compare the efficiency of the developed BBDF(5) with that of the classical one-point Backward Differentiation Formulas (BDF). The results indicate that BBDF(5) outperforms BDF in terms of total number of steps, accuracy, and computational time.
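The implicit character of BDF-type methods can be illustrated with a minimal sketch of the first-order member of the family, backward Euler. This is not the paper's BBDF(5) scheme (which solves for two points per block with a fifth-order formula); it only shows why an implicit step, solved by Newton iteration, stays stable on stiff problems where explicit steps blow up. The test problem and step counts are illustrative.

```python
import numpy as np

def backward_euler(f, dfdy, y0, t0, t1, n_steps):
    """First-order BDF (backward Euler): each step solves the implicit
    equation y_{k+1} = y_k + h*f(t_{k+1}, y_{k+1}) by Newton iteration."""
    h = (t1 - t0) / n_steps
    t, y = t0, float(y0)
    for _ in range(n_steps):
        t_next = t + h
        y_next = y  # initial Newton guess: previous value
        for _ in range(50):
            g = y_next - y - h * f(t_next, y_next)       # implicit residual
            y_next -= g / (1.0 - h * dfdy(t_next, y_next))  # Newton update
            if abs(g) < 1e-12:
                break
        t, y = t_next, y_next
    return y

# Illustrative stiff test problem: y' = -1000*(y - cos(t)), y(0) = 0,
# whose solution rapidly relaxes onto the slow manifold y ~ cos(t).
f = lambda t, y: -1000.0 * (y - np.cos(t))
dfdy = lambda t, y: -1000.0
approx = backward_euler(f, dfdy, 0.0, 0.0, 1.0, 100)
```

With h = 0.01 the explicit Euler method would diverge here (|1 + h*(-1000)| = 9 > 1), while the implicit step tracks the solution; higher-order BDF and block BDF formulas extend this idea with multistep history.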
Musculoskeletal disorders are commonly reported among computer users. This study explored whether these disorders can be reduced by the provision of ergonomics education.
Automatic vehicle license plate recognition is an essential part of intelligent vehicle access control and monitoring systems. With the increasing number of vehicles, it is important that an effective real-time system for automated license plate recognition is developed. Computer vision techniques are typically used for this task. However, it remains a challenging problem, as both high accuracy and low processing time are required in such a system. Here, we propose a method for license plate recognition that seeks to find a balance between these two requirements. The proposed method consists of two stages: detection and recognition. In the detection stage, the image is processed so that a region of interest is identified. In the recognition stage, features are extracted from the region of interest using the histogram of oriented gradients method. These features are then used to train an artificial neural network to identify characters in the license plate. Experimental results show that the proposed method achieves a high level of accuracy as well as low processing time when compared to existing methods, indicating that it is suitable for real-time applications.
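The feature-extraction step of the recognition stage can be sketched with a minimal histogram-of-oriented-gradients computation. This is an illustrative simplification, not the paper's exact pipeline: the cell size, bin count, lack of block normalization, and the random test patch are all assumptions.

```python
import numpy as np

def hog_features(img, n_bins=9, cell=8):
    """Minimal HOG sketch: gradient magnitude-weighted orientation
    histograms over non-overlapping cells, concatenated into one vector."""
    img = img.astype(float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]   # central differences, x
    gy[1:-1, :] = img[2:, :] - img[:-2, :]   # central differences, y
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned orientation
    h, w = img.shape
    feats = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            m = mag[i:i + cell, j:j + cell].ravel()
            a = ang[i:i + cell, j:j + cell].ravel()
            hist, _ = np.histogram(a, bins=n_bins, range=(0, 180), weights=m)
            feats.append(hist)
    return np.concatenate(feats)

# A hypothetical 16x16 character patch yields 2x2 cells * 9 bins = 36 features,
# which would then be fed to the neural-network classifier.
patch = np.random.rand(16, 16)
vec = hog_features(patch)
```

The resulting fixed-length descriptor is what makes a simple feed-forward network practical for per-character classification.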
Waste residues and acidic effluents released into the local surroundings after the processing of E-waste pose serious environmental threats and potential risks to human health. Only limited research and information are available on the sustainable management of waste residues generated after resource recovery from E-waste components. In the present study, manual processing of obsolete computers (keyboard, monitor, CPU, and mouse) and chemical leaching of waste printed circuit boards (WPCBs) (motherboard, hard drive, DVD drive, and power supply) were performed for urban mining. The toxicity characteristics of typical pollutants in the WPCB residues (after chemical leaching) were studied using the toxicity characteristic leaching procedure (TCLP) test. Manual dismantling techniques resulted in an efficient urban mining concept with an overall average profit estimation of INR 2513.73/US$ 34.59. Chemical leaching of the WPCBs showed high metal leaching concentrations, such as Cu (229662 ± 575.3 mg/kg) and Pb (36785.67 ± 13.07 mg/kg), in the motherboard after stripping the epoxy coating. The toxicity test revealed that the concentrations of Cu (245.746 ± 0.016 mg/l) in the treated waste residue, and of Cu (430.746 ± 0.0015 mg/l) and Pb (182.09 ± 0.0035 mg/l) in the non-treated waste residue, exceeded the threshold limits. The concentrations of the other elements (As, Cd, Co, Cr, Ag, Mn, Zn, Ni, Fe, Se, and In) were within the permissible limits. Hence, the waste residue is non-hazardous except for Cu and Pb. Stripping the epoxy coating off the WPCBs enhances the metal leaching concentrations. The study highlights that efficient and appropriate E-waste urban mining has immense potential for converting waste scrap into secondary resources. It also emphasizes that the final processed waste residue (often left unattended or discarded due to a lack of appropriate skill and technology) can be exploited for value-added materials.
Fused deposition modelling (FDM) opens new possibilities across industries and helps to produce complex products, whether prototypes or finished parts. However, final products require high surface quality, which also contributes to better mechanical properties. The main purpose of this research was to determine the influence of computer numerical control (CNC) machining on surface quality and to identify the average surface roughness (Ra) and average peak-to-valley height (Rz) of specimens printed and machined in various build orientations. Samples were printed and then machined to investigate the effects of machining on FDM products and to compare the surfaces produced by the two processes. In particular, block and complex specimens were printed in different build orientations, with all other parameters kept constant, to understand the effect of orientation on surface smoothness. Wide-ranging values of Ra and Rz were found in both processes for each profile due to their different features. The Ra values for the block, printed, and machined samples were 21, 91, and 52, respectively, and the Rz values followed the same trend as the Ra values in all samples. These results indicate that the horizontal surfaces yielded the best quality compared to the perpendicular and vertical specimens. Moreover, machining showed a strong influence on the thermoplastic, with machined samples exhibiting smoother surfaces. In brief, this research showed that build orientation has a great effect on surface texture in both processes.
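The two roughness measures can be stated concretely. Ra is the arithmetic mean of absolute deviations of the profile from its mean line; Rz is taken here as a simple peak-to-valley height over one sampling length, a common simplified form (the study's instrument may use the standardized multi-sampling-length definition). The profile values below are hypothetical.

```python
import numpy as np

def ra_rz(profile):
    """Ra: arithmetic mean of |deviation| from the mean line.
    Rz: peak-to-valley height over one sampling length (simplified form)."""
    dev = np.asarray(profile, dtype=float)
    dev = dev - dev.mean()
    ra = np.mean(np.abs(dev))
    rz = dev.max() - dev.min()
    return ra, rz

# Illustrative profile heights (hypothetical units)
ra, rz = ra_rz([0.0, 2.0, 0.0, -2.0])
```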
The need for power-efficient devices, such as smart sensor nodes, mobile devices, and portable digital gadgets, is markedly increasing as these devices become common in daily life. Such devices demand energy-efficient cache memory based on Static Random-Access Memory (SRAM) with enhanced speed, performance, and stability for on-chip data processing and faster computation. This paper presents an energy-efficient and variability-resilient 11T (E2VR11T) SRAM cell designed with a novel Data-Aware Read-Write Assist (DARWA) technique. The E2VR11T cell comprises 11 transistors and operates with a single-ended read circuit and a dynamic differential write circuit. Simulation results in a 45 nm CMOS technology show 71.63% and 58.77% lower read energy than the ST9T and LP10T cells, and 28.25% and 51.79% lower write energy than the S8T and LP10T cells, respectively. Leakage power is reduced by 56.32% and 40.90% compared to the ST9T and LP10T cells. The read static noise margin (RSNM) is improved by 1.94× and 0.18×, and the write noise margin (WNM) by 19.57% and 8.70%, against the C6T and S8T cells. A variability investigation using Monte Carlo simulation on 5000 samples confirms the robustness and variability resilience of the proposed cell. The improved overall performance of the proposed E2VR11T cell makes it suitable for low-power applications.
The well-known geostatistical variance-reduction method is commonly used to determine the optimal rain gauge network. The main problem in this method is selecting the best semivariogram model to be used in estimating the variance; an optimal choice of semivariogram model is essential for a good data evaluation process. Three semivariogram models, Spherical, Gaussian, and Exponential, are used and their performances compared in this study. A cross-validation technique is applied to compute the errors of the semivariograms. Rainfall data for the period 1975-2008 from the 84 existing rain gauge stations covering the state of Johor are used. The results show that the Exponential model is the best semivariogram model, and it is therefore chosen to determine the optimal number and locations of rain gauge stations.
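The three candidate models have standard closed forms in terms of a nugget c0, sill c, and range a. The sketch below fits nothing; it only compares each model's values against an empirical semivariogram, with hypothetical lag/semivariance values and parameters standing in for the Johor rainfall data.

```python
import numpy as np

# Standard semivariogram models (nugget c0, partial sill c, range a).
def spherical(h, c0, c, a):
    g = np.where(h <= a, c0 + c * (1.5 * h / a - 0.5 * (h / a) ** 3), c0 + c)
    return np.where(h == 0, 0.0, g)

def gaussian(h, c0, c, a):
    return np.where(h == 0, 0.0, c0 + c * (1 - np.exp(-(h / a) ** 2)))

def exponential(h, c0, c, a):
    return np.where(h == 0, 0.0, c0 + c * (1 - np.exp(-h / a)))

# Hypothetical empirical semivariogram (lag distances and semivariances)
lags = np.array([5.0, 10.0, 20.0, 40.0, 80.0])
emp = np.array([12.0, 21.0, 33.0, 42.0, 46.0])
params = (2.0, 45.0, 50.0)  # assumed (c0, c, a) for the sketch

for name, model in [("spherical", spherical), ("gaussian", gaussian),
                    ("exponential", exponential)]:
    rmse = np.sqrt(np.mean((model(lags, *params) - emp) ** 2))
    print(f"{name}: RMSE = {rmse:.2f}")
```

In the study proper, cross-validation (predicting each station from the others) rather than a direct fit is used to score the models; the closed forms above are what get plugged into that procedure.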
Transmission opportunity (TXOP) is a key factor in enabling efficient channel bandwidth utilization over wireless campus networks (WCN) for interactive multimedia (IMM) applications. It facilitates resource allocation by allowing multiple packets of the same category to be transmitted until the allocated time expires. The IEEE 802.11e standard defines static TXOP limits for the various categories of IMM traffic. Due to the variation of traffic load in a WCN, static TXOP limits are not sufficient to guarantee quality of service (QoS) for IMM traffic flows. To address this issue, several existing works allocate TXOP limits dynamically based on the current associated queue size and pre-set threshold values. However, these works do not take all medium access control (MAC) overheads into account when estimating the current queue size, which is required for dynamic TXOP limit allocation. Failing to consider MAC overheads appropriately results in inaccurate queue size estimation and, in turn, inappropriate allocation of dynamic TXOP limits. In this article, an enhanced dynamic TXOP (EDTXOP) scheme is proposed that takes all MAC overheads into account while estimating the current queue size, thereby allocating appropriate dynamic TXOP limits within the pre-set threshold values. The article also presents an analytical estimation of the EDTXOP scheme to compute the dynamic TXOP limits for the current high-priority traffic queues. Simulations were carried out by varying the traffic load in terms of packet size and packet arrival rate. The results show that the proposed EDTXOP scheme achieves overall performance gains in the ranges of 4.41%-8.16%, 8.72%-11.15%, 14.43%-32%, and 26.21%-50.85% for throughput, PDR, average ETE delay, and average jitter, respectively, compared to the existing work, thus offering a better TXOP limit allocation solution.
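The core idea of overhead-aware dynamic TXOP allocation can be sketched as follows. This is not the paper's analytical model: the per-packet overhead terms, all default parameter values, and the threshold are hypothetical placeholders; the sketch only shows the shape of the computation (queue airtime including MAC overheads, capped at a pre-set threshold).

```python
def dynamic_txop_us(n_packets, payload_bytes, phy_rate_mbps,
                    mac_overhead_bytes=36, sifs_us=16, ack_us=44,
                    txop_threshold_us=8160):
    """Illustrative dynamic-TXOP estimate: airtime to drain the current
    queue, with per-packet MAC overheads included, capped at a threshold.
    All default values are hypothetical, not taken from the paper."""
    per_packet_us = ((payload_bytes + mac_overhead_bytes) * 8 / phy_rate_mbps
                     + sifs_us + ack_us)          # frame airtime + MAC gaps
    needed_us = n_packets * per_packet_us
    return min(needed_us, txop_threshold_us)      # never exceed the threshold

# e.g. 10 queued 1500-byte packets at an assumed 54 Mbit/s PHY rate
limit = dynamic_txop_us(10, 1500, 54.0)
```

Omitting the `mac_overhead_bytes`, `sifs_us`, and `ack_us` terms is exactly the underestimation the EDTXOP scheme sets out to correct.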
This paper presents the evaluation of integrated partial match query in Geographic Information Retrieval (GIR). To facilitate the evaluation, Kuala Lumpur tourism-related data is used as the test collection and is stored in SuperWeb, a map server. The map server is then customized to enhance its query capability so that it recognizes partial or case-sensitive word matches across layers of spatial data. Query keywords are tested using the system, and the results are evaluated through experiments on sample data. The findings show that integrated partial match query gives tourists more flexibility in determining search results.
Elliptic curve cryptosystems (ECC) provide better security per key bit than the RSA cryptosystem. For this reason, ECC is projected to see more practical use than RSA. In ECC, scalar multiplication (or point multiplication) is the dominant operation: computing nP from a point P on an elliptic curve, where n is an integer and nP is the point resulting from adding P + P + ... + P, n times. For practical use, it is very important to improve the efficiency of scalar multiplication. Solinas (1997) proposed the τ-adic Non-Adjacent Form (τ-NAF), one of the most efficient algorithms for computing scalar multiplications on Anomalous Binary curves. In this paper, we give a new property (Theorem 1.2) of the τ-NAF(n) representation for every length l. This is useful for evaluating the maximum and minimum norms occurring among all length-l elements of Z(τ). We also propose a new cryptographic method based on τ-NAF that randomizes the multiplier n to nr, an element of Z(τ), and we focus on estimating the length of the RTNAF(nr) expansion using a new method.
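For intuition, the ordinary integer NAF that τ-NAF generalizes can be computed as below; the τ-adic version replaces division by 2 with division by τ in Z(τ), but the digit-selection idea (force the next digit to zero after every nonzero digit) is the same. This sketch is illustrative background, not the paper's RTNAF construction.

```python
def naf(n):
    """Non-adjacent form of a positive integer n: digits in {-1, 0, 1},
    least-significant first, with no two adjacent nonzero digits.
    Fewer nonzero digits means fewer point additions in scalar multiplication."""
    digits = []
    while n > 0:
        if n % 2 == 1:
            d = 2 - (n % 4)  # pick d in {-1, +1} so that (n - d) % 4 == 0
            n -= d
        else:
            d = 0
        digits.append(d)
        n //= 2
    return digits

# NAF(7) = [-1, 0, 0, 1], i.e. 7 = -1 + 8: one nonzero digit fewer than binary 111.
```

On a curve, each nonzero digit costs a point addition or subtraction (subtraction is cheap since negating a point is trivial), so the sparser NAF directly reduces the cost of computing nP.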
Face detection and analysis is an important area in computer vision, and face detection has been an active research field in recent years following advances in digital image processing. Visual entities or sub-pattern compositions can be difficult to detect due to noise and lighting effects during examination. This study evaluates the ability of the Haar classifier to detect faces under three Min-Max value pairs used for histogram stretching. Min-Max histogram stretching was selected for implementation because it appeared to be the most appropriate technique based on the observations carried out. Experimental results show that with Min-Max values of 60-240, the Haar classifier detects faces more accurately than with the other two value pairs.
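Min-Max histogram stretching itself is a simple linear remapping. The sketch below is a minimal illustration using the 60-240 window reported as best; the sample frame values are hypothetical.

```python
import numpy as np

def minmax_stretch(img, lo, hi):
    """Min-Max contrast stretching: linearly map intensities in [lo, hi]
    onto the full [0, 255] range, clipping values outside the window."""
    out = (img.astype(float) - lo) * 255.0 / (hi - lo)
    return np.clip(out, 0, 255).astype(np.uint8)

# Stretching a hypothetical row of pixel values with the 60-240 window
frame = np.array([[50, 60, 150, 240, 255]], dtype=np.uint8)
stretched = minmax_stretch(frame, 60, 240)
```

Expanding the mid-range contrast this way is what gives the downstream Haar cascade stronger edge responses to work with.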
According to the classical theory of viscoelasticity, a linear viscoelastic (LVE) function can be converted into another viscoelastic function even though they emphasize different information. In this study, dynamic tests were conducted on different conventional penetration grade bitumens using a dynamic shear rheometer (DSR) in the LVE region. The results showed that dynamic data in the frequency domain can be converted into time-domain functions using a numerical technique, with the aid of the non-linear regularization (NLREG) computer program. NLREG is a program for solving nonlinear ill-posed problems and is based on the non-linear Tikhonov regularization method. The data interconversion approach was found suitable for converting frequency-domain data of conventional penetration grade bitumens into the time domain.
This paper presents the development of a PC-based microwave five-port reflectometer for determining the moisture content of oil palm fruits. The reflectometer was designed to measure both the magnitude and phase of the reflection coefficient of any passive microwave device. The stand-alone reflectometer consists of a PC, a microwave source, diode detectors, and an analog-to-digital converter. All measurements and data acquisition were performed using Agilent VEE graphical programming software. The reflectometer can be used with any reflection-based microwave sensor. In this work, its application as an instrument for determining the moisture content of oil palm fruits using monopole and coaxial sensors was demonstrated. Calibration equations relating reflection coefficients to moisture content were established for both sensors. The equation based on the phase measurement of the monopole sensor predicted the moisture content of the fruits to within 5% of the conventional oven-drying method.
A case study is presented to illustrate the cost effectiveness of the ergonomic redesign of an electronic motherboard. The factory was running at a loss due to the high costs of rejects and poor quality and productivity. Subjective assessments and direct observations were made at the factory. Investigation revealed that, due to motherboard design errors, the machine had difficulty placing integrated circuits onto the pads, the operators had great difficulty in manually soldering certain components, and much unproductive manual cleaning (MC) was required. Consequently, there were high reject rates and occupational health and safety (OHS) problems, such as boredom and work discomfort, and substantial labour and machine costs were spent on repairs. The motherboard was redesigned to correct the design errors, to allow more components to be machine soldered, and to reduce MC. This eliminated rejects, reduced repairs, saved US$581,495/year, and improved the operators' OHS. The customer also saved US$142,105/year in lost business.
An Electronic Medical Record (EMR) is a patient's database record that can be transmitted securely. There is a diversity of EMR systems for different medical units to choose from. The structure and value of these systems are the focus of this qualitative study, from a medical professional's standpoint, along with their economic value and whether they should be shared between health organizations. The study took place in the natural setting of the medical units' environments. A purposive sample of 40 professionals in Greece and Oman was interviewed. The study suggests that: (1) the demographics of the EMR should be divided into categories, not all of them accessible and/or visible to all; (2) the EMR system should follow an open architecture so that more categories and subcategories can be added as needed, following a possible business plan (an ERD is suggested); (3) the EMR should be implemented gradually, bearing in mind both medical and financial concerns; and (4) sharing should be the patient's decision, as the owner of the record. Once implementation and utilization reach a certain level of maturity, it is useful to seek the professionals' assessment of the structure and value of such a system.
Voting is an important operation in the multichannel computation paradigm and in the realization of ultrareliable real-time control systems, arbitrating among the results of N redundant variants. These systems include N-modular redundant (NMR) hardware systems and diversely designed software systems based on N-version programming (NVP). Depending on the characteristics of the application and the type of voter selected, voting algorithms can be implemented for either hardware or software systems. In this paper, a novel voting algorithm is introduced for real-time fault-tolerant control systems, appropriate for applications in which N is large. Its behavior was then evaluated in software under different error-injection scenarios on the system inputs. The evaluation results, analyzed through plots and statistical computations, demonstrate that the novel algorithm does not share the limitations of some popular voting algorithms such as the median and weighted voters; moreover, it increases the reliability and availability of the system by 2489.7% and 626.74%, respectively, in the best case, and by 3.84% and 1.55%, respectively, in the worst case.
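For context, the median voter mentioned as a baseline can be sketched in a few lines. This illustrates only the classical voter whose limitations the proposed algorithm addresses, not the paper's new algorithm; the module outputs shown are hypothetical.

```python
def median_voter(values):
    """Classical median voter for N-modular redundancy: return the middle
    of the sorted module outputs, masking up to floor((N-1)/2) arbitrarily
    faulty values (for even N, average the two middle values)."""
    s = sorted(values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 == 1 else 0.5 * (s[mid - 1] + s[mid])

# Five redundant modules, two faulty outliers: the median still recovers ~42.
outputs = [41.9, 42.0, 42.1, 0.0, 999.0]
result = median_voter(outputs)
```

A known weakness of this voter, relevant to the paper's comparison, is that it always produces an output even when the module outputs are too dispersed for any value to be trustworthy.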
This paper presents the design of a non-intrusive system for measuring ultra-low water content in crude oil. The system is based on a capacitance-to-phase-angle conversion method. Water content is measured with a capacitance sensor comprising two semi-cylindrical electrodes mounted on the outer side of a glass tube. The presence of water induces a capacitance change, which is converted into a phase angle with respect to a main oscillator. A differential sensing technique is adopted not only to ensure high immunity against temperature variation and background noise, but also to eliminate the phase jitter and amplitude variation of the main oscillator that could destabilize the output. The complete capacitive sensing system was implemented in hardware, and experimental results using crude oil samples demonstrated that the proposed design achieves a resolution of ±50 ppm of water content in crude oil.
The main obstacles to mass adoption of cloud computing for database operations in healthcare organizations are data security and privacy issues. In this paper, it is shown that IT services, particularly hardware performance evaluation of virtual machines, can be accomplished effectively without IT personnel gaining access to actual data for diagnostic and remediation purposes. The proposed mechanisms utilize hypothetical data from the TPC-H benchmark to achieve two objectives. First, the underlying hardware performance and consistency are monitored via a control system constructed from TPC-H queries. Second, a mechanism for constructing stress-testing scenarios in the host, using a single TPC-H query or a combination of them, is envisaged so that the resource threshold point at which the virtual machine can still serve critical transactions can be verified. This threshold point uses the server run queue size as its input parameter and serves two purposes. First, it provides the boundary threshold to the control system, so that periodic learning of the synthetic data sets for performance evaluation does not reach the host's constraint level. Second, when the host undergoes a hardware change, stress-testing scenarios are simulated by loading the host up to this resource threshold level, for subsequent response time verification against real and critical transactions.
Company A is a brownfield refinery that has been in service for over 25 years and has its own system to generate gaseous oxygen (GOX) for its utility needs. Noting the hazards of GOX and considering the age of the refinery, this research aims to evaluate the risks of GOX with respect to personnel and process safety, and to provide recommendations and mitigation planning for Company A's existing hardware through a Bow Tie review. The analysis took into consideration the data compiled as well as the inherited Process Safety Assessment (PSA) findings of Company A, which served as secondary data for this research. It was observed that Company A personnel are well versed in the risks and hazards of the GOX system, and that through plant rejuvenation and material upgrade works, the hazards were mitigated to a lower risk within the risk matrix. The implementation and upgrade works added more barriers to the left side of the bow tie and ensured that the aged complex is equipped with the needed safeguarding strategies (inherently safer design, passive and active safeguarding, and procedural controls) to avoid a potential oxygen fire or explosion incident.