Effective communication in vehicular networks depends on the scheduling of wireless channel resources. 3GPP Release 14 defines two types of channel resource scheduling: (1) centralized scheduling controlled by the eNodeB and (2) distributed scheduling carried out by every vehicle, known as Autonomous Resource Selection (ARS). The ARS mechanism is the more suitable of the two for vehicle safety applications. ARS comprises (a) counter selection (i.e., specifying the number of subsequent transmissions) and (b) resource reselection (specifying whether the same resource is reused after counter expiry). Because ARS is a decentralized approach to resource selection, resource collisions can occur during the initial selection, where multiple vehicles may select the same resource, resulting in packet loss. Moreover, ARS is not adaptive to vehicle density and employs a uniform random selection probability for counter selection and reselection. As a result, it can prevent some vehicles from transmitting in a congested vehicular network. To this end, this paper presents Truly Autonomous Resource Selection (TARS) for vehicular networks. TARS treats resource allocation as the problem of locally detecting the resources selected by neighboring vehicles in order to avoid resource collisions. The paper also models the effect of counter selection and resource block reselection on resource collisions using a Discrete Time Markov Chain (DTMC). Observations from the model are used to propose a fair policy of counter selection and resource reselection in ARS. Simulations of the proposed TARS mechanism showed better performance in terms of resource collision probability and packet delivery ratio when compared with the LTE Mode 4 standard and with a competing approach proposed by Jianhua He et al.
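To see why decentralized initial selection is collision-prone, consider a minimal Monte Carlo sketch. The model below, in which vehicles pick uniformly at random from a shared resource pool, is an illustrative assumption for the ARS initial-selection step only; it is not the paper's TARS algorithm, and the function name and parameters are ours.

```python
import random

def collision_probability(num_vehicles, num_resources, trials=10000, seed=0):
    """Monte Carlo estimate of the probability that at least one resource
    collision occurs when every vehicle picks a resource uniformly at
    random (a stand-in for the ARS initial-selection step)."""
    rng = random.Random(seed)
    collisions = 0
    for _ in range(trials):
        picks = [rng.randrange(num_resources) for _ in range(num_vehicles)]
        if len(set(picks)) < num_vehicles:  # some resource chosen twice
            collisions += 1
    return collisions / trials

# For a fixed resource pool, denser networks collide far more often.
p_sparse = collision_probability(5, 100)
p_dense = collision_probability(30, 100)
```

This is the birthday-problem effect that motivates detecting neighbors' selections: with only 5 vehicles on 100 resources, collisions are already likely in roughly one trial in ten, and with 30 vehicles they become almost certain.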
Over the last few years, particle swarm optimization (PSO) has been extensively applied to various geotechnical engineering problems, including slope stability analysis. However, these contributions have been limited to two-dimensional (2D) slope stability analysis. This paper applies PSO to the three-dimensional (3D) slope stability problem to determine the critical slip surface (CSS) of soil slopes. A detailed description of the adopted PSO is presented to provide a good basis for further contributions of this technique to the field of 3D slope stability problems. A general rotating ellipsoid shape is introduced as the specific particle for 3D slope stability analysis. A detailed sensitivity analysis was designed and performed to find the optimum values of the PSO parameters. Example problems were used to evaluate the applicability of PSO in determining the CSS of 3D slopes. The first example presents a comparison between the results of PSO and the PLAXIS 3D finite element software, and the second example compares the ability of PSO to determine the CSS of 3D slopes with that of other optimization methods from the literature. The results demonstrate the efficiency and effectiveness of PSO in determining the CSS of 3D soil slopes.
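For readers unfamiliar with the optimizer itself, a minimal generic PSO (gbest topology with inertia weight) is sketched below on a toy objective. The slope-stability objective, the rotating-ellipsoid particle encoding, and the tuned parameter values of the paper are not reproduced here; all names and parameter defaults are illustrative.

```python
import random

def pso(objective, dim, bounds, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimize `objective` with a basic global-best PSO."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                       # personal best positions
    pbest = [objective(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest[i])
    G, gbest = P[g][:], pbest[g]                # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d] + c1 * r1 * (P[i][d] - X[i][d])
                           + c2 * r2 * (G[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            f = objective(X[i])
            if f < pbest[i]:
                pbest[i], P[i] = f, X[i][:]
                if f < gbest:
                    gbest, G = f, X[i][:]
    return G, gbest

# Toy stand-in for a factor-of-safety objective: the sphere function.
best_x, best_f = pso(lambda x: sum(v * v for v in x), dim=3, bounds=(-5, 5))
```

In the slope-stability setting, each particle would instead encode the ellipsoid parameters of a candidate slip surface, and the objective would be the factor of safety.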
An experience-oriented convergence-improved gravitational search algorithm (ECGSA), based on two new modifications, namely searching through the best experiences and the use of a dynamic gravitational damping coefficient (α), is introduced in this paper. ECGSA saves its best fitness function evaluations and uses them as the agents' positions in the search process. In this way, the best trajectories found are retained and the search restarts from them, which allows the algorithm to avoid local optima. Moreover, by means of the proposed dynamic gravitational damping coefficient, the agents can move faster through the search space to obtain better exploration during the first stage of the search, and can converge rapidly to the optimal solution at its final stage. The performance of ECGSA has been evaluated by applying it to eight standard benchmark functions along with six complicated composite test functions. It is also applied to the adaptive beamforming problem, as a practical issue, to improve the weight vectors computed by the minimum variance distortionless response (MVDR) beamforming technique. The results of the proposed algorithm are compared with those of some well-known heuristic methods and verify the proposed method in terms of both reaching optimal solutions and robustness.
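To illustrate the role of the damping coefficient: in GSA the gravitational constant typically decays as G(t) = G0 · exp(-α·t/T). The linear schedule for α below is a hypothetical example of making it dynamic (the paper's exact schedule is not reproduced); a small α early keeps G large for exploration, while a large α late shrinks G for fast convergence.

```python
import math

def gravitational_constant(t, T, G0=100.0, alpha_min=5.0, alpha_max=25.0):
    """G(t) = G0 * exp(-alpha(t) * t / T) with a dynamic damping
    coefficient alpha(t) that grows linearly over the run (an
    illustrative schedule, not the paper's)."""
    alpha = alpha_min + (alpha_max - alpha_min) * t / T
    return G0 * math.exp(-alpha * t / T)

T = 100
early = gravitational_constant(10, T)   # large G: strong forces, exploration
late = gravitational_constant(90, T)    # tiny G: weak forces, exploitation
```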
This paper presents a real-time implementation of the single-phase power factor correction (PFC) AC-DC boost converter. A combination of a higher order sliding mode controller based on the super-twisting algorithm and predictive control techniques is implemented to improve the performance of the boost converter. The higher order sliding mode controller (HOSMC) is designed to mitigate chattering effects, and the predictive technique is modified to account for large computational delays. The robustness of the controller is verified through simulation in MATLAB; the results show good performance in both steady and transient states. An experiment is conducted on a test bench based on dSPACE 1104. The experimental results prove that the proposed controller enhances the performance of the converter under various parameter variations.
In this paper, we study the effects of symmetrization by the implicit midpoint rule (IMR) and the implicit trapezoidal rule
(ITR) on the numerical solution of ordinary differential equations. We extend the study of the well-known formula of Gragg
to a two-step symmetrizer and compare the efficiency of their use with the IMR and ITR. We present the experimental results
on a nonlinear problem using a variable stepsize setting, and the results show greater efficiency of the two-step symmetrizers
over the one-step symmetrizers of IMR and ITR.
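As background for the methods being symmetrized, a single step of the implicit midpoint rule can be sketched as follows; the implicit equation is resolved here by plain fixed-point iteration, which suffices for this mildly stiff test problem (the function name and iteration count are ours).

```python
import math

def implicit_midpoint_step(f, t, y, h, iters=50):
    """One step of the implicit midpoint rule (IMR):
    y_{n+1} = y_n + h * f(t + h/2, (y_n + y_{n+1}) / 2),
    solved by simple fixed-point iteration."""
    y_next = y + h * f(t, y)                      # explicit Euler predictor
    for _ in range(iters):
        y_next = y + h * f(t + h / 2, 0.5 * (y + y_next))
    return y_next

# Integrate y' = -y, y(0) = 1 on [0, 1]; the exact solution at t=1 is exp(-1).
h, t, y = 0.01, 0.0, 1.0
while t < 1.0 - 1e-12:
    y = implicit_midpoint_step(lambda s, u: -u, t, y, h)
    t += h
```

Since the IMR is symmetric (second order, with an even error expansion), combining it with a symmetrizer allows passive extrapolation, which is what the one-step and two-step symmetrizers studied here exploit.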
In this study, a new class of exponential-rational methods (ERMs) for the numerical solution of first order initial value problems has been developed. The derivations of third order and fourth order ERMs, as well as their corresponding local truncation errors, are presented. Each ERM was found to be consistent with the differential equation and L-stable. Numerical experiments showed that the third order and fourth order ERMs generate more accurate numerical results than existing rational methods in solving first order initial value problems.
In this study, we consider a fractional integral operator. Using this integral operator, we obtain a Briot-Bouquet superordination and a sandwich theorem.
Predictor-corrector two-point block methods are developed for solving first order ordinary differential equations (ODEs) using variable step size. The method estimates the solutions of initial value problems (IVPs) at two points simultaneously. The existing multistep method involves the computation of divided differences and integration coefficients when using variable step size, or variable step size and order. The block method developed here is presented in Adams-Bashforth-Moulton form, and the coefficients are stored in the code. The efficiency of the predictor-corrector block method is compared to the standard variable step and order non-block multistep method in terms of the total number of steps, maximum error, total function calls, and execution time.
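The predictor-corrector (PECE) idea underlying the block method can be illustrated with a fixed-step, sequential sketch: a two-step Adams-Bashforth predictor followed by an Adams-Moulton (trapezoidal) corrector. This is a simplified stand-in, not the paper's method, which computes two points per block with variable step size.

```python
import math

def abm2(f, t0, y0, h, n_steps):
    """Two-step Adams-Bashforth predictor + Adams-Moulton (trapezoidal)
    corrector in PECE mode, with a constant step size."""
    ts, ys = [t0], [y0]
    # Bootstrap the second starting value with one midpoint (RK2) step.
    y1 = y0 + h * f(t0 + h / 2, y0 + (h / 2) * f(t0, y0))
    ts.append(t0 + h)
    ys.append(y1)
    for n in range(1, n_steps):
        fn = f(ts[n], ys[n])
        fn_prev = f(ts[n - 1], ys[n - 1])
        y_pred = ys[n] + (h / 2) * (3 * fn - fn_prev)   # Predict (AB2)
        f_pred = f(ts[n] + h, y_pred)                   # Evaluate
        y_corr = ys[n] + (h / 2) * (fn + f_pred)        # Correct (AM2)
        ts.append(ts[n] + h)
        ys.append(y_corr)
    return ts, ys

# y' = -y, y(0) = 1; after 100 steps of h = 0.01 we reach t = 1.
ts, ys = abm2(lambda t, y: -y, 0.0, 1.0, 0.01, 100)
```

A block version would produce two such corrected values per block simultaneously, which is where the savings in total steps and function calls come from.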
A new method, called the parallel R-point explicit block method, for directly solving a single higher order ordinary differential equation using a constant step size is developed. This method calculates the numerical solution at R points simultaneously and is thus parallel in nature. Computational advantages are presented by comparing the results obtained with the new method against those of the conventional 1-point method. The numerical results show that the new method reduces the total number of steps and the execution time. The accuracy of the parallel block and conventional 1-point methods is comparable, particularly when finer step sizes are used.
Recently, green computing has received significant attention in Internet of Things (IoT) environments due to growing computing demands under tiny-sensor-enabled smart services. The related literature on green computing mainly focuses on a cover set approach that works efficiently for target coverage but is not applicable to area coverage. In this paper, we present a new variant of the cover set approach, called the grouping and sponsoring aware IoT framework (GS-IoT), that is suitable for area coverage. We achieve non-overlapping coverage of an entire sensing region by employing sectorial sensing. Non-overlapping coverage not only guarantees sufficiently good coverage when a large number of sensors are deployed randomly, but also maximizes the life span of the whole network with appropriate scheduling of sensors. A deployment model for the distribution of sensors is developed to ensure a minimum threshold density of sensors in the sensing region. In particular, a fast converging grouping (FCG) algorithm is developed to group sensors so as to ensure minimal overlapping, and a sponsoring aware sectorial coverage (SSC) algorithm is developed to switch off redundant sensors and balance the overall network energy consumption. The GS-IoT framework effectively combines both algorithms for smart services. The simulation results attest to the benefit of the proposed framework compared to state-of-the-art techniques in terms of various metrics for smart IoT environments, including rate of overlapping, response time, coverage, number of active sensors, and life span of the overall network.
Differential equations are commonly used to model various types of real life applications. The complexity of these models may often hinder the ability to obtain an analytical solution; to overcome this drawback, numerical methods were introduced to approximate the solutions. Initially, when developing a numerical algorithm, researchers focused on the key aspect of accuracy. As numerical methods become more and more robust, accuracy alone is no longer sufficient; hence begins the pursuit of efficiency, which warrants the need to reduce computational cost. The current research proposes a numerical algorithm for solving higher order initial value ordinary differential equations (ODEs). The proposed algorithm is derived as a three-point block multistep method, developed in an Adams-type formulation (3PBCS), and is used to solve various types of ODEs and systems of ODEs. The selected ODEs vary from linear to nonlinear, and from artificial to real life problems. The results illustrate the accuracy and efficiency of the proposed three-point block method. The order, stability, and convergence of the method are also presented in the study.
In this paper, the singular LR fuzzy linear system is introduced. Such systems are divided into two classes: singular consistent LR fuzzy linear systems and singular inconsistent LR fuzzy linear systems. The capability of generalized inverses, such as the Drazin inverse, the pseudoinverse, and the {1}-inverse, in finding the minimal solution of singular consistent LR fuzzy linear systems is investigated.
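As a crisp (non-fuzzy) illustration of the role such generalized inverses play: for a singular but consistent linear system, the Moore-Penrose pseudoinverse yields the minimal-norm solution. The sketch below shows only this crisp building block; the paper applies such inverses to the LR fuzzy system itself.

```python
import numpy as np

# Singular but consistent crisp system: rank(A) = 1 and b lies in range(A).
A = np.array([[1.0, 1.0],
              [2.0, 2.0]])
b = np.array([2.0, 4.0])

# Moore-Penrose pseudoinverse gives the minimal-norm solution among the
# infinitely many solutions of A x = b (here, x = [1, 1]).
x_min = np.linalg.pinv(A) @ b
```

Any vector of the form [1 + t, 1 - t] also solves this system; the pseudoinverse singles out t = 0, the solution of smallest Euclidean norm.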
We propose to cascade the Shape-Preserving Piecewise Cubic Hermite model with the Autoregressive Moving Average (ARMA) interpolator; we call this technique the Shape-Preserving Piecewise Cubic Hermite Autoregressive Moving Average (SP2CHARMA) model. In several test cases involving different images, this model is found to deliver an optimum solution for signal-to-noise ratio (SNR) estimation problems under different noise environments. The performance of the proposed estimator is compared with two existing methods: the autoregressive-based and autoregressive moving average estimators. Being more robust to noise, the SP2CHARMA estimator is significantly more efficient than the other two methods.
The sine-cosine algorithm (SCA) is a new population-based meta-heuristic algorithm. In addition to exploiting sine and cosine functions to perform local and global searches (hence the name sine-cosine), the SCA introduces several random and adaptive parameters to facilitate the search process. Although it shows promising results, the search process of the SCA is vulnerable to local minima/maxima due to the adoption of a fixed switch probability and the bounded magnitude of the sine and cosine functions (from -1 to 1). In this paper, we propose a new hybrid Q-learning sine-cosine-based strategy, called the Q-learning sine-cosine algorithm (QLSCA). Within the QLSCA, we eliminate the switch probability. Instead, we rely on the Q-learning algorithm (based on the penalty and reward mechanism) to dynamically identify the best operation during runtime. Additionally, we integrate two new operations (Lévy flight motion and crossover) into the QLSCA to facilitate jumping out of local minima/maxima and enhance the solution diversity. To assess its performance, we adopt the QLSCA for the combinatorial test suite minimization problem. Experimental results reveal that the QLSCA is statistically superior with regard to test suite size reduction compared to recent state-of-the-art strategies, including the original SCA, the particle swarm test generator (PSTG), adaptive particle swarm optimization (APSO) and the cuckoo search strategy (CS) at the 95% confidence level. However, concerning the comparison with discrete particle swarm optimization (DPSO), there is no significant difference in performance at the 95% confidence level. On a positive note, the QLSCA statistically outperforms the DPSO in certain configurations at the 90% confidence level.
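The penalty-and-reward mechanism for picking the best operation can be sketched with a standard Q-learning update over a small operator table. The states, reward scheme, and gamma=0 (bandit-style) setting below are illustrative assumptions for the demo, not the QLSCA's exact design.

```python
import random

def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """Standard Q-learning update:
    Q(s, a) += alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))."""
    best_next = max(Q[next_state].values())
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])

# Hypothetical operator-selection table: two made-up "states" and the
# four QLSCA operations as actions.
ops = ["sine", "cosine", "levy_flight", "crossover"]
Q = {s: {a: 0.0 for a in ops} for s in ("explore", "exploit")}

rng = random.Random(0)
for _ in range(500):
    state = rng.choice(("explore", "exploit"))
    action = rng.choice(ops)  # uniform sampling, just to fill the table
    # Made-up reward/penalty: pretend Levy flight pays off while exploring
    # and the sine operation pays off while exploiting.
    good = (state, action) in {("explore", "levy_flight"), ("exploit", "sine")}
    reward = 1.0 if good else -1.0
    # gamma=0.0 reduces this to a bandit-style reward/penalty update.
    q_update(Q, state, action, reward, state, gamma=0.0)
```

After training, each row's highest Q-value identifies the operation the algorithm would favor in that situation, which is how the switch probability is replaced by learned operator selection.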
A digit-8-shaped resonator-inspired metamaterial is proposed herein for sensor applications. The resonator is surrounded by a ground frame and excited by a microstrip feedline. The measurement of the sensor can be performed using common laboratory facilities in lieu of a waveguide, as the resonator, ground frame, and feedline are all on the same microstrip. To achieve metamaterial properties, more than one unit cell is usually required, whereas in this work a single cell was used to achieve the metamaterial characteristics. The properties of the metamaterial were investigated to establish the relationship between the simulations and measurements. The proposed metamaterial sensor shows considerable sensitivity in sensing applications. For the sensor application, FR4 and Rogers RO4350 materials were used as the over-layer. The sensor can measure dielectric thickness with sensitivities of 625 MHz/mm, 468 MHz/mm, and 354 MHz/mm for a single over-layer, double over-layers, and multiple over-layers, respectively. The proposed prototype can be utilized in several applications where metamaterial characteristics are required.
This paper studies the three-dimensional (3D) simulation of fluid flow through a ball grid array (BGA) to replicate the real underfill encapsulation process. The effect of different BGA solder bump arrangements on the flow front, pressure, and velocity of the fluid is investigated. The flow front, pressure, and velocity at different time intervals are determined and analyzed for potential problems relating to solder bump damage. The simulation results from the Lattice Boltzmann Method (LBM) code are validated against experimental findings as well as a conventional Finite Volume Method (FVM) code to ensure a highly accurate simulation setup. Based on the findings, good agreement is observed between the LBM and FVM simulations as well as the experimental observations. It was shown that only the LBM is capable of capturing micro-void formation. This study also shows an increasing trend in fluid filling time for BGAs with perimeter, middle empty, and full orientations. The perimeter orientation exhibits higher fluid pressure at the middle region of the BGA surface compared to the middle empty and full orientations. This research sheds new light on highly accurate simulation of the encapsulation process using the LBM and helps to further increase the reliability of the packages produced.
The Maximum k Satisfiability logical rule (MAX-kSAT) is a language that bridges real life applications to neural network optimization. MAX-kSAT is an interesting paradigm because the outcome of this logical rule is always negative/false. The Hopfield Neural Network (HNN) is a type of neural network that finds solutions based on energy minimization, and interesting intelligent behavior has been observed when a logical rule is embedded in the HNN. Increasing the storage capacity during the learning phase of the HNN has been a challenging problem for most neural network researchers, and the development of metaheuristic algorithms has been crucial in optimizing this learning phase. The most celebrated metaheuristic model is the Genetic Algorithm (GA), which consists of several important operators that emphasize solution improvement. Although GA has been reported to optimize logic programming in the HNN, the learning complexity increases with the number of clauses, and GA is more likely to be trapped in suboptimal fitness as the number of clauses grows. In this paper, a metaheuristic algorithm, namely the Artificial Bee Colony (ABC), is proposed for learning MAX-kSAT programming. ABC is a swarm-based metaheuristic that capitalizes on the capabilities of the Employed Bee, Onlooker Bee, and Scout Bee. To this end, all the learning models were tested in a new restricted learning environment. Experimental results obtained from computer simulation demonstrate the effectiveness of ABC in modelling MAX-kSAT.
The convergence of logic programming in the Hopfield network can be accelerated by using a new relaxation method. This paper shows that the performance of the Hopfield network can be improved by using a relaxation rate to control the energy relaxation process. The capacity and performance of these networks are tested using computer simulations, which show that the new approach provides good solutions.
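The energy-minimization dynamics being relaxed can be sketched as follows. The `rate` parameter here is a hypothetical scaling of the local field standing in for the paper's relaxation rate; with zero thresholds it leaves the update signs, and hence the monotone energy descent, unchanged.

```python
import random

def energy(W, s):
    """Hopfield energy E = -1/2 * sum_ij W[i][j] * s_i * s_j
    (zero thresholds)."""
    n = len(s)
    return -0.5 * sum(W[i][j] * s[i] * s[j]
                      for i in range(n) for j in range(n))

def relax(W, s, sweeps=10, rate=1.0, seed=0):
    """Asynchronous relaxation: update neurons one at a time in random
    order, thresholding the (rate-scaled) local field."""
    rng = random.Random(seed)
    n = len(s)
    for _ in range(sweeps):
        for i in rng.sample(range(n), n):
            field = rate * sum(W[i][j] * s[j] for j in range(n))
            if field != 0:
                s[i] = 1 if field > 0 else -1
    return s

# Hebbian weights storing the pattern p = (1, -1, 1, -1), zero diagonal.
p = [1, -1, 1, -1]
W = [[(p[i] * p[j] if i != j else 0) for j in range(4)] for i in range(4)]
s = [1, 1, 1, -1]          # noisy version of p (one bit flipped)
e0 = energy(W, s)
s = relax(W, s, rate=0.5)
e1 = energy(W, s)
```

The relaxation drives the state from the noisy input down the energy landscape to the stored pattern, with the energy strictly decreasing along the way.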
A two-point four-step direct implicit block method is presented, applying a simple form of the Adams-Moulton method to solve general third order ordinary differential equations (ODEs) directly using variable step size. This method is implemented to obtain the solutions of initial value problems (IVPs) at two points simultaneously in a block using four backward steps. The numerical results showed that the developed method performs better in terms of maximum error at all tested tolerances, and requires a smaller total number of steps as the tolerances get smaller, compared to the existing direct method.
Often, data are given only at discrete points, and the problem of constructing a continuous function through such data is called data fitting. With interpolation, we seek a function that allows us to approximate f(x) such that functional values between the original data set values may be determined. The process of finding such a polynomial is called interpolation, and one of the most important approaches used is the Lagrange interpolating formula. In this study, the researcher determines the interpolating polynomial using the Lagrange formula. A mathematical model was then built using MATLAB programming to determine the interpolating polynomial for given points by the Lagrange method. The results of the study show that the manual calculation and the MATLAB mathematical model give the same answer for the evaluated x and the same graph.
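The Lagrange formula the study implements in MATLAB is P(x) = Σᵢ yᵢ · Πⱼ≠ᵢ (x − xⱼ)/(xᵢ − xⱼ). A minimal equivalent sketch in Python (the function name is ours):

```python
def lagrange(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through the points
    (xs[i], ys[i]) at x:
    P(x) = sum_i ys[i] * prod_{j != i} (x - xs[j]) / (xs[i] - xs[j])."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Three points on f(x) = x^2; the quadratic interpolant reproduces f exactly,
# so the value between the nodes at x = 1.5 is f(1.5) = 2.25.
xs, ys = [0.0, 1.0, 2.0], [0.0, 1.0, 4.0]
value = lagrange(xs, ys, 1.5)
```

Because the interpolant passes through every node exactly, evaluating at a node (e.g., x = 1.0) returns the original data value, which is the property the study's manual and MATLAB calculations both confirm.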