A new technique based on the statistical autoregressive (AR) model has recently been developed for signal-to-noise ratio (SNR) estimation in scanning electron microscope (SEM) images. In the present study, we propose to cascade the Lagrange time delay (LTD) estimator with the AR model, a technique we call the mixed Lagrange time delay estimation autoregressive (MLTDEAR) model. In test cases involving several different images, the model provides a near-optimal solution for SNR estimation under different noise environments; moreover, it requires only a small filter order and shows no noticeable estimation bias. The performance of the proposed estimator is compared over several images with three existing methods: the simple method, the first-order linear interpolator, and the AR-based estimator. Being more robust to noise, the MLTDEAR estimator is significantly more efficient than the other three methods.
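As context for the baselines, the first-order linear interpolator estimates the noise-free zero-lag autocorrelation by linear extrapolation from neighbouring lags, since additive white noise inflates only the zero-lag term. A minimal sketch of that baseline (not the MLTDEAR method itself), with illustrative function names and the assumption of additive white noise:

```python
import numpy as np

def snr_first_order_interp(image):
    """Estimate image SNR (in dB) from the row autocorrelation, extrapolating
    the noise-free zero-lag value with a first-order linear interpolator.
    Assumes additive white noise, which only inflates the zero-lag term."""
    x = image.astype(float)
    x = x - x.mean()
    n = x.shape[1]
    # Biased autocorrelation estimate averaged over rows, lags 0..2.
    r = np.zeros(3)
    for k in range(3):
        r[k] = np.mean(np.sum(x[:, :n - k] * x[:, k:], axis=1) / n)
    # Linear extrapolation of the noise-free autocorrelation to lag 0.
    r0_hat = 2.0 * r[1] - r[2]
    noise_var = r[0] - r0_hat
    return 10.0 * np.log10(r0_hat / noise_var)
```

On a smooth test image corrupted by white noise of known variance, this estimator recovers the true SNR to within a fraction of a decibel; its bias grows as the image autocorrelation curves more sharply near lag zero, which is the weakness the AR-based and MLTDEAR approaches address.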
Routing in Vehicular Ad hoc Networks (VANETs) is challenging because of their highly dynamic mobility. The efficiency of a routing protocol is influenced by a number of factors, such as network density, bandwidth constraints, traffic load, and mobility patterns, which result in frequent changes in network topology. Therefore, Quality of Service (QoS) support is strongly needed to enhance the capability of the routing protocol and improve overall network performance. In this paper, we introduce a statistical framework to address the problem of optimizing routing configuration parameters in Vehicle-to-Vehicle (V2V) communication. Our framework draws on the utilization of network resources to reflect the current state of the network and to balance the trade-off between frequent topology changes and the QoS requirements. It consists of three stages: a network simulation stage, used to execute different urban scenarios; a cost-function stage, used as a competitive approach to aggregate the weighted cost of the factors into a single value; and an optimization stage, used to evaluate the communication cost and obtain the optimal configuration based on the competitive cost. The simulation results show significant performance improvement in terms of Packet Delivery Ratio (PDR), Normalized Routing Load (NRL), Packet Loss (PL), and End-to-End Delay (E2ED).
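The cost-function stage can be illustrated with a simple weighted aggregation of the four QoS metrics into a single comparable value. The weights, normalisation, and function names below are illustrative assumptions, not the paper's actual formulation:

```python
def route_cost(pdr, nrl, pl, e2ed, weights=(0.4, 0.2, 0.2, 0.2)):
    """Aggregate QoS metrics into one weighted cost (lower is better).
    All metrics are assumed pre-normalised to [0, 1]; PDR is a benefit
    metric, so its complement (1 - pdr) enters the cost.  The weights
    here are illustrative placeholders."""
    w_pdr, w_nrl, w_pl, w_e2ed = weights
    return w_pdr * (1.0 - pdr) + w_nrl * nrl + w_pl * pl + w_e2ed * e2ed

def best_configuration(results):
    """results: {config_name: (pdr, nrl, pl, e2ed)}; pick the lowest cost."""
    return min(results, key=lambda c: route_cost(*results[c]))
```

The optimization stage then reduces to evaluating this cost for each candidate configuration produced by the simulation stage and keeping the minimiser.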
By using a linear operator, we obtain some new results for a normalized analytic function f defined by means of the Hadamard product with the Hurwitz zeta function. A class related to this function is introduced and its properties are discussed.
The statistical predictions of Newtonian and special-relativistic mechanics, calculated from an initially Gaussian ensemble of trajectories, are compared for a low-speed scattering system. The comparisons focus on the mean dwell time, the transmission and reflection coefficients, and the position and momentum means and standard deviations. We find that the statistical predictions of the two theories do not always agree, contrary to conventional expectation. The predictions are close if the scattering is non-chaotic, but they are radically different if the scattering is chaotic and the initial ensemble is well localized in phase space. Our result indicates that for low-speed chaotic scattering, special-relativistic mechanics must be used, instead of the standard practice of using Newtonian mechanics, to obtain empirically correct statistical predictions from an initially well-localized Gaussian ensemble.
Over the last few decades, cubic splines have been widely used to approximate differential equations because of their ability to produce highly accurate solutions. In this paper, the numerical solution of a two-dimensional elliptic partial differential equation is treated by a specific cubic spline approximation in the x-direction and a finite difference approximation in the y-direction. A four-point explicit group (EG) iterative scheme with an acceleration tool is then applied to the resulting system. The formulation and implementation of the method for solving physical problems are presented in detail. The computational complexity is also discussed, and comparative results are tabulated to illustrate the efficiency of the proposed method.
This paper presents a comparative assessment of eight candidate distributions for providing accurate and reliable maximum-rainfall estimates for Malaysia. The models considered were the Gamma, Generalised Normal, Generalised Pareto, Generalised Extreme Value (GEV), Gumbel, Log-Pearson Type III, Pearson Type III, and Wakeby distributions. Annual maximum rainfall series at one-hour resolution from a network of seventeen automatic gauging stations located throughout Peninsular Malaysia were selected for this study. The length of the rainfall records varies from twenty-three to twenty-eight years. Model parameters were estimated using the method of L-moments. The quantitative assessment of the descriptive ability of each model was based on the probability plot correlation coefficient test combined with the root mean squared error, relative root mean squared error, and maximum absolute deviation. Bootstrap resampling was employed to investigate the extrapolative ability of each distribution. On the basis of these comparisons, the GEV distribution is the most appropriate for describing the annual maximum rainfall series in Malaysia.
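Fitting by L-moments can be sketched for the GEV case using the sample L-moments and Hosking's (1990) closed-form approximation for the shape parameter. This is a generic illustration of the method, not the study's code:

```python
import numpy as np
from math import gamma, log

def fit_gev_lmoments(sample):
    """Fit a GEV(k, alpha, xi) by the method of L-moments (Hosking, 1990).
    Returns (shape k, scale alpha, location xi) in the convention
    x = xi + alpha*(1 - (-log F)**k)/k."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    # Probability-weighted moments and the first three L-moments.
    b0 = x.mean()
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
    l1, l2, l3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
    t3 = l3 / l2                          # L-skewness
    # Hosking's approximation for the GEV shape parameter.
    c = 2.0 / (3.0 + t3) - log(2) / log(3)
    k = 7.8590 * c + 2.9554 * c ** 2
    alpha = l2 * k / ((1 - 2.0 ** (-k)) * gamma(1 + k))
    xi = l1 - alpha * (1 - gamma(1 + k)) / k
    return k, alpha, xi
```

The same probability-weighted-moment machinery underlies the L-moment fits of the other seven candidate distributions; only the final parameter inversions differ.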
A Poisson model is typically assumed for count data, but when the response variable contains excess zeros and is therefore overdispersed, negative binomial regression is suggested in place of Poisson regression. In this paper, a zero-inflated negative binomial regression model for right-truncated count data is developed. The model relates a response variable to one or more explanatory variables. Estimation of the regression parameters by maximum likelihood is discussed, and the goodness-of-fit of the regression model is examined. We study the effects of truncation on the parameter estimates, their standard errors, and the goodness-of-fit statistics using real data. The results show a better fit from the truncated zero-inflated negative binomial regression model when the response variable has many zeros and is right-truncated.
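A standard (untruncated) zero-inflated negative binomial likelihood can be maximised directly, which illustrates the estimation step; the sketch below omits the paper's right-truncation adjustment and assumes a single covariate with a log link, so all names and settings are illustrative:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import nbinom
from scipy.special import expit

def zinb_negloglik(params, x, y):
    """Negative log-likelihood of a zero-inflated negative binomial model
    with a log link for the NB mean.  Right truncation is NOT handled here;
    the truncated model of the paper renormalises the NB pmf."""
    b0, b1, logit_pi, log_r = params
    pi = expit(logit_pi)              # zero-inflation probability
    r = np.exp(log_r)                 # NB dispersion (size) parameter
    mu = np.exp(b0 + b1 * x)
    p = r / (r + mu)                  # scipy's nbinom parameterisation
    pmf = nbinom.pmf(y, r, p)
    lik = np.where(y == 0, pi + (1 - pi) * pmf, (1 - pi) * pmf)
    return -np.sum(np.log(lik + 1e-300))

def fit_zinb(x, y, start=(0.0, 0.0, 0.0, 0.0)):
    """Maximise the ZINB likelihood from a neutral starting point."""
    return minimize(zinb_negloglik, start, args=(x, y),
                    method="Nelder-Mead",
                    options={"maxiter": 10000, "maxfev": 10000})
```

For the right-truncated variant, the NB pmf inside the likelihood would be divided by its cumulative probability up to the truncation point before the zero-inflation mixture is formed.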
Hidden truncation (HT) and additive component (AC) are two well-known paradigms for generating skewed distributions from a known symmetric distribution. In the case of the normal distribution, both paradigms are known to lead to Azzalini's (1985) skew-normal distribution. While HT yields Azzalini's (1985) skew-normal distribution directly, the distribution generated by AC also reduces to it under a reparameterization proposed by Arnold and Gomez (2009). No such reparameterization leading to exactly the same distribution under the two paradigms has so far been suggested for the skewed distributions generated from the symmetric logistic and Laplace distributions. In this article, we investigate numerically as well as statistically the closeness of the skew distributions generated by the HT and AC methods, under the same reparameterization of Arnold and Gomez (2009), for the logistic and Laplace distributions.
Multiple imputation is a widely used method in missing-data analysis. It consists of a three-stage process: imputation, analysis, and pooling. The number of imputations selected in the first stage is important. Hence, this study aimed to examine the performance of multiple imputation at different numbers of imputations. A monotone missing-data pattern was created by deleting approximately 24% of the observations from a continuous outcome variable with complete data. In the first stage of multiple imputation, monotone regression imputation was performed at different numbers of imputations (m = 3, 5, 10, and 50). In the second stage, parameter estimates and their standard errors were obtained by applying a general linear model to each of the completed data sets. In the final stage, the results were pooled, and the effect of the number of imputations on the parameter estimates and their standard errors was evaluated. In conclusion, the relative efficiency of the parameter estimates at m = 50 was about 99%. Hence, at the given missing-observation rate, the efficiency and performance of multiple imputation increased as the number of imputations increased.
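The pooling stage and the quoted efficiency figures follow from Rubin's rules. A minimal sketch, using the standard relative-efficiency approximation RE = 1/(1 + gamma/m) with gamma taken here as the fraction of missing information (roughly the 24% missing-data rate of the study):

```python
def mi_relative_efficiency(gamma, m):
    """Relative efficiency of multiple imputation with m imputations,
    RE = 1 / (1 + gamma/m), where gamma is the fraction of missing
    information (Rubin, 1987).  Approaches 1 as m grows."""
    return 1.0 / (1.0 + gamma / m)

def pool(estimates, variances):
    """Rubin's rules: pooled point estimate and total variance
    T = W + (1 + 1/m) * B, combining within- and between-imputation
    variance across the m completed-data analyses."""
    m = len(estimates)
    qbar = sum(estimates) / m
    w = sum(variances) / m                               # within
    b = sum((q - qbar) ** 2 for q in estimates) / (m - 1)  # between
    return qbar, w + (1 + 1 / m) * b
```

With gamma = 0.24, the relative efficiency rises from about 0.93 at m = 3 to about 0.995 at m = 50, matching the roughly 99% figure reported above.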
The PREDICTS project (Projecting Responses of Ecological Diversity In Changing Terrestrial Systems; www.predicts.org.uk) has collated from published studies a large, reasonably representative database of comparable samples of biodiversity from multiple sites that differ in the nature or intensity of human impacts relating to land use. We have used this evidence base to develop global and regional statistical models of how local biodiversity responds to these measures. We describe and make freely available this 2016 release of the database, containing more than 3.2 million records sampled at over 26,000 locations and representing over 47,000 species. We outline how the database can help in answering a range of questions in ecology and conservation biology. To our knowledge, this is the largest and most geographically and taxonomically representative database of spatial comparisons of biodiversity that has been collated to date; it will be useful to researchers and international efforts wishing to model and understand the global status of biodiversity.
The main objective of this study is to develop a comprehensive model that estimates and evaluates the overall relations among the factors that lead to weight gain in children, using structural equation modeling. The proposed models explore the connections among the socioeconomic status of the family, parental feeding practices, and physical activity. Six structural models were tested to identify the direct and indirect relationships between socioeconomic status, parental feeding practices, general level of physical activity, and the weight status of children. Finally, a comprehensive model was devised to show how these factors relate to each other as well as to the body mass index (BMI) of the children simultaneously. Methodologically, confirmatory factor analysis (CFA) was applied to reveal the hidden (secondary) effect of socioeconomic factors on feeding practices and, ultimately, on the weight status of the children, and to determine the degree of model fit. The comprehensive structural model tested in this study suggested that there are significant direct and indirect relationships among the variables of interest. Moreover, the results suggest that parental feeding practices and physical activity act as mediators in the structural model.
The process involved in local scour below pipelines is so complex that it is difficult to establish a general empirical model providing accurate scour estimates. This paper describes the use of artificial neural networks (ANNs) to estimate pipeline scour depth. Data sets of laboratory measurements were collected from published works and used to train the networks. The developed networks were validated using observations not involved in training. The ANN proved more effective than regression equations in predicting the scour depth around pipelines.
This paper addresses erosion prediction in a 3-D, 90° elbow for two-phase (solid-liquid) turbulent flow with a low volume fraction of copper particles. Simulations were performed for particle sizes from 10 nm to 100 microns, particle volume fractions from 0.00 to 0.04, and velocities of 5-20 m/s. The 3-D governing differential equations were discretized using the finite volume method. The influences of the size and concentration of micro- and nanoparticles, shear forces, and turbulence on the erosion behavior of the flow were studied. The model predictions compare well with earlier studies. The results indicate that the erosion rate depends directly on particle size, particle volume fraction, and flow velocity. The maximum pressure increases with particle volume fraction and velocity but decreases with particle diameter. There is also a threshold velocity, as well as a threshold particle size, beyond which significant erosion occurs. The average friction factor is independent of particle size and volume fraction at a given fluid velocity but increases with increasing inlet velocity.
The Decision-Making Trial and Evaluation Laboratory (DEMATEL) methodology has been proposed for solving complex and intertwined problems in many settings, such as capability development, complex group decision making, security, marketing, global management, and control systems. DEMATEL identifies causal relationships by dividing the relevant factors into cause and effect groups, and it visualizes the causal relationships among subcriteria and systems through a causal diagram, which can depict communication networks or control relationships between individuals. Despite its ability to visualize cause and effect within a network, the original DEMATEL cannot identify cause and effect groups between different networks. Therefore, this study proposes an expanded DEMATEL that covers this deficiency, with new formulations to determine cause and effect factors between separate networks that have a bidirectional direct impact on each other. Finally, the feasibility of the new formulations is validated through a case study of three numerical examples of green supply chain networks for an automotive company.
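The classical single-network DEMATEL steps that the extension builds on can be sketched as follows: normalise the direct-relation matrix, compute the total-relation matrix T = D(I - D)^-1, and split factors by the sign of R - C. The example matrix is illustrative:

```python
import numpy as np

def dematel(A):
    """Classical DEMATEL.  A is the (nonnegative) direct-relation matrix.
    Normalises by the largest row/column sum, computes the total-relation
    matrix T = D (I - D)^-1, and splits factors into the cause group
    (R - C > 0) and the effect group (R - C < 0)."""
    A = np.asarray(A, dtype=float)
    s = max(A.sum(axis=1).max(), A.sum(axis=0).max())
    D = A / s
    n = A.shape[0]
    T = D @ np.linalg.inv(np.eye(n) - D)
    R = T.sum(axis=1)        # influence dispatched by each factor
    C = T.sum(axis=0)        # influence received by each factor
    cause = np.where(R - C > 0)[0]
    effect = np.where(R - C < 0)[0]
    return T, R + C, R - C, cause, effect
```

Prominence (R + C) and relation (R - C) are then plotted as the causal diagram; the expanded method adds formulations for R and C contributions flowing between separate networks.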
Previous research shows that buy (growth) companies conduct income-increasing earnings management in order to meet forecasts and generate positive forecast errors (FEs). This behavior, however, is not inherent in sell (non-growth) companies. Against this background, this research hypothesizes that since sell companies are pressured to avoid income-increasing earnings management, they are capable, and in fact more inclined, to pursue income-decreasing forecast management (FM) in order to generate positive FEs. Using a sample of 6,553 firm-years of companies listed on the NYSE from 2005 to 2010, the study finds that sell companies conduct income-decreasing FM to generate positive FEs. However, the frequency of positive FEs of sell companies does not exceed that of buy companies. From an efficiency perspective, the study suggests that even though buy and sell companies have strong incentives to avoid negative FEs, they exploit different but efficient strategies to meet forecasts. Furthermore, the findings illuminate the complexities behind informative and opportunistic forecasts that fall under the efficiency versus opportunism theories in the literature.
We study the reduced dynamics of a pair of nondegenerate oscillators coupled collectively to a thermal bath. The model is related to the trilinear boson model where the idler mode is promoted to a field. Due to nonlinear coupling, the Markovian master equation for the pair of oscillators admits non-Gaussian equilibrium states, where the modes distribute according to the Bose-Einstein statistics. These states are metastable before the nonlinear coupling is taken over by linear coupling between the individual oscillators and the field. The Gibbs state for the individual modes lies in the subspace with infinite occupation quantum number. We present the time evolution of a few states to illustrate the behaviors of the system.
M/G/C/C state-dependent queuing networks model service rates as a function of the number of residing entities (e.g., pedestrians, vehicles, or products). However, modeling such dynamic rates is not supported in modern discrete-event simulation (DES) software. We designed an approach to address this limitation and used it to construct an M/G/C/C state-dependent queuing model in Arena software. Using the model, we evaluated and analyzed the impact of various arrival rates on the throughput, the blocking probability, the expected service time, and the expected number of entities in a complex network topology. The results indicate that for each network there is a range of arrival rates over which the simulation results fluctuate drastically across replications, causing discrepancies between the simulation and analytical results. Detailed results showing how closely the simulation and analytical results tally, in both tabular and graphical form, together with scientific justifications for these observations, are documented and discussed.
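The analytical side that the simulation is compared against admits a closed-form product solution. A sketch, where t1 is the mean service time of a lone entity and f(n) is assumed to be the per-entity relative service rate with n entities present:

```python
import math

def mgcc_state_dependent(lam, t1, f, C):
    """Steady state of an M/G/C/C state-dependent queue:
    P_n = ((lam*t1)**n / (n! * f(1)*...*f(n))) * P_0,
    where t1 is the expected service time of a lone entity and f(n) the
    relative per-entity service rate with n entities present (f(1) = 1).
    Returns (probabilities, blocking probability, throughput)."""
    weights = [1.0]
    prod_f = 1.0
    for n in range(1, C + 1):
        prod_f *= f(n)
        weights.append((lam * t1) ** n / (math.factorial(n) * prod_f))
    z = sum(weights)
    probs = [w / z for w in weights]
    blocking = probs[C]
    throughput = lam * (1.0 - blocking)
    return probs, blocking, throughput
```

With f(n) = 1 for all n (service unaffected by congestion) this reduces exactly to the Erlang loss model M/M/C/C, which gives a convenient correctness check; congestion-dependent walking-speed models plug in a decreasing f(n) instead.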
The stock market is considered essential for economic growth and is expected to contribute to improved productivity. An efficient pricing mechanism in the stock market can be a driving force for channeling savings into profitable investments, thus facilitating optimal allocation of capital. This study investigated the technical efficiency of selected groups of companies on the Bangladesh stock market, namely the Dhaka Stock Exchange (DSE), using the stochastic frontier production function approach. The authors considered a Cobb-Douglas stochastic frontier in which the technical inefficiency effects are defined by a model with two distributional assumptions, truncated normal and half-normal, and estimated both time-variant and time-invariant inefficiency effects. The results reveal that technical efficiency decreased gradually over the reference period and that the truncated normal distribution is preferable to the half-normal distribution for the technical inefficiency effects. In the time-varying setting, technical efficiency was high for the investment group and low for the bank group compared with the other groups in the DSE market under both distributions, whereas in the time-invariant setting it was high for the investment group but low for the ceramic group under both distributions.
Many researchers have documented that stock market data are nonstationary and nonlinear time series. In this study, we use the EMD-HW bagging method for nonstationary and nonlinear time series forecasting. The method combines empirical mode decomposition (EMD), the moving block bootstrap, and Holt-Winters forecasting. Stock market time series from six countries are used to evaluate the EMD-HW bagging method, based on five forecasting error measures. The comparison shows that the forecasts of EMD-HW bagging are more accurate than those of the fourteen selected benchmark methods.
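A stripped-down version of the bagging idea can be sketched as follows. It omits the EMD step and substitutes Holt's linear method for full Holt-Winters, so it is an illustration of the bootstrap-and-average mechanism only; all parameter values are illustrative:

```python
import numpy as np

def holt_forecast(y, alpha=0.5, beta=0.3, h=1):
    """Holt's linear method (double exponential smoothing): returns the
    h-step-ahead point forecast and the one-step fitted residuals."""
    level, trend = y[0], y[1] - y[0]
    resid = []
    for t in range(1, len(y)):
        pred = level + trend
        resid.append(y[t] - pred)
        new_level = alpha * y[t] + (1 - alpha) * (level + trend)
        trend = beta * (new_level - level) + (1 - beta) * trend
        level = new_level
    return level + h * trend, np.array(resid)

def moving_block_bootstrap(resid, size, block, rng):
    """Resample a residual series with overlapping blocks of fixed length,
    preserving short-range dependence within each block."""
    starts = rng.integers(0, len(resid) - block + 1, size=size // block + 1)
    out = np.concatenate([resid[s:s + block] for s in starts])
    return out[:size]

def bagged_holt_forecast(y, n_boot=50, block=8, h=1, seed=0):
    """Bagging: rebuild pseudo-series by adding bootstrapped residuals to
    the Holt fit, forecast each, and average (the EMD decomposition step
    of EMD-HW bagging is omitted in this sketch)."""
    rng = np.random.default_rng(seed)
    _, resid = holt_forecast(y, h=h)
    fitted = y[1:] - resid
    fcs = []
    for _ in range(n_boot):
        rb = moving_block_bootstrap(resid, len(resid), block, rng)
        yb = np.concatenate([y[:1], fitted + rb])
        fcs.append(holt_forecast(yb, h=h)[0])
    return float(np.mean(fcs))
```

In the full method, EMD first decomposes the series into intrinsic mode functions, the bootstrap is applied to the noisy components, and Holt-Winters (with seasonality) replaces the plain Holt forecaster used here.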
The primary goal of project portfolio management is to select and manage the optimal set of projects, those contributing the maximum business value. However, selecting Information Technology (IT) projects is a difficult task due to the complexities and uncertainties inherent in the strategic-operational nature of the process and the existence of both quantitative and qualitative criteria. We propose a two-stage process to select an optimal project portfolio with the aim of maximizing project benefits and minimizing project risks. We construct a two-stage hybrid mathematical programming model by integrating the Fuzzy Analytic Hierarchy Process (FAHP) with a Fuzzy Inference System (FIS). This hybrid framework can take both quantitative and qualitative criteria into account while respecting budget constraints and project risks. We also present a real-world case study from the cybersecurity industry to demonstrate the applicability and efficacy of the proposed method.