Channel estimation techniques for multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) based on a comb-type pilot arrangement with a least-squares error (LSE) estimator were investigated with a space-time-frequency (STF) diversity implementation. Frequency offset degrades OFDM performance; this was mitigated by implementing the presented inter-carrier interference self-cancellation (ICI-SC) techniques and different space-time subcarrier mappings. STF block coding in the system exploits spatial, temporal and frequency diversity to improve performance. The estimated channel was fed into a decoder that combines STF decoding with the channel coefficients estimated by the LSE estimator for equalization. System performance was compared by measuring the symbol error rate with 16-PSK and 32-PSK. The results show that subcarrier mapping together with ICI-SC increased system performance, and the channel estimator recovered the channel coefficients to within 5 dB of a perfectly known channel.
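The comb-pilot LS estimation step described above can be sketched as follows. This is a minimal single-antenna, single-symbol sketch in NumPy: the 64-subcarrier grid, 3-tap channel, pilot spacing and all-ones pilot symbols are illustrative assumptions, and the MIMO/STF dimensions of the actual system are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                             # OFDM subcarriers (assumed)
pilot_idx = np.arange(0, N, 7)     # comb-type pilots; step 7 puts pilots at both band edges

# Illustrative 3-tap frequency-selective channel and its frequency response
h = np.array([1.0, 0.5, 0.2])
H_true = np.fft.fft(h, N)

X = np.ones(N, dtype=complex)      # known pilot symbols (all-ones for simplicity)
noise = 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
Y = H_true * X + noise             # received frequency-domain symbols

# LS estimate at the pilot positions: H_p = Y_p / X_p
H_pilot = Y[pilot_idx] / X[pilot_idx]

# Linear interpolation of real and imaginary parts over all subcarriers
H_est = (np.interp(np.arange(N), pilot_idx, H_pilot.real)
         + 1j * np.interp(np.arange(N), pilot_idx, H_pilot.imag))

mse = np.mean(np.abs(H_est - H_true) ** 2)
```

The interpolated estimate `H_est` would then be handed to the equalizer/decoder stage.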
An efficient algorithm with linear computational complexity is derived for the total least squares solution of the adaptive filtering problem when both input and output signals are contaminated by noise. The proposed total least mean squares (TLMS) algorithm recursively computes an optimal solution of the adaptive TLS problem by minimizing the instantaneous value of a weighted cost function. A convergence analysis shows that the algorithm is globally convergent, provided that the step-size parameter is appropriately chosen. The TLMS algorithm is computationally simpler than other TLS algorithms and demonstrates better performance than the least mean square (LMS) and normalized least mean square (NLMS) algorithms, providing minimum mean square deviation and better convergence in misalignment for unknown system identification under noisy inputs.
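The recursive TLMS update itself is not reproduced here; the sketch below shows the batch total-least-squares solution that such an algorithm approximates, obtained from the smallest right singular vector of the augmented data matrix [X | y]. The data sizes, noise levels and true weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 2000, 4
w_true = np.array([0.5, -1.0, 2.0, 0.3])

X_clean = rng.standard_normal((n, p))
y_clean = X_clean @ w_true
# Errors-in-variables: both input and output are observed in noise of equal power
X = X_clean + 0.3 * rng.standard_normal((n, p))
y = y_clean + 0.3 * rng.standard_normal(n)

# Ordinary LS is biased (attenuated) when the input itself is noisy
w_ls = np.linalg.lstsq(X, y, rcond=None)[0]

# Batch TLS: [X | y] [w; -1] ~ 0, so w comes from the smallest right singular vector
Z = np.column_stack([X, y])
v = np.linalg.svd(Z)[2][-1]
w_tls = -v[:p] / v[p]
```

With equal noise power on input and output, the TLS estimate is consistent while plain LS retains an attenuation bias, which is the motivation for TLMS-type adaptive filters.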
In this paper, an improved method for reducing ambient noise in speech signals is introduced. The proposed noise canceller decomposes input signals into sub-bands using a computationally efficient discrete Fourier transform (DFT) filter bank, based on a prototype filter optimized for minimum output distortion. A variable step-size version of the least mean square (LMS) adaptive filter reduces the noise in the individual branches. The subband noise canceller aims to overcome problems associated with the conventional LMS adaptive algorithm in noise-cancellation setups. Mean square error convergence was used as the performance measure under white and ambient interference. Compared with conventional as well as recently developed techniques, the method achieved fast initial convergence and better noise cancellation on actual speech under ambient noise.
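A single branch of such a canceller can be sketched as a variable step-size LMS filter. This sketch uses a Kwong-Johnston-style update (mu ← alpha·mu + gamma·e², clipped to a range); the filter length, parameter values and FIR noise path are illustrative assumptions, and the DFT filter-bank decomposition is omitted.

```python
import numpy as np

rng = np.random.default_rng(2)
N, L = 5000, 8
w_true = rng.standard_normal(L)          # unknown noise path (assumed FIR)

x = rng.standard_normal(N)               # reference noise input
d = np.convolve(x, w_true)[:N] + 0.01 * rng.standard_normal(N)  # primary input

# Variable step-size LMS (assumed parameter values)
w = np.zeros(L)
mu, alpha, gamma = 0.01, 0.97, 0.001
mu_min, mu_max = 1e-4, 0.05
for n in range(L, N):
    u = x[n - L + 1:n + 1][::-1]         # most recent L reference samples
    e = d[n] - w @ u                     # error = desired minus filter output
    w += mu * e * u                      # LMS coefficient update
    mu = np.clip(alpha * mu + gamma * e * e, mu_min, mu_max)

misalignment = np.linalg.norm(w - w_true) / np.linalg.norm(w_true)
```

A large error drives the step size up for fast initial convergence, while a small error shrinks it toward `mu_min` for low steady-state misadjustment, which is the trade-off the variable step size is meant to resolve.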
State estimation plays a vital role in the security analysis of a power system. The weighted least squares (WLS) method is one of the conventional techniques used to estimate the unknown state vector of the power system, but the presence of bad data can distort the reliability of the estimated state vector. A new algorithm based on the technique of quality control charts is developed in this paper for the detection of bad data. The IEEE 6-bus power system data are utilised to implement the proposed algorithm. The results show that the method is practically applicable for isolating bad data in the power system state estimation problem.
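The paper's control-chart scheme is not reproduced here; the sketch below shows only the underlying WLS estimation step, x̂ = (HᵀWH)⁻¹HᵀWz, together with the classical chi-square residual test for bad data, on a toy linear measurement model (not the actual IEEE 6-bus network).

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy linear measurement model z = H x + e (a DC-power-flow-like abstraction)
H = np.array([[1.0,  0.0],
              [0.0,  1.0],
              [1.0, -1.0],
              [2.0,  1.0]])
x_true = np.array([0.3, -0.1])
sigma = np.array([0.01, 0.01, 0.02, 0.02])   # measurement standard deviations
z = H @ x_true + sigma * rng.standard_normal(4)

z[2] += 0.5                                  # inject one bad data point

W = np.diag(1.0 / sigma**2)                  # weights = inverse variances
G = H.T @ W @ H                              # gain matrix
x_hat = np.linalg.solve(G, H.T @ W @ z)      # WLS estimate

r = z - H @ x_hat                            # residuals
J = r @ W @ r                                # weighted sum of squared residuals
# With m=4 measurements and n=2 states, J ~ chi-square(2) under good data;
# the 99% critical value for 2 degrees of freedom is about 9.21
bad_data_detected = J > 9.21
```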
In the title compound, C11H12O2S2, two independent but virtually superimposable molecules, A and B, comprise the asymmetric unit. In each molecule, the 1,3-dithiane ring has a chair conformation with the 1,4-disposed C atoms being above and below the plane through the remaining four atoms. The substituted benzene ring occupies an equatorial position in each case and forms dihedral angles of 85.62 (9)° (molecule A) and 85.69 (8)° (molecule B) with the least-squares plane through the 1,3-dithiane ring. The difference between the molecules rests in the conformation of the five-membered 1,3-dioxole ring, which is an envelope in molecule A (the methylene C atom is the flap) and almost planar in molecule B (r.m.s. deviation = 0.046 Å). In the crystal, molecules of A self-associate into supramolecular zigzag chains (generated by glide symmetry along the c axis) via methylene C-H⋯π interactions. Molecules of B form similar chains. The chains pack with no specific directional intermolecular interactions between them.
In the title compound, C10H11BrS2, the 1,3-dithiane ring has a chair conformation with the 1,4-disposed C atoms being above and below the plane through the remaining four atoms. The bromobenzene ring occupies an equatorial position and forms a dihedral angle of 86.38 (12)° with the least-squares plane through the 1,3-dithiane ring. Thus, to a first approximation, the molecule has mirror symmetry, with the mirror containing the bromobenzene ring and the 1,4-disposed C atoms of the 1,3-dithiane ring. In the crystal, molecules associate via weak methylene-bromobenzene C-H⋯π and π-π [Cg⋯Cg = 3.7770 (14) Å for centrosymmetrically related bromobenzene rings] interactions, forming supramolecular layers parallel to [10-1]; these stack with no specific intermolecular interactions between them.
This article aims to uncover novel insights into personality factors and consumer video game engagement modeling. The research empirically validates the role of specific HEXACO personality factors that foster consumer engagement (CE) in electronic sports (eSports) users. Using a survey-based approach, we incorporated the HEXACO 60-item and consumer video game engagement scales for data collection. Data were collected from eSports users, yielding 250 valid responses. WarpPLS 6.0 was used for partial least squares structural equation modeling analyses comprising measurement and structural model assessment. The results showed that the reflective measurement model is reliable and sound, and the second-order formative measurement model also meets the criteria for indicator weights and collinearity (variance inflation factor, VIF) values. The structural model indicates that openness to experience, extraversion, agreeableness, and conscientiousness positively predict CE in eSports. This article is among the first to conceptualize and validate the HEXACO personality traits as a reflective-formative model using the hierarchical component model approach. The research model carries explanatory capacity for CE in eSports with respect to the personality dimensions of the HEXACO model, and it highlights the potential benefits of such research, especially to marketers who could employ personality modeling to develop tailored strategies to increase CE in video games.
Robustness is a key issue in speech recognition. This article proposes a speech recognition algorithm for the Malay digits zero to nine, together with a noise-cancellation algorithm based on recursive least squares (RLS). The system comprises speech processing, including digit margin detection and recognition using zero-crossing and energy calculations. Mel-frequency cepstral coefficient vectors were used to estimate the vocal tract filter, while dynamic time warping with an appropriate global constraint was used to find the closest recorded utterance. The global constraint sets a valid search region: because the variation in a speaker's speech rate is limited to a reasonable range, unreasonable parts of the search space can be pruned. The algorithm was tested on speech samples recorded as part of a Malay corpus and recognized 80.5% of the Malay digits across all recorded words. Adding the RLS noise canceller in the preprocessing stage increased the accuracy to 94.1%.
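The RLS noise-cancellation stage can be sketched as an exponentially weighted RLS adaptive filter identifying the noise path. The filter length, forgetting factor and signals are illustrative assumptions; the MFCC/DTW recognition front end is omitted.

```python
import numpy as np

rng = np.random.default_rng(4)
N, L = 2000, 4
w_true = np.array([0.8, -0.4, 0.2, 0.1])        # unknown noise path (assumed)

x = rng.standard_normal(N)                      # reference noise input
d = np.convolve(x, w_true)[:N] + 0.01 * rng.standard_normal(N)  # primary input

# Exponentially weighted RLS with forgetting factor lam
lam, delta = 0.999, 100.0
w = np.zeros(L)
P = np.eye(L) * delta                           # inverse correlation matrix estimate
for n in range(L, N):
    u = x[n - L + 1:n + 1][::-1]                # most recent L reference samples
    k = (P @ u) / (lam + u @ P @ u)             # gain vector
    e = d[n] - w @ u                            # a priori error
    w += k * e                                  # coefficient update
    P = (P - np.outer(k, u @ P)) / lam          # Riccati update of P

misalignment = np.linalg.norm(w - w_true) / np.linalg.norm(w_true)
```

RLS converges in far fewer samples than LMS at the cost of O(L²) work per sample, which is why it helps as a preprocessing stage before recognition.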
Autocorrelation distorts the variance of ordinary least squares (OLS) estimates, so it is essential to detect autocorrelation so that appropriate remedial measures can be taken. The Breusch-Godfrey (BG) test is the most popular and commonly used test for detecting autocorrelation. Since this test is based on OLS estimates, which are not robust, it is easily affected by outliers. In this paper, we propose a robust Breusch-Godfrey (MBG) test that is not easily affected by outliers. The results of the study indicate that the MBG test is more powerful than the BG test in detecting autocorrelation.
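The standard (non-robust) BG test that the MBG variant modifies can be sketched as follows: regress the OLS residuals on the original regressors plus p lagged residuals, then compare LM = n·R² against a chi-square(p) critical value. The simulated AR(1) errors (rho = 0.6) are for illustration only.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 300
x = rng.standard_normal(n)
# Simulate AR(1)-autocorrelated errors with rho = 0.6
e = np.zeros(n)
for t in range(1, n):
    e[t] = 0.6 * e[t - 1] + rng.standard_normal()
y = 1.0 + 2.0 * x + e

def ols_resid(X, y):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    return y - X @ beta

X = np.column_stack([np.ones(n), x])
u = ols_resid(X, y)                      # step 1: OLS residuals

# Step 2: auxiliary regression of u_t on X and p = 1 lagged residual
u_lag = np.concatenate([[0.0], u[:-1]])  # lagged residuals (zero-padded)
Xa = np.column_stack([X, u_lag])
ua = ols_resid(Xa, u)
r2 = 1.0 - (ua @ ua) / ((u - u.mean()) @ (u - u.mean()))

LM = n * r2                              # BG statistic, ~ chi-square(1) under H0
# The 5% critical value of chi-square with 1 df is about 3.84
autocorrelation_detected = LM > 3.84
```

An outlier inflates `u` and the auxiliary fit alike, which is why the OLS-based statistic loses power; a robust version replaces the OLS fits with robust estimators.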
This paper presents a transmission line (TL) modelling approach based on the vector fitting algorithm and RLC passive filter design. Frequency response analysis (FRA) is utilised for behaviour prediction and fault diagnosis. The measured FRA data points need to be paired with a suitable modelling category to facilitate the modelling and analysis process. This research proposes a new method for modelling the transmission line with a rational approximation function extracted by the vector fitting (VF) method from the measured frequency-response data points. This is achieved through a set of steps that set up an extracted partial-fraction approximation, obtained from a least-squares RMS-error fit via VF. Active and passive filter design circuits are used to construct the transmission line model: the RLC design representation models the system physically, while MATLAB Simulink is used to verify the results.
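Vector fitting alternates pole relocation with a linear residue-identification stage. The sketch below shows only that linear stage: with the poles assumed already known, H(s) = Σ c_k/(s − p_k) + d is linear in the residues c_k and constant d, so one least-squares solve recovers them. The poles, residues and frequency grid are synthetic placeholders.

```python
import numpy as np

# Frequency samples and a "measured" response built from known poles/residues
s = 1j * 2 * np.pi * np.logspace(1, 5, 200)          # j*omega axis
poles_true = np.array([-100.0, -2000.0 + 8000.0j, -2000.0 - 8000.0j])
res_true = np.array([5.0, 30.0 + 40.0j, 30.0 - 40.0j])
d_true = 0.2
H = sum(r / (s - p) for r, p in zip(res_true, poles_true)) + d_true

# Residue identification: columns 1/(s - p_k) plus a constant column,
# solved in one linear least-squares step
A = np.column_stack([1.0 / (s - p) for p in poles_true] + [np.ones_like(s)])
coef = np.linalg.lstsq(A, H, rcond=None)[0]
res_fit, d_fit = coef[:-1], coef[-1].real

rms_err = np.sqrt(np.mean(np.abs(A @ coef - H) ** 2))
```

Full VF would now use these residues to relocate the poles and iterate; the recovered partial fractions map directly onto RLC branch values for the physical model.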
Neurocomputing has been applied effectively to time series forecasting tasks, owing to its ability to learn any pattern automatically without prior assumptions or loss of generality; however, the outliers that frequently occur in time series data may contaminate the network training data. The most common algorithm for training the network is backpropagation (BP), which minimises the ordinary least squares (OLS) estimator, specifically the mean squared error (MSE). This algorithm is not at all robust when outliers are present and may lead to false prediction of future values. In this paper, we present a new algorithm that applies the firefly algorithm to the least median of squares (FFA-LMedS) estimator for enhancement of the artificial neural network nonlinear autoregressive moving average (ANN-NARMA) model, to remedy the outlier problem in time series data. The performance of the combined model is compared, in terms of root mean squared error (RMSE), with other robust ANN-NARMA models that use M-estimators, iterative LMedS and particle swarm optimisation on LMedS (PSO-LMedS). Monthly data on Malaysian aggregate, sand and roofing-material prices from January 1980 to December 2012 (base year 1980 = 100), with various levels of outlier contamination, were used. The robustified ANN-NARMA model using FFA-LMedS delivered the best results, with RMSE values showing almost no error in the training, testing and validation sets for every variable. The findings are hoped to assist the relevant authorities, including PFI construction projects, to overcome cost overruns.
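The robustness of the LMedS criterion (minimising the median rather than the sum of squared residuals) can be sketched on a linear toy problem. The random minimal-subset search below stands in for the paper's firefly-algorithm search, which is not reproduced; the data, contamination level and trial count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 200
x = rng.uniform(-3, 3, n)
y = 1.5 * x + 0.5 + 0.1 * rng.standard_normal(n)
# Contaminate 30% of the observations with gross positive outliers
out = rng.choice(n, size=60, replace=False)
y[out] += rng.uniform(5, 15, 60)

X = np.column_stack([x, np.ones(n)])

# OLS fit: pulled toward the outliers
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# LMedS via random minimal subsets: keep the fit with the smallest
# median of squared residuals over ALL points
best_beta, best_med = None, np.inf
for _ in range(500):
    idx = rng.choice(n, size=2, replace=False)   # minimal subset for a line
    try:
        beta = np.linalg.solve(X[idx], y[idx])
    except np.linalg.LinAlgError:
        continue                                 # degenerate subset, skip
    med = np.median((y - X @ beta) ** 2)
    if med < best_med:
        best_med, best_beta = med, beta
beta_lmeds = best_beta
```

Because the median ignores up to half the residuals, a fit through the clean majority scores well even with 30% contamination; a metaheuristic such as the firefly algorithm replaces the blind random sampling with a guided search of candidate fits.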
This paper considers the problem of outlier detection in bilinear time series data, with special focus on the BL(1,0,1,1) and BL(1,1,1,1) models. In a previous study, formulations of the effect of an innovational outlier on the observations and residuals of the process were developed, and the corresponding least squares estimator of the outlier effect was derived; an outlier detection procedure employing a bootstrap-based estimate of the estimator's variance was then proposed. In this paper, we propose using the mean absolute deviance and a trimmed mean formula to estimate the variance, to improve the performance of the procedure. Via simulation, we show that the procedure based on the trimmed mean formula successfully improves detection performance.
This paper discusses the multilevel approach to constructing a model for estimating students' performance from hierarchically structured data. Multilevel models, which take into account variation arising from the clustering of data at different levels, are compared with regression models using the least squares method. The study also estimates the contributions of gender and ethnicity to students' performance. Performance data for 866 students in a science faculty at an institution of higher learning were obtained and analyzed. The data are hierarchically structured with two levels, namely students and departments. The analysis shows different parameter estimates for the two models, and the multilevel model, which incorporates variability from different levels and predictors from higher levels, is found to provide a better fit for explaining students' performance.
The main purpose of this article is to introduce the technique of panel data analysis in econometric modeling. The elasticities of labour and capital and the economies of scale for twenty-two food manufacturing firms over 1989 to 1993 are estimated using the Cobb-Douglas model. The three main panel data techniques discussed are least squares dummy variables (LSDV), analysis of covariance (ANCOVA) and generalized least squares (GLS); the ordinary least squares (OLS) method is included as the basis of comparison.
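The LSDV technique can be sketched as pooled OLS augmented with one dummy variable per firm, which absorbs firm-specific effects. The Cobb-Douglas data below (in logs) are synthetic: the firm-effect structure, elasticities and sample layout are illustrative assumptions, not the paper's dataset.

```python
import numpy as np

rng = np.random.default_rng(7)
F, T = 22, 5                        # 22 firms, 5 years (1989-1993 layout)
n = F * T
firm = np.repeat(np.arange(F), T)

alpha_i = rng.normal(2.0, 0.5, F)   # firm-specific intercepts (e.g. log TFP)
log_L = rng.normal(4.0, 1.0, n) + 0.3 * alpha_i[firm]   # labour, correlated with effect
log_K = rng.normal(5.0, 1.0, n)
beta_L, beta_K = 0.6, 0.5           # assumed elasticities (sum > 1: increasing returns)
log_Q = alpha_i[firm] + beta_L * log_L + beta_K * log_K + 0.1 * rng.standard_normal(n)

# Pooled OLS: single intercept, ignores firm heterogeneity (biased here)
Xp = np.column_stack([np.ones(n), log_L, log_K])
b_ols = np.linalg.lstsq(Xp, log_Q, rcond=None)[0]

# LSDV: one dummy column per firm absorbs the firm effects
D = np.zeros((n, F))
D[np.arange(n), firm] = 1.0
Xd = np.column_stack([D, log_L, log_K])
b_lsdv = np.linalg.lstsq(Xd, log_Q, rcond=None)[0]
elasticity_L, elasticity_K = b_lsdv[F], b_lsdv[F + 1]
returns_to_scale = elasticity_L + elasticity_K
```

When the firm effects are correlated with an input (as simulated here for labour), pooled OLS is biased while LSDV recovers the elasticities; GLS-type random effects sit between the two.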
Most human activities that use water produce sewage. As urbanization grows, the overall demand for water grows; correspondingly, the amount of sewage produced and pollution-induced water shortage are continuously increasing worldwide, and ensuring sufficient and safe water supplies for everyone is becoming increasingly challenging. Sewage treatment is an essential prerequisite for water reclamation and reuse, and the economic and environmental performance of sewage treatment plants (STPs) is a critical indicator for this purpose. Here, the window-based data envelopment analysis model was applied to dynamically assess the relative annual efficiency of STPs under different window widths. A total of five STPs across Malaysia were analyzed during 2015-2019. Labor cost, utility cost, operation cost, chemical consumption cost and pollution removal rate, as well as greenhouse gas (GHG) emissions, were integrated to interpret eco-environmental efficiency, and ordinary least squares regression was used as a supplementary method to identify efficiency drivers. The results indicated that the particular window width significantly affects the average of the overall efficiencies but has no influence on the ranking of STP efficiency. Labor cost was the most influential parameter, accounting for almost 40% of the total cost incurred, and higher efficiency was observed for larger-scale plants. Meanwhile, the regression analysis shows that plant scale, inflow cBOD concentration and inflow total phosphorus concentration are significant at [Formula: see text] for performance. Lastly, some applicable techniques for GHG emission mitigation were suggested.
Despite the widespread interest in electrospinning technology, very few simulation studies have been conducted. The current research therefore developed a system for a sustainable and effective electrospinning process by combining design of experiments with machine learning prediction models. Specifically, to estimate the diameter of the electrospun nanofiber membrane, we developed a locally weighted kernel partial least squares regression (LW-KPLSR) model based on a response surface methodology (RSM). The accuracy of the model's predictions was evaluated through its root mean square error (RMSE), mean absolute error (MAE) and coefficient of determination (R2). The results were verified and compared against principal component regression (PCR), locally weighted partial least squares regression (LW-PLSR), partial least squares regression (PLSR), fuzzy modelling and least squares support vector regression (LSSVR) models. The LW-KPLSR model performed far better than the competing models in forecasting the membrane diameter, as shown by its much lower RMSE and MAE values, and it achieved the highest R2 values, reaching 0.9989.
Spectroscopy in the visible and near-infrared (Vis-NIR) region has proven to be an effective technique for quantifying the chlorophyll content of plants, an important indicator of their photosynthetic rate and health status. However, Vis-NIR spectroscopy analysis confronts a significant challenge: spectral variations and interferences induced by diverse factors, so the selection of characteristic wavelengths plays a crucial role. In this study, a novel wavelength selection approach, the modified regression coefficient (MRC) selection method, was introduced to enhance the diagnostic accuracy of chlorophyll content in sugarcane leaves. Spectral reflectance measurements (220-1400 nm) were collected from sugarcane leaf samples at different growth stages (seedling, tillering and jointing), and the corresponding chlorophyll contents were measured. The proposed MRC method was employed to select optimal wavelengths, and partial least squares regression (PLSR) and Gaussian process regression (GPR) models were then developed to relate the selected wavelengths to the measured chlorophyll contents. In comparison with full-spectrum modelling and other commonly employed wavelength selection techniques, the simplified MRC-GPR model, using a subset of 291 selected wavelengths, demonstrated superior performance: coefficients of determination of 0.9665 and 0.8659, and root mean squared errors of 1.7624 and 3.2029, for the calibration and prediction sets, respectively. The GPR model, a nonlinear regression approach, outperformed the PLSR model.
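The GPR step can be sketched with a squared-exponential kernel: the posterior mean at new inputs is k(X*, X)(K + σ²I)⁻¹y. A single synthetic 1-D predictor stands in for the 291 selected wavelengths, and the kernel length-scale and noise level are assumed values, not fitted hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(8)
# Synthetic nonlinear "reflectance -> chlorophyll" toy relation
X_train = np.sort(rng.uniform(0, 10, 40))
y_train = np.sin(X_train) + 0.3 * X_train + 0.05 * rng.standard_normal(40)
X_test = np.linspace(0.5, 9.5, 50)
y_test_true = np.sin(X_test) + 0.3 * X_test

def rbf(a, b, length=1.0):
    # Squared-exponential kernel k(a, b) = exp(-(a - b)^2 / (2 l^2))
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

noise = 0.05 ** 2
K = rbf(X_train, X_train) + noise * np.eye(40)   # train covariance + noise
alpha = np.linalg.solve(K, y_train)              # (K + sigma^2 I)^-1 y
y_pred = rbf(X_test, X_train) @ alpha            # GP posterior mean

rmse = np.sqrt(np.mean((y_pred - y_test_true) ** 2))
```

The kernel makes the regression nonlinear in the inputs while remaining a linear solve in the outputs, which is the property that lets GPR outperform linear PLSR on curved responses.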
This study investigates the effect of green human resource management practices on green competitive advantage and the mediating role of green competitive advantage between green human resource management practices and green ambidexterity. It also examines the effect of green competitive advantage on green ambidexterity and the moderating effect of firm size on the relationship between green competitive advantage and green ambidexterity. The results reveal that green recruitment and green training and involvement are not sufficient, but are necessary, for any outcome level of green competitive advantage. The other three constructs (green performance management and compensation, green intellectual capital, and green transformational leadership) are both sufficient and necessary; however, green performance management and compensation is necessary only at outcome levels of 60% or more. The mediating effect of green competitive advantage is significant only between these three constructs and green ambidexterity. The results also indicate that green competitive advantage has a significant positive effect on green ambidexterity. Exploring the necessary and sufficient factors using a combination of partial least squares structural equation modeling and necessary condition analysis provides valuable guidance for practitioners seeking to optimize firm outcomes.
The titrimetric method is used for on-site measurement of the concentration of volatile fatty acids (VFAs) in anaerobic treatment. In current practice, specific and interpolated pH-volume data points are used to obtain the VFA concentration by solving simultaneous equations iteratively to convergence (denoted SEq). Here, the least squares method (LSM) is introduced as an elegant alternative. Known concentrations of VFA (acetic acid and/or propionic acid) ranging from 200 to 1,000 mg/L were determined using SEq and LSM. Using the standard number of data points, SEq gave more accurate results than LSM; however, the results favoured LSM when all data points in the range were included without any interpolation. For model refinement, a unit monovalent activity coefficient (f(m) = 1) was found reasonable, and arithmetic averages of the dissociation constants and molecular weight of 80 mol% acetic acid were recommended in the model for VFA determination of mixtures. An accurate result was obtained with a mixture containing additional VFAs (butyric acid and valeric acid), and in a typical VFA measurement of real anaerobic effluent a satisfactory result with an error of 14% was achieved. LSM appears to be a promising mathematical model solver for determining VFA concentration in the titrimetric method; validation of LSM in the presence of other electrolytes deserves further exploration.
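The contrast between SEq and LSM can be sketched generically: if each titration point yields one equation that is linear in the unknown concentrations, SEq solves an exactly determined pair of equations from selected points, while LSM solves all recorded points at once as an overdetermined system. The linear coefficients below are hypothetical placeholders, not the paper's actual charge-balance titration model.

```python
import numpy as np

rng = np.random.default_rng(9)
# Hypothetical linearized model: each recorded pH-volume point i contributes
# one equation  a_i*C_vfa + b_i*C_alk = c_i  with known coefficients a_i, b_i, c_i
m = 30
a = rng.uniform(0.2, 0.9, m)
b = rng.uniform(0.1, 0.5, m)
C_true = np.array([600.0, 250.0])            # mg/L VFA and alkalinity (assumed)
c = a * C_true[0] + b * C_true[1] + rng.normal(0.0, 2.0, m)  # noisy "measurements"

# SEq-style: solve two simultaneous equations from two selected points
A2 = np.array([[a[0], b[0]], [a[1], b[1]]])
C_seq = np.linalg.solve(A2, c[:2])

# LSM: one overdetermined least-squares solve using every recorded point
A = np.column_stack([a, b])
C_lsm = np.linalg.lstsq(A, c, rcond=None)[0]
```

Using all points averages out the measurement noise at each point, which is why including the full pH-volume record without interpolation favours LSM.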
Electrical charge tomography (EChT) is a non-invasive imaging technique that aims to reconstruct an image of the material being conveyed from data measured by electrodynamic sensors installed around the pipe. Image reconstruction in EChT is vital and has not been widely studied before. Three methods have previously been introduced, namely the linear back projection method, the filtered back projection method and the least squares method. These methods normally face ill-posed problems, so their solutions are unstable and inaccurate. To ensure stability and accuracy, a special solution should be applied to obtain a meaningful reconstruction. In this paper, a new image reconstruction method, least squares with regularization (LSR), is introduced to reconstruct the image of material in a gravity-mode conveyor pipeline for EChT. Numerical results based on simulation data indicate that the algorithm efficiently overcomes the numerical instability: the accuracy of the reconstructed images was enhanced, and the images were similar to those captured by a CCD camera. As a result, an efficient method for EChT image reconstruction has been introduced.
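The stabilizing effect of regularized least squares on an ill-posed inverse problem can be sketched with Tikhonov regularization, x = (AᵀA + λI)⁻¹Aᵀb. The Gaussian-blur forward matrix and the regularization parameter below are illustrative stand-ins, not the actual electrodynamics sensor model.

```python
import numpy as np

rng = np.random.default_rng(10)
n = 40
# Severely ill-conditioned forward model: a Gaussian smoothing kernel
i = np.arange(n)
A = np.exp(-0.5 * ((i[:, None] - i[None, :]) / 2.0) ** 2)
x_true = np.zeros(n)
x_true[10:15] = 1.0                 # "charge" profile to reconstruct
b = A @ x_true + 1e-3 * rng.standard_normal(n)

# Plain least squares: small measurement noise is hugely amplified
x_ls = np.linalg.lstsq(A, b, rcond=None)[0]

# Tikhonov-regularized LS: x = (A'A + lam*I)^-1 A'b damps the unstable modes
lam = 1e-2
x_reg = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

err_ls = np.linalg.norm(x_ls - x_true)
err_reg = np.linalg.norm(x_reg - x_true)
```

The penalty λ trades a small smoothing bias for a large reduction in noise amplification, which is the mechanism behind the stable LSR reconstructions reported above.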