The Fuzzy Logic Speed Controller (FLSC) is widely used in motor drives owing to its robustness and its independence from exact plant parameters. However, it is computationally expensive to implement in real time and is prone to errors in fuzzy rule selection, which can cause the drive system to fail. This paper proposes an improved simplified-rules method for the FLSC, based on significant crisp output calculations, to address these issues. A systematic procedure for the fuzzy rule reduction process is first described. Then, a comprehensive evaluation of the activated crisp output data is presented to determine the dominant fuzzy rules. With the proposed method, the number of rules was reduced by 72%. The simplified FLSC rule base is tested on an induction motor (IM) drive system, with the real-time implementation carried out in the dSPACE DS1103 controller environment. The simulation and experimental results demonstrate the workability of the simplified rules without degrading motor performance.
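As an illustration of the rule-reduction idea, the sketch below accumulates each rule's contribution to the crisp output over a grid of operating points and keeps only the dominant rules. It uses a hypothetical 7x7 Sugeno-style rule base with singleton consequents, which is an assumption for this sketch, not the paper's exact procedure:

```python
import itertools

# Illustrative 7x7 rule base for a fuzzy speed controller: inputs are
# speed error (e) and change of error (de), each covered by 7 triangular
# sets; consequents are crisp singletons (a Sugeno-style assumption).
LABELS = [-3, -2, -1, 0, 1, 2, 3]  # NB ... ZE ... PB

def tri(x, c, w=1.0):
    """Triangular membership centered at c with half-width w."""
    return max(0.0, 1.0 - abs(x - c) / w)

def rule_output(le, lde):
    """Hypothetical singleton consequent: saturated sum of the labels."""
    return max(-3, min(3, le + lde))

# Accumulate each rule's share of the crisp output over a grid of
# operating points, then keep only the rules that contribute noticeably.
contribution = {r: 0.0 for r in itertools.product(LABELS, LABELS)}
grid = [i / 10.0 for i in range(-30, 31)]
for e in grid:
    for de in grid:
        for (le, lde) in contribution:
            w = min(tri(e, le), tri(de, lde))  # min t-norm firing strength
            contribution[(le, lde)] += w * abs(rule_output(le, lde))

total = sum(contribution.values())
dominant = [r for r, c in contribution.items() if c / total > 0.005]
print(f"kept {len(dominant)} of {len(contribution)} rules")
```

With this toy rule base, rules whose consequent is near zero everywhere contribute little to the output and are pruned; the paper's own reduction criterion is based on its measured activated crisp output data.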
The azimuthal anisotropy Fourier coefficients (v_{n}) in 8.16 TeV p+Pb data are extracted via long-range two-particle correlations as a function of the event multiplicity and compared to corresponding results in pp and PbPb collisions. Using a four-particle cumulant technique, v_{n} correlations are measured for the first time in pp and p+Pb collisions. The v_{2} and v_{4} coefficients are found to be positively correlated in all collision systems. For high-multiplicity p+Pb collisions, an anticorrelation of v_{2} and v_{3} is observed, with a similar correlation strength as in PbPb data at the same multiplicity. The new correlation results strengthen the case for a common origin of the collectivity seen in p+Pb and PbPb collisions in the measured multiplicity range.
The past few years have witnessed increased interest among researchers in cluster-based protocols for homogeneous networks because of their better scalability and higher energy efficiency than other routing protocols. Given the limited capabilities of sensor nodes in terms of energy resources, processing, and communication range, cluster-based protocols should be compatible with these constraints in both the setup state and the steady data-transmission state. Focusing on these constraints, we classify routing protocols according to their objectives and methods for addressing the shortcomings of the clustering process at each stage: cluster head selection, cluster formation, data aggregation, and data communication. We summarize the techniques and methods used in these categories and point out the weaknesses and strengths of each protocol in detail. Furthermore, a taxonomy of the protocols in each phase is given to provide a deeper understanding of current clustering approaches. Finally, based on the existing research, we summarize the issues and solutions associated with the attributes and characteristics of clustering approaches, together with some open research areas in cluster-based routing protocols that can be pursued further.
Selecting a suitable and appropriate method is important for ensuring the successful implementation of a research study. The proposed study aims to obtain weights for sustainable construction criteria from the input and perceptions of industry practitioners, and also to explore their opinions on the criteria. The choice of implementation method therefore determines the direction of the study and whether the intended objectives can be achieved. This manuscript describes the structured interview used to obtain and collect the required data. The suitability and implementation of the method are described, with the ultimate aim of ensuring that the collected data are meaningful to the study.
This Letter describes a search for Higgs boson pair production using the combined results from four final states: bbγγ, bbττ, bbbb, and bbVV, where V represents a W or Z boson. The search is performed using data collected in 2016 by the CMS experiment from LHC proton-proton collisions at sqrt[s]=13 TeV, corresponding to an integrated luminosity of 35.9 fb^{-1}. Limits are set on the Higgs boson pair production cross section. A 95% confidence level observed (expected) upper limit on the nonresonant production cross section is set at 22.2 (12.8) times the standard model value. A search for narrow resonances decaying to Higgs boson pairs is also performed in the mass range 250-3000 GeV. No evidence for a signal is observed, and upper limits are set on the resonance production cross section.
We present the first measurements of absolute branching fractions of Ξ_{c}^{0} decays into Ξ^{-}π^{+}, ΛK^{-}π^{+}, and pK^{-}K^{-}π^{+} final states. The measurements are made using a dataset comprising (772±11)×10^{6} BB̄ pairs collected at the ϒ(4S) resonance with the Belle detector at the KEKB e^{+}e^{-} collider. We first measure the absolute branching fraction for B^{-}→Λ̄_{c}^{-}Ξ_{c}^{0} using a missing-mass technique; the result is B(B^{-}→Λ̄_{c}^{-}Ξ_{c}^{0})=(9.51±2.10±0.88)×10^{-4}. We subsequently measure the product branching fractions B(B^{-}→Λ̄_{c}^{-}Ξ_{c}^{0})B(Ξ_{c}^{0}→Ξ^{-}π^{+}), B(B^{-}→Λ̄_{c}^{-}Ξ_{c}^{0})B(Ξ_{c}^{0}→ΛK^{-}π^{+}), and B(B^{-}→Λ̄_{c}^{-}Ξ_{c}^{0})B(Ξ_{c}^{0}→pK^{-}K^{-}π^{+}) with improved precision. Dividing these product branching fractions by the result for B^{-}→Λ̄_{c}^{-}Ξ_{c}^{0} yields the following branching fractions: B(Ξ_{c}^{0}→Ξ^{-}π^{+})=(1.80±0.50±0.14)%, B(Ξ_{c}^{0}→ΛK^{-}π^{+})=(1.17±0.37±0.09)%, and B(Ξ_{c}^{0}→pK^{-}K^{-}π^{+})=(0.58±0.23±0.05)%. For the above branching fractions, the first uncertainties are statistical and the second are systematic. Our result for B(Ξ_{c}^{0}→Ξ^{-}π^{+}) can be combined with Ξ_{c}^{0} branching fractions measured relative to Ξ_{c}^{0}→Ξ^{-}π^{+} to yield other absolute Ξ_{c}^{0} branching fractions.
We report on the first Belle search for a light CP-odd Higgs boson, A^{0}, that decays into low mass dark matter, χ, in final states with a single photon and missing energy. We search for events produced via the dipion transition ϒ(2S)→ϒ(1S)π^{+}π^{-}, followed by the on-shell process ϒ(1S)→γA^{0} with A^{0}→χχ, or by the off-shell process ϒ(1S)→γχχ. Utilizing a data sample of 157.3×10^{6} ϒ(2S) decays, we find no evidence for a signal. We set limits on the branching fractions of such processes in the mass ranges M_{A^{0}}<8.97 GeV/c^{2} and M_{χ}<4.44 GeV/c^{2}. We then use the limits on the off-shell process to set competitive limits on WIMP-nucleon scattering in the WIMP mass range below 5 GeV/c^{2}.
We present a search for the direct production of a light pseudoscalar a decaying into two photons with the Belle II detector at the SuperKEKB collider. We search for the process e^{+}e^{-}→γa, a→γγ in the mass range 0.2&lt;m_{a}&lt;9.7 GeV/c^{2} using data corresponding to an integrated luminosity of (445±3) pb^{-1}. Light pseudoscalars interacting predominantly with standard model gauge bosons (so-called axionlike particles or ALPs) are frequently postulated in extensions of the standard model. We find no evidence for ALPs and set 95% confidence level upper limits on the coupling strength g_{aγγ} of ALPs to photons at the level of 10^{-3} GeV^{-1}. The limits are the most restrictive to date for 0.2&lt;m_{a}&lt;1 GeV/c^{2}.
Given that validity is the baseline of psychological assessment, there is a need to provide evidence-based data on the construct validity of such scales, so that clinicians can better evaluate psychiatric morbidity in psychiatric and psychosomatic settings.
In this paper, we study Tsallis' fractional entropy (TFE) in a complex domain by applying the definition of complex probability functions. We derive upper and lower bounds for TFE based on some special functions. Moreover, applications to complex neural networks (CNNs) are illustrated to assess the accuracy of CNNs.
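For reference, the standard real-valued Tsallis entropy on which TFE builds can be computed directly; the complex-domain and fractional extensions studied in the paper are not reproduced here:

```python
import math

def tsallis_entropy(p, q):
    """Standard (real-valued) Tsallis entropy S_q = (1 - sum p_i^q) / (q - 1).
    As q -> 1 it reduces to the Shannon entropy (natural log)."""
    if abs(q - 1.0) < 1e-12:
        return -sum(pi * math.log(pi) for pi in p if pi > 0)
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

p = [0.5, 0.25, 0.25]
print(tsallis_entropy(p, 2.0))  # (1 - 0.375) / 1 = 0.625
print(tsallis_entropy(p, 1.0))  # Shannon limit: 1.5 * ln 2
```

The q = 1 branch makes the Shannon limit explicit; for q != 1 the entropy is non-additive, which is the defining feature of the Tsallis family.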
Radio propagation models (RPMs) are generally employed in Vehicular Ad Hoc Networks (VANETs) to predict path loss in multiple operating environments (e.g. modern road infrastructure such as flyovers, underpasses and road tunnels). For example, different RPMs have been developed to predict propagation behaviour in road tunnels. However, most existing RPMs for road tunnels are computationally complex and are based on field measurements in frequency bands not suitable for VANET deployment. Furthermore, in tunnel applications, the effects of moving radio obstacles, such as large buses and delivery trucks, are generally not considered in existing RPMs. This paper proposes a computationally inexpensive RPM with a minimal set of parameters that predicts path loss within an acceptable range for road tunnels. The proposed RPM utilizes geometric properties of the tunnel, such as its height and width, along with the distance between sender and receiver, to predict the path loss. It also accounts for the additional attenuation caused by moving radio obstacles in road tunnels, while requiring negligible overhead in terms of computational complexity. To demonstrate the utility of the proposed RPM, we conduct a comparative summary and evaluate its performance through an extensive data-gathering campaign. The field measurements use the 5 GHz frequency band, which is suitable for vehicular communication. The results demonstrate a close match between the predicted and measured path loss values, with an average accuracy of 94% and R^2 = 0.86.
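A minimal sketch of such a geometry-aware log-distance model is shown below. All coefficients (the reference loss, the geometry term in the exponent, and the per-obstacle attenuation) are illustrative placeholders, not the paper's fitted values:

```python
import math

def tunnel_path_loss_db(d, width, height, pl0=46.7, d0=1.0,
                        n_obstacle=0, l_obstacle_db=4.0):
    """Log-distance path-loss sketch for a road tunnel at 5 GHz.

    Illustrative placeholders only: pl0 is the loss (dB) at reference
    distance d0 (m); the exponent shrinks slightly in wider/taller
    tunnels to mimic waveguiding; each large moving obstacle in the
    path adds l_obstacle_db of attenuation.
    """
    n = 2.0 - 0.02 * (width + height)  # hypothetical geometry term
    n = max(n, 1.4)                    # keep the exponent physical
    return pl0 + 10.0 * n * math.log10(d / d0) + n_obstacle * l_obstacle_db

# 100 m link in a 10 m x 5 m tunnel with one bus obstructing the path:
print(round(tunnel_path_loss_db(100.0, 10.0, 5.0, n_obstacle=1), 1))
```

The appeal of this model family is that it needs only the tunnel cross-section, the link distance, and an obstacle count, which keeps the per-prediction cost trivial compared with ray-tracing or modal models.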
Localization is an important aspect of wireless sensor networks and the focus of much active research. One severe condition that needs to be taken into consideration is localizing a mobile target through a dispersed sensor network in the presence of physical barrier attacks. These attacks confuse the localization process and cause location estimation errors. Range-based methods, such as those using the received signal strength indicator (RSSI), are strongly affected by this kind of attack. This paper proposes a solution based on a combination of multi-frequency multi-power localization (C-MFMPL) and step function multi-frequency multi-power localization (SF-MFMPL), together with fingerprint matching and lateration, to provide a robust and accurate localization technique. In addition, this paper proposes a grid coloring algorithm to detect the signal hole map of the network, which identifies the attack-prone regions, so that corrective actions can be carried out. The simulation results show the enhanced robustness of RSS localization under log-normal shadow fading effects and in the presence of physical barrier attacks, through detecting, filtering and eliminating the effect of these attacks.
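The range-based building block of such a scheme can be sketched as follows: RSSI readings are inverted through an assumed log-distance model to get range estimates, and the position is then recovered by linearized lateration (this is a generic textbook sketch, not the paper's C-MFMPL/SF-MFMPL algorithm):

```python
import math

def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, n=2.0):
    """Invert the log-distance model RSSI = P(1 m) - 10 n log10(d).
    tx_power_dbm is the assumed RSSI at 1 m; n is the path-loss exponent."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * n))

def laterate(anchors, distances):
    """Lateration with three anchors: subtracting the first circle
    equation from the other two leaves a 2x2 linear system."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = distances
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = (3.0, 4.0)
# Ideal (noise-free) RSSI readings generated from the same model:
rssi = [-40.0 - 20.0 * math.log10(math.hypot(true_pos[0] - ax,
                                             true_pos[1] - ay))
        for ax, ay in anchors]
dists = [rssi_to_distance(r) for r in rssi]
print(laterate(anchors, dists))  # close to (3.0, 4.0)
```

A barrier attack corrupts the RSSI inputs to `rssi_to_distance`, which is why the paper layers multi-frequency/multi-power measurements and fingerprint matching on top of plain lateration.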
Collecting correlated scene images and camera poses is an essential step towards learning absolute camera pose regression models. While the acquisition of such data in everyday living environments is relatively easy by following regular roads and paths, it remains challenging in constricted industrial environments, because industrial objects have varied sizes and inspections are usually carried out with non-constant motions. As a result, regression models are more sensitive to scene images with respect to viewpoints and distances. Motivated by this, we present a simple but efficient camera pose data collection method, WatchPose, to improve the generalization and robustness of camera pose regression models. Specifically, WatchPose tracks nested markers and visualizes viewpoints in an Augmented Reality (AR)-based manner to properly guide users to collect training data over broader camera-object distances and more diverse views around the objects. Experiments show that WatchPose can effectively improve the accuracy of existing camera pose regression models compared to the traditional data acquisition method. We also introduce a new dataset, Industrial10, to encourage the community to adapt camera pose regression methods to more complex environments.
The aim of this research is to apply the variance and conditional value at risk (CVaR) as risk measures in the portfolio selection problem. We are therefore motivated to compare the behavior of two different types of risk measure (variance and CVaR) as the expected return of a portfolio varies from low to high. To obtain an optimum portfolio of the assets, we minimize the risks using mean-variance and mean-CVaR models. A dataset of FBMKLCI stocks is used to generate our scenario returns. Both models and the dataset are coded and implemented in AMPL software. We compare the performance of the optimized portfolios constructed from the two models in terms of risk measures and realized returns. The optimal portfolios are evaluated across three target returns representing low-risk/low-return, medium-risk/medium-return, and high-risk/high-return portfolios. Numerical results show that the mean-variance portfolios are generally more diversified than the mean-CVaR portfolios. The in-sample results show that the seven optimal mean-CVaR_{0.05} portfolios have lower CVaR_{0.05} values than their optimal mean-variance counterparts. Conversely, the standard deviations of the mean-variance optimal portfolios are lower than those of their mean-CVaR_{0.05} counterparts. For the out-of-sample analysis, we conclude that the mean-variance portfolio minimizes the standard deviation only at the low target return, while the mean-CVaR portfolios are favorable for minimizing risk at the high target return.
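The paper implements its optimization models in AMPL; as a self-contained illustration of the risk measure being compared, the historical CVaR of a set of scenario returns can be computed as follows (toy data, not the FBMKLCI scenarios):

```python
def cvar(returns, alpha=0.05):
    """Historical CVaR (expected shortfall): the mean loss over the
    worst alpha-fraction of scenarios. Losses are negated returns."""
    losses = sorted((-r for r in returns), reverse=True)
    k = max(1, int(round(alpha * len(losses))))
    return sum(losses[:k]) / k

# 20 toy scenario returns; with alpha = 0.05 the tail holds one scenario.
scenarios = [0.02, -0.01, 0.03, -0.04, 0.01, 0.00, 0.02, -0.02, 0.05, -0.06,
             0.01, 0.02, -0.03, 0.04, 0.00, 0.01, -0.01, 0.03, 0.02, -0.05]
print(cvar(scenarios, 0.05))  # worst single loss: 0.06
```

Because CVaR averages only the tail of the loss distribution, a mean-CVaR optimizer concentrates on the worst scenarios, while a mean-variance optimizer penalizes dispersion on both sides of the mean; this difference underlies the diversification contrast reported above.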
This paper studies a triple flat-type air coil inductive sensor that can identify two maturity stages of oil palm fruits, ripe and unripe, based on changes in resonance frequency and fruitlet capacitance. Two triple structures were tested, namely Triple I and Triple II. Triple I is a triple series coil with a fixed number of turns (n = 200) and different lengths, and Triple II is a coil with a fixed length (l = 5 mm) and a different number of turns. The peak comparison between Triple I and Triple II uses the coefficient of variation c_v, defined as the ratio of the standard deviation to the mean, to express the precision and repeatability of the data. As the fruit ripens, the resonance frequency peak of the inductance–frequency curve shifts closer to the peak of the air curve, and the fruitlet capacitance decreases. The coefficient of variation of the inductive oil palm fruit sensor shows that Triple I is smaller and more consistent than Triple II, for both resonance frequency and fruitlet capacitance. The development of this sensor demonstrates the capability of an inductive element, such as a coil, to be used as a sensor to determine the ripeness of an oil palm fresh fruit bunch sample.
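The precision metric used for the peak comparison is straightforward to compute; the readings below are hypothetical, for illustration only:

```python
import statistics

def coefficient_of_variation(samples):
    """c_v = sample standard deviation / mean, used here to compare the
    repeatability of repeated resonance-frequency readings."""
    return statistics.stdev(samples) / statistics.mean(samples)

# Hypothetical resonance-frequency readings (MHz) for the two designs:
triple_i = [10.1, 10.2, 10.0, 10.1, 10.2]   # tight spread -> small c_v
triple_ii = [10.5, 9.6, 10.9, 9.8, 10.4]    # wide spread -> larger c_v
print(coefficient_of_variation(triple_i) < coefficient_of_variation(triple_ii))
```

A smaller c_v at similar means indicates the more repeatable coil design, which is the basis for preferring Triple I.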
Activity recognition in smart homes aims to infer the particular activities of the inhabitant, with the goal of monitoring those activities and identifying any abnormalities, especially for people living alone. For a smart home to support its inhabitant, the recognition system needs to learn from observations acquired through sensors. A question that often arises is which sensors are useful, and how many sensors are required to accurately recognise the inhabitant's activities. Many wrapper methods have been proposed, and they remain among the most popular evaluators for sensor selection because of their superior accuracy. However, they are prohibitively slow during the evaluation process and run the risk of overfitting because of the extent of the search. Motivated by these characteristics, this paper attempts to reduce the cost of the evaluation process and the risk of overfitting through tree alignment. The performance of our method is evaluated on two public datasets obtained in two distinct smart home environments.
We present first evidence that the cosine of the CP-violating weak phase 2β is positive, and hence exclude trigonometric multifold solutions of the Cabibbo-Kobayashi-Maskawa (CKM) Unitarity Triangle using a time-dependent Dalitz plot analysis of B^{0}→D^{(*)}h^{0} with D→K_{S}^{0}π^{+}π^{-} decays, where h^{0}∈{π^{0},η,ω} denotes a light unflavored and neutral hadron. The measurement is performed combining the final data sets of the BABAR and Belle experiments collected at the ϒ(4S) resonance at the asymmetric-energy B factories PEP-II at SLAC and KEKB at KEK, respectively. The data samples contain (471±3)×10^{6} BB̄ pairs recorded by the BABAR detector and (772±11)×10^{6} BB̄ pairs recorded by the Belle detector. The results of the measurement are sin2β=0.80±0.14(stat)±0.06(syst)±0.03(model) and cos2β=0.91±0.22(stat)±0.09(syst)±0.07(model). The result for the direct measurement of the angle β of the CKM Unitarity Triangle is β=[22.5±4.4(stat)±1.2(syst)±0.6(model)]°. The measurement assumes no direct CP violation in B^{0}→D^{(*)}h^{0} decays. The quoted model uncertainties are due to the composition of the D^{0}→K_{S}^{0}π^{+}π^{-} decay amplitude model, which is newly established by performing a Dalitz plot amplitude analysis using a high-statistics e^{+}e^{-}→cc̄ data sample. CP violation is observed in B^{0}→D^{(*)}h^{0} decays at the level of 5.1 standard deviations. The significance for cos2β>0 is 3.7 standard deviations. The trigonometric multifold solution π/2-β=(68.1±0.7)° is excluded at the level of 7.3 standard deviations. The measurement resolves an ambiguity in the determination of the apex of the CKM Unitarity Triangle.
The side sensitive group runs (SSGR) chart outperforms both the Shewhart and synthetic charts in detecting small and moderate process mean shifts. In practice, the process parameters are seldom known, so they must be estimated from in-control Phase-I samples. Research has shown that a large number of in-control Phase-I samples is needed for the SSGR chart with estimated process parameters to behave similarly to the chart with known process parameters. The common metric for evaluating the performance of a control chart is the average run length (ARL), whose computation assumes that the shift size is known. In reality, however, practitioners may not know the shift size in advance. In light of this, the expected average run length (EARL) is adopted to measure the performance of the SSGR chart. Moreover, the standard deviation of the ARL (SDARL) is studied to quantify the between-practitioner variability in the SSGR chart with estimated process parameters. This paper proposes an optimal design of the SSGR chart with estimated process parameters based on the EARL criterion. The application of the optimal SSGR chart with estimated process parameters is demonstrated with actual data taken from a manufacturing company.
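The EARL idea can be illustrated by averaging the ARL over a uniform distribution of unknown shift sizes. The sketch below uses the closed-form ARL of a Shewhart X-bar chart as a stand-in, since the SSGR run-length model is more involved; the shift range and chart parameters are illustrative assumptions:

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def shewhart_arl(delta, L=3.0, n=5):
    """ARL of a Shewhart X-bar chart for a standardized mean shift delta
    with control limits at +/- L sigma and subgroup size n.
    (Stand-in for the SSGR run-length model, which is more involved.)"""
    p_signal = 1.0 - (phi(L - delta * math.sqrt(n)) -
                      phi(-L - delta * math.sqrt(n)))
    return 1.0 / p_signal

def earl(delta_min=0.1, delta_max=2.0, steps=200, **kw):
    """EARL: the ARL averaged over shifts drawn uniformly from
    [delta_min, delta_max], approximated with a midpoint rule."""
    h = (delta_max - delta_min) / steps
    return sum(shewhart_arl(delta_min + (i + 0.5) * h, **kw)
               for i in range(steps)) / steps

print(round(shewhart_arl(0.0), 1))  # in-control ARL0, about 370.4
print(round(earl(), 2))             # expected ARL over the shift range
```

Optimizing chart parameters against EARL rather than ARL at a single shift is what removes the assumption that the practitioner knows the shift size in advance.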