  1. Tukkee AS, Bin Abdul Wahab NI, Binti Mailah NF, Bin Hassan MK
    PLoS One, 2024;19(2):e0298094.
    PMID: 38330067 DOI: 10.1371/journal.pone.0298094
    Recently, global interest in organizing the functioning of renewable energy resources (RES) through microgrids (MG) has developed as a unique approach to tackling technical, economic, and environmental difficulties. This study proposes implementing a developed Distributable Resource Management strategy (DRMS) in hybrid microgrid systems to reduce total net present cost (TNPC), energy loss (Ploss), and gas emissions (GEM), while taking the cost-benefit index (CBI) and loss of power supply probability (LPSP) as operational constraints. The Grey Wolf Optimizer (GWO) was utilized to find the optimal size of the hybrid microgrid components and to calculate the multi-objective function with and without the proposed management method. In addition, a detailed sensitivity analysis of numerous economic and technological parameters was performed to assess system performance. The proposed strategy reduced the system's total net present cost, power loss, and emissions by 1.06%, 8.69%, and 17.19%, respectively, compared to normal operation. The Firefly Algorithm (FA) and Particle Swarm Optimization (PSO) techniques were used to verify the results. This study provides a detailed framework for evaluating the effectiveness of hybrid microgrid systems from technical, economic, and environmental perspectives.
    Matched MeSH terms: Probability
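    The Grey Wolf Optimizer used in the study above is a standard population-based metaheuristic. Below is a minimal Python sketch of its core update loop, assuming a generic minimisation problem; the decision variables, bounds, and the stand-in objective are placeholders, not the paper's TNPC/Ploss/GEM formulation.

```python
import numpy as np

def gwo(objective, bounds, n_wolves=20, n_iter=200, seed=0):
    """Minimal Grey Wolf Optimizer: wolves move toward the three best
    solutions found so far (alpha, beta, delta)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    X = rng.uniform(lo, hi, size=(n_wolves, len(lo)))
    for t in range(n_iter):
        fitness = np.apply_along_axis(objective, 1, X)
        alpha, beta, delta = X[np.argsort(fitness)[:3]]
        a = 2 - 2 * t / n_iter              # linearly decreasing coefficient
        new_X = np.empty_like(X)
        for i, x in enumerate(X):
            positions = []
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(len(lo)), rng.random(len(lo))
                A, C = 2 * a * r1 - a, 2 * r2
                D = np.abs(C * leader - x)  # distance to the leader
                positions.append(leader - A * D)
            new_X[i] = np.mean(positions, axis=0)
        X = np.clip(new_X, lo, hi)
    fitness = np.apply_along_axis(objective, 1, X)
    return X[np.argmin(fitness)]

# Hypothetical sizing: variables could be PV, wind, and battery
# capacities; this quadratic objective is a stand-in only.
best = gwo(lambda x: np.sum((x - 3.0) ** 2), np.array([[0, 10.0]] * 3))
print(best)
```
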
  2. Mawardi I, Al Mustofa MU, Widiastuti T, Fanani S, Bakri MH, Hanafi Z, et al.
    PLoS One, 2024;19(4):e0301398.
    PMID: 38635825 DOI: 10.1371/journal.pone.0301398
    The banking industry necessitates implementing an early warning system to effectively identify the factors that impact bank managers and enable them to make informed decisions, thereby mitigating systemic risk. Identifying the factors that influence banks in times of stability and crisis is crucial, as it ultimately contributes to developing an improved early warning system. This study undertakes a comparative analysis of the stability of Indonesian Islamic and conventional banking across distinct economic regimes: crisis and stability. We analyze monthly banking data from December 2007 to November 2022 using the Markov Switching Dynamic Regression technique. The study focuses on a comparative analysis between Islamic banks, represented by the Islamic Commercial Bank (ICB) and Islamic Rural Bank (IRB), and conventional banks, represented by the Conventional Commercial Bank (CCB) and Conventional Rural Bank (CRB). The findings reveal that both Islamic and conventional banks exhibit a higher probability of being in a stable regime than a crisis regime. Notably, Islamic banks demonstrate a greater propensity to remain in a stable regime than their conventional counterparts. However, in a crisis regime, the likelihood of recovery for Sharia-compliant institutions is lower than for conventional banks. Furthermore, our analysis indicates that banks that are larger in terms of assets and size exhibit higher stability than their smaller counterparts. This study pioneers a comprehensive comparison of the Z-score, employed as a proxy for stability, between two distinct classifications of Indonesian banks: Sharia (ICB and IRB) and conventional (CCB and CRB). The results are expected to improve our understanding of the factors that affect the stability of Islamic and conventional banking in Indonesia, leading to a deeper comprehension of their dynamics.
    Matched MeSH terms: Probability
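    The abstract above uses the Z-score as its stability proxy. Here is a minimal sketch, assuming the conventional definition (distance to insolvency measured in standard deviations of return on assets); the figures are illustrative, not Indonesian banking data.

```python
import numpy as np

def z_score(roa, equity_to_assets):
    """Bank stability proxy: how many ROA standard deviations losses
    would need to move to exhaust capital. Higher Z = more stable."""
    roa = np.asarray(roa, dtype=float)
    return (roa.mean() + equity_to_assets) / roa.std(ddof=1)

# Illustrative monthly ROA observations and a capital ratio.
print(z_score([0.011, 0.009, 0.013, 0.010], equity_to_assets=0.12))
```
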
  3. Jeon J, Krishnan S, Manirathinam T, Narayanamoorthy S, Nazir Ahmad M, Ferrara M, et al.
    Sci Rep, 2023 Jun 23;13(1):10206.
    PMID: 37353615 DOI: 10.1038/s41598-023-37200-2
    Probabilistic hesitant fuzzy elements (PHFEs) are a beneficial augmentation of the hesitant fuzzy element (HFE), intended to give decision-makers more flexibility in expressing their preferences when using hesitant fuzzy information. To extract a more accurate interpretation of the decision documentation, it is sufficient to standardize the organization of the elements in PHFEs without introducing fictional elements. Several procedures for unifying and arranging components in PHFEs have been proposed so far, but most of them result in various disadvantages that are critically explored in this paper. The primary objective of this research is to recommend a PHFE unification procedure that avoids the deficiencies of existing practices while maintaining the inherent properties of PHFE probabilities. The present study advances the hypothesis of permutation on PHFEs by suggesting a new sort of PHFE division and subtraction compared with the existing unification procedure. Finally, the proposed PHFE unification process is applied in an innovative PHFE-based Weighted Aggregated Sum Product Assessment-Analytic Hierarchy Process (WASPAS-AHP) framework for selecting flexible packaging bags after the prohibition of single-use plastics. As a result, we have included the PHFEs-WASPAS in our selection of the most effective fuzzy environment for bio-plastic bags. The ranking results of the suggested PHFEs-MCDM technique surpassed the existing AHP methods in this study by providing the best solution. Our solutions offer the best bio-plastic bag alternative strategy for mitigating environmental impacts.
    Matched MeSH terms: Probability
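    WASPAS blends weighted-sum and weighted-product scores into one ranking index. A minimal sketch for a crisp decision matrix follows; the paper operates on probabilistic hesitant fuzzy elements, whose unification and aggregation steps are omitted here, and the weights and scores below are hypothetical.

```python
import numpy as np

def waspas(X, weights, benefit, lam=0.5):
    """WASPAS ranking for a crisp decision matrix X (alternatives x
    criteria): Q = lam * WSM + (1 - lam) * WPM."""
    X = np.asarray(X, dtype=float)
    # Normalize: column max for benefit criteria, min/value for cost.
    N = np.where(benefit, X / X.max(axis=0), X.min(axis=0) / X)
    wsm = (N * weights).sum(axis=1)        # weighted sum model
    wpm = np.prod(N ** weights, axis=1)    # weighted product model
    return lam * wsm + (1 - lam) * wpm

scores = waspas([[7, 3, 5], [6, 2, 8], [8, 4, 4]],
                weights=np.array([0.5, 0.2, 0.3]),
                benefit=np.array([True, False, True]))
print(scores.argsort()[::-1])  # best alternative first
```
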
  4. Walters K, Yaacob H
    Genet Epidemiol, 2023 Apr;47(3):249-260.
    PMID: 36739616 DOI: 10.1002/gepi.22517
    Currently, the only effect size prior that is routinely implemented in a Bayesian fine-mapping multi-single-nucleotide polymorphism (SNP) analysis is the Gaussian prior. Here, we show how the Laplace prior can be deployed in Bayesian multi-SNP fine mapping studies. We compare the ranking performance of the posterior inclusion probability (PIP) using a Laplace prior with the ranking performance of the corresponding Gaussian prior and FINEMAP. Our results indicate that, for the simulation scenarios we consider here, the Laplace prior can lead to higher PIPs than either the Gaussian prior or FINEMAP, particularly for moderately sized fine-mapping studies. The Laplace prior also appears to have better worst-case scenario properties. We reanalyse the iCOGS case-control data from the CASP8 region on Chromosome 2. Even though this study has a total sample size of nearly 90,000 individuals, there are still some differences in the top few ranked SNPs if the Laplace prior is used rather than the Gaussian prior. R code to implement the Laplace (and Gaussian) prior is available at https://github.com/Kevin-walters/lapmapr.
    Matched MeSH terms: Probability
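    A minimal sketch of how a Laplace versus Gaussian effect-size prior enters a single-SNP Bayes factor computed from summary statistics, assuming the usual asymptotic normal likelihood and, for the PIPs, a one-causal-variant model with a uniform causal prior. The summary statistics and prior scales are hypothetical, and this is not the authors' lapmapr code.

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

def log_bf(beta_hat, se, prior_pdf):
    """Bayes factor of H1 (effect ~ prior) vs H0 (effect = 0), using
    the asymptotic likelihood beta_hat ~ N(beta, se^2). Effect sizes
    are assumed to lie well inside (-1, 1)."""
    num, _ = quad(lambda b: stats.norm.pdf(beta_hat, b, se) * prior_pdf(b),
                  -1.0, 1.0)
    return np.log(num) - np.log(stats.norm.pdf(beta_hat, 0.0, se))

# Hypothetical summary statistics for three SNPs.
beta_hat = np.array([0.12, 0.02, 0.15])
se = np.array([0.03, 0.03, 0.05])
gauss = lambda b: stats.norm.pdf(b, 0.0, 0.2)       # Gaussian prior, sd 0.2
laplace = lambda b: stats.laplace.pdf(b, 0.0, 0.1)  # Laplace prior, scale 0.1

for prior in (gauss, laplace):
    lbf = np.array([log_bf(b, s, prior) for b, s in zip(beta_hat, se)])
    pip = np.exp(lbf) / np.exp(lbf).sum()  # PIP assuming one causal SNP
    print(np.round(pip, 3))
```
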
  5. Madani Fadoul M, Chow CO
    PLoS One, 2023;18(6):e0286970.
    PMID: 37339142 DOI: 10.1371/journal.pone.0286970
    In a multicell environment, half-duplex (HD) relaying is prone to inter-relay interference (IRI), and full-duplex (FD) relaying is prone to relay residual self-interference (RSI) and relay-to-destination interference (RDI), due to Next Generation Node B (gNB) traffic adaptation to different backhaul subframe configurations. IRI and RDI occur in the downlink when a relay is transmitting on its access link and interfering with the reception of the backhaul link of another victim relay, while the simultaneous transmission and reception of the FD relay creates the RSI. IRI, RDI, and RSI have detrimental effects on system performance, leading to lower ergodic capacity and higher outage probability. Some previous contributions only briefly analysed the IRI, RSI, and RDI in a single-cell scenario, and some assumed that the backhaul and access subframes among adjacent cells are perfectly aligned for different relays, without accounting for IRI, RSI, and RDI. In practice, however, the subframes are not perfectly aligned. In this paper, we eliminate the IRI, RSI, and RDI by using a hybrid zero-forcing and singular value decomposition (ZF-SVD) beamforming technique based on null-space projection. Furthermore, joint power allocation (joint PA) for the relays and destinations is performed to optimize the capacity. Comparisons of ergodic capacity and outage probability between the proposed scheme and comparable baseline schemes corroborate its effectiveness.
    Matched MeSH terms: Probability
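    A minimal sketch of the null-space projection idea behind ZF-SVD beamforming: precode the transmit signal onto the right singular vectors of the interference channel that correspond to zero singular values, so victim receivers see no IRI/RDI. The antenna counts and random channel below are assumptions.

```python
import numpy as np

def nullspace_precoder(H_int, tol=1e-10):
    """Return a precoder whose columns span the null space of the
    interference channel H_int (victim rx antennas x tx antennas),
    so that H_int @ W = 0 and the transmission causes no IRI/RDI."""
    _, s, Vh = np.linalg.svd(H_int)
    rank = np.sum(s > tol)
    return Vh.conj().T[:, rank:]  # right singular vectors of zero s.v.

rng = np.random.default_rng(1)
# Hypothetical sizes: 2 victim receive antennas, 4 transmit antennas.
H_int = rng.normal(size=(2, 4)) + 1j * rng.normal(size=(2, 4))
W = nullspace_precoder(H_int)
print(np.max(np.abs(H_int @ W)))  # ~0: interference nulled
```
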
  6. Allias Omar SM, Wan Ariffin WNH, Mohd Sidek L, Basri H, Moh Khambali MH, Ahmed AN
    Int J Environ Res Public Health, 2022 Dec 09;19(24).
    PMID: 36554413 DOI: 10.3390/ijerph192416530
    Extensive hydrological analysis is carried out to estimate floods for the Batu Dam, a hydropower dam located in the urban area upstream of Kuala Lumpur, Malaysia. The study demonstrates the operational state and reliability of the dam structure based on a hydrologic assessment of the dam. The surrounding area is affected by heavy rainfall and climate change every year, which increases the probability of flooding and threatens the dense population downstream of the dam. This study evaluates the adequacy of the dam spillway by considering the latest Probable Maximum Precipitation (PMP) and Probable Maximum Flood (PMF) values of the dam. PMP estimates are obtained using two approaches, Hershfield's statistical method and the National Hydraulic Research Institute of Malaysia (NAHRIM) envelope curve, and compared as inputs for establishing the PMF. Since the PMF is derived from the PMP values, the highest design flood standard can be applied to the dam, bounding inflow into the reservoir and limiting the risk of dam structural failure. Hydrologic modeling using HEC-HMS provides PMF values for the Batu Dam. Based on the results, the Batu Dam is found to have a spillway discharge capacity of 200.6 m3/s. Under PMF conditions, the Batu Dam will not face overtopping, since the peak reservoir level remains below the crest level of the dam.
    Matched MeSH terms: Probability
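    A minimal sketch of Hershfield's statistical PMP estimate, assuming the classical form (mean of the annual maximum rainfall series plus a frequency factor times its standard deviation); the rainfall series and k_m are illustrative, not Batu Dam data, and the NAHRIM envelope curve comparison is omitted.

```python
import numpy as np

def hershfield_pmp(annual_max_rainfall, k_m=15.0):
    """Hershfield statistical PMP: mean of the annual maximum series
    plus k_m standard deviations. k_m = 15 is Hershfield's classical
    upper-bound frequency factor; site-specific values are lower."""
    x = np.asarray(annual_max_rainfall, dtype=float)
    return x.mean() + k_m * x.std(ddof=1)

# Illustrative annual maximum daily rainfall series (mm).
print(hershfield_pmp([118, 95, 140, 160, 132, 121, 105, 150]))
```
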
  7. Saealal MS, Ibrahim MZ, Mulvaney DJ, Shapiai MI, Fadilah N
    PLoS One, 2022;17(12):e0278989.
    PMID: 36520851 DOI: 10.1371/journal.pone.0278989
    Deep learning is notably successful in data analysis, computer vision, and human control. Nevertheless, this approach has inevitably enabled the development of DeepFake video sequences and images that can be altered in ways that are not easily or explicitly detectable. Such alterations have recently been used to spread false news or disinformation. This study aims to identify DeepFaked videos and images and alert viewers to the possible falsity of the information. The current work presents a novel means of revealing fake face videos by cascading a convolutional network with recurrent neural networks and a fully connected network (FCN) model. The detection approach utilizes the eye-blinking state in temporal video frames. However, it is challenging to precisely capture (i) artificiality in fake videos and (ii) spatial information within individual frames through this physiological signal alone. Spatial features were extracted using the VGG16 network trained on the ImageNet dataset. Temporal features were then extracted from every 20-frame sequence through the LSTM network. The pre-processed eye-blinking state served as a probability signal to generate a novel BPD dataset. This newly acquired dataset was fed to three models for training, entailing four, three, and six hidden layers, respectively; every model constitutes a unique architecture and a specific dropout value. As a result, the models accurately identified tampered videos within the dataset. The study model was assessed using the BPD dataset derived from one of the most complex datasets (FaceForensics++), achieving 90.8% accuracy. Such precision was successfully maintained on datasets that were not used in the training process. The training process was also accelerated by lowering the computation prerequisites.
    Matched MeSH terms: Probability
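    A minimal Keras sketch of the CNN-RNN-FCN cascade described above: a frozen ImageNet-trained VGG16 extracts per-frame spatial features, and an LSTM models 20-frame temporal dynamics. The layer widths, classification head, and training settings are assumptions; the BPD dataset and the paper's three exact architectures are not reproduced.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

SEQ_LEN, H, W = 20, 224, 224  # 20-frame sequences, VGG16 input size

# Frozen VGG16 (ImageNet weights) as the per-frame spatial feature
# extractor; an LSTM then models temporal (eye-blink) dynamics.
vgg = tf.keras.applications.VGG16(include_top=False,
                                  weights="imagenet",
                                  pooling="avg",
                                  input_shape=(H, W, 3))
vgg.trainable = False

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN, H, W, 3)),
    layers.TimeDistributed(vgg),           # spatial features per frame
    layers.LSTM(64),                       # temporal features across frames
    layers.Dense(32, activation="relu"),   # fully connected head
    layers.Dense(1, activation="sigmoid")  # real vs DeepFake probability
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```
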
  8. Jia Y, Zheng F, Zhang Q, Duan HF, Savic D, Kapelan Z
    Water Res, 2021 Oct 01;204:117594.
    PMID: 34474249 DOI: 10.1016/j.watres.2021.117594
    Hydraulic modeling of a foul sewer system (FSS) enables a better understanding of the behavior of the system and its effective management. However, there is generally a lack of sufficient field measurement data for FSS model development due to the low number of in-situ sensors for data collection. To this end, this study proposes a new method to develop FSS models based on geotagged information and water consumption data from smart water meters that are readily available. Within the proposed method, each sewer manhole is first associated with a particular population whose size is estimated from geotagged data. Subsequently, a two-stage optimization framework is developed to identify daily time-series inflows for each manhole based on physical connections between manholes and population as well as sewer sensor observations. Finally, a new uncertainty analysis method is developed by mapping the probability distributions of water consumption captured by smart meters to the stochastic variations of wastewater discharges. Two real-world FSSs are used to demonstrate the effectiveness of the proposed method. Results show that the proposed method can significantly outperform the traditional FSS model development approach in accurately simulating the values and uncertainty ranges of FSS hydraulic variables (manhole water depths and sewer flows). The proposed method is promising due to the easy availability of geotagged information as well as water consumption data from smart water meters in the near future.
    Matched MeSH terms: Probability
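    A first-cut sketch of the population-based inflow idea, assuming inflows proportional to the population geotagged to each manhole and a wastewater return factor; the paper's two-stage optimization against sewer sensor data and its uncertainty mapping are omitted, and all numbers are hypothetical.

```python
import numpy as np

def manhole_inflows(meter_demand, population, return_factor=0.85):
    """First-cut FSS inflows: split total metered water consumption
    across manholes in proportion to the population tagged to each
    (from geotagged data), scaled by a wastewater return factor."""
    weights = np.asarray(population, float) / np.sum(population)
    return return_factor * np.sum(meter_demand) * weights

# Hypothetical hourly demand (m3) and per-manhole populations.
print(manhole_inflows(meter_demand=[4.2, 3.8, 5.1],
                      population=[120, 60, 20]))
```
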
  9. Ali BH, Sulaiman N, Al-Haddad SAR, Atan R, Hassan SLM, Alghrairi M
    Sensors (Basel), 2021 Sep 27;21(19).
    PMID: 34640773 DOI: 10.3390/s21196453
    One of the most dangerous kinds of attacks affecting computers is the distributed denial of service (DDoS) attack. The main goal of this attack is to bring the targeted machine down and make its services unavailable to legitimate users. This can be accomplished mainly by directing many machines to send a very large number of packets toward the specified machine to consume its resources and stop it from working. We implemented a method in Java based on entropy and the sequential probability ratio test (ESPRT) to identify malicious flows and the switch interfaces that aid them in passing through. Entropy (E) is the first technique, and the sequential probability ratio test (SPRT) is the second. The entropy method alone compares its results with a certain threshold in order to make a decision, so the accuracy and F-scores for entropy results change when the threshold values change. Using both entropy and SPRT removed the uncertainty associated with the entropy threshold, and the false positive rate was also reduced when combining both techniques. Entropy-based detection methods divide incoming traffic into groups of traffic of the same size; the size of these groups is determined by a parameter called the window size. The Defense Advanced Research Projects Agency (DARPA) 1998, DARPA 2000, and Canadian Institute for Cybersecurity (CIC-DDoS2019) datasets were used to evaluate the implementation of this method. The confusion matrix metric was used to compare the ESPRT results with the results of other methods. The accuracy and F-scores for the DARPA 1998 dataset were 0.995 and 0.997, respectively, for the ESPRT method when the window size was set at 50 and 75 packets. The detection rate of ESPRT for the same dataset was 0.995 when the window size was set to 10 packets. The average accuracy for the DARPA 2000 dataset for ESPRT was 0.905, and the detection rate was 0.929. Finally, ESPRT was scalable to a multiple-domain topology application.
    Matched MeSH terms: Probability
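    A minimal sketch of the entropy-plus-SPRT idea: compute the Shannon entropy of destination addresses per fixed-size window, flag low-entropy windows, and run Wald's sequential probability ratio test over the flags. The thresholds, window contents, and Bernoulli rates p0/p1 below are assumptions, not the paper's Java implementation.

```python
import numpy as np
from collections import Counter

def window_entropy(packet_dsts):
    """Shannon entropy of destination addresses in one window; a DDoS
    flood concentrates traffic and drives the entropy down."""
    counts = np.array(list(Counter(packet_dsts).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def sprt_decision(flags, p0=0.1, p1=0.9, alpha=0.01, beta=0.01):
    """Wald's SPRT over per-window low-entropy flags (1 = suspicious).
    p0/p1 are the assumed flag rates under normal/attack traffic."""
    upper, lower = np.log((1 - beta) / alpha), np.log(beta / (1 - alpha))
    llr = 0.0
    for f in flags:
        llr += np.log((p1 if f else 1 - p1) / (p0 if f else 1 - p0))
        if llr >= upper:
            return "attack"
        if llr <= lower:
            return "normal"
    return "undecided"

# Window size 10 packets; hypothetical flood traffic and threshold.
windows = [["10.0.0.5"] * 9 + ["10.0.0.7"] for _ in range(6)]
flags = [window_entropy(w) < 1.0 for w in windows]
print(sprt_decision(flags))
```
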
  10. Saad WK, Shayea I, Hamza BJ, Mohamad H, Daradkeh YI, Jabbar WA
    Sensors (Basel), 2021 Jul 31;21(15).
    PMID: 34372437 DOI: 10.3390/s21155202
    The massive growth of mobile users will spread across significant numbers of small cells in the Fifth Generation (5G) mobile network, which will overlap the Fourth Generation (4G) network. A tremendous increase in handover (HO) scenarios and HO rates will occur. Ensuring a stable and reliable connection throughout the mobility of user equipment (UE) will become a major problem in future mobile networks. This problem will be magnified by the use of suboptimal handover control parameter (HCP) settings, which can be configured manually or automatically. Therefore, the aim of this study is to investigate the impact of different HCP settings on the performance of a 5G network. Several system scenarios are proposed and investigated based on different HCP settings and mobile speed scenarios. The different mobile speeds are expected to demonstrate the influence of the proposed system scenarios on 5G network performance. We conducted simulations utilizing MATLAB software and its related tools. Evaluation comparisons were performed in terms of handover probability (HOP), ping-pong handover probability (PPHP) and outage probability (OP). A 5G network framework was employed to evaluate the proposed system scenarios. The simulation results reveal a trade-off between the various systems: lower HCP settings provide noticeable enhancements over higher HCP settings in terms of OP, while simultaneously exhibiting noticeable drawbacks in terms of high PPHP for all mobile speed scenarios. The simulation results show that medium HCP settings may be the acceptable solution if one of these systems is applied. This study emphasises the application of automatic self-optimisation (ASO) functions as the best solution that considers user experience.
    Matched MeSH terms: Probability
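    A toy sketch of how HCP settings shape handover decisions, assuming an A3-style rule (the target cell's RSRP must exceed the serving cell's by an offset for a time-to-trigger). The offset and TTT stand in for the study's HCP settings: smaller values trigger earlier handovers (less outage, more ping-pong), larger values the reverse.

```python
def a3_handover(serving_rsrp, target_rsrp, offset_db=3.0, ttt_steps=4):
    """Toy A3-style check: trigger a handover when the target RSRP
    exceeds the serving RSRP by offset_db for ttt_steps consecutive
    measurements. offset_db and ttt_steps play the role of HCPs."""
    run = 0
    for s, t in zip(serving_rsrp, target_rsrp):
        run = run + 1 if t > s + offset_db else 0
        if run >= ttt_steps:
            return True  # handover triggered
    return False

# Hypothetical RSRP traces (dBm) for a moving UE.
serving = [-90, -92, -95, -97, -99, -101]
target = [-100, -95, -91, -90, -89, -88]
print(a3_handover(serving, target))
```
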
  11. Walters K, Cox A, Yaacob H
    Genet Epidemiol, 2021 Jun;45(4):386-401.
    PMID: 33410201 DOI: 10.1002/gepi.22375
    The Gaussian distribution is usually the default causal single-nucleotide polymorphism (SNP) effect size prior in Bayesian population-based fine-mapping association studies, but a recent study showed that the heavier-tailed Laplace prior distribution provided a better fit to breast cancer top hits identified in genome-wide association studies. We investigate the utility of the Laplace prior as an effect size prior in univariate fine-mapping studies. We consider ranking SNPs using Bayes factors and other summaries of the effect size posterior distribution, the effect of prior choice on credible set size based on the posterior probability of causality, and the effect on the noteworthiness of SNPs in univariate analyses. Across a wide range of fine-mapping scenarios the Laplace prior generally leads to larger 90% credible sets than the Gaussian prior. These larger credible sets for the Laplace prior are due to relatively high prior mass around zero, which can yield many noncausal SNPs with relatively large Bayes factors. If using conventional credible sets, the Gaussian prior generally yields a better trade-off between including the causal SNP with high probability and keeping the set size reasonable. Interestingly, when using the less well-utilised measure of noteworthiness, the Laplace prior performs well, leading to causal SNPs being declared noteworthy with high probability, whilst generally declaring fewer than 5% of noncausal SNPs as noteworthy. In contrast, the Gaussian prior leads to the causal SNP being declared noteworthy with very low probability.
    Matched MeSH terms: Probability
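    A minimal sketch of how a 90% credible set is assembled from posterior inclusion probabilities, assuming PIPs that sum to one across the region: sort and accumulate. The two PIP vectors are hypothetical, contrasting a concentrated (Gaussian-like) posterior with a flatter (Laplace-like) one.

```python
import numpy as np

def credible_set(pips, coverage=0.90):
    """Smallest set of SNPs whose PIPs (assumed to sum to 1 across
    the region) reach the target coverage."""
    order = np.argsort(pips)[::-1]
    cum = np.cumsum(np.asarray(pips)[order])
    k = int(np.searchsorted(cum, coverage)) + 1
    return order[:k]

# A concentrated posterior needs fewer SNPs to reach 90% coverage
# than a flatter one.
print(len(credible_set([0.70, 0.15, 0.05, 0.04, 0.03, 0.03])))  # 3
print(len(credible_set([0.35, 0.20, 0.15, 0.12, 0.10, 0.08])))  # 5
```
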
  12. Hamilton MG
    Heredity (Edinb), 2021 06;126(6):884-895.
    PMID: 33692533 DOI: 10.1038/s41437-021-00421-0
    The cost of parentage assignment precludes its application in many selective breeding programmes and molecular ecology studies, and/or limits the circumstances or number of individuals to which it is applied. Pooling samples from more than one individual, and using appropriate genetic markers and algorithms to determine parental contributions to pools, is one means of reducing the cost of parentage assignment. This paper describes and validates a novel maximum likelihood (ML) parentage-assignment method that can be used to accurately assign parentage to pooled samples of multiple individuals (previously published ML methods are applicable to samples of single individuals only) using low-density single nucleotide polymorphism (SNP) 'quantitative' (also referred to as 'continuous') genotype data. It is demonstrated with simulated data that, when applied to pools, this 'quantitative maximum likelihood' method assigns parentage with greater accuracy than established maximum likelihood parentage-assignment approaches, which rely on accurate discrete genotype calls; exclusion methods; and estimation of parental contributions to pools by solving the weighted least squares problem. Quantitative maximum likelihood can be applied to pools generated using either a 'pooling-for-individual-parentage-assignment' approach, whereby each individual in a pool is tagged or traceable and from a known and mutually exclusive set of possible parents, or a 'pooling-by-phenotype' approach, whereby individuals of the same, or similar, phenotype/s are pooled. Although computationally intensive when applied to large pools, quantitative maximum likelihood has the potential to substantially reduce the cost of parentage assignment, even if applied to pools comprising few individuals.
    Matched MeSH terms: Probability
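    A minimal sketch of the least squares baseline the abstract mentions (not the paper's quantitative ML method), here simplified to unweighted non-negative least squares: estimate each candidate parent pair's contribution to a pool from SNP allele frequencies. The dosage matrix and noise level are simulated assumptions.

```python
import numpy as np
from scipy.optimize import nnls

def parental_contributions(parent_dosages, pool_frequency):
    """Estimate contributions of candidate parent pairs to a pooled
    sample by non-negative least squares on SNP allele frequencies,
    rescaled to sum to 1.
    parent_dosages: SNPs x candidates matrix of expected offspring
    allele frequencies; pool_frequency: observed pool frequencies."""
    c, _ = nnls(parent_dosages, pool_frequency)
    return c / c.sum()

rng = np.random.default_rng(7)
A = rng.uniform(0, 1, size=(100, 4))        # 100 SNPs, 4 candidate pairs
truth = np.array([0.5, 0.3, 0.2, 0.0])
obs = A @ truth + rng.normal(0, 0.01, 100)  # noisy pool frequencies
print(np.round(parental_contributions(A, obs), 2))
```
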
  13. Albowarab MH, Zakaria NA, Zainal Abidin Z
    Sensors (Basel), 2021 May 12;21(10).
    PMID: 34065920 DOI: 10.3390/s21103356
    Various aspects of task-execution load balancing in Internet of Things (IoT) networks can be optimised using intelligent algorithms provided by software-defined networking (SDN). These load balancing aspects include makespan, energy consumption, and execution cost. While past studies have evaluated load balancing from one or two aspects, none has explored the possibility of simultaneously optimising all aspects, namely, reliability, energy, cost, and execution time. For the purposes of load balancing, implementing multi-objective optimisation (MOO) based on meta-heuristic searching algorithms requires assurances that the solution space will be thoroughly explored. Optimising load balancing provides decision makers not only with optimised solutions but also with a rich set of candidate solutions to choose from. Therefore, the purposes of this study were (1) to propose a joint mathematical formulation to solve load balancing challenges in cloud computing and (2) to propose two multi-objective particle swarm optimisation (MP) models: distance-angle multi-objective particle swarm optimisation (DAMP) and angle multi-objective particle swarm optimisation (AMP). Unlike existing models that only use crowding distance as a criterion for solution selection, our MP models probabilistically combine both crowding distance and crowding angle. More specifically, we only selected solutions that had more than a 0.5 probability of higher crowding distance and higher angular distribution. In addition, binary variants of the approaches were generated based on a transfer function, denoted binary DAMP (BDAMP) and binary AMP (BAMP). After using MOO mathematical functions to compare our models, BDAMP and BAMP, with the standard models, BMP, BDMP and BPSO, they were tested using the proposed load balancing model. Both tests showed that our DAMP and AMP models were far superior to the standard models, MP, crowding-distance multi-objective particle swarm optimisation (DMP), and PSO. Therefore, this study enables the incorporation of meta-heuristics in the management layer of cloud networks.
    Matched MeSH terms: Probability
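    A minimal sketch of the standard NSGA-II crowding distance that the study's models extend; the crowding angle criterion and its probabilistic combination are the paper's contribution and are not reproduced here. The two-objective front is hypothetical.

```python
import numpy as np

def crowding_distance(F):
    """NSGA-II-style crowding distance for an objectives matrix F
    (solutions x objectives); larger distances mark less-crowded,
    more diverse solutions worth keeping."""
    n, m = F.shape
    d = np.zeros(n)
    for j in range(m):
        order = np.argsort(F[:, j])
        d[order[0]] = d[order[-1]] = np.inf  # keep boundary solutions
        span = F[order[-1], j] - F[order[0], j]
        if span > 0:
            d[order[1:-1]] += (F[order[2:], j] - F[order[:-2], j]) / span
    return d

# Hypothetical 2-objective front (makespan, energy).
front = np.array([[1.0, 9.0], [2.0, 6.0], [4.0, 4.0], [8.0, 1.0]])
print(crowding_distance(front))
```
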
  14. Saranya K, Ponnada SR, Cheruvathoor JJ, Jacob S, Kandukuri G, Mudigonda M, et al.
    J Forensic Odontostomatol, 2021 Apr 30;39(1):16-23.
    PMID: 34057154
    Juvenile crime or delinquency has been increasing at an alarming rate in recent times. In many countries, including India, the minimum age for criminal responsibility is 16 years. The present study aimed to estimate the probability of a south Indian adolescent being at or above the legally relevant age of 16 years using Demirjian's tooth formation stages. Orthopantomograms (OPGs) of 640 south Indian adolescents (320 boys and 320 girls) aged between 12 and 20 years were retrospectively analyzed. In each OPG, the Demirjian formation stage of the mandibular left third molar was recorded and the data were subjected to statistical analysis. Descriptive and Pearson's correlation statistics were performed. Empirical probabilities were provided relative to the medico-legal question of predicting 16 years of age. The distribution of age across the 10th, 25th, 50th, 75th and 90th percentiles follows a logical pattern horizontally and vertically. Pearson's correlation statistics showed a strong positive correlation between Demirjian's stages and age for both sexes. Therefore, it can be concluded that stage "F" can be used to predict the attainment of age equal to or older than 16 years with a probability of 93.9% for boys and 96.6% for girls.
    Matched MeSH terms: Probability
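    The reported probabilities are empirical proportions. A minimal sketch follows, assuming stages coded 'A' through 'H' so that string comparison orders them, with hypothetical records rather than the study's 640 OPGs.

```python
import numpy as np

def post_stage_probability(ages, stages, stage="F", threshold=16.0):
    """Empirical probability that an individual at or beyond a given
    Demirjian stage is at least `threshold` years old."""
    ages, stages = np.asarray(ages, float), np.asarray(stages)
    at_stage = stages >= stage  # 'A'..'H' coding sorts correctly
    return np.mean(ages[at_stage] >= threshold)

# Hypothetical records, not the study's data.
ages = [15.2, 16.4, 17.1, 15.9, 18.0, 16.8]
stages = ["E", "F", "G", "F", "H", "F"]
print(post_stage_probability(ages, stages))
```
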
  15. Kipourou DK, Leyrat C, Alsheridah N, Almazeedi S, Al-Youha S, Jamal MH, et al.
    BMC Public Health, 2021 04 26;21(1):799.
    PMID: 33902520 DOI: 10.1186/s12889-021-10759-z
    BACKGROUND: Subsequent epidemic waves have already emerged in many countries, and in the absence of highly effective preventive and curative options, the role of patient characteristics in the development of outcomes needs to be thoroughly examined, especially in Middle Eastern countries where such epidemiological studies are lacking. There is huge pressure on hospital services and, in particular, on Intensive Care Units (ICU). Describing the need for critical care as well as the chance of being discharged from hospital according to patient characteristics is essential for more efficient hospital management. The objective of this study is to describe the probabilities of admission to the ICU and the probabilities of hospital discharge among positive COVID-19 patients according to demographics and comorbidities recorded at hospital admission.

    METHODS: A prospective cohort study of all patients with COVID-19 found in the Electronic Medical Records of Jaber Al-Ahmad Al-Sabah Hospital in Kuwait was conducted. The study included 3995 individuals (symptomatic and asymptomatic) of all ages who tested positive from February 24th to May 27th, 2020, out of which 315 were treated in the ICU and 3619 were discharged including those who were transferred to a different healthcare unit without having previously entered the ICU. A competing risk analysis considering two events, namely, ICU admission and hospital discharge using flexible hazard models was performed to describe the association between event-specific probabilities and patient characteristics.

    RESULTS: Results showed that being male, increasing age, and comorbidities such as chronic kidney disease (CKD), asthma or chronic obstructive pulmonary disease (COPD), and a weakened immune system increased the risk of ICU admission within 10 days of entering the hospital. CKD and a weakened immune system decreased the probabilities of discharge in both females and males; however, the age-related pattern differed by gender. Diabetes, the most prevalent comorbid condition (18% overall), had only a moderate impact on both probabilities, in contrast to CKD, which had the largest effect but was present in only 7% of those admitted to the ICU and in 1% of those who were discharged. For instance, within 5 days, a 50-year-old male had a 19% (95% CI: [15,23]) probability of entering the ICU if he had none of these comorbidities, yet this risk jumped to 31% (95% CI: [20,46]) if he also had CKD, and to 27% in the presence of asthma/COPD (95% CI: [19,36]) or of a weakened immune system (95% CI: [16,42]).

    CONCLUSIONS: This study provides useful insight into the probabilities of ICU admission and hospital discharge according to age, gender, and comorbidities among confirmed COVID-19 cases in Kuwait. A web tool is also provided to allow the user to estimate these probabilities for any combination of these covariates. These probabilities enable a deeper understanding of hospital demand according to patient characteristics, which is essential for hospital management and useful for developing a vaccination strategy.

    Matched MeSH terms: Probability
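    A minimal sketch of a competing-risks analysis of ICU admission versus hospital discharge using the nonparametric Aalen-Johansen estimator from the lifelines library; the study itself uses flexible hazard regression with covariates (age, sex, comorbidities), which this omits, and the records below are hypothetical.

```python
import pandas as pd
from lifelines import AalenJohansenFitter

# Hypothetical records: days in hospital and event code
# (1 = ICU admission, 2 = discharge, 0 = censored).
df = pd.DataFrame({
    "days":  [3, 5, 2, 10, 7, 4, 6, 8, 1, 9],
    "event": [1, 2, 2, 0,  1, 2, 2, 1, 2, 2],
})

# Cumulative incidence of ICU admission, treating discharge as a
# competing event.
ajf = AalenJohansenFitter()
ajf.fit(df["days"], df["event"], event_of_interest=1)
print(ajf.cumulative_density_.tail())
```
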
  16. Ch'ng YH, Osman MA, Jong HY
    Malays J Med Sci, 2021 Apr;28(2):161-170.
    PMID: 33958970 DOI: 10.21315/mjms2021.28.2.15
    Background: Specific language impairment (SLI) diagnosis is inconvenient due to manual procedures and hardware cost. Computer-aided SLI diagnosis has been proposed to counter these inconveniences. This study focuses on evaluating the feasibility of computer systems used to diagnose SLI.

    Methods: The accuracy of Webgazer.js for software-based gaze tracking was tested under different lighting conditions. Predefined time delays of a prototype diagnosis task automation script were contrasted with manual delays based on human time estimation to understand how automation influences diagnosis accuracy. An SLI diagnosis binary classifier was built and tested based on randomised parameters. The obtained results were cross-compared with Singlims_ES.exe for equality.

    Results: Webgazer.js achieved an average accuracy of 88.755% under global lighting conditions, 61.379% under low lighting conditions and 52.7% under face-focused lighting conditions. The diagnosis task automation script was found to execute the predefined time delays with a deviation percentage of no more than 0.04%, while manually executed time delays based on human time estimation resulted in a deviation percentage of no more than 3.37%. The one-tailed test probability values produced by the newly built classifier and Singlims_ES were observed to be similar up to three decimal places.

    Conclusion: The results obtained should serve as a foundation for further evaluation of computer tools to help speech-language pathologists diagnose SLI.

    Matched MeSH terms: Probability
  17. Yaseen ZM, Ali M, Sharafati A, Al-Ansari N, Shahid S
    Sci Rep, 2021 Feb 09;11(1):3435.
    PMID: 33564055 DOI: 10.1038/s41598-021-82977-9
    A noticeable increase in drought frequency and severity has been observed across the globe due to climate change, which has attracted scientists to develop drought prediction models to mitigate impacts. Droughts are usually monitored using drought indices (DIs), most of which are probabilistic and therefore highly stochastic and non-linear. The current research investigated the capability of different versions of relatively well-explored machine learning (ML) models, including random forest (RF), minimum probability machine regression (MPMR), M5 Tree (M5tree), extreme learning machine (ELM) and online sequential ELM (OSELM), in predicting the most widely used DI, the standardized precipitation index (SPI), at multiple month horizons (i.e., 1, 3, 6 and 12). Models were developed using monthly rainfall data for the period 1949-2013 at four meteorological stations, namely Barisal, Bogra, Faridpur and Mymensingh, each representing a geographical region of Bangladesh that frequently experiences droughts. The model inputs were decided based on correlation statistics, and the prediction capability was evaluated using several statistical metrics including mean square error (MSE), root mean square error (RMSE), mean absolute error (MAE), correlation coefficient (R), Willmott's Index of agreement (WI), Nash-Sutcliffe efficiency (NSE), and the Legates and McCabe Index (LM). The results revealed that the proposed models are reliable and robust in predicting droughts in the region. Comparison of the models revealed ELM as the best model in forecasting droughts, with minimal RMSE in the ranges of 0.07-0.85, 0.08-0.76, 0.062-0.80 and 0.042-0.605 for Barisal, Bogra, Faridpur and Mymensingh, respectively, for all the SPI scales except the one-month SPI, for which RF showed the best performance with minimal RMSEs of 0.57, 0.45, 0.59 and 0.42, respectively.
    Matched MeSH terms: Probability
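    A minimal sketch of the standardized precipitation index, assuming the usual construction (aggregate rainfall over the timescale, fit a gamma distribution, map the fitted CDF onto a standard normal); zero-rainfall handling and the ML prediction stage are omitted, and the rainfall series is simulated.

```python
import numpy as np
from scipy import stats

def spi(monthly_rain, window=3):
    """Standardized Precipitation Index: aggregate rainfall over
    `window` months, fit a gamma distribution, and transform the
    fitted CDF to standard-normal quantiles."""
    x = np.convolve(monthly_rain, np.ones(window), mode="valid")
    a, loc, scale = stats.gamma.fit(x, floc=0)  # fix location at zero
    return stats.norm.ppf(stats.gamma.cdf(x, a, loc=loc, scale=scale))

rng = np.random.default_rng(3)
rain = rng.gamma(shape=2.0, scale=50.0, size=120)  # ten simulated years
print(np.round(spi(rain, window=3)[:6], 2))        # negative = drier
```
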
  18. Abdulrahman M, Gardner A, Yamaguchi N
    J Arid Environ, 2021 Feb;185:104379.
    PMID: 33162623 DOI: 10.1016/j.jaridenv.2020.104379
    The distributions of bat species in Qatar have not previously been recorded. We conducted the first nation-wide survey of bats in Qatar. Based on sonogram analysis, we identified Asellia tridens, Otonycteris hemprichii, and Pipistrellus kuhlii. The most commonly recorded species was Asellia tridens, the only species recorded in the northern half of the country. Contrary to our prediction, the likelihood of recording bats was not higher in the northern half of the country where there are many irrigated farms. The distributions of the bat species may result from differences in human land use and disturbance, and from the distance to the main body of the Arabian Peninsula. A key habitat feature for Asellia tridens and Otonycteris hemprichii may be the presence of roosting sites in less disturbed sinkholes/caves, which are therefore crucial for bat conservation.
    Matched MeSH terms: Probability
  19. Thomas LA, Thomas LR, Balla SB, Gopalaiah H, Kanaparthi A, Sai Sravanthi G, et al.
    Leg Med (Tokyo), 2021 Feb;48:101814.
    PMID: 33246253 DOI: 10.1016/j.legalmed.2020.101814
    In the context of dental age assessment, two significant factors can be studied: tooth mineralisation and tooth emergence. Little is known about the role of second molar eruption in forensic age estimation. This paper aims to contribute to forensic age estimation around an age threshold of 14 years by studying the eruption stages of permanent mandibular premolars and second molars. A total of 640 orthopantomograms (OPGs) of south Indian children, aged between 10 and 18 years, were evaluated using the Olze et al. staging of tooth eruption (stages A-D). Spearman's rho showed a strong, positive, and statistically significant correlation between chronological age and tooth eruption stages in both sexes. Accuracy, sensitivity, specificity, likelihood ratios, and post-test probability values were calculated for all tested teeth. The best performance in discriminating individuals above or below 14 years was shown by stage D in second molars: sensitivity varied between 89% and 94% and specificity between 75% and 84%. Receiver operating characteristic (ROC) curve analysis revealed high diagnostic performance for stage D, with area under the curve (AUC) values of 84% and 85% for tooth 37 and 85% and 83% for tooth 47 in males and females, respectively. In conclusion, it is possible to predict age over 14 years in south Indian children using tooth emergence stages from OPGs with relatively high interobserver agreement and good diagnostic accuracy. However, the method has some limitations and therefore must be used in conjunction with other methods.
    Matched MeSH terms: Probability
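    A minimal sketch of turning sensitivity and specificity into a post-test probability via the positive likelihood ratio; the pre-test probability below is an assumption, and the sensitivity/specificity values merely sit within the ranges the abstract reports for stage D.

```python
def post_test_probability(pretest_p, sensitivity, specificity):
    """Convert pre-test probability to post-test probability for a
    positive finding via the positive likelihood ratio."""
    lr_pos = sensitivity / (1 - specificity)
    pre_odds = pretest_p / (1 - pretest_p)
    post_odds = pre_odds * lr_pos
    return post_odds / (1 + post_odds)

# Illustrative values: pre-test probability 0.5 is an assumption.
print(round(post_test_probability(0.5, sensitivity=0.92,
                                  specificity=0.80), 3))
```
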
  20. Yang HK, Ji J, Han SU, Terashima M, Li G, Kim HH, et al.
    Lancet Gastroenterol Hepatol, 2021 02;6(2):120-127.
    PMID: 33253659 DOI: 10.1016/S2468-1253(20)30315-0
    BACKGROUND: Peritoneal recurrence of gastric cancer after curative surgical resection is common and portends a poor prognosis. Early studies suggest that extensive intraoperative peritoneal lavage (EIPL) might reduce the risk of peritoneal recurrence and improve survival. We aimed to evaluate the survival benefit of EIPL in patients with gastric cancer undergoing curative gastrectomy.

    METHODS: In this open-label, phase 3, multicentre randomised trial, patients aged 21-80 years with cT3 or cT4 gastric cancer undergoing curative resection were enrolled at 22 centres from South Korea, China, Japan, Malaysia, Hong Kong, and Singapore. Patients were randomly assigned to receive surgery and EIPL (EIPL group) or surgery alone (standard surgery group) via a web-based programme in random permuted blocks in varying block sizes of four and six, assuming equal allocation between treatment groups. Randomisation was stratified according to study site and the sequence was generated using a computer program and concealed until the interventions were assigned. After surgery in the EIPL group, peritoneal lavage was done with 1 L of warm (42°C) normal 0·9% saline followed by complete aspiration; this procedure was repeated ten times. The primary endpoint was overall survival. All analyses were done assuming intention to treat. This trial is registered with ClinicalTrials.gov, NCT02140034.

    FINDINGS: Between Sept 16, 2012, and Aug 3, 2018, 800 patients were randomly assigned to the EIPL group (n=398) or the standard surgery group (n=402). Two patients in the EIPL group and one in the standard surgery group withdrew from the trial immediately after randomisation and were excluded from the intention-to-treat analysis. At the third interim analysis on Aug 28, 2019, the predictive probability of overall survival being significantly higher in the EIPL group was less than 0·5%; therefore, the trial was terminated on the basis of futility. With a median follow-up of 2·4 years (IQR 1·5-3·0), the two groups were similar in terms of overall survival (hazard ratio 1·09 [95% CI 0·78-1·52]; p=0·62). 3-year overall survival was 77·0% (95% CI 71·4-81·6) for the EIPL group and 76·7% (71·0-81·5) for the standard surgery group. 60 adverse events were reported in the EIPL group and 41 were reported in the standard surgery group. The most common adverse events included anastomotic leak (ten [3%] of 346 patients in the EIPL group vs six [2%] of 362 patients in the standard surgery group), bleeding (six [2%] vs six [2%]), intra-abdominal abscess (four [1%] vs five [1%]), superficial wound infection (seven [2%] vs one [<1%]), and abnormal liver function (six [2%] vs one [<1%]). Ten of the reported adverse events (eight in the EIPL group and two in the standard surgery group) resulted in death.

    INTERPRETATION: EIPL plus surgery did not provide a survival benefit compared with surgery alone, and EIPL is not recommended for patients undergoing curative gastrectomy for gastric cancer.

    FUNDING: National Medical Research Council, Singapore.

    Matched MeSH terms: Probability