Displaying publications 61 - 80 of 261 in total

  1. Saeed F, Ahmed A, Shamsir MS, Salim N
    J Comput Aided Mol Des, 2014 Jun;28(6):675-84.
    PMID: 24830925 DOI: 10.1007/s10822-014-9750-2
    The cluster-based compound selection is used in the lead identification process of drug discovery and design. Many clustering methods have been used for chemical databases, but there is no clustering method that can obtain the best results under all circumstances. However, little attention has been focused on the use of combination methods for chemical structure clustering, which is known as consensus clustering. Recently, consensus clustering has been used in many areas including bioinformatics, machine learning and information theory. This process can improve the robustness, stability, consistency and novelty of clustering. For chemical databases, different consensus clustering methods have been used including the co-association matrix-based, graph-based, hypergraph-based and voting-based methods. In this paper, a weighted cumulative voting-based aggregation algorithm (W-CVAA) was developed. The MDL Drug Data Report (MDDR) benchmark chemical dataset was used in the experiments and represented by the AlogP and ECFP_4 descriptors. The results from the clustering methods were evaluated by the ability of the clustering to separate biologically active molecules in each cluster from inactive ones using different criteria, and the effectiveness of the consensus clustering was compared to that of Ward's method, which is the current standard clustering method in chemoinformatics. This study indicated that weighted voting-based consensus clustering can overcome the limitations of the existing voting-based methods and improve the effectiveness of combining multiple clusterings of chemical structures.
    Matched MeSH terms: Artificial Intelligence
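    The co-association approach named in the abstract above can be sketched in a few lines: count how often each pair of molecules is co-clustered across the input partitions, then merge pairs that exceed a vote threshold. This is an illustrative simplification, not the paper's W-CVAA algorithm, and the function name is ours:

```python
from itertools import combinations

def coassociation_consensus(partitions, threshold=0.5):
    """Consensus clustering via a co-association matrix (one of the
    combination methods named in the abstract; W-CVAA itself is more
    elaborate).  `partitions` is a list of label lists over the same items."""
    n = len(partitions[0])
    m = len(partitions)
    # co[i][j]: number of partitions that place items i and j together
    co = [[0] * n for _ in range(n)]
    for labels in partitions:
        for i, j in combinations(range(n), 2):
            if labels[i] == labels[j]:
                co[i][j] += 1
                co[j][i] += 1
    # union-find: connect pairs co-clustered in more than `threshold` of runs
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i, j in combinations(range(n), 2):
        if co[i][j] / m > threshold:
            parent[find(i)] = find(j)
    return [find(i) for i in range(n)]
```

    Each consensus cluster is identified by a shared root label; raising the threshold makes the consensus more conservative.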
  2. Wei H, Rahman MA, Hu X, Zhang L, Guo L, Tao H, et al.
    Work, 2021;68(3):845-852.
    PMID: 33612527 DOI: 10.3233/WOR-203418
    BACKGROUND: Order picking is the process of gathering the parts needed to assemble the final products from storage sites. Kitting refers to a ready-to-use package, or parts kit; flexible robotic systems can significantly help the industry improve the performance of this activity. In practice, despite some remaining limitations on the complexity and characteristics of components, recent technological advances in robotics and artificial intelligence allow the treatment of a wide range of items.

    OBJECTIVE: In this article, we study a robotic kitting system with a Robotic Mounted Rail Arm System (RMRAS), which travels along a rail to pick the required parts.

    RESULTS: The objective is to evaluate the efficiency of a robotic kitting system in terms of cycle times by modeling the elementary kitting operations that the robot performs (pick and place, move, change tools, etc.). The experimental results show that the proposed method improves performance and efficiency when compared to other existing methods.

    CONCLUSION: This study can help the manufacturer assess the performance of the robotic area for a given design (layout, picking policy, etc.) as part of an ongoing project on the automation of kitting operations.

    Matched MeSH terms: Artificial Intelligence
  3. Hai T, Ma X, Singh Chauhan B, Mahmoud S, Al-Kouz W, Tong J, et al.
    Chemosphere, 2023 Oct;338:139398.
    PMID: 37406939 DOI: 10.1016/j.chemosphere.2023.139398
    A newly developed waste-to-energy system based on a biomass combined energy system, designed for electricity generation, cooling, and freshwater production, is investigated and modeled in this project. The investigated system incorporates several different cycles, such as a biomass waste integrated gasifier-gas turbine cycle, a high-temperature fuel cell, a Rankine cycle, an absorption refrigeration system, and a flash distillation system for seawater desalination. The EES software is employed to perform a basic analysis of the system. The results are then transferred to MATLAB software to optimize and evaluate the impact of operational factors. Artificial intelligence is employed to evaluate and model the EES software's analysis output for this purpose. As the flow rate of fuel increases from 4 to 6.5 kg/s, the cost rate is reduced by 51% and the energy efficiency increases by 6.5%. Furthermore, the maximum increment in exergetic efficiency takes place whenever the inlet temperature of the gas turbine rises. According to an analysis of three types of biomasses, solid waste possesses the maximum efficiency rate, work output, and expense. Rice husk, in contrast, has the minimum efficiency, work output, and expense. Additionally, with the change in fuel discharge and gas turbine inlet temperature, the system behavior for all three types of biomasses will be nearly identical. The Pareto front optimization findings demonstrate that the best mode for system performance is an output power of 53,512 kW, a cost of 0.643 dollars per second, and a first law efficiency of 42%. This optimal value occurs for a fuel discharge of 5.125 kg/s and the maximum inlet temperature for a gas turbine. The rates of water desalination and cooling in this condition are 18.818 kg/s and 2356 kW, respectively.
    Matched MeSH terms: Artificial Intelligence*
  4. Gunasekaran S, Venkatesh B, Sagar BS
    Int J Neural Syst, 2004 Apr;14(2):139-45.
    PMID: 15112371
    Training methodology of the Back Propagation Network (BPN) is well documented. One aspect of BPN that requires investigation is whether or not the BPN would get trained for a given training data set and architecture. In this paper the behavior of the BPN is analyzed during its training phase considering convergent and divergent training data sets. Evolution of the weights during the training phase was monitored for the purpose of analysis. The evolution of weights was plotted as a return map and was characterized by means of fractal dimension. This fractal dimensional analysis of the weight evolution trajectories provides new insight into the behavior of the BPN and the dynamics of weight evolution.
    Matched MeSH terms: Artificial Intelligence
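    The abstract's idea of plotting weight evolution as a return map and characterising it by fractal dimension can be illustrated with a simple box-counting estimate. This is a sketch under the assumption of a 2-D return map of consecutive weight values; the paper does not publish its exact procedure:

```python
import math

def box_counting_dimension(points, scales=(1, 2, 4, 8, 16)):
    """Rough box-counting estimate of the fractal dimension of a 2-D
    point set, e.g. a weight-evolution return map of (w_t, w_t+1) pairs.
    Illustrative only; returns the slope of log N(eps) vs log(1/eps)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    x0, y0 = min(xs), min(ys)
    span = max(max(xs) - x0, max(ys) - y0) or 1.0
    samples = []
    for k in scales:
        eps = span / k
        # count occupied boxes of side eps (indices clamped to the grid)
        boxes = {(min(int((x - x0) / eps), k - 1),
                  min(int((y - y0) / eps), k - 1)) for x, y in points}
        samples.append((math.log(1.0 / eps), math.log(len(boxes))))
    # least-squares slope through the (log 1/eps, log N) samples
    n = len(samples)
    mx = sum(a for a, _ in samples) / n
    my = sum(b for _, b in samples) / n
    num = sum((a - mx) * (b - my) for a, b in samples)
    den = sum((a - mx) ** 2 for a, _ in samples)
    return num / den
```

    A smooth one-dimensional trajectory yields a slope near 1, while a trajectory that fills the plane pushes the estimate toward 2; divergent training runs would show up as a change in this dimension.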
  5. Jamal A, Hazim Alkawaz M, Rehman A, Saba T
    Microsc Res Tech, 2017 Jul;80(7):799-811.
    PMID: 28294460 DOI: 10.1002/jemt.22867
    With an increase in the advancement of digital imaging and computing power, computationally intelligent technologies are in high demand to be used in ophthalmology care and treatment. In the current research, Retina Image Analysis (RIA) is developed for optometrists at the Eye Care Center in Management and Science University. This research aims to analyze the retina through vessel detection. The RIA assists in the analysis of the retinal images and specialists are served with various options like saving, processing and analyzing retinal images through its advanced interface layout. Additionally, RIA assists in the selection process of vessel segments; processing these vessels by calculating their diameter, standard deviation, and length, and displaying detected vessels on the retina. The Agile Unified Process is adopted as the methodology in developing this research. To conclude, Retina Image Analysis might help the optometrist to better understand and analyze the patient's retina. Finally, the Retina Image Analysis procedure is developed using MATLAB (R2011b). Promising results are attained that are comparable to the state of the art.
    Matched MeSH terms: Artificial Intelligence
  6. Sengupta P, Dutta S, Jegasothy R, Slama P, Cho CL, Roychoudhury S
    Reprod Biol Endocrinol, 2024 Feb 13;22(1):22.
    PMID: 38350931 DOI: 10.1186/s12958-024-01193-y
    The quandary known as the Intracytoplasmic Sperm Injection (ICSI) paradox is found at the juncture of Assisted Reproductive Technology (ART) and 'andrological ignorance' - a term coined to denote the undervalued treatment and comprehension of male infertility. The prevalent use of ICSI as a solution for severe male infertility, despite its potential to propagate genetically defective sperm, consequently posing a threat to progeny health, illuminates this paradox. We posit that the meteoric rise in Industrial Revolution 4.0 (IR 4.0) and Artificial Intelligence (AI) technologies holds the potential for a transformative shift in addressing male infertility, specifically by mitigating the limitations engendered by 'andrological ignorance.' We advocate for the urgent need to transcend andrological ignorance, envisaging AI as a cornerstone in the precise diagnosis and treatment of the root causes of male infertility. This approach also incorporates the identification of potential genetic defects in descendants, the establishment of knowledge platforms dedicated to male reproductive health, and the optimization of therapeutic outcomes. Our hypothesis suggests that the assimilation of AI could streamline ICSI implementation, leading to an overall enhancement in the realm of male fertility treatments. However, it is essential to conduct further investigations to substantiate the efficacy of AI applications in a clinical setting. This article emphasizes the significance of harnessing AI technologies to optimize patient outcomes in the fast-paced domain of reproductive medicine, thereby fostering the well-being of upcoming generations.
    Matched MeSH terms: Artificial Intelligence
  7. Shetty H, Shetty S, Kakade A, Shetty A, Karobari MI, Pawar AM, et al.
    Sci Rep, 2021 Nov 09;11(1):21914.
    PMID: 34754049 DOI: 10.1038/s41598-021-01489-8
    The volumetric change that occurs in the pulp space over time represents a critical measure when it comes to determining the secondary outcomes of regenerative endodontic procedures (REPs). However, to date, only a few studies have investigated the accuracy of the available domain-specialized medical imaging tools with regard to three-dimensional (3D) volumetric assessment. This study sought to compare the accuracy of two different artificial intelligence-based medical imaging programs, namely OsiriX MD (v 9.0, Pixmeo SARL, Bernex, Switzerland, https://www.osirix-viewer.com) and 3D Slicer (http://www.slicer.org), in terms of estimating the volume of the pulp space following a REP. An in vitro assessment was performed to check the reliability and sensitivity of the two medical imaging programs in use. For the subsequent clinical application, pre- and post-procedure cone beam computed tomography scans of 35 immature permanent teeth with necrotic pulp and periradicular pathosis that had been treated with a cell-homing concept-based REP were processed using the two biomedical DICOM software programs (OsiriX MD and 3D Slicer). The volumetric changes in the teeth's pulp spaces were assessed using semi-automated techniques in both programs. The data were statistically analyzed using t-tests and paired t-tests (P = 0.05). The pulp space volumes measured using both programs revealed a statistically significant decrease in the pulp space volume following the REP (P < 0.05). The mean decreases in the pulp space volumes measured using OsiriX MD and 3D Slicer were 25.06% ± 19.45% and 26.10% ± 18.90%, respectively. The open-source software (3D Slicer) was found to be as accurate as the commercially available software with regard to the volumetric assessment of the post-REP pulp space.
This study was the first to demonstrate the step-by-step application of 3D Slicer, a user-friendly and easily accessible open-source multiplatform software program for the segmentation and volume estimation of the pulp spaces of teeth treated with REPs.
    Matched MeSH terms: Artificial Intelligence
  8. Menon S, Anand D, Kavita, Verma S, Kaur M, Jhanjhi NZ, et al.
    Sensors (Basel), 2023 Jul 04;23(13).
    PMID: 37447981 DOI: 10.3390/s23136132
    With the increasing growth rate of smart home devices and their interconnectivity via the Internet of Things (IoT), security threats to the communication network have become a concern. This paper proposes a learning engine for a smart home communication network that utilizes blockchain-based secure communication and a cloud-based data evaluation layer to segregate and rank data on the basis of three broad categories of Transactions (T), namely Smart T, Mod T, and Avoid T. The learning engine utilizes a neural network for the training and classification of the categories, which helps the blockchain layer improve its decision-making process. The contributions of this paper include the application of a secure blockchain layer for user authentication and the generation of a ledger for the communication network; the utilization of the cloud-based data evaluation layer; the enhancement of an SI-based algorithm for training; and the utilization of a neural engine for the precise training and classification of categories. The proposed algorithm outperformed the Fused Real-Time Sequential Deep Extreme Learning Machine (RTS-DELM) system, the data fusion technique, and artificial intelligence IoT technology in terms of computational complexity, false authentication rate, and qualitative parameters, achieving a lower average computational complexity; in addition, it ensures a secure, efficient smart home communication network that enhances the lifestyle of human beings.
    Matched MeSH terms: Artificial Intelligence*
  9. Albahri OS, Zaidan AA, Albahri AS, Zaidan BB, Abdulkareem KH, Al-Qaysi ZT, et al.
    J Infect Public Health, 2020 Oct;13(10):1381-1396.
    PMID: 32646771 DOI: 10.1016/j.jiph.2020.06.028
    This study presents a systematic review of artificial intelligence (AI) techniques used in the detection and classification of coronavirus disease 2019 (COVID-19) medical images in terms of evaluation and benchmarking. Five reliable databases, namely, IEEE Xplore, Web of Science, PubMed, ScienceDirect and Scopus were used to obtain relevant studies of the given topic. Several filtering and scanning stages were performed according to the inclusion/exclusion criteria to screen the 36 studies obtained; however, only 11 studies met the criteria. Taxonomy was performed, and the 11 studies were classified on the basis of two categories, namely, review and research studies. Then, a deep analysis and critical review were performed to highlight the challenges and critical gaps outlined in the academic literature of the given subject. Results showed that no relevant study evaluated and benchmarked AI techniques utilised in classification tasks (i.e. binary, multi-class, multi-labelled and hierarchical classifications) of COVID-19 medical images. If evaluation and benchmarking are conducted, three future challenges will be encountered, namely, multiple evaluation criteria within each classification task, trade-offs amongst criteria and the importance of these criteria. Given these challenges, the evaluation and benchmarking of AI techniques used in the classification of COVID-19 medical images is considered a complex multi-attribute problem. Thus, adopting multi-criteria decision analysis (MCDA) is an essential and effective approach to tackle the problem complexity. Moreover, this study proposes a detailed methodology for the evaluation and benchmarking of AI techniques used in all classification tasks of COVID-19 medical images as future directions; such methodology is presented on the basis of three sequential phases. 
Firstly, the identification procedure for the construction of four decision matrices, namely, binary, multi-class, multi-labelled and hierarchical, is presented on the basis of the intersection of evaluation criteria of each classification task and AI classification techniques. Secondly, the development of the MCDA approach for benchmarking AI classification techniques is provided on the basis of the integrated analytic hierarchy process and VlseKriterijumska Optimizacija I Kompromisno Resenje methods. Lastly, objective and subjective validation procedures are described to validate the proposed benchmarking solutions.
    Matched MeSH terms: Artificial Intelligence/standards*
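    The multi-criteria step in the abstract above can be pictured with a minimal weighted-sum ranking: normalise each criterion, flip cost criteria, and rank techniques by weighted score. This is a simple stand-in for the integrated AHP/VIKOR method the study proposes, with hypothetical weights and scores:

```python
def rank_techniques(matrix, weights, benefit):
    """Minimal weighted-sum MCDA ranking (illustrative only; the study
    integrates AHP with VIKOR, which this sketch does not implement).
    matrix[i][j]: score of technique i on criterion j;
    benefit[j]: True if larger is better on criterion j."""
    ncrit = len(weights)
    cols = list(zip(*matrix))
    scores = []
    for row in matrix:
        s = 0.0
        for j in range(ncrit):
            lo, hi = min(cols[j]), max(cols[j])
            # min-max normalise the criterion to [0, 1]
            norm = 0.0 if hi == lo else (row[j] - lo) / (hi - lo)
            if not benefit[j]:      # cost criterion: smaller is better
                norm = 1.0 - norm
            s += weights[j] * norm
        scores.append(s)
    # indices of techniques, best first
    return sorted(range(len(matrix)), key=lambda i: -scores[i])
```

    For example, with accuracy as a benefit criterion and runtime as a cost criterion, a slightly less accurate but far faster technique can rank first.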
  10. Loo CK, Rajeswari M, Rao MV
    IEEE Trans Neural Netw, 2004 Nov;15(6):1378-95.
    PMID: 15565767
    This paper presents two novel approaches to determine the optimum growing multi-experts network (GMN) structure. The first method, called the direct method, deals with expertise domains and levels in connection with local experts. The growing neural gas (GNG) algorithm is used to cluster the local experts. The concept of error distribution is used to apportion error among the local experts. After reaching the specified size of the network, a redundant-expert removal algorithm is invoked to prune the network based on the ranking of the experts. However, GMN is difficult to tune because of its many network control parameters. Therefore, a self-regulating GMN (SGMN) algorithm is proposed. SGMN adopts self-adaptive learning rates for gradient-descent learning rules. In addition, SGMN adopts a more rigorous clustering method called fully self-organized simplified adaptive resonance theory in a modified form. Experimental results show SGMN obtains comparable or even better performance than GMN in four benchmark examples, with reduced sensitivity to learning parameter settings. Moreover, both GMN and SGMN outperform the other neural networks and statistical models. The efficacy of SGMN is further justified in three industrial applications and a control problem. It provides consistent results besides holding out a profound potential and promise for building a novel type of nonlinear model consisting of several local linear models.
    Matched MeSH terms: Artificial Intelligence
  11. Jamal N, Ng KH, Looi LM, McLean D, Zulfiqar A, Tan SP, et al.
    Phys Med Biol, 2006 Nov 21;51(22):5843-57.
    PMID: 17068368
    We describe a semi-automated technique for the quantitative assessment of breast density from digitized mammograms in comparison with patterns suggested by Tabar. It was developed using the MATLAB-based graphical user interface applications. It is based on an interactive thresholding method, after a short automated method that shows the fibroglandular tissue area, breast area and breast density each time new thresholds are placed on the image. The breast density is taken as a percentage of the fibroglandular tissue to the breast tissue areas. It was tested in four different ways, namely by examining: (i) correlation of the quantitative assessment results with subjective classification, (ii) classification performance using the quantitative assessment technique, (iii) interobserver agreement and (iv) intraobserver agreement. The results of the quantitative assessment correlated well (r² = 0.92) with the subjective Tabar patterns classified by the radiologist (correctly classified 83% of digitized mammograms). The average kappa coefficient for the agreement between the readers was 0.63. This indicated moderate agreement between the three observers in classifying breast density using the quantitative assessment technique. The kappa coefficient of 0.75 for intraobserver agreement reflected good agreement between two sets of readings. The technique may be useful as a supplement to the radiologist's assessment in classifying mammograms into Tabar's pattern associated with breast cancer risk.
    Matched MeSH terms: Artificial Intelligence
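    The density figure in the abstract above is simple arithmetic once the two interactive thresholds are placed: the fibroglandular area as a percentage of the breast area. A minimal sketch, in which the pixel values and threshold levels are hypothetical:

```python
def breast_density(image, breast_thr, dense_thr):
    """Percent density as in the abstract: fibroglandular area divided by
    breast area, times 100.  `image` is a 2-D list of grey levels; the two
    thresholds stand in for the interactive thresholds placed by a reader."""
    breast = dense = 0
    for row in image:
        for px in row:
            if px > breast_thr:        # pixel belongs to the breast area
                breast += 1
                if px > dense_thr:     # pixel counted as fibroglandular
                    dense += 1
    return 100.0 * dense / breast if breast else 0.0
```

    Moving either threshold immediately changes the two areas and hence the percentage, which is why the tool recomputes the figure each time new thresholds are placed.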
  12. Sachithanandan A, Lockman H, Azman RR, Tho LM, Ban EZ, Ramon V
    Med J Malaysia, 2024 Jan;79(1):9-14.
    PMID: 38287751
    INTRODUCTION: The poor prognosis of lung cancer has been largely attributed to the fact that most patients present with advanced stage disease. Although low dose computed tomography (LDCT) is presently considered the optimal imaging modality for lung cancer screening, its use has been hampered by cost and accessibility. One possible approach to facilitate lung cancer screening is to implement a risk-stratification step with chest radiography, given its ease of access and affordability. Furthermore, implementation of artificial intelligence (AI) in chest radiography is expected to improve the detection of indeterminate pulmonary nodules, which may represent early lung cancer.

    MATERIALS AND METHODS: This consensus statement was formulated by a panel of five experts of primary care and specialist doctors. A lung cancer screening algorithm was proposed for implementation locally.

    RESULTS: In an earlier pilot project collaboration, AI-assisted chest radiography had been incorporated into lung cancer screening in the community. Preliminary experience in the pilot project suggests that the system is easy to use, affordable and scalable. Drawing from experience with the pilot project, a standardised lung cancer screening algorithm using AI in Malaysia was proposed. Requirements for such a screening programme, expected outcomes and limitations of AI-assisted chest radiography were also discussed.

    CONCLUSION: The combined strategy of AI-assisted chest radiography and complementary LDCT imaging has great potential in detecting early-stage lung cancer in a timely manner, and irrespective of risk status. The proposed screening algorithm provides a guide for clinicians in Malaysia to participate in screening efforts.

    Matched MeSH terms: Artificial Intelligence
  13. Jaafar H, Ibrahim S, Ramli DA
    Comput Intell Neurosci, 2015;2015:360217.
    PMID: 26113861 DOI: 10.1155/2015/360217
    Mobile implementation is a current trend in biometric design. This paper proposes a new approach to palm print recognition, in which smart phones are used to capture palm print images at a distance. A touchless system was developed because of public demand for privacy and sanitation. Robust hand tracking, image enhancement, and fast computation processing algorithms are required for effective touchless and mobile-based recognition. In this project, hand tracking and the region of interest (ROI) extraction method were discussed. A sliding neighborhood operation with local histogram equalization, followed by a local adaptive thresholding or LHEAT approach, was proposed in the image enhancement stage to manage low-quality palm print images. To accelerate the recognition process, a new classifier, improved fuzzy-based k nearest centroid neighbor (IFkNCN), was implemented. By removing outliers and reducing the amount of training data, this classifier exhibited faster computation. Our experimental results demonstrate that a touchless palm print system using LHEAT and IFkNCN achieves a promising recognition rate of 98.64%.
    Matched MeSH terms: Artificial Intelligence*
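    The centroid-neighbour idea behind IFkNCN in the abstract above can be sketched without the fuzzy weighting or outlier removal: neighbours are added greedily so that the centroid of the set picked so far stays closest to the query, and a majority vote then decides the class. An illustrative 2-D sketch, not the paper's full classifier:

```python
def knc_neighbors(train, query, k):
    """Greedy k nearest *centroid* neighbours (plain kNCN; the paper's
    IFkNCN adds fuzzy weighting and outlier removal on top of this idea).
    `train` is a list of ((x, y), label) pairs."""
    chosen, rest = [], list(range(len(train)))
    while len(chosen) < k and rest:
        best, best_d = None, None
        for i in rest:
            # centroid of the already-chosen points plus candidate i
            pts = [train[j][0] for j in chosen] + [train[i][0]]
            cx = sum(p[0] for p in pts) / len(pts)
            cy = sum(p[1] for p in pts) / len(pts)
            d = (cx - query[0]) ** 2 + (cy - query[1]) ** 2
            if best_d is None or d < best_d:
                best, best_d = i, d
        chosen.append(best)
        rest.remove(best)
    return chosen

def classify(train, query, k=3):
    """Majority vote over the k nearest centroid neighbours."""
    votes = {}
    for i in knc_neighbors(train, query, k):
        label = train[i][1]
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)
```

    Compared with plain kNN, the centroid constraint favours neighbours that surround the query rather than clustering on one side of it, which is the property the improved classifier builds on.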
  14. Alanazi HO, Abdullah AH, Qureshi KN
    J Med Syst, 2017 Apr;41(4):69.
    PMID: 28285459 DOI: 10.1007/s10916-017-0715-6
    Recently, Artificial Intelligence (AI) has been used widely in medicine and the health care sector. In machine learning, classification or prediction is a major field of AI. Today, the study of existing predictive models based on machine learning methods is extremely active. Doctors need accurate predictions for the outcomes of their patients' diseases. In addition, for accurate predictions, timing is another significant factor that influences treatment decisions. In this paper, existing predictive models in medicine and health care have been critically reviewed. Furthermore, the most famous machine learning methods have been explained, and the confusion between a statistical approach and machine learning has been clarified. A review of related literature reveals that the predictions of existing predictive models differ even when the same dataset is used. Therefore, existing predictive models are essential, and current methods must be improved.
    Matched MeSH terms: Artificial Intelligence*
  15. Dikshit A, Pradhan B
    Sci Total Environ, 2021 Dec 20;801:149797.
    PMID: 34467917 DOI: 10.1016/j.scitotenv.2021.149797
    Accurate prediction of any type of natural hazard is a challenging task. Of all the various hazards, drought prediction is challenging as it lacks a universal definition and is worsening as climate change affects drought events both spatially and temporally. The problem becomes more complex as drought occurrence is dependent on a multitude of factors ranging from hydro-meteorological to climatic variables. A paradigm shift happened in this field when it was found that the inclusion of climatic variables in the data-driven prediction model improves the accuracy. However, this understanding has been based primarily on the statistical metrics used to measure model accuracy. The present work tries to explore this finding using an explainable artificial intelligence (XAI) model. The explainable deep learning model development and comparative analysis were performed using known understandings drawn from physical-based models. The work also tries to explore how the model achieves specific results at different spatio-temporal intervals, enabling us to understand the local interactions among the predictors for different drought conditions and drought periods. The drought index used in the study is the Standardized Precipitation Index (SPI) at a 12-month scale, applied for five different regions in New South Wales, Australia, with the explainable algorithm being SHapley Additive exPlanations (SHAP). The conclusions drawn from SHAP plots depict the importance of climatic variables at a monthly scale and varying ranges of annual scale. We observe that the results obtained from SHAP align with the physical model interpretations, thus suggesting the need to add climatic variables as predictors in the prediction model.
    Matched MeSH terms: Artificial Intelligence*
  16. Fallahpoor M, Chakraborty S, Heshejin MT, Chegeni H, Horry MJ, Pradhan B
    Comput Biol Med, 2022 Jun;145:105464.
    PMID: 35390746 DOI: 10.1016/j.compbiomed.2022.105464
    BACKGROUND: Artificial intelligence technologies for the classification/detection of COVID-19 positive cases suffer from poor generalizability. Moreover, accessing and preparing another large dataset is not always feasible and is time-consuming. Several studies have combined smaller COVID-19 CT datasets into "supersets" to maximize the number of training samples. This study aims to assess generalizability by splitting datasets into different portions based on 3D CT images using deep learning.

    METHOD: Two large datasets, including 1110 3D CT images, were split into five segments of 20% each. Each dataset's first 20% segment was separated as a holdout test set. 3D-CNN training was performed with the remaining 80% from each dataset. Two small external datasets were also used to independently evaluate the trained models.

    RESULTS: The total combination of 80% of each dataset has an accuracy of 91% on Iranmehr and 83% on Moscow holdout test datasets. Results indicated that 80% of the primary datasets are adequate for fully training a model. The additional fine-tuning using 40% of a secondary dataset helps the model generalize to a third, unseen dataset. The highest accuracy achieved through transfer learning was 85% on LDCT dataset and 83% on Iranmehr holdout test sets when retrained on 80% of Iranmehr dataset.

    CONCLUSION: While the total combination of both datasets produced the best results, different combinations and transfer learning still produced generalizable results. Adopting the proposed methodology may help to obtain satisfactory results in the case of limited external datasets.

    Matched MeSH terms: Artificial Intelligence
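    The split protocol in the METHOD paragraph above, five 20% segments with the first held out for testing, reduces to simple slicing. A sketch in which the function name is ours, not from the paper:

```python
def five_way_split(dataset):
    """Split a dataset into five 20% segments, reserving the first as a
    holdout test set and pooling the remaining four for training,
    mirroring the protocol described in the METHOD paragraph."""
    n = len(dataset)
    seg = n // 5
    segments = [dataset[i * seg:(i + 1) * seg] for i in range(5)]
    holdout = segments[0]
    training = [sample for s in segments[1:] for sample in s]
    return holdout, training
```

    Keeping the holdout segment fixed before any training lets the same 20% serve as an untouched test set for every model variant trained on the remaining 80%.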
  17. Matin SS, Pradhan B
    Sensors (Basel), 2021 Jun 30;21(13).
    PMID: 34209169 DOI: 10.3390/s21134489
    Building-damage mapping using remote sensing images plays a critical role in providing quick and accurate information for the first responders after major earthquakes. In recent years, there has been an increasing interest in generating post-earthquake building-damage maps automatically using different artificial intelligence (AI)-based frameworks. The frameworks in this domain are promising, yet not reliable, for several reasons, including but not limited to the site-specific design of the methods, the lack of transparency in the AI model, the lack of quality in the labelled images, and the use of irrelevant descriptor features in building the AI model. Using explainable AI (XAI) can lead us to gain insight into identifying these limitations and, therefore, to modify the training dataset and the model accordingly. This paper proposes the use of SHAP (Shapley additive explanation) to interpret the outputs of a multilayer perceptron (MLP), a machine learning model, and to analyse the impact of each feature descriptor included in the model for building-damage assessment to examine the reliability of the model. In this study, a post-event satellite image from the 2018 Palu earthquake was used. The results show that MLP can classify the collapsed and non-collapsed buildings with an overall accuracy of 84% after removing the redundant features. Further, spectral features are found to be more important than texture features in distinguishing the collapsed and non-collapsed buildings. Finally, we argue that constructing an explainable model would help to understand the model's decision to classify the buildings as collapsed and non-collapsed and open avenues to build a transferable AI model.
    Matched MeSH terms: Artificial Intelligence*
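    SHAP, used in the abstract above, approximates the Shapley value of each feature: its weighted average marginal contribution over all subsets of the other features. For a toy model the exact value can be computed directly, which makes the attribution idea concrete. This is an illustrative sketch; real use would call the `shap` library on the trained MLP:

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley feature attributions for model f at instance x,
    with absent features replaced by `baseline` values.  This is the
    quantity SHAP approximates; exponential cost, so toy-sized only."""
    n = len(x)
    phi = [0.0] * n
    feats = range(n)
    for i in feats:
        others = [j for j in feats if j != i]
        for r in range(len(others) + 1):
            for S in combinations(others, r):
                # Shapley kernel weight for a coalition of size |S|
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                with_i = [x[j] if j in S or j == i else baseline[j] for j in feats]
                without = [x[j] if j in S else baseline[j] for j in feats]
                phi[i] += w * (f(with_i) - f(without))
    return phi
```

    The attributions always sum to the difference between the model's output at the instance and at the baseline, which is the property that makes SHAP plots like those in the study additive and comparable across features.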
  18. Cacha LA, Poznanski RR
    J Integr Neurosci, 2014 Jun;13(2):253-92.
    PMID: 25012712 DOI: 10.1142/S0219635214400081
    A theoretical framework is developed based on the premise that brains evolved into sufficiently complex adaptive systems capable of instantiating genomic consciousness through self-awareness and complex interactions that recognize qualitatively the controlling factors of biological processes. Furthermore, our hypothesis assumes that the collective interactions in neurons yield macroergic effects, which can produce sufficiently strong electric energy fields for electronic excitations to take place on the surface of endogenous structures via alpha-helical integral proteins as electro-solitons. Specifically the process of radiative relaxation of the electro-solitons allows for the transfer of energy via interactions with deoxyribonucleic acid (DNA) molecules to induce conformational changes in DNA molecules producing an ultra weak non-thermal spontaneous emission of coherent biophotons through a quantum effect. The instantiation of coherent biophotons confined in spaces of DNA molecules guides the biophoton field to be instantaneously conducted along the axonal and neuronal arbors and in-between neurons and throughout the cerebral cortex (cortico-thalamic system) and subcortical areas (e.g., midbrain and hindbrain). Thus providing an informational character of the electric coherence of the brain - referred to as quantum coherence. The biophoton field is realized as a conscious field upon the re-absorption of biophotons by exciplex states of DNA molecules. Such quantum phenomenon brings about self-awareness and enables objectivity to have access to subjectivity in the unconscious. As such, subjective experiences can be recalled to consciousness as subjective conscious experiences or qualia through co-operative interactions between exciplex states of DNA molecules and biophotons leading to metabolic activity and energy transfer across proteins as a result of protein-ligand binding during protein-protein communication. 
The biophoton field as a conscious field is attributable to the resultant effect of specifying qualia from the metabolic energy field that is transported in macromolecular proteins throughout specific networks of neurons that are constantly transforming into more stable associable representations as molecular solitons. The metastability of subjective experiences based on resonant dynamics occurs when bottom-up patterns of neocortical excitatory activity are matched with top-down expectations as adaptive dynamic pressures. These dynamics of on-going activity patterns influenced by the environment and selected as the preferred subjective experience in terms of a functional field through functional interactions and biological laws are realized as subjectivity and actualized through functional integration as qualia. It is concluded that interactionism and not information processing is the key in understanding how consciousness bridges the explanatory gap between subjective experiences and their neural correlates in the transcendental brain.
    Matched MeSH terms: Artificial Intelligence
  19. Kumar R, Khan FU, Sharma A, Aziz IB, Poddar NK
    Curr Med Chem, 2021 Apr 04.
    PMID: 33820515 DOI: 10.2174/0929867328666210405114938
    There has been substantial progress in artificial intelligence (AI) algorithms and their medical sciences applications in the last two decades. AI-assisted programs have already been established for remote health monitoring using sensors and smartphones. A variety of AI-based prediction models are available for gastrointestinal inflammatory and non-malignant diseases and bowel bleeding using wireless capsule endoscopy, for hepatitis-associated fibrosis using electronic medical records, and for pancreatic carcinoma using endoscopic ultrasound. AI-based models may be of immense help for healthcare professionals in identification, analysis, and decision support using endoscopic images to establish prognosis and risk assessment of a patient's treatment using multiple factors. However, sufficient randomized clinical trials are warranted to establish the efficacy of AI-algorithm-assisted versus non-AI-based treatments before such techniques are approved by medical regulatory authorities. In this article, available AI approaches and AI-based prediction models for detecting gastrointestinal, hepatic, and pancreatic diseases are reviewed. The limitations of AI techniques in such disease prognosis, risk assessment, and decision support are discussed.
    Matched MeSH terms: Artificial Intelligence
  20. Flaherty GT, Piyaphanee W
    J Travel Med, 2023 Feb 18;30(1).
    PMID: 36208173 DOI: 10.1093/jtm/taac113
    Matched MeSH terms: Artificial Intelligence*