METHODS: A crossover study was conducted among Year 1 and Year 2 pharmacy students, who were invited to participate voluntarily in one OB and one CB online formative test in a chemistry module in each year. Their learning approach and their perception of the OB and CB examination systems were evaluated using the Deep Information Processing (DIP) questionnaire and a Student Perception questionnaire, respectively. The mean performance scores of the OB and CB examinations were compared.
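Because each student sits both examination formats in this crossover design, the score comparison is a paired one. A minimal sketch of such a paired comparison, using entirely synthetic placeholder scores (the cohort size, means, and spreads below are illustrative assumptions, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50                                   # hypothetical cohort size
ob = rng.normal(72.0, 10.0, size=n)      # open-book scores (synthetic)
cb = ob - rng.normal(5.0, 8.0, size=n)   # closed-book scores (synthetic)

# Paired t statistic: each student contributes one OB-CB difference.
d = ob - cb
t_stat = d.mean() / (d.std(ddof=1) / np.sqrt(n))

# For df = n - 1 = 49, |t| > ~2.68 corresponds to p < 0.01 (two-tailed).
print(f"paired t = {t_stat:.2f}")
```

The paired design removes between-student variability from the comparison, which is why the differences, not the raw score sets, drive the test.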
RESULTS: Analysis of the DIP scores showed no significant difference (p > 0.05) in the learning approach adopted for the two examination systems. However, the mean score obtained in the OB examination was significantly higher (p < 0.01) than that obtained in the CB examination. A majority of students preferred the OB examination, possibly because it was associated with lower anxiety, less reliance on memorization, and more problem solving.
CONCLUSION: Students' deep learning approach did not differ between the OB and CB examination formats. However, student performance was significantly better in the OB examination than in the CB examination. Hence, using OB examinations alongside CB examinations should support student learning and help students adapt to the growing and changing body of knowledge in pharmacy education and practice.
METHODS: Cry signals from two databases were utilized. The first database contains 507 cry samples of normal (N), 340 of asphyxia (A), 879 of deaf (D), 350 of hungry (H) and 192 of pain (P) infants. The second database contains 513 cry samples of jaundice (J), 531 of premature (Prem) and 45 of normal (N) infants. Wavelet packet transform based energy and non-linear entropies (496 features), Linear Predictive Coding (LPC) based cepstral features (56 features), and Mel-Frequency Cepstral Coefficients (MFCCs, 16 features) were extracted, giving a combined set of 568 features. To overcome the curse of dimensionality, an improved binary dragonfly optimization algorithm (IBDFO) was proposed to select the most salient features. Finally, an Extreme Learning Machine (ELM) kernel classifier was used to classify the different types of infant cry signals using both the full feature set and the selected highly informative features.
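A kernel ELM of the kind named above trains in closed form: with kernel matrix K and one-hot targets T, the output weights are beta = (I/C + K)^-1 T. A minimal sketch with an RBF kernel, where the tiny synthetic feature matrix stands in for the 568-dimensional cry features and the constants C and gamma are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.1):
    # Pairwise squared Euclidean distances -> Gaussian (RBF) kernel matrix.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def train_kernel_elm(X, y, C=10.0, gamma=0.1):
    # Closed-form output weights: beta = (I/C + K)^-1 T.
    K = rbf_kernel(X, X, gamma)
    T = np.eye(y.max() + 1)[y]            # one-hot label matrix
    return np.linalg.solve(np.eye(len(X)) / C + K, T)

def predict_kernel_elm(X_train, beta, X_new, gamma=0.1):
    # Class scores for new samples; argmax gives the predicted label.
    return rbf_kernel(X_new, X_train, gamma) @ beta

# Toy two-class data (a stand-in for e.g. the A vs. N experiment).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (20, 4)), rng.normal(3, 1, (20, 4))])
y = np.array([0] * 20 + [1] * 20)

beta = train_kernel_elm(X, y)
pred = predict_kernel_elm(X, beta, X).argmax(axis=1)
print("training accuracy:", (pred == y).mean())
```

The regularization term I/C keeps the linear solve well conditioned; larger C fits the training data more tightly.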
RESULTS: Several two-class and multi-class classification experiments were conducted. In the binary (two-class) experiments, maximum accuracies of 90.18% for H vs. P, 100% for A vs. N, 100% for D vs. N and 97.61% for J vs. Prem were achieved using the features selected by IBDFO (only 204 of the 568 features). In the multi-class experiments, the selected features differentiated three classes (N, A and D) with an accuracy of 100% and seven classes with an accuracy of 97.62%.
CONCLUSION: The experimental results indicated that the proposed combination of feature extraction and selection methods offers suitable classification accuracy and may be employed to detect subtle changes in cry signals.
OBJECTIVE: This paper presents a machine learning-based approach for the automatic classification of regular and irregular capnogram segments.
METHODS: Herein, we proposed four time-domain and two frequency-domain features, evaluated with a support vector machine classifier through ten-fold cross-validation. A MATLAB simulation was conducted on 100 regular and 100 irregular 15 s capnogram segments. Analysis of variance was performed to assess the significance of the proposed features, and Pearson's correlation was used to select the most informative ones, namely the variance and the area under the normalized magnitude spectrum. Classification performance using these two features was evaluated against two feature sets comprising either the time-domain or the frequency-domain features only.
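The evaluation protocol described above (per-feature ANOVA significance, then an SVM scored by ten-fold cross-validation) can be sketched in a few lines. The synthetic two-feature data below merely stands in for the variance and spectral-area features of the 15 s capnogram segments; the class separation is an assumption for illustration:

```python
import numpy as np
from sklearn.feature_selection import f_classif
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(2)
# 100 "regular" and 100 "irregular" segments, two features each (synthetic).
X = np.vstack([rng.normal(0.0, 1.0, (100, 2)),
               rng.normal(1.5, 1.0, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

F, p = f_classif(X, y)   # one-way ANOVA F-test per feature
acc = cross_val_score(SVC(kernel="rbf"), X, y, cv=10).mean()
print("ANOVA p-values:", p, "10-fold CV accuracy:", acc)
```

Note the paper's simulation was run in MATLAB; scikit-learn is used here only as a compact, widely available equivalent.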
RESULTS: The proposed approach achieved a classification accuracy of 86.5%, outperforming the other two cases by an average of 5.5%. The achieved specificity, sensitivity, and precision were 84%, 89%, and 86.51%, respectively. The average execution time for feature extraction and classification was only 36 ms per segment.
CONCLUSION: The proposed approach can be integrated with capnography devices for real-time capnogram-based respiratory assessment. However, further research is recommended to enhance the classification performance.
RESULTS: At present, the classifier achieves an accuracy of 100% in identifying skull views. Classification and identification by region and sex attained accuracies of 72.5%, 87.5% and 80.0% for the dorsal, lateral, and jaw views, respectively. These results show that the shape characteristic features used are effective, since they can differentiate the specimens by region and sex with accuracies of approximately 80% and above. Finally, an application was developed for use by the scientific community.
CONCLUSIONS: This automated system demonstrates the practicability of computer-assisted systems as an interesting alternative approach for quick and easy identification of unknown species.