Displaying all 13 publications

  1. Amin HU, Malik AS, Kamel N, Hussain M
    Brain Topogr, 2016 Mar;29(2):207-17.
    PMID: 26613724 DOI: 10.1007/s10548-015-0462-2
    Feature extraction and classification for electroencephalogram (EEG) in medical applications is a challenging task. EEG signals contain a large amount of redundant or repeating information, and this redundancy poses potential hurdles in EEG analysis. Hence, we propose to use this redundant information of EEG as a feature to discriminate and classify different EEG datasets. In this study, we have proposed a JPEG2000-based approach for computing data redundancy from multi-channel EEG signals and have used the redundancy as a feature for classification of EEG signals by applying support vector machine, multi-layer perceptron and k-nearest neighbors classifiers. The approach is validated on three EEG datasets and achieved high accuracy rates (95-99%) in classification. Dataset-1 includes EEG signals recorded during a fluid intelligence test, dataset-2 consists of EEG signals recorded during a memory recall test, and dataset-3 has epileptic seizure and non-seizure EEG. The findings demonstrate that the approach has the ability to extract robust features and classify EEG signals in various applications, including clinical as well as normal EEG patterns.
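    The following minimal Python sketch illustrates the general idea of a compression-based redundancy feature followed by classification. It is not the paper's exact pipeline: Pillow (built with OpenJPEG) is assumed for JPEG2000 compression, and the trial shapes, labels, and classifier settings are made up for illustration.
    ```python
    # Compression-based redundancy of a multi-channel EEG segment used as a
    # scalar feature for classification (illustrative sketch only).
    import io
    import numpy as np
    from PIL import Image                     # assumes Pillow with OpenJPEG support
    from sklearn.svm import SVC

    def jpeg2000_redundancy(eeg_segment: np.ndarray) -> float:
        """eeg_segment: (channels, samples). Returns 1 - compressed/raw size,
        i.e. higher values mean more redundant (more compressible) data."""
        x = eeg_segment - eeg_segment.min()
        x = (255 * x / (x.max() + 1e-12)).astype(np.uint8)  # 8-bit "image"
        buf = io.BytesIO()
        Image.fromarray(x).save(buf, format="JPEG2000")     # lossless by default
        return 1.0 - buf.tell() / x.nbytes

    # Hypothetical usage: one redundancy feature per trial, then an SVM.
    rng = np.random.default_rng(0)
    trials = rng.standard_normal((40, 32, 512))   # 40 toy trials, 32 channels
    X = np.array([[jpeg2000_redundancy(t)] for t in trials])
    y = np.repeat([0, 1], 20)                     # two made-up conditions
    clf = SVC(kernel="rbf").fit(X, y)
    print("training accuracy:", clf.score(X, y))
    ```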
  2. Jawed S, Amin HU, Malik AS, Faye I
    Front Behav Neurosci, 2019;13:86.
    PMID: 31133829 DOI: 10.3389/fnbeh.2019.00086
    This study analyzes the learning styles of subjects based on their electroencephalography (EEG) signals. The goal is to identify how the EEG features of a visual learner differ from those of a non-visual learner. The idea is to measure the students' EEG during the resting states (eyes-open and eyes-closed conditions) and while performing learning tasks. For this purpose, 34 healthy subjects are recruited. The subjects have no background knowledge of the animated learning content, which is shown to them in a video format. The experiment consists of two sessions, and each session comprises two parts: (1) a learning task, in which the subjects watch the animated learning content for 8-10 min, and (2) a memory retrieval task. The EEG signals are measured during the learning task and the memory retrieval task in both sessions. The retention time is 30 min for the first session and 2 months for the second session. The analysis is performed on the EEG measured during the memory retrieval tasks. The study characterizes and differentiates the visual learners from the non-visual learners using extracted EEG features such as the power spectral density (PSD), power spectral entropy (PSE), and discrete wavelet transform (DWT). The PSD and DWT features are computed for the recorded EEG in the alpha and gamma frequency bands over 128 scalp sites; the frontal, occipital, and parietal regions are analyzed because these regions are activated during learning. The extracted PSD and DWT features are then reduced to 8 and 15 optimum features, respectively, using principal component analysis (PCA). The optimum features are used as input to a k-nearest neighbor (k-NN) classifier with the Mahalanobis distance metric and to a support vector machine (SVM) classifier with a linear kernel, both with 10-fold cross-validation. With the k-NN classifier, the PSD features yield accuracies of 97% and 94% for the first session and 96% and 93% for the second session in the alpha and gamma bands for the visual and non-visual learners, respectively, while the DWT features yield accuracies of 68% and 100% for the first session and 100% for the second session in the alpha and gamma bands. With the SVM classifier, the PSD features yield accuracies of 97% and 96% for the first session and 100% and 95% for the second session, while the DWT features yield accuracies of 79% and 82% for the first session and 56% and 74% for the second session. The results showed that the PSDs in the alpha and gamma bands represent distinct and stable EEG signatures for visual learners and non-visual learners during the retrieval of the learned contents.
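    A minimal sketch of the kind of pipeline described above, under assumed settings: Welch PSD features in an alpha-like band, PCA reduction, then k-NN (Mahalanobis distance) and linear-SVM classification with 10-fold cross-validation. The sampling rate, channel count, band edges, and toy data are illustrative, not taken from the study.
    ```python
    # Welch PSD features -> PCA -> k-NN (Mahalanobis) and linear SVM, 10-fold CV.
    import numpy as np
    from scipy.signal import welch
    from sklearn.decomposition import PCA
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    fs = 250                                      # assumed sampling rate (Hz)
    rng = np.random.default_rng(1)
    eeg = rng.standard_normal((60, 32, fs * 4))   # 60 toy trials, 32 channels, 4 s
    labels = np.repeat([0, 1], 30)                # visual / non-visual (toy labels)

    def band_power(trials, lo, hi):
        f, pxx = welch(trials, fs=fs, nperseg=fs, axis=-1)
        band = (f >= lo) & (f <= hi)
        return pxx[..., band].mean(axis=-1)       # (trials, channels)

    X = PCA(n_components=8).fit_transform(band_power(eeg, 8, 13))  # alpha PSD

    knn = KNeighborsClassifier(n_neighbors=5, metric="mahalanobis",
                               metric_params={"VI": np.linalg.inv(np.cov(X.T))})
    svm = SVC(kernel="linear")

    print("k-NN 10-fold accuracy:", cross_val_score(knn, X, labels, cv=10).mean())
    print("SVM 10-fold accuracy:", cross_val_score(svm, X, labels, cv=10).mean())
    ```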
  3. Amin HU, Ullah R, Reza MF, Malik AS
    J Neuroeng Rehabil, 2023 Jun 02;20(1):70.
    PMID: 37269019 DOI: 10.1186/s12984-023-01179-8
    BACKGROUND: Presentation of visual stimuli can induce changes in EEG signals that are typically detectable by averaging together data from multiple trials, for individual participant analysis as well as for group- or condition-level analyses across multiple participants. This study proposes a new method based on the discrete wavelet transform with Huffman coding and machine learning for single-trial analysis of event-related potentials (ERPs) and classification of different visual events in a visual object detection task.

    METHODS: EEG single trials are decomposed with the discrete wavelet transform (DWT) up to the [Formula: see text] level of decomposition using a biorthogonal B-spline wavelet. The DWT coefficients in each trial are thresholded to discard small wavelet coefficients while the quality of the signal is well maintained. The remaining optimum coefficients in each trial are encoded into bitstreams using Huffman coding, and the codewords are represented as a feature of the ERP signal (a minimal sketch of this step appears after this entry). The performance of this method is tested with real visual ERPs of sixty-eight subjects.

    RESULTS: The proposed method significantly discards the spontaneous EEG activity, extracts the single-trial visual ERPs, represents the ERP waveform into a compact bitstream as a feature, and achieves promising results in classifying the visual objects with classification performance metrics: accuracies 93.60[Formula: see text], sensitivities 93.55[Formula: see text], specificities 94.85[Formula: see text], precisions 92.50[Formula: see text], and area under the curve (AUC) 0.93[Formula: see text] using SVM and k-NN machine learning classifiers.

    CONCLUSION: The proposed method suggests that the joint use of the discrete wavelet transform (DWT) with Huffman coding has the potential to efficiently extract ERPs from background EEG for studying evoked responses in single-trial ERPs and classifying visual stimuli. The proposed approach has O(N) time complexity and could be implemented in real-time systems, such as brain-computer interfaces (BCIs), where fast detection of mental events is desired to smoothly operate a machine with the mind.
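    A minimal sketch of the thresholded-DWT-plus-Huffman-coding step referenced in the METHODS above. The wavelet name, decomposition level, threshold rule, and quantization step are assumptions for illustration; the paper's exact choices are not reproduced.
    ```python
    # Biorthogonal DWT of one trial, thresholding of small coefficients, then
    # Huffman coding of the quantized survivors into a bitstream.
    import heapq
    from collections import Counter
    import numpy as np
    import pywt

    def dwt_threshold(trial, wavelet="bior3.3", level=5, keep_ratio=0.1):
        """Keep only the largest-magnitude DWT coefficients of one EEG trial."""
        coeffs = pywt.wavedec(trial, wavelet, level=level)
        thr = np.quantile(np.abs(np.concatenate(coeffs)), 1 - keep_ratio)
        return [np.where(np.abs(c) >= thr, c, 0.0) for c in coeffs]

    def huffman_code(symbols):
        """Build a Huffman codebook {symbol: bitstring} for a symbol sequence."""
        heap = [[w, i, {s: ""}] for i, (s, w) in enumerate(Counter(symbols).items())]
        heapq.heapify(heap)
        while len(heap) > 1:
            lo, hi = heapq.heappop(heap), heapq.heappop(heap)
            for s in lo[2]:
                lo[2][s] = "0" + lo[2][s]
            for s in hi[2]:
                hi[2][s] = "1" + hi[2][s]
            heapq.heappush(heap, [lo[0] + hi[0], lo[1], {**lo[2], **hi[2]}])
        return heap[0][2]

    # Toy usage: quantize the surviving coefficients and encode one trial.
    rng = np.random.default_rng(2)
    trial = rng.standard_normal(1024)                       # one toy EEG trial
    kept = np.concatenate(dwt_threshold(trial))
    symbols = np.round(kept / 0.05).astype(int).tolist()    # uniform quantizer
    book = huffman_code(symbols)
    bitstream = "".join(book[s] for s in symbols)
    print(len(bitstream), "bits for", len(symbols), "coefficients")
    ```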

  4. Tyng CM, Amin HU, Saad MNM, Malik AS
    Front Psychol, 2017;8:1454.
    PMID: 28883804 DOI: 10.3389/fpsyg.2017.01454
    Emotion has a substantial influence on the cognitive processes in humans, including perception, attention, learning, memory, reasoning, and problem solving. Emotion has a particularly strong influence on attention, especially modulating the selectivity of attention as well as motivating action and behavior. This attentional and executive control is intimately linked to learning processes, as intrinsically limited attentional capacities are better focused on relevant information. Emotion also facilitates encoding and helps retrieval of information efficiently. However, the effects of emotion on learning and memory are not always univalent, as studies have reported that emotion either enhances or impairs learning and long-term memory (LTM) retention, depending on a range of factors. Recent neuroimaging findings have indicated that the amygdala and prefrontal cortex cooperate with the medial temporal lobe in an integrated manner that affords (i) the amygdala modulating memory consolidation; (ii) the prefrontal cortex mediating memory encoding and formation; and (iii) the hippocampus supporting successful learning and LTM retention. We also review the nested hierarchies of circular emotional control and cognitive regulation (bottom-up and top-down influences) within the brain to achieve optimal integration of emotional and cognitive processing. This review highlights a basic evolutionary approach to emotion to understand the effects of emotion on learning and memory and the functional roles played by various brain regions and their mutual interactions in relation to emotional processing. We also summarize the current state of knowledge on the impact of emotion on memory and map implications for educational settings. In addition to elucidating the memory-enhancing effects of emotion, neuroimaging findings extend our understanding of emotional influences on learning and memory processes; this knowledge may be useful for the design of effective educational curricula to provide a conducive learning environment for both traditional "live" learning in classrooms and "virtual" learning through online-based educational technologies.
  5. Amin HU, Malik AS, Ahmad RF, Badruddin N, Kamel N, Hussain M, et al.
    Australas Phys Eng Sci Med, 2015 Mar;38(1):139-49.
    PMID: 25649845 DOI: 10.1007/s13246-015-0333-x
    This paper describes a discrete wavelet transform-based feature extraction scheme for the classification of EEG signals. In this scheme, the discrete wavelet transform is applied to EEG signals and the relative wavelet energy is calculated in terms of the detail coefficients and the approximation coefficients of the last decomposition level. The extracted relative wavelet energy features are passed to classifiers for classification. The EEG dataset employed for the validation of the proposed method consisted of two classes: (1) EEG signals recorded during a complex cognitive task, the Raven's Advanced Progressive Matrices test, and (2) EEG signals recorded in a rest condition (eyes open). The performance of four different classifiers was evaluated with four performance measures, i.e., accuracy, sensitivity, specificity and precision. Accuracy above 98% was achieved by the support vector machine, multi-layer perceptron and k-nearest neighbor classifiers with the approximation (A4) and detail coefficients (D4), which represent the frequency ranges of 0.53-3.06 Hz and 3.06-6.12 Hz, respectively. The findings of this study demonstrate that the proposed feature extraction approach has the potential to classify EEG signals recorded during a complex cognitive task with a high accuracy rate.
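    A minimal sketch of the relative wavelet energy feature described above, under assumed settings (the wavelet, decomposition level, and sampling rate are illustrative):
    ```python
    # Relative wavelet energy of one trial: energy per DWT sub-band divided by
    # the total energy; the first two entries correspond to A4 and D4.
    import numpy as np
    import pywt

    def relative_wavelet_energy(trial, wavelet="db4", level=4):
        coeffs = pywt.wavedec(trial, wavelet, level=level)   # [A4, D4, D3, D2, D1]
        energies = np.array([np.sum(c ** 2) for c in coeffs])
        return energies / energies.sum()

    rng = np.random.default_rng(3)
    trial = rng.standard_normal(4 * 128)                     # 4 s at 128 Hz (toy)
    rwe = relative_wavelet_energy(trial)
    print("A4, D4 relative energies:", rwe[0], rwe[1])       # per-trial features
    ```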
  6. Amin HU, Malik AS, Kamel N, Chooi WT, Hussain M
    J Neuroeng Rehabil, 2015;12:87.
    PMID: 26400233 DOI: 10.1186/s12984-015-0077-6
    Educational psychology research has linked fluid intelligence with learning and memory abilities, and neuroimaging studies have specifically associated fluid intelligence with event-related potentials (ERPs). The objective of this study is to examine the relationship of ERPs with learning and memory recall and to predict the memory recall score using the P300 (P3) component.
  7. Ahmad RF, Malik AS, Kamel N, Reza F, Amin HU, Hussain M
    Technol Health Care, 2017;25(3):471-485.
    PMID: 27935575 DOI: 10.3233/THC-161286
    BACKGROUND: Classification of visual information from brain activity data is a challenging task. Many studies reported in the literature are based on brain activity patterns from either fMRI or EEG/MEG alone. EEG and fMRI are considered two complementary neuroimaging modalities in terms of their temporal and spatial resolution for mapping brain activity. To obtain both high spatial and high temporal resolution of the brain at the same time, simultaneous EEG-fMRI appears to be fruitful.

    METHODS: In this article, we propose a new method based on simultaneous EEG-fMRI data and a machine learning approach to classify visual brain activity patterns. We acquired EEG-fMRI data simultaneously from ten healthy human participants while showing them visual stimuli. A data fusion approach is used to merge the EEG and fMRI data, and a machine learning classifier is used for classification (a toy fusion sketch appears after this entry).

    RESULTS: Results showed that superior classification performance was achieved with simultaneous EEG-fMRI data compared to EEG or fMRI data alone. This shows that the multimodal approach improved the classification accuracy compared with other approaches reported in the literature.

    CONCLUSIONS: The proposed simultaneous EEG-fMRI approach for classifying brain activity patterns can be helpful for predicting or fully decoding those patterns.
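    A toy sketch of feature-level EEG-fMRI fusion in the spirit of this entry: z-score the two feature sets separately, concatenate them, and classify. The feature dimensions, labels, and classifier are assumptions; the paper's actual fusion method is not reproduced.
    ```python
    # Feature-level fusion: z-score EEG and fMRI features separately,
    # concatenate, and classify with a linear SVM (toy data throughout).
    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(4)
    n_trials = 80
    eeg_feats = rng.standard_normal((n_trials, 64))    # e.g. band powers (toy)
    fmri_feats = rng.standard_normal((n_trials, 200))  # e.g. ROI responses (toy)
    labels = rng.integers(0, 2, n_trials)              # two visual categories

    fused = np.hstack([StandardScaler().fit_transform(eeg_feats),
                       StandardScaler().fit_transform(fmri_feats)])

    clf = SVC(kernel="linear")
    print("fused 5-fold accuracy:", cross_val_score(clf, fused, labels, cv=5).mean())
    ```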

  8. Qazi EU, Hussain M, Aboalsamh H, Malik AS, Amin HU, Bamatraf S
    Front Hum Neurosci, 2016;10:687.
    PMID: 28163676 DOI: 10.3389/fnhum.2016.00687
    Assessing a person's intelligence level is required in many situations, such as career counseling and clinical applications. EEG evoked potentials in an oddball task and fluid intelligence scores are correlated because both reflect cognitive processing and attention. A system for predicting an individual's fluid intelligence level using single-trial electroencephalography (EEG) signals has been proposed. For this purpose, we employed 2D and 3D contents, with 34 subjects each for 2D and 3D, who were divided into low-ability (LA) and high-ability (HA) groups using Raven's Advanced Progressive Matrices (RAPM) test. Using a visual oddball cognitive task, the neural activity of each group was measured and analyzed over three midline electrodes (Fz, Cz, and Pz). To predict whether an individual belongs to the LA or HA group, features were extracted using wavelet decomposition of the EEG signals recorded during the visual oddball task, and a support vector machine (SVM) was used as the classifier. Two different types of Haar wavelet transform-based features were extracted from the 0.3-30 Hz band of the EEG signals. Statistical wavelet features and wavelet coefficient features from the frequency bands 0.0-1.875 Hz (delta low) and 1.875-3.75 Hz (delta high) resulted in 100% and 98% prediction accuracies, respectively, for both 2D and 3D contents. The analysis of these frequency bands showed a clear difference between the LA and HA groups. Further, the discriminative values of the features were validated using statistical significance tests and inter-class and intra-class variation analysis. Also, statistical tests showed that there was no effect of 2D and 3D content on the assessment of fluid intelligence level. Comparisons with state-of-the-art techniques showed the superiority of the proposed system.
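    A minimal sketch of Haar-wavelet statistical features over low-frequency sub-bands, under the assumption of a 240 Hz sampling rate so that the deepest bands roughly match the 0-1.875 Hz and 1.875-3.75 Hz ranges mentioned above; the statistics chosen are illustrative, not the paper's.
    ```python
    # Haar DWT statistics of the two deepest sub-bands. With a 240 Hz sampling
    # rate and 6 decomposition levels, A6 covers roughly 0-1.875 Hz and D6
    # covers roughly 1.875-3.75 Hz (an assumption made for this sketch).
    import numpy as np
    import pywt

    def haar_delta_features(trial, level=6):
        coeffs = pywt.wavedec(trial, "haar", level=level)    # [A6, D6, ..., D1]
        feats = []
        for c in coeffs[:2]:                                 # A6 and D6 only
            feats += [c.mean(), c.std(), np.sum(c ** 2)]     # simple statistics
        return np.array(feats)

    rng = np.random.default_rng(5)
    trial = rng.standard_normal(2 * 240)                     # 2 s at 240 Hz (toy)
    print(haar_delta_features(trial))
    ```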
  9. Amin HU, Mumtaz W, Subhani AR, Saad MNM, Malik AS
    Front Comput Neurosci, 2017;11:103.
    PMID: 29209190 DOI: 10.3389/fncom.2017.00103
    Feature extraction is an important step in the process of electroencephalogram (EEG) signal classification. The authors propose a "pattern recognition" approach that discriminates EEG signals recorded during different cognitive conditions. Wavelet-based features, such as multi-resolution decompositions into detail and approximation coefficients as well as relative wavelet energy, were computed. The extracted relative wavelet energy features were normalized to zero mean and unit variance and then optimized using Fisher's discriminant ratio (FDR) and principal component analysis (PCA). The proposed method was validated on a high-density (128-channel) EEG dataset with two classes: (1) EEG signals recorded during a complex cognitive task, the Raven's Advanced Progressive Matrices (RAPM) test, and (2) EEG signals recorded during a baseline task (eyes open). Classifiers such as k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), and Naïve Bayes (NB) were then employed. Outcomes yielded 99.11% accuracy with the SVM classifier for the approximation coefficients (A5), covering low frequencies from 0 to 3.90 Hz. Accuracy rates for the detail coefficients (D5), derived from the 3.90-7.81 Hz sub-band, were 98.57% and 98.39% for SVM and KNN, respectively. Accuracy rates for the MLP and NB classifiers were comparable, at 97.11-89.63% and 91.60-81.07% for the A5 and D5 coefficients, respectively. In addition, the proposed approach was applied to a public dataset for classification of two cognitive tasks and achieved comparable classification results, i.e., 93.33% accuracy with KNN. The proposed scheme yielded significantly higher classification performance using machine learning classifiers compared to extant quantitative feature extraction methods. These results suggest that the proposed feature extraction method reliably classifies EEG signals recorded during cognitive tasks with a high degree of accuracy.
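    A minimal sketch of the Fisher's discriminant ratio (FDR) feature-ranking step followed by PCA, on toy data; the feature counts and the number of retained components are assumptions.
    ```python
    # FDR ranking of normalized features for a two-class problem, then PCA.
    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA

    def fisher_discriminant_ratio(X, y):
        """Per-feature FDR: (mean0 - mean1)^2 / (var0 + var1)."""
        X0, X1 = X[y == 0], X[y == 1]
        num = (X0.mean(axis=0) - X1.mean(axis=0)) ** 2
        den = X0.var(axis=0) + X1.var(axis=0) + 1e-12
        return num / den

    rng = np.random.default_rng(6)
    X = rng.standard_normal((100, 128))        # 100 toy trials x 128 features
    y = np.repeat([0, 1], 50)
    X[y == 1, :10] += 1.0                      # make the first 10 features useful

    Xn = StandardScaler().fit_transform(X)     # zero mean, unit variance
    top = np.argsort(fisher_discriminant_ratio(Xn, y))[::-1][:20]  # 20 best features
    X_reduced = PCA(n_components=5).fit_transform(Xn[:, top])
    print("reduced feature shape:", X_reduced.shape)
    ```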
  10. Bamatraf S, Hussain M, Aboalsamh H, Qazi EU, Malik AS, Amin HU, et al.
    Comput Intell Neurosci, 2016;2016:8491046.
    PMID: 26819593 DOI: 10.1155/2016/8491046
    We studied the impact of 2D and 3D educational contents on learning and memory recall using electroencephalography (EEG) brain signals. For this purpose, we adopted a classification approach that predicts true and false memories for both short-term memory (STM) and long-term memory (LTM) and helps to decide whether there is a difference between the impact of 2D and 3D educational contents. In this approach, EEG brain signals are converted into topomaps, discriminative features are extracted from them, and finally a support vector machine (SVM) is employed to predict brain states. For data collection, half of the sixty-eight healthy individuals watched the learning material in 2D format, whereas the rest watched the same material in 3D format. After the learning task, memory recall tasks were performed after 30 minutes (STM) and two months (LTM), and EEG signals were recorded. For STM, prediction accuracies of 97.5% for 3D and 96.6% for 2D were achieved, and for LTM the accuracy was 100% for both 2D and 3D. The statistical analysis of the results suggested that, for learning and memory recall, 2D and 3D materials do not differ much in either the STM or the LTM case.
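    A simplified stand-in for the topomap step described above: per-channel values are interpolated onto a 2D scalp grid and the grid is flattened into a feature vector. The electrode coordinates and interpolation choice are made up; the paper's actual topomap rendering and discriminative feature extraction are not reproduced.
    ```python
    # Interpolate per-channel values onto a 2D grid ("topomap") and flatten it.
    import numpy as np
    from scipy.interpolate import griddata

    rng = np.random.default_rng(7)
    n_channels = 32
    # Toy electrode coordinates on a circle (placeholder for a real montage).
    theta = np.linspace(0, 2 * np.pi, n_channels, endpoint=False)
    pos = np.c_[0.8 * np.cos(theta), 0.8 * np.sin(theta)]

    band_power = rng.random(n_channels)               # one value per channel (toy)

    grid_x, grid_y = np.mgrid[-1:1:32j, -1:1:32j]
    topomap = griddata(pos, band_power, (grid_x, grid_y), method="cubic")
    feature_vector = np.nan_to_num(topomap).ravel()   # 32x32 map -> 1024 features
    print("topomap feature length:", feature_vector.size)
    ```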
  11. Malik AS, Khairuddin RN, Amin HU, Smith ML, Kamel N, Abdullah JM, et al.
    Biomed Eng Online, 2015;14:21.
    PMID: 25886584 DOI: 10.1186/s12938-015-0006-8
    Consumer preference is rapidly changing from 2D to 3D movies due to the sensational effects of 3D scenes, like those in Avatar and The Hobbit. Two 3D viewing technologies are available: active shutter glasses and passive polarized glasses. However, there are consistent reports of discomfort while viewing in 3D mode, where the discomfort may include dizziness, headaches, nausea, or simply not being able to see in 3D continuously.
  12. Roslan NS, Izhar LI, Faye I, Amin HU, Mohamad Saad MN, Sivapalan S, et al.
    PLoS One, 2019;14(7):e0219839.
    PMID: 31344061 DOI: 10.1371/journal.pone.0219839
    The extraversion personality trait has a positive correlation with social interaction. In neuroimaging studies, investigations on extraversion in face-to-face verbal interactions are still scarce. This study presents an electroencephalography (EEG)-based investigation of the extraversion personality trait in relation to eye contact during face-to-face interactions, as this is a vital signal in social interactions. A sample of healthy male participants was selected (consisting of sixteen more extraverted and sixteen less extraverted individuals) and evaluated with the Eysenck Personality Inventory (EPI) and Big Five Inventory (BFI) tools. EEG alpha oscillations in the occipital region were measured to investigate extraversion personality trait correlates of eye contact during a face-to-face interaction task and an eyes-open condition. The results revealed that the extraversion personality trait has a significant positive correlation with EEG alpha coherence in the occipital region, presumably due to its relationship with eye contact during the interaction task. Furthermore, the decrease in EEG alpha power during the interaction task compared to the eyes-open condition was found to be greater in the less extraverted participants; however, no significant difference was observed between the less and more extraverted participants. Overall, these findings encourage further research towards the understanding of the neural mechanisms underlying the extraversion personality trait, particularly in social interaction.
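    A minimal sketch of the occipital alpha-band measures discussed above, using magnitude-squared coherence and a Welch PSD on toy signals; the channel pairing, band edges, and sampling rate are assumptions.
    ```python
    # Alpha-band (8-13 Hz) coherence between two occipital channels and
    # alpha power from a Welch PSD, on toy signals.
    import numpy as np
    from scipy.signal import coherence, welch

    fs = 256                                      # assumed sampling rate (Hz)
    rng = np.random.default_rng(8)
    o1 = rng.standard_normal(fs * 60)             # toy 60 s "O1" signal
    o2 = 0.5 * o1 + rng.standard_normal(fs * 60)  # toy correlated "O2" signal

    f, cxy = coherence(o1, o2, fs=fs, nperseg=fs * 2)
    alpha_coh = cxy[(f >= 8) & (f <= 13)].mean()

    f_p, pxx = welch(o1, fs=fs, nperseg=fs * 2)
    alpha_pow = pxx[(f_p >= 8) & (f_p <= 13)].mean()

    print(f"alpha coherence: {alpha_coh:.2f}, alpha power: {alpha_pow:.3f}")
    ```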
  13. Chai MT, Amin HU, Izhar LI, Saad MNM, Abdul Rahman M, Malik AS, et al.
    Front Neuroinform, 2019;13:66.
    PMID: 31649522 DOI: 10.3389/fninf.2019.00066
    Color is a perceptual stimulus that has a significant impact on improving human emotion and memory. Studies have revealed that colored multimedia learning materials (MLMs) have a positive effect on a learner's emotion and learning, as assessed by subjective/objective measurements. This study aimed to quantitatively assess the influence of colored MLMs on emotion, cognitive processes during learning, and long-term memory (LTM) retention using electroencephalography (EEG). The dataset consisted of 45 healthy participants, and the MLMs were designed with colored or achromatic illustrations to elicit emotion and to assess their impact on LTM retention after 30-min and 1-month delays. The EEG signal analysis first estimated the effective connectivity network (ECN) using the phase slope index and then characterized the ECN pattern using graph theoretical analysis. EEG results showed that colored MLMs influenced the theta and alpha networks, including (1) increased frontal-parietal connectivity (top-down processing), (2) a larger number of brain hubs, (3) a lower clustering coefficient, and (4) a higher local efficiency, indicating that color influences information processing in the brain, as reflected by the ECN, together with a significant improvement in the learner's emotion and memory performance. This is evidenced by a more positive emotional valence and higher recall accuracy for the groups who learned with colored MLMs than for those with achromatic MLMs. In conclusion, this paper demonstrated how the EEG ECN parameters can help quantify the influences of colored MLMs on emotion and cognitive processes during learning.
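    A minimal sketch of the graph-theoretical step described above, starting from a toy connectivity matrix that stands in for the phase-slope-index network: threshold it into a graph, then compute hubs, clustering coefficient, and local efficiency. The montage size and edge threshold are assumptions, and the PSI estimation itself is not reproduced.
    ```python
    # Graph metrics from a thresholded connectivity matrix (stand-in for the
    # phase-slope-index network): hubs, clustering coefficient, local efficiency.
    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(9)
    n = 19                                         # toy montage size
    conn = np.abs(rng.standard_normal((n, n)))
    conn = (conn + conn.T) / 2                     # symmetrize
    np.fill_diagonal(conn, 0)

    adj = (conn > np.quantile(conn[conn > 0], 0.8)).astype(int)  # keep top 20% edges
    G = nx.from_numpy_array(adj)

    deg = np.array([d for _, d in G.degree()])
    hubs = [node for node, d in G.degree() if d >= deg.mean() + deg.std()]
    print("hub nodes:", hubs)
    print("mean clustering coefficient:", nx.average_clustering(G))
    print("local efficiency:", nx.local_efficiency(G))
    ```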