Displaying all 3 publications

  1. Ramachandran T, Faruque MRI, Singh MSJ, Khandaker MU, Salman M, Youssef AAF
    Materials (Basel), 2023 Jan 23;16(3).
    PMID: 36770037 DOI: 10.3390/ma16031030
    This work focused on a novel and compact 1-bit symmetrical coding-based metamaterial for radar cross section (RCS) reduction at terahertz frequencies. A pair of coding particles was constructed to represent the elements '0' and '1', which have a phase difference of 180°. All analytical simulations were performed in Computer Simulation Technology (CST) Microwave Studio 2019; the transmission coefficient of element '1' was also examined with the same software and validated with a high-frequency structure simulator. The frequency range considered was 0 to 3 THz. The phase response of each element was examined before constructing various coding metamaterial designs in smaller and larger lattices. The proposed unit cells exhibit phase responses at 0.84 THz and 1.54 THz, respectively. Various coding sequences were then analysed, and they show interesting monostatic and bistatic RCS reduction performance. Coding Sequence 2 gives the best bistatic RCS reduction in smaller lattices, from -69.8 dBm² to -65.5 dBm² at 1.54 THz. The monostatic RCS curves for all lattices climb from below -60 dBm² up to a frequency of 1.0 THz; from 1.0 THz to 3.0 THz, the RCS values show only moderate deviations about a horizontal line for each lattice. Furthermore, two parametric studies were performed to examine the RCS reduction behaviour: multi-layer structures and tilted positioning of the proposed coding metamaterial. Overall, the results indicate that the integration of coding-based metamaterial successfully reduced the RCS values.
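    The boresight cancellation behind 1-bit coding RCS reduction can be sketched numerically: elements '0' and '1' reflect with a 180° phase difference, so the normalized scattered power is the coherent sum over the coding lattice. This is a minimal illustration of the general principle, not the paper's CST simulation; the function name and the ideal-reflection assumptions are mine.

    ```python
    import numpy as np

    def rcs_reduction_db(coding_matrix, phase_diff_deg=180.0):
        """Normalized boresight RCS of a 1-bit coding surface, in dB,
        relative to a uniform (all-'0') plate of the same size.

        Assumes ideal unit-amplitude reflection per element; '0' and '1'
        cells differ only by `phase_diff_deg` in reflection phase."""
        phases = np.asarray(coding_matrix, dtype=float) * np.deg2rad(phase_diff_deg)
        field = np.exp(1j * phases).sum()          # coherent sum at boresight
        n = phases.size
        ratio = np.abs(field) ** 2 / n ** 2        # power, normalized to uniform plate
        return 10.0 * np.log10(max(ratio, 1e-12))  # floor avoids log(0)

    uniform = np.zeros((4, 4))                     # all-'0' plate: 0 dB, no reduction
    checker = np.indices((4, 4)).sum(axis=0) % 2   # '0'/'1' checkerboard: deep cancellation
    ```

    With the checkerboard sequence the '0' and '1' contributions cancel exactly at boresight, which is why alternating coding sequences give the strongest specular RCS suppression.
    
    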
  2. Pathan RK, Biswas M, Yasmin S, Khandaker MU, Salman M, Youssef AAF
    Sci Rep, 2023 Oct 09;13(1):16975.
    PMID: 37813932 DOI: 10.1038/s41598-023-43852-x
    Sign language recognition is a breakthrough for communication in the deaf-mute community and has been a critical research topic for years. Although some previous studies have successfully recognized sign language, they require many costly instruments, including sensors, devices, and high-end processing power. Such drawbacks can be overcome by employing artificial-intelligence-based techniques. Since, in this era of advanced mobile technology, using a camera to take video or images is much easier, this study demonstrates a cost-effective technique to detect American Sign Language (ASL) using an image dataset. The "Finger Spelling, A" dataset has been used, with 24 letters (excluding j and z, which involve motion). The main reason for using this dataset is that its images have complex backgrounds with different environments and scene colors. Two layers of image processing have been used: in the first layer, images are processed as a whole for training; in the second layer, the hand landmarks are extracted. A multi-headed convolutional neural network (CNN) model has been proposed to train these two layers and was tested on 30% of the dataset. To avoid overfitting, data augmentation and dynamic learning rate reduction have been used. The proposed model achieved 98.981% test accuracy. It is expected that this study may help to develop an efficient human-machine communication system for the deaf-mute community.
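    The "dynamic learning rate reduction" used to curb overfitting can be sketched as a reduce-on-plateau schedule: when the monitored validation loss stops improving for a few epochs, the learning rate is scaled down. This is a minimal stand-alone sketch of the general tactic; the class name and the `factor`/`patience` defaults follow the common Keras-style convention, not settings reported in the abstract.

    ```python
    class ReduceLROnPlateau:
        """Halve the learning rate whenever the monitored validation
        loss fails to improve for `patience` consecutive epochs."""

        def __init__(self, lr=1e-3, factor=0.5, patience=3, min_lr=1e-6):
            self.lr, self.factor, self.patience, self.min_lr = lr, factor, patience, min_lr
            self.best = float("inf")   # best validation loss seen so far
            self.wait = 0              # epochs since last improvement

        def step(self, val_loss):
            if val_loss < self.best:
                self.best, self.wait = val_loss, 0
            else:
                self.wait += 1
                if self.wait >= self.patience:
                    self.lr = max(self.lr * self.factor, self.min_lr)
                    self.wait = 0
            return self.lr
    ```

    Called once per epoch with the validation loss, the schedule leaves the rate untouched while training improves and shrinks it only after a plateau, which lets the model settle into a sharper minimum without hand-tuned decay steps.
    
    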
  3. Jibon FA, Jamil Chowdhury AR, Miraz MH, Jin HH, Khandaker MU, Sultana S, et al.
    Digit Health, 2024;10:20552076241249874.
    PMID: 38726217 DOI: 10.1177/20552076241249874
    Automated epileptic seizure detection from electroencephalogram (EEG) signals has attracted significant attention in the recent health informatics field. Epilepsy, a serious brain condition characterized by recurrent seizures, is typically described as a sudden change in behavior caused by a momentary shift in the excessive electrical discharges of a group of brain cells, and the EEG signal is primarily used in most cases to identify seizures. The development of various deep learning (DL) algorithms for epileptic seizure diagnosis has been driven by the EEG's non-invasiveness and its capacity to provide repetitive patterns of seizure-related electrophysiological information. Existing DL models struggle, especially in clinical contexts, because the irregular, unordered structure of physiological recordings makes it difficult to treat them as a matrix; together with the EEG's low amplitude and non-stationary nature, this has been a key obstacle to producing consistent and accurate diagnostic outcomes. Graph neural networks have achieved significant improvement by exploiting the implicit information present in the brain's anatomical system, where interacting nodes are connected by edges whose weights can be determined by either temporal associations or anatomical connections. Considering all these aspects, a novel hybrid framework is proposed for epileptic seizure detection that combines a sequential graph convolutional network (SGCN) with a deep recurrent neural network (DeepRNN). Here, DeepRNN is developed by fusing a gated recurrent unit (GRU) with a traditional RNN; its key benefit is that it mitigates the vanishing gradient problem and gives the hybrid framework greater sophistication. The line length feature, auto-covariance, auto-correlation, and periodogram are extracted as features from the raw EEG signal, and the resulting matrix is grouped into the time-frequency domain as input for the SGCN to use for seizure classification.
This model extracts both spatial and temporal information, resulting in improved accuracy, precision, and recall for seizure detection. Extensive experiments conducted on the CHB-MIT and TUH datasets showed that the SGCN-DeepRNN model outperforms other deep learning models for seizure detection, achieving an accuracy of 99.007%, with high sensitivity and specificity.
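    Of the features listed above, line length is the simplest to make concrete: it is the sum of absolute differences between consecutive samples, a cheap proxy for amplitude variation and waveform complexity in an EEG window. A minimal sketch, with hypothetical function names (the paper's exact windowing is not given in the abstract):

    ```python
    import numpy as np

    def line_length(x):
        """Line length feature: sum of |x[i+1] - x[i]| over a window."""
        x = np.asarray(x, dtype=float)
        return np.abs(np.diff(x)).sum()

    def windowed_features(signal, win):
        """Split one EEG channel into non-overlapping windows of `win`
        samples and compute line length per window. The other features
        named in the abstract (auto-covariance, auto-correlation,
        periodogram) would be stacked alongside in the same way."""
        n = len(signal) // win
        return np.array([line_length(signal[i * win:(i + 1) * win]) for i in range(n)])
    ```

    Stacking such per-window features across channels yields the feature matrix that the abstract describes feeding, after time-frequency grouping, into the SGCN.
    
    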