Displaying all 3 publications

  1. Ye G, Jiao K, Huang X, Goi BM, Yap WS
    Sci Rep, 2020 Dec 03;10(1):21044.
    PMID: 33273539 DOI: 10.1038/s41598-020-78127-2
    Most existing image encryption schemes operate in the spatial domain, which easily destroys the correlation between pixels. This paper proposes an image encryption scheme that employs the discrete cosine transform (DCT), a quantum logistic map and a substitution-permutation network (SPN). The DCT transforms the images into the frequency domain, while the SPN provides the security properties of confusion and diffusion. The SPN encrypts faster than asymmetric-key image encryption because it uses operations with low computational complexity (e.g., exclusive-or and permutation). Various statistical experiments and security analyses performed on six grayscale and color images justify the effectiveness and security of the proposed scheme.
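    The low-complexity operations the abstract mentions can be illustrated with a toy SPN round; this is only a generic sketch of the exclusive-or and permutation steps, not the paper's actual cipher (the key, block size and permutation below are made up for illustration):

    ```python
    def spn_round(block: bytes, round_key: bytes, perm: list[int]) -> bytes:
        """One toy SPN round: XOR with the round key (key mixing),
        then reorder bytes with a fixed permutation (diffusion)."""
        mixed = bytes(b ^ k for b, k in zip(block, round_key))
        return bytes(mixed[p] for p in perm)

    def spn_round_inverse(block: bytes, round_key: bytes, perm: list[int]) -> bytes:
        """Undo one round: invert the permutation, then XOR the key again."""
        inv = [0] * len(perm)
        for i, p in enumerate(perm):
            inv[p] = i
        unpermuted = bytes(block[p] for p in inv)
        return bytes(b ^ k for b, k in zip(unpermuted, round_key))

    # Hypothetical 8-byte block, key and permutation, purely for demonstration.
    block = bytes(range(8))
    key = bytes([0x5A] * 8)
    perm = [3, 0, 7, 4, 1, 6, 2, 5]
    ct = spn_round(block, key, perm)
    assert spn_round_inverse(ct, key, perm) == block
    ```

    A real SPN iterates many such rounds with per-round keys and a nonlinear substitution step; both operations here cost only a few machine instructions per byte, which is the source of the speed advantage over asymmetric encryption.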
  2. Poon HK, Yap WS, Tee YK, Lee WK, Goi BM
    Neural Netw, 2019 Nov;119:299-312.
    PMID: 31499354 DOI: 10.1016/j.neunet.2019.08.017
    Document classification aims to assign one or more classes to a document, for ease of management, by understanding its content. The hierarchical attention network (HAN) has been shown to be effective for classifying ambiguous documents. HAN parses information-dense documents into slices (i.e., words and sentences) so that each slice can be learned separately and in parallel before the classes are assigned. However, the hierarchical attention approach introduces redundant training parameters, which makes it prone to overfitting. To mitigate this overfitting, we propose a variant of the hierarchical attention network that applies adversarial and virtual adversarial perturbations to 1) the word representation, 2) the sentence representation and 3) both word and sentence representations. The proposed variant is tested on eight publicly available datasets. The results show that it outperforms the hierarchical attention network both with and without random perturbation. More importantly, it achieves state-of-the-art performance on multiple benchmark datasets. Visualizations and analysis show that perturbation can effectively alleviate overfitting and improve the performance of the hierarchical attention network.
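    The adversarial perturbation of a representation can be sketched in a few lines; this is a generic L2-normalized perturbation in the spirit of such methods, not the paper's implementation, and the gradient vector and epsilon below are invented for illustration:

    ```python
    import numpy as np

    def adversarial_perturbation(grad: np.ndarray, epsilon: float = 0.02) -> np.ndarray:
        """Perturbation along the loss gradient, scaled to L2 norm epsilon:
        r_adv = epsilon * g / ||g||. Adding r_adv to a word or sentence
        representation during training acts as a regularizer."""
        norm = np.linalg.norm(grad)
        if norm == 0.0:
            return np.zeros_like(grad)
        return epsilon * grad / norm

    # Hypothetical gradient of the loss w.r.t. a 2-d embedding.
    g = np.array([3.0, 4.0])
    r = adversarial_perturbation(g, epsilon=0.1)
    # r points in the direction of g and has norm exactly epsilon.
    ```

    "Virtual" adversarial perturbation works the same way but uses the gradient of a divergence between model outputs rather than the labeled loss, so it also applies to unlabeled data.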