Displaying all 3 publications

  1. Lim JY, Lim KM, Lee CP, Tan YX
    Neural Netw, 2023 Aug;165:19-30.
    PMID: 37263089 DOI: 10.1016/j.neunet.2023.05.037
    Few-shot learning aims to train a model with a limited number of base class samples to classify novel class samples. However, attaining generalization from so few samples is not a trivial task. This paper proposes a novel few-shot learning approach named Self-supervised Contrastive Learning (SCL) that enriches the model representation with multiple self-supervision objectives. Given the base class samples, the model is first trained with the base class loss. Subsequently, contrastive-based self-supervision is introduced to minimize the distance between each training sample and its augmented variants, improving sample discrimination. To recognize distant samples, rotation-based self-supervision is proposed, in which the model learns to recognize the rotation degree of each sample for better sample diversity. A multitask environment is introduced in which each training sample is assigned two class labels: a base class label and a rotation class label. Complex augmentation is put forth to help the model learn a deeper understanding of the objects: the image structure of the training samples is augmented independently of the base class information. The proposed SCL is trained to minimize the base class loss, contrastive distance loss, and rotation class loss simultaneously, so as to learn generic features and improve novel class performance. With these multiple self-supervision objectives, the proposed SCL outperforms state-of-the-art few-shot approaches on few-shot image classification benchmark datasets.
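    As an illustration of the joint objective described in this abstract, here is a minimal PyTorch-style sketch combining the base class loss, the contrastive distance loss, and the rotation class loss. The module names (encoder, base_head, rot_head), the flip/rotation augmentations, and the loss weights w_con and w_rot are illustrative assumptions, not details taken from the paper.

    # Hypothetical sketch of the three-part SCL objective: base class loss
    # + contrastive distance loss + rotation class loss. Module names and
    # loss weights are illustrative assumptions, not from the paper.
    import torch
    import torch.nn.functional as F

    def scl_loss(encoder, base_head, rot_head, x, y_base, w_con=1.0, w_rot=1.0):
        # Base class loss on the original images (x is an NCHW batch).
        z = encoder(x)
        loss_base = F.cross_entropy(base_head(z), y_base)

        # Contrastive self-supervision: pull each sample toward an augmented
        # variant of itself (a horizontal flip stands in for the paper's
        # augmentations here).
        z_aug = encoder(torch.flip(x, dims=[3]))
        loss_con = (1.0 - F.cosine_similarity(z, z_aug, dim=1)).mean()

        # Rotation self-supervision: predict which of 4 rotations (0, 90,
        # 180, 270 degrees) was applied to each sample.
        k = torch.randint(0, 4, (x.size(0),), device=x.device)
        x_rot = torch.stack([torch.rot90(img, int(r), dims=[1, 2])
                             for img, r in zip(x, k)])
        loss_rot = F.cross_entropy(rot_head(encoder(x_rot)), k)

        # All three objectives are minimized simultaneously, as in the abstract.
        return loss_base + w_con * loss_con + w_rot * loss_rot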
  2. Zhang Q, Abdullah AR, Chong CW, Ali MH
    Comput Intell Neurosci, 2022;2022:8235308.
    PMID: 35126503 DOI: 10.1155/2022/8235308
    Gross domestic product (GDP) is an important indicator for determining a country's or region's economic status and development level, and it is closely linked to inflation, unemployment, and economic growth rates. These basic indicators can comprehensively and effectively reflect a country's or region's future economic development. This study focuses on a random radial basis function (RBF) artificial neural network whose centers and smoothing factors are drawn from a uniform distribution. This stochastic learning method is a useful addition to the existing methods for determining the centers and smoothing factors of RBF neural networks, and it can also help the network train more efficiently. GDP forecasting is aided by the genetic algorithm RBF neural network, which allows the government to make timely and effective macro-control plans based on the forecast trend of GDP in the region. Because the historical data may itself contain both linear and nonlinear relationships, this study models it with the genetic algorithm RBF neural network, uses the model to make judgments on the relationships contained in the GDP sequence, and compares and analyzes the model's prediction effect and generalization ability to verify the applicability of the approach.
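    As an illustration of the network family described in this abstract, here is a minimal numpy sketch of an RBF network whose centers and smoothing factors are drawn from a uniform distribution, with the linear output weights fit by least squares. The paper's genetic algorithm step (evolving the centers and smoothing factors) is omitted; all ranges, sizes, and the toy series are illustrative assumptions.

    # Hypothetical sketch of a random RBF network: centers c_j and smoothing
    # factors sigma_j are drawn uniformly, and only the output weights are
    # fit. The genetic algorithm refinement from the paper is not shown.
    import numpy as np

    def fit_random_rbf(X, y, n_hidden=20, seed=None):
        rng = np.random.default_rng(seed)
        lo, hi = X.min(axis=0), X.max(axis=0)
        centers = rng.uniform(lo, hi, size=(n_hidden, X.shape[1]))
        sigmas = rng.uniform(0.1, 1.0, size=n_hidden)  # assumed range

        def hidden(Xn):
            # Gaussian basis: exp(-||x - c_j||^2 / (2 * sigma_j^2))
            d2 = ((Xn[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
            return np.exp(-d2 / (2.0 * sigmas ** 2))

        # The output layer is linear, so its weights solve a least-squares
        # problem instead of requiring gradient descent.
        w, *_ = np.linalg.lstsq(hidden(X), y, rcond=None)
        return lambda Xn: hidden(Xn) @ w

    # Toy usage: one-step-ahead forecast of a trending series from two lags,
    # standing in for the annual GDP sequence used in the study.
    series = np.cumsum(np.ones(40)) + np.sin(np.arange(40))
    X = np.stack([series[:-2], series[1:-1]], axis=1)
    predict = fit_random_rbf(X, series[2:], seed=0)
    next_value = predict(np.array([[series[-2], series[-1]]]))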
  3. Katsos N, Cummins C, Ezeizabarrena MJ, Gavarró A, Kuvač Kraljević J, Hrzica G, et al.
    Proc Natl Acad Sci U S A, 2016 Aug 16;113(33):9244-9.
    PMID: 27482119 DOI: 10.1073/pnas.1601341113
    Learners of most languages are faced with the task of acquiring words to talk about number and quantity. Much is known about the order of acquisition of number words as well as the cognitive and perceptual systems and cultural practices that shape it. Substantially less is known about the acquisition of quantifiers. Here, we consider the extent to which systems and practices that support number word acquisition can be applied to quantifier acquisition and conclude that the two domains are largely distinct in this respect. Consequently, we hypothesize that the acquisition of quantifiers is constrained by a set of factors related to each quantifier's specific meaning. We investigate competence with the expressions for "all," "none," "some," "some…not," and "most" in 31 languages, representing 11 language types, by testing 768 5-y-old children and 536 adults. We found a cross-linguistically similar order of acquisition of quantifiers, explicable in terms of four factors relating to their meaning and use. In addition, exploratory analyses reveal that language- and learner-specific factors, such as negative concord and gender, are significant predictors of variation.