Affiliations 

  • 1 School of Mathematical Sciences, Universiti Sains Malaysia, 11800, Penang, Malaysia
Heliyon, 2024 Feb 29;10(4):e26157.
PMID: 38404905 DOI: 10.1016/j.heliyon.2024.e26157

Abstract

Dimensionality reduction plays a pivotal role in preparing high-dimensional data for classification and discrimination tasks by eliminating redundant features and improving classifier efficiency. The effectiveness of a dimensionality reduction algorithm hinges on its numerical stability: numerically stable data projections yield better class separability in the lower-dimensional embedding and, consequently, higher classification accuracy. This paper investigates the numerical properties of dimensionality reduction and discriminant subspace learning, with a specific focus on Locality-Preserving Partial Least Squares Discriminant Analysis (LPPLS-DA). High-dimensional data frequently render the scatter matrices singular, which poses a significant challenge. To tackle this issue, the paper explores two robust implementations of LPPLS-DA. These approaches not only optimize the data projections but also capture more discriminative features, resulting in a marked improvement in classification accuracy. Numerical experiments on synthetic and spectral datasets support these findings: the proposed methods outperform several state-of-the-art dimensionality reduction techniques in both classification accuracy and dimensionality reduction.
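The abstract does not spell out the two robust implementations. For context, a common generic remedy for singular scatter matrices in locality-preserving discriminant methods is ridge (Tikhonov) regularization of the within-class scatter before solving the generalized eigenproblem. The sketch below illustrates only that generic idea; the function name, the heat-kernel adjacency weights, and the regularization scheme are illustrative assumptions, not the authors' LPPLS-DA algorithm.

```python
import numpy as np
from scipy.linalg import eigh

def lp_discriminant_projection(X, y, n_components=2, reg=1e-3):
    """Illustrative sketch (hypothetical, not the paper's method):
    locality-preserving discriminant projection with a ridge term
    added to the within-class scatter to handle singularity."""
    n, d = X.shape
    mean_all = X.mean(axis=0)
    S_b = np.zeros((d, d))  # between-class scatter
    S_w = np.zeros((d, d))  # locality-preserving within-class scatter

    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        diff = (mc - mean_all)[:, None]
        S_b += len(Xc) * diff @ diff.T
        # Heat-kernel weights between same-class samples (assumed choice).
        D2 = np.sum((Xc[:, None, :] - Xc[None, :, :]) ** 2, axis=-1)
        sigma = np.median(D2) + 1e-12
        W = np.exp(-D2 / sigma)
        L = np.diag(W.sum(axis=1)) - W  # graph Laplacian (PSD)
        S_w += Xc.T @ L @ Xc

    # Ridge regularization: when d >> n makes S_w singular, the
    # added diagonal keeps it positive definite so the generalized
    # eigenproblem S_b v = lambda S_w v remains well posed.
    S_w_reg = S_w + reg * np.eye(d)

    vals, vecs = eigh(S_b, S_w_reg)          # ascending eigenvalues
    order = np.argsort(vals)[::-1]           # keep the largest ones
    return vecs[:, order[:n_components]]
```

For d >> n data, as is typical of spectral datasets, the `reg` term is what allows `scipy.linalg.eigh` to factor the within-class scatter at all; without it the generalized eigensolver would fail on a singular matrix.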
