IEEE Trans Biomed Eng, 2011 May;58(5):1383-93.
PMID: 21177154 DOI: 10.1109/TBME.2010.2101073

Abstract

A signal subspace approach for extracting visual evoked potentials (VEPs) from the background electroencephalogram (EEG) colored noise without the need for a prewhitening stage is proposed. Linear estimation of the clean signal is performed by minimizing signal distortion while keeping the residual noise energy below a given threshold. The generalized eigendecomposition of the covariance matrices of the VEP signal and the background EEG noise is used to diagonalize them jointly. The generalized subspace is then decomposed into a signal subspace and a noise subspace, and enhancement is performed by nulling the components in the noise subspace while retaining those in the signal subspace. The performance of the proposed algorithm is tested on simulated and real data and compared with recently proposed signal subspace techniques. With the simulated data, the algorithms are used to estimate the latencies of the P100, P200, and P300 components of VEP signals corrupted by additive colored noise at different SNR values. With the real data, VEP signals collected at Selayang Hospital, Kuala Lumpur, Malaysia, are used to assess the proposed algorithm's ability to detect the P100 latency relative to the other subspace techniques, with ensemble averaging serving as the baseline for this comparison. The results indicate that the proposed technique achieves significantly better accuracy and a lower failure rate.
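The generalized-eigendecomposition filter described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the VEP and noise covariance matrices (`R_signal`, `R_noise`) have already been estimated, solves the generalized symmetric eigenproblem R_s v = λ R_n v with `scipy.linalg.eigh`, and builds the least-distortion estimator that keeps a chosen number of signal-subspace components and nulls the rest. The function name and the `rank` parameter are illustrative choices.

```python
import numpy as np
from scipy.linalg import eigh


def gevd_subspace_denoise(noisy, R_signal, R_noise, rank):
    """Subspace enhancement via generalized eigendecomposition (sketch).

    Solves R_signal v = lam R_noise v; the eigenvector matrix V jointly
    diagonalizes both covariances (V.T @ R_noise @ V = I).  Components
    with the largest generalized eigenvalues form the signal subspace;
    the remaining (noise-subspace) components are nulled.
    """
    # eigh with two matrices solves the generalized symmetric-definite
    # problem; eigenvalues are returned in ascending order.
    lam, V = eigh(R_signal, R_noise)
    order = np.argsort(lam)[::-1]          # strongest components first
    V = V[:, order]

    # Binary gain: keep `rank` signal-subspace components, zero the rest.
    g = np.zeros(len(lam))
    g[:rank] = 1.0

    # Least-distortion linear estimator: H = V^{-T} diag(g) V^{T}.
    H = np.linalg.inv(V.T) @ np.diag(g) @ V.T
    return H @ noisy
```

With `rank` equal to the full dimension the gain matrix is the identity and the estimator passes the input through unchanged; shrinking `rank` discards the directions dominated by the colored EEG noise without a separate prewhitening step, which is the point of working in the generalized (rather than ordinary) eigenbasis.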

* Title and MeSH Headings from MEDLINE®/PubMed®, a database of the U.S. National Library of Medicine.