METHODS: A literature search was conducted using three online databases: Web of Science, Scopus, and ScienceDirect. A keyword search strategy was developed to include only relevant articles, and a Population, Intervention, Comparison, Outcomes (PICO) framework was used to develop the inclusion and exclusion criteria. Image quality was analyzed quantitatively using peak signal-to-noise ratio (PSNR), mean squared error (MSE), absolute mean brightness error (AMBE), entropy, and contrast improvement index (CII) values.
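For reference, the error-based metrics named above can be computed directly from pixel arrays. The sketch below is a minimal illustration (NumPy, 8-bit grayscale assumed), not the implementation used by any of the reviewed studies.

```python
import numpy as np

def mse(ref, test):
    """Mean squared error between two same-sized grayscale images."""
    ref = ref.astype(np.float64)
    test = test.astype(np.float64)
    return float(np.mean((ref - test) ** 2))

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means less distortion.

    max_val is the maximum possible pixel value (255 for 8-bit images).
    """
    err = mse(ref, test)
    if err == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / err)

def ambe(ref, test):
    """Absolute mean brightness error: |mean(ref) - mean(test)|."""
    return abs(ref.astype(np.float64).mean() - test.astype(np.float64).mean())
```

An enhanced image would be passed as `test` against the original `ref`; note that for enhancement (as opposed to denoising) a higher PSNR does not by itself imply better diagnostic quality, which is why the review also considers entropy and CII.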
RESULTS: Nine studies covering four types of image enhancement techniques were included: two used histogram-based, three used frequency-based, one used fuzzy-based, and three used filter-based techniques. All nine studies reported PSNR values, whereas only four reported MSE, AMBE, entropy, and CII values. Filter-based techniques achieved the highest PSNR value (78.93) among the four types. For MSE, AMBE, entropy, and CII, the highest values were 7.79 (frequency-based), 93.76 (fuzzy-based), 7.92 (filter-based), and 6.54 (frequency-based), respectively.
CONCLUSION: In summary, image quality varies across image enhancement techniques for breast cancer detection. In this review, the frequency-based Fast Discrete Curvelet Transform (FDCT) via the Unequally Spaced Fast Fourier Transform (USFFT) proved superior to the other image enhancement techniques.
METHODS: An improved Dempster-Shafer evidence theory (DST) based on Wasserstein distance and Deng entropy was proposed to reduce conflicts among classifier outputs by combining the credibility degree between pieces of evidence with the uncertainty degree of each piece of evidence. To validate the effectiveness of the proposed method, numerical examples were analyzed and the method was applied to baby cry recognition. The whale optimization algorithm with variational mode decomposition (WOA-VMD) was used to optimally decompose the baby cry signals, deep features of the decomposed components were extracted with the VGG16 model, and Long Short-Term Memory (LSTM) models were used to classify the baby cry signals. The improved DST decision method was then used to perform decision fusion.
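The fusion step builds on Dempster's rule of combination and, in the proposed method, on Deng entropy as an uncertainty measure. The sketch below shows only the classic (unweighted) rule and Deng entropy; the cry-type labels are hypothetical, and the paper's Wasserstein-distance credibility weighting, which pre-adjusts the evidence before combination, is not reproduced here.

```python
import math
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset -> mass)
    with the classic Dempster rule. K is the conflict mass; dividing
    by 1 - K is the normalization that improved DST variants aim to
    make more robust when K is large."""
    combined = {}
    conflict = 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mb * mc
        else:
            conflict += mb * mc  # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict; masses cannot be combined")
    norm = 1.0 - conflict
    return {a: v / norm for a, v in combined.items()}

def deng_entropy(m):
    """Deng entropy: -sum m(A) * log2(m(A) / (2^|A| - 1)).

    Reduces to Shannon entropy when all focal elements are singletons.
    """
    return -sum(v * math.log2(v / (2 ** len(a) - 1))
                for a, v in m.items() if v > 0)
```

For example, combining two LSTM outputs expressed as masses over hypothetical labels `{"hungry"}` and `{"pain"}` reinforces the label both classifiers favor while discarding the conflicting cross terms.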
RESULTS: The proposed fusion method achieved an accuracy of 90.15% in classifying three types of baby cries, an improvement of 2.90% to 4.98% over existing DST fusion methods and of 5.79% to 11.53% over the latest methods used in baby cry recognition.
CONCLUSION: The proposed method optimally decomposes baby cry signals, effectively reduces conflict among the outputs of the deep learning models, and improves the accuracy of baby cry recognition.