Top oil temperature (TOT) is an important parameter for evaluating the running state of a transformer. From the variation trend of TOT, the internal thermal state of a transformer can be predicted, so that operation and maintenance can be arranged reasonably and accidents prevented. However, owing to the complex working environment in the field, online monitoring data often contain a large number of missing values, which seriously affects TOT prediction. At the same time, TOT is influenced by various factors such as load, ambient temperature, wind speed, and solar radiation, so that information on different time scales is mixed in the monitoring data; it is therefore difficult to achieve the desired accuracy with a single model. In this article, a model for predicting TOT based on data quality enhancement is proposed. First, a Markov model is used to impute the missing values in the online monitoring data, yielding a complete and continuous time series. Then, ensemble empirical mode decomposition (EEMD) is used to decompose the TOT time series into multiple components, eliminating the interaction between information on different time scales and thus reducing the prediction difficulty. Finally, an extreme learning machine (ELM) sub-prediction model is constructed for each component, and the predictions of all sub-models are recombined to obtain the final TOT prediction. To verify the effectiveness of the model, the TOT of an operating transformer is predicted for the next two days; the mean absolute percentage error (MAPE) is 5.27% and the root mean square error (RMSE) is 2.46. Compared with the back-propagation (BP) neural network model and the support vector machine (SVM) model, the MAPE is reduced by 69.56% and 61.92%, and the RMSE by 67.02% and 59.87%, respectively. The results of this study provide important support for the fault diagnosis of the top oil temperature online monitoring system.
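As a minimal illustration of the decompose-predict-reconstruct pipeline described above (not the authors' implementation), the sketch below assumes the third-party PyEMD package (pip install EMD-signal) for EEMD and implements a basic extreme learning machine in NumPy. The window length, hidden-layer size, and the synthetic TOT-like series are illustrative assumptions; the Markov imputation step is taken as already done.

```python
# Hedged sketch of an EEMD + ELM pipeline, assuming the PyEMD package.
import numpy as np
from PyEMD import EEMD

def make_windows(series, lag):
    """Build (X, y) pairs: predict the next value from the previous `lag` values."""
    X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
    y = series[lag:]
    return X, y

class ELM:
    """Basic extreme learning machine: random hidden layer, closed-form output weights."""
    def __init__(self, n_hidden=32, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))  # random input weights
        self.b = self.rng.normal(size=self.n_hidden)                # random biases
        H = np.tanh(X @ self.W + self.b)                            # hidden-layer features
        self.beta = np.linalg.pinv(H) @ y                           # least-squares output weights
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

# Toy TOT-like series; real inputs would be the imputed monitoring data.
t = np.arange(500)
tot = 40 + 5 * np.sin(2 * np.pi * t / 96) + 0.5 * np.random.default_rng(1).normal(size=t.size)

# Step 1: decompose the series into intrinsic mode functions with EEMD.
components = EEMD().eemd(tot)

# Steps 2-3: train one ELM sub-model per component, then sum the component forecasts.
lag = 24  # illustrative window length
forecast = np.zeros(len(tot) - lag)
for comp in components:
    X, y = make_windows(comp, lag)
    forecast += ELM(n_hidden=32).fit(X, y).predict(X)

print("in-sample MAPE: %.2f%%" % (100 * np.mean(np.abs((forecast - tot[lag:]) / tot[lag:]))))
```

In this scheme each sub-model only has to learn one relatively simple time-scale component, which is the stated rationale for decomposing before predicting; a real evaluation would of course forecast held-out future values rather than report the in-sample error shown here.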
The efficient handling of wastewater pollutants is essential, since they continuously contaminate limited freshwater resources and seriously affect terrestrial, aquatic, and aerial flora and fauna. Our aim is to undertake an exhaustive examination of current research trends, focusing on how nanomaterials (NMs) can considerably improve the performance of classical wastewater treatment technologies such as adsorption, catalysis, separation, and disinfection. NM-based sensor technologies are also considered, since they have been used extensively for monitoring water contaminants. We further suggest future directions to inform investigators of potentially disruptive NM technologies that warrant more detailed investigation. The fate and environmental transformations of NMs, which need to be addressed before large-scale implementation of NMs for water purification, are also highlighted.
Thermal discomfort is one of the main triggers for occupants' interactions with components of the built environment, such as adjusting thermostats or opening windows, and is strongly related to energy use in buildings. Understanding the causes of thermal (dis-)comfort is therefore crucial for the design and operation of any type of building. The assessment of human thermal perception through rating scales, for example in post-occupancy studies, has been applied for several decades; however, long-standing assumptions underlying these rating scales have been questioned by several researchers. The aim of this study was to gain deeper knowledge of contextual influences on how survey participants interpret thermal perception scales and their verbal anchors. A questionnaire was designed and subsequently administered in 21 language versions. These surveys were conducted in 57 cities in 30 countries, resulting in a dataset containing responses from 8225 participants. The database offers potential for further analysis in the areas of building design and operation, psycho-physical relationships between human perception and the built environment, and linguistic analyses.
In 2008, we published the first set of guidelines for standardizing research in autophagy. Since then, this topic has received increasing attention, and many scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Thus, it is important to formulate updated guidelines for monitoring autophagy in different organisms on a regular basis. Despite numerous reviews, there continues to be confusion regarding acceptable methods to evaluate autophagy, especially in multicellular eukaryotes. Here, we present a set of guidelines for investigators to select and interpret methods to examine autophagy and related processes, and for reviewers to provide realistic and reasonable critiques of reports that focus on these processes. These guidelines are not meant to be a dogmatic set of rules, because the appropriateness of any assay depends largely on the question being asked and the system being used. Moreover, no individual assay is perfect for every situation, calling for the use of multiple techniques to properly monitor autophagy in each experimental setting. Furthermore, several core components of the autophagy machinery have been implicated in distinct autophagic processes (canonical and noncanonical autophagy), implying that genetic approaches to blocking autophagy should target two or more autophagy-related genes that ideally participate in distinct steps of the pathway. Along similar lines, because multiple proteins involved in autophagy also regulate other cellular pathways, including apoptosis, not all of them can be used as specific markers of bona fide autophagic responses. Here, we critically discuss current methods of assessing autophagy and the information they can, or cannot, provide. Our ultimate goal is to encourage intellectual and technical innovation in the field.