Selecting a suitable and appropriate method is an important aspect of ensuring the successful
implementation of a research study. The proposed study aims to obtain weights for sustainable
construction criteria from the input and perceptions of industry practitioners, and also to explore
their opinions on the criteria. The chosen implementation method therefore determines the direction
of the study and whether the intended objectives can be achieved. This manuscript describes the
structured interview used to collect the required data. The suitability and implementation of the
method are described, with the ultimate aim of ensuring that the collected data are meaningful
to the study.
The first section of this paper is devoted to an analysis of some theoretical aspects of the Chinese system of reckoning ages, and the second section offers a method of collecting the age statistics of a Chinese population: a discussion of the errors found in the age returns, and of the unsuccessful measures taken to eradicate these errors in the Malayan censuses conducted prior to 1957, leads to an appraisal of the method of collecting Chinese age data in the 1957 census.
The side sensitive group runs (SSGR) chart outperforms both the Shewhart and synthetic charts in detecting small and moderate process mean shifts. In practice, the process parameters are seldom known, so they must be estimated from in-control Phase-I samples. Research has shown that a large number of in-control Phase-I samples is needed for the SSGR chart with estimated process parameters to behave similarly to the chart with known process parameters. The common metric for evaluating the performance of a control chart is the average run length (ARL), whose computation assumes that the shift size is known. In reality, however, practitioners may not know the shift size in advance. In light of this, the expected average run length (EARL) is considered to measure the performance of the SSGR chart. Moreover, the standard deviation of the ARL (SDARL), which quantifies the between-practitioner variability in the SSGR chart with estimated process parameters, is studied. This paper proposes an optimal design of the SSGR chart with estimated process parameters based on the EARL criterion. The application of the optimal SSGR chart with estimated process parameters is demonstrated with actual data taken from a manufacturing company.
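As an illustration of the run-length metric itself (not of the SSGR chart's design), a Monte Carlo sketch of the ARL for a basic Shewhart individuals chart with known parameters might look like the following; the 3-sigma limit, the shift sizes, and the run counts are illustrative choices, not values from the paper:

```python
import random
import statistics

def simulate_arl(shift=0.0, limit=3.0, n_runs=500, seed=42):
    """Monte Carlo estimate of the average run length (ARL): count
    samples until a standard-normal observation, shifted in mean by
    `shift`, falls outside the +/- `limit` sigma control limits."""
    rng = random.Random(seed)
    run_lengths = []
    for _ in range(n_runs):
        t = 0
        while True:
            t += 1
            x = rng.gauss(shift, 1.0)
            if abs(x) > limit:
                break
        run_lengths.append(t)
    return statistics.mean(run_lengths)

in_control = simulate_arl(shift=0.0)  # near the theoretical 1/0.0027
shifted = simulate_arl(shift=1.5)     # much shorter once the mean shifts
```

The EARL discussed in the abstract extends this idea by averaging the ARL over a range of possible shift sizes instead of fixing one.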
The ever-growing number of medical digital images and the need to share them among specialists and hospitals for better and more accurate diagnosis require that patients' privacy be protected. As a result, there is a need for medical image watermarking (MIW). However, MIW needs to be performed with special care for two reasons. Firstly, the watermarking procedure cannot compromise the quality of the image. Secondly, confidential patient information embedded within the image should be flawlessly retrievable, without risk of error, after image decompression. Despite extensive research undertaken in this area, there is still no method available to fulfil all the requirements of MIW. This paper aims to provide a useful survey on watermarking and offer a clear perspective for interested researchers by analyzing the strengths and weaknesses of different existing methods.
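As a generic illustration of the retrievability requirement (this is not a method from the survey, and real MIW schemes must additionally preserve diagnostic quality in regions of interest and survive compression), a minimal least-significant-bit embed/extract sketch on a flat list of 8-bit pixel values:

```python
def embed_lsb(pixels, message_bits):
    """Embed a bit string into the least significant bits of a flat
    list of 8-bit pixel values (toy illustration only)."""
    if len(message_bits) > len(pixels):
        raise ValueError("message longer than cover image")
    out = list(pixels)
    for i, bit in enumerate(message_bits):
        out[i] = (out[i] & ~1) | int(bit)  # overwrite the lowest bit
    return out

def extract_lsb(pixels, n_bits):
    """Read the first n_bits least significant bits back out."""
    return "".join(str(p & 1) for p in pixels[:n_bits])

cover = [120, 121, 200, 55, 90, 14, 250, 33]   # hypothetical pixel row
stego = embed_lsb(cover, "1011")
recovered = extract_lsb(stego, 4)
```

Each pixel changes by at most 1 grey level, which is why LSB schemes are often cited as a baseline for imperceptibility, although they are fragile under lossy compression.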
Some disciplines in the social sciences rely heavily on collecting survey responses to detect empirical relationships among variables. We explored whether these relationships were a priori predictable from the semantic properties of the survey items, using language processing algorithms that are now available as new research methods. Language processing algorithms were used to calculate the semantic similarity among all items in state-of-the-art surveys from Organisational Behaviour research. These surveys covered areas such as transformational leadership, work motivation and work outcomes. This information was used to explain and predict the response patterns of real subjects. Semantic algorithms explained 60-86% of the variance in the response patterns and allowed remarkably precise prediction of survey responses from humans, except in a personality test. Even the relationships between independent variables and their purported dependent variables were accurately predicted. This raises concern about the empirical nature of data collected through some surveys if results are already given a priori through the way subjects are being asked. Survey response patterns seem heavily determined by semantics, and language algorithms may reveal such relationships before a survey is administered. This study suggests that semantic algorithms are becoming new tools for the social sciences, opening perspectives on survey responses that prevalent psychometric theory cannot explain.
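A toy illustration of the underlying idea, using simple bag-of-words cosine similarity as a crude stand-in for the study's semantic algorithms (the item texts below are invented, not taken from the actual surveys):

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Bag-of-words cosine similarity between two survey items."""
    a = Counter(text_a.lower().split())
    b = Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

item1 = "my supervisor inspires me to do my best work"
item2 = "my supervisor motivates me to perform my best"
item3 = "i often think about quitting my job"
# Two leadership items share far more vocabulary (and meaning) than a
# leadership item and a turnover item, so responses to item1 and item2
# would be expected to correlate regardless of the respondents.
```

The study's point is that if item similarities like these already predict the inter-item correlations, the "empirical" relationship between the constructs adds little beyond the item wording.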
The effects of breast-feeding on infant health and mortality, particularly in the developing nations, are a matter of controversy and importance. The Malaysian Family Life Survey (MFLS) of over 1200 women has recently been the source of a great deal of valuable information on the influence of breast-feeding and interacting social variables on the incidence of infant mortality. Accuracy of reporting of breast-feeding duration is a key issue in the validity of studies of breast-feeding and infant mortality. This paper presents an illustrative analysis of the quality of breast-feeding data from the Malaysian Family Life Survey, using logit model schedules. Lesthaeghe and Page derived a logit model schedule of breast-feeding, summarizing empirical experience. This family of model breast-feeding duration curves is similar to the logit model life tables developed by Brass, and was intended for similar applications. To verify the MFLS retrospective breast-feeding reports, the observed median duration and variability were calculated for ethnic group/cohort subsets, and expected duration distribution curves were generated from the model using these observed parameter values. The expected curve generated from the model fit the observed curve of breast-feeding discontinuation extremely closely. Thus it is unlikely that any significant distortion of the pattern of discontinuation of breast-feeding occurred in data collection. Extensions of this method of data quality checking to other duration distributions are suggested.
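The Lesthaeghe-Page schedule is a Brass-type relational logit model. A minimal sketch of that general mechanism follows; the standard schedule values, the alpha/beta parameters, and the duration points are all made up for illustration (the actual MFLS parameters are not reproduced here):

```python
import math

def logit(p):
    # Brass-style logit transform of a proportion p in (0, 1)
    return 0.5 * math.log((1.0 - p) / p)

def fitted_schedule(standard, alpha, beta):
    """Generate a model schedule of proportions still breast-feeding
    from a standard schedule via the relational logit model
    Y(x) = alpha + beta * Y_s(x), then invert the logit."""
    out = []
    for p in standard:
        y = alpha + beta * logit(p)
        out.append(1.0 / (1.0 + math.exp(2.0 * y)))
    return out

# hypothetical standard: proportion still breast-feeding at successive durations
standard = [0.95, 0.85, 0.70, 0.50, 0.30, 0.15]
same = fitted_schedule(standard, alpha=0.0, beta=1.0)  # reproduces the standard
```

Fitting alpha and beta to the observed median and spread, then comparing the generated curve with the observed discontinuation curve, is the data-quality check the abstract describes.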
Social media has taken an important place in the routine life of people. Every single second, users from all over the world are sharing interests, emotions, and other useful information, leading to the generation of huge volumes of user-generated data. Profiling users by extracting attribute information from social media data has been gaining importance with the increasing user-generated content on social media platforms. Meeting users' satisfaction levels for information collection is becoming more challenging and difficult because of the noise generated by explosively increasing online data, which affects the process of information collection. Social profiling is an emerging approach to overcoming these challenges: it introduces personalized search based on user profiles generated from social network data. This study reviews and classifies research inferring users' social profile attributes from social media data as individual and group profiling. The existing techniques, along with the utilized data sources, limitations, and challenges, are highlighted. The prominent approaches adopted include machine learning, ontology, and fuzzy logic. Social media data from Twitter and Facebook have been used by most of the studies to infer the social attributes of users. The studies show that user social attributes, including age, gender, home location, wellness, emotion, opinion, relation, influence, and so on, still need to be explored. This review gives researchers insight into the current state of the literature and the challenges of inferring user profile attributes from social media data.
This qualitative study focuses on the musical coordination skill of singing and clapping a rhythm
simultaneously in meter. The researcher used the Dalcroze Approach, a music teaching method that
relates musical concepts to movement, as the intervention. The sample comprised Year 4 students,
aged 10, of different sexes and races. Data were collected through observation and interviews, and
a comprehension exam was conducted as a supplementary data source. Findings show that the students
achieved good results in musical coordination after the Dalcroze Approach was implemented.
Observation revealed that all the students improved their coordination in singing and clapping the
rhythm simultaneously. Interviews with the students found that 60 percent of them were very
confident in performing the skill. In the comprehension exam, 73 percent of students scored an A,
which can be described as excellent. Further study is recommended to develop the musical
coordination skill by improving the intervention.
Clustering is one of the primary data mining tools. It helps researchers understand the
natural grouping of attributes in datasets. Clustering is an unsupervised classification
method whose major aim is partitioning, such that objects in the same cluster are similar
while objects in different clusters differ significantly with respect to their attributes.
However, the classical standardized Euclidean distance, which uses the standard deviation
to down-weight the maximum points of the ith feature in the distance computation, has been
criticized by many scholars: the standard deviation is strongly affected by outliers, lacks
robustness, has a 0% breakdown point, and has low efficiency at the normal distribution.
Therefore, to remedy this problem, we suggest two robust scale estimators with 50% breakdown
points, namely the Sn and Qn estimators, with 58% and 82% efficiency, respectively. The
proposed methods evidently outperformed the existing method in down-weighting the maximum
points of the ith features in distance-based clustering.
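A minimal sketch of the two estimators and the resulting robustly standardized distance, under simplified order-statistic conventions (finite-sample correction factors and the high/low-median subtleties of the original Rousseeuw-Croux definitions are omitted; 1.1926 and 2.2219 are the usual consistency constants for normal data):

```python
import math
import statistics

def sn_scale(x):
    """Simplified Sn estimator: 1.1926 * med_i( med_j |x_i - x_j| )."""
    inner = [statistics.median(abs(xi - xj) for xj in x) for xi in x]
    return 1.1926 * statistics.median(inner)

def qn_scale(x):
    """Simplified Qn estimator: 2.2219 times the k-th smallest pairwise
    distance |x_i - x_j| (i < j), with k = C(h, 2), h = n//2 + 1."""
    n = len(x)
    diffs = sorted(abs(x[i] - x[j]) for i in range(n) for j in range(i + 1, n))
    h = n // 2 + 1
    k = h * (h - 1) // 2
    return 2.2219 * diffs[k - 1]

def robust_standardized_distance(a, b, scales):
    """Euclidean distance with each feature down-weighted by a robust
    scale (Sn or Qn) instead of the standard deviation."""
    return math.sqrt(sum(((ai - bi) / s) ** 2 for ai, bi, s in zip(a, b, scales)))

data = list(range(1, 11))
contaminated = data + [1000]
```

With one gross outlier the classical standard deviation explodes while Qn barely moves, which is the 50% breakdown property the abstract refers to.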
The main purpose of this article is to introduce the technique of panel data analysis in econometric modelling. The elasticities of labour and capital and the economies of scale for twenty-two food manufacturing firms, covering 1989 to 1993, are estimated using the Cobb-Douglas model. The three main techniques of panel data analysis discussed are least squares dummy variables (LSDV), analysis of covariance (ANCOVA) and generalized least squares (GLS). The ordinary least squares (OLS) method is included as the basis of comparison.
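A minimal sketch of the LSDV technique on synthetic, noiseless panel data (the article's firm data are not reproduced; the firm count, elasticities, and intercepts below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n_firms, n_years = 4, 5
beta_l, beta_k = 0.6, 0.3                   # assumed labour/capital elasticities
alpha = np.array([1.0, 1.5, 0.8, 1.2])      # firm-specific intercepts

# Log Cobb-Douglas: ln Q = alpha_i + beta_l * ln L + beta_k * ln K
lnL = rng.normal(2.0, 0.5, size=(n_firms, n_years))
lnK = rng.normal(3.0, 0.5, size=(n_firms, n_years))
lnQ = alpha[:, None] + beta_l * lnL + beta_k * lnK  # noiseless for clarity

# LSDV design matrix: one dummy column per firm (no common intercept)
# plus the log-labour and log-capital regressors.
y = lnQ.ravel()
D = np.kron(np.eye(n_firms), np.ones((n_years, 1)))
X = np.column_stack([D, lnL.ravel(), lnK.ravel()])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
# coef[:n_firms] recover the firm intercepts, coef[-2:] the elasticities
```

The sum of the estimated labour and capital elasticities is the usual returns-to-scale measure; a value above one indicates increasing economies of scale.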
Widespread use of mobile devices has resulted in the creation of large amounts of data. An example of such data is the one obtained from the public (crowd) through open calls, known as crowdsourced data. More often than not, the collected data are later used for other purposes such as making predictions. Thus, it is important for crowdsourced data to be recent and accurate, and this means that frequent updating is necessary. One of the challenges in using crowdsourced data is the unpredictable incoming data rate. Therefore, manually updating the data at predetermined intervals is not practical. In this paper, the construction of an algorithm that automatically updates crowdsourced data based on the rate of incoming data is presented. The objective is to ensure that up-to-date and correct crowdsourced data are stored in the database at any point in time so that the information available is updated and accurate; hence, it is reliable. The algorithm was evaluated using a prototype development of a local price-watch information application, CrowdGrocr, in which the algorithm was embedded. The results showed that the algorithm was able to ensure up-to-date information with 94.9% accuracy.
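The abstract does not give the algorithm's details, so the following is a hypothetical sketch of one rate-adaptive scheme in that spirit: pick the next refresh interval so that roughly one target batch of submissions accumulates per refresh, clamped to sensible bounds (all class and parameter names here are invented):

```python
class AdaptiveUpdater:
    """Hypothetical sketch: choose the next database-refresh interval
    inversely to the recent crowd-submission rate, clamped to
    [min_interval, max_interval] seconds."""

    def __init__(self, target_batch=20, min_interval=60, max_interval=3600):
        self.target_batch = target_batch   # submissions wanted per refresh
        self.min_interval = min_interval
        self.max_interval = max_interval

    def next_interval(self, arrivals, window_seconds):
        if arrivals == 0:
            return self.max_interval       # nothing arriving: wait longest
        # time needed to collect one target batch at the observed rate
        interval = self.target_batch * window_seconds / arrivals
        return max(self.min_interval, min(self.max_interval, interval))

u = AdaptiveUpdater()
busy = u.next_interval(arrivals=200, window_seconds=600)   # fast stream
quiet = u.next_interval(arrivals=2, window_seconds=600)    # slow stream
```

This keeps refreshes frequent when the incoming data rate spikes (so stored prices stay current) without wasting work during quiet periods.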
The characteristics of foveal suppression (FS) in fixation disparity (FD) due to visual stress were investigated, and their relationships with age and symptoms, as well as the effect of temporarily eliminating FD with prisms on the degree of FS, were analysed. Forty-five presbyopic subjects (15 without FD and 30 with stress-related FD) participated in the study. The subjects underwent a comprehensive optometric examination prior to the study, and their FS and FD were measured. The FD was later corrected with ophthalmic prisms, the power of which was divided equally between the eyes, and the FS was then measured again. Age and FS had no significant correlation in subjects without FD (Spearman's rs = 0.17, p = 0.55, NS) or in subjects with FD (rs = 2.49, p = 0.19, NS). The correlation between the degree of FS and FD was weak (rs = 0.38, p = 0.07); however, the magnitude of FD significantly increased with age (r = 0.27, p = 0.04). Subjects with FD had a significantly larger degree of FS than subjects without FD (Wilcoxon's Z = -0.25, p = 0.01). There was no significant difference in the magnitudes of FD (t = -0.38, p = 0.07) or in the degrees of FS (Mann-Whitney U = 1.5, p = 0.71) between subjects with and without symptoms. Correcting the FD with prisms generally reduced the degree of FS (Wilcoxon's Z = 1.96, p = 0.04); however, the change in FS was significant only in subjects with symptoms (Z = -1.97, p = 0.03) and not in subjects without symptoms (Z = -0.70, p = 0.48).
"Internet of Things (IoT)" has emerged as a novel concept in the world of technology and communication. In modern network technologies, the capability of transmitting data through data communication networks (such as the Internet or an intranet) is provided for each entity (e.g. human beings, animals, things, and so forth). Due to limited hardware, constrained operational communication capability, and small device dimensions, IoT faces several challenges. These inherent challenges not only impose fundamental restrictions on the efficiency of aggregation, transmission, and communication between nodes, but also degrade routing performance. To cope with reduced availability time and unstable communication among nodes, data aggregation and transmission approaches in such networks must be designed more intelligently. In this paper, a distributed method is proposed to balance the number of children among nodes; in this method, the height of the network graph is increased by restricting node degree, which in turn reduces network congestion. In addition, a dynamic data aggregation approach based on learning automata is proposed for the Routing Protocol for Low-Power and Lossy Networks (LA-RPL). More specifically, each node is equipped with a learning automaton in order to perform data aggregation and transmission. Simulation and experimental results indicate that LA-RPL is more efficient than the baseline methods in terms of energy consumption, network control overhead, end-to-end delay, packet loss, and aggregation rates.
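A minimal sketch of a linear reward-inaction automaton, the classic update rule behind learning-automata schemes of this kind (the actual LA-RPL action set, reward signal, and learning rate are assumptions not given in the abstract):

```python
import random

class LinearRewardInaction:
    """Linear reward-inaction (L_RI) automaton: on a reward, shift
    probability mass toward the chosen action; on a penalty, leave
    the action probabilities unchanged."""

    def __init__(self, n_actions, learning_rate=0.1, seed=1):
        self.p = [1.0 / n_actions] * n_actions
        self.a = learning_rate
        self.rng = random.Random(seed)

    def choose(self):
        return self.rng.choices(range(len(self.p)), weights=self.p)[0]

    def update(self, action, rewarded):
        if not rewarded:
            return                          # inaction on penalty
        for i in range(len(self.p)):
            if i == action:
                self.p[i] += self.a * (1.0 - self.p[i])
            else:
                self.p[i] *= (1.0 - self.a)

# In LA-RPL terms, the actions could be candidate parents or aggregation
# decisions; here the environment simply always rewards action 0.
la = LinearRewardInaction(n_actions=3)
for _ in range(50):
    act = la.choose()
    la.update(act, rewarded=(act == 0))
```

Because the update renormalizes automatically, the probability vector stays a valid distribution while the consistently rewarded action comes to dominate.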