The first section of this paper is devoted to an analysis of some theoretical aspects of the Chinese system of reckoning ages, and the second section offers a method of collecting the age statistics of a Chinese population. A discussion of the errors found in the age returns, and of the unsuccessful measures taken to eradicate these errors in the Malayan censuses conducted prior to 1957, leads to an appraisal of the method of collecting Chinese age data in the 1957 census.
Selecting a suitable method is an important aspect of ensuring the successful implementation of a research study. The proposed study aims to obtain weights for sustainable construction criteria from the input and perceptions of industry practitioners, and to explore their opinions on the criteria. The choice of implementation method therefore determines whether the intended objectives of the study can be achieved. This manuscript describes the structured interview used to collect the required data. The suitability and implementation of the method are described, with the ultimate aim of ensuring that the collected data are meaningful to the study.
The side sensitive group runs (SSGR) chart is better than both the Shewhart and synthetic charts at detecting small and moderate process mean shifts. In practical circumstances, the process parameters are seldom known, so they must be estimated from in-control Phase-I samples. Research has shown that a large number of in-control Phase-I samples is needed for the SSGR chart with estimated process parameters to behave similarly to the chart with known process parameters. The common metric for evaluating the performance of a control chart is the average run length (ARL), whose computation assumes that the shift size is known. In reality, however, practitioners may not know the shift size in advance. In light of this, the expected average run length (EARL) is considered to measure the performance of the SSGR chart. Moreover, the standard deviation of the ARL (SDARL), which quantifies the between-practitioner variability of the SSGR chart with estimated process parameters, is studied. This paper proposes an optimal design of the SSGR chart with estimated process parameters based on the EARL criterion. The application of the optimal SSGR chart with estimated process parameters is demonstrated with actual data taken from a manufacturing company.
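The relationship between ARL and EARL can be illustrated with a small simulation. The sketch below is not the SSGR chart itself (whose runs rules are more involved); it uses a basic Shewhart X-bar chart with known parameters as a stand-in, estimating the ARL at a fixed shift by Monte Carlo and the EARL by averaging the ARL over a shift range assumed uniform. The limits, sample sizes and shift range are illustrative assumptions.

```python
import random
import statistics

def run_length(shift, L=3.0, n=5, rng=random):
    """Simulate one run length of a Shewhart X-bar chart (in-control
    mean 0, sd 1, known parameters) after a mean shift of `shift` sd units."""
    t = 0
    while True:
        t += 1
        xbar = statistics.fmean(rng.gauss(shift, 1.0) for _ in range(n))
        if abs(xbar) > L / n ** 0.5:  # control limits +/- L*sigma/sqrt(n)
            return t

def arl(shift, reps=2000, rng=random):
    """Monte Carlo estimate of the ARL at a single, known shift size."""
    return statistics.fmean(run_length(shift, rng=rng) for _ in range(reps))

def earl(shift_low, shift_high, grid=10, reps=2000):
    """EARL: the ARL averaged over shift sizes, here assumed uniform
    on [shift_low, shift_high] and approximated on a grid."""
    shifts = [shift_low + (shift_high - shift_low) * i / (grid - 1)
              for i in range(grid)]
    return statistics.fmean(arl(d, reps=reps) for d in shifts)
```

Larger shifts are detected faster, so the ARL decreases in the shift size; the EARL summarizes performance when only a range of plausible shifts, not a single value, is known in advance.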
Giancardo et al. recently introduced the neuroQWERTY index (nQi), a novel motor index derived from computer-key-hold-time data using an ensemble regression algorithm, to detect early-stage Parkinson's disease. Here, we derive a much simpler motor index from their hold-time data: the standard deviation (SD) of the hold-time fluctuations, where a fluctuation is defined as the difference between the natural logarithms of successive hold times. Our results show that the performance of the SD and nQi tests in discriminating early-stage subjects from controls does not differ, although the SD index is much simpler. There is also no difference in performance between the SD and alternating-finger-tapping tests.
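The SD index as defined above is straightforward to compute. A minimal sketch, assuming hold times are supplied as a flat sequence of positive durations:

```python
import math
import statistics

def sd_index(hold_times):
    """SD motor index: the standard deviation of successive differences
    of the natural log of key hold times, i.e. the log of the ratio of
    consecutive hold times."""
    logs = [math.log(h) for h in hold_times]
    fluctuations = [b - a for a, b in zip(logs, logs[1:])]
    return statistics.stdev(fluctuations)
```

Working on log differences makes the index scale-free: multiplying all hold times by a constant leaves it unchanged, so it measures the variability of typing rhythm rather than its overall speed.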
The ever-growing number of medical digital images and the need to share them among specialists and hospitals for better and more accurate diagnosis require that patients' privacy be protected. As a result, there is a need for medical image watermarking (MIW). However, MIW needs to be performed with special care for two reasons. Firstly, the watermarking procedure cannot compromise the quality of the image. Secondly, confidential patient information embedded within the image should be flawlessly retrievable, without risk of error, after image decompression. Despite extensive research undertaken in this area, there is still no method available that fulfills all the requirements of MIW. This paper aims to provide a useful survey on watermarking and offer a clear perspective for interested researchers by analyzing the strengths and weaknesses of different existing methods.
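The retrievability requirement can be made concrete with a toy example. The following is not any method surveyed here, just a minimal least-significant-bit (LSB) embedding sketch: the payload is recovered exactly while each pixel changes by at most one grey level. Real MIW schemes must also survive compression and other processing, which plain LSB does not.

```python
def embed_lsb(pixels, bits):
    """Embed a bit sequence into the least significant bits of a flat
    list of 8-bit pixel values (one payload bit per pixel)."""
    if len(bits) > len(pixels):
        raise ValueError("payload larger than cover image")
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & 0xFE) | (b & 1)  # clear LSB, set payload bit
    return out

def extract_lsb(pixels, n_bits):
    """Recover the first n_bits embedded bits from the watermarked image."""
    return [p & 1 for p in pixels[:n_bits]]
```

The two MIW requirements show up directly: image quality is barely affected (each pixel moves by at most 1 out of 255), and extraction is bit-exact as long as the pixel values are untouched.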
Some disciplines in the social sciences rely heavily on collecting survey responses to detect empirical relationships among variables. We explored whether these relationships were a priori predictable from the semantic properties of the survey items, using language processing algorithms which are now available as new research methods. Language processing algorithms were used to calculate the semantic similarity among all items in state-of-the-art surveys from Organisational Behaviour research. These surveys covered areas such as transformational leadership, work motivation and work outcomes. This information was used to explain and predict the response patterns from real subjects. Semantic algorithms explained 60-86% of the variance in the response patterns and allowed remarkably precise prediction of survey responses from humans, except in a personality test. Even the relationships between independent variables and their purported dependent variables were accurately predicted. This raises concern about the empirical nature of data collected through some surveys if the results are already given a priori through the way subjects are asked. Survey response patterns seem heavily determined by semantics. Language algorithms may suggest these prior to administering a survey. This study suggests that semantic algorithms are becoming new tools for the social sciences, opening perspectives on survey responses that prevalent psychometric theory cannot explain.
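How item-level semantic similarity can feed such an analysis may be sketched as follows. The specific algorithms used in the study are not detailed here, so the sketch assumes each survey item has already been mapped to a numeric embedding vector (a hypothetical input) and simply computes pairwise cosine similarities among the items:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def item_similarity_matrix(vectors):
    """Pairwise semantic similarity among survey items, given one
    embedding vector per item; how the embeddings are produced is
    left to the language model chosen by the researcher."""
    n = len(vectors)
    return [[cosine(vectors[i], vectors[j]) for j in range(n)]
            for i in range(n)]
```

The resulting matrix can then be compared against the inter-item correlation matrix computed from human responses, which is the kind of comparison that drives the variance-explained figures reported above.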
The effects of breast-feeding on infant health and mortality, particularly in the developing nations, are a matter of controversy and importance. The Malaysian Family Life Survey (MFLS) of over 1200 women has recently been the source of a great deal of valuable information on the influence of breast-feeding and interacting social variables on the incidence of infant mortality. Accuracy of reporting of breast-feeding duration is a key issue in the validity of studies of breast-feeding and infant mortality. This paper presents an illustrative analysis of the quality of breast-feeding data from the Malaysian Family Life Survey, using logit model schedules. Lesthaeghe and Page derived a logit model schedule of breast-feeding, summarizing empirical experience. This family of model breast-feeding duration curves is similar to the logit model life tables developed by Brass, and was intended for similar applications. To verify the MFLS retrospective breast-feeding reports, the observed median duration and variability were calculated for ethnic group/cohort subsets, and expected duration distribution curves were generated from the model using these observed parameter values. The expected curve generated from the model fit the observed curve of breast-feeding discontinuation extremely closely. Thus it is unlikely that any significant distortion of the pattern of discontinuation of breast-feeding occurred in data collection. Extensions of this method of data quality checking to other duration distributions are suggested.
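The model-based check described above can be sketched in code. The exact Lesthaeghe and Page parameterization is not reproduced here; the sketch below uses a generic Brass-type relational logit, logit(p) = alpha + beta * logit(p_s) with logit(p) = (1/2) ln(p / (1 - p)), applied to a hypothetical standard schedule of proportions still breast-feeding at each duration:

```python
import math

def logit(p):
    """Brass-style half-logit transform of a proportion."""
    return 0.5 * math.log(p / (1.0 - p))

def expit(y):
    """Inverse of the half-logit transform."""
    return 1.0 / (1.0 + math.exp(-2.0 * y))

def fitted_schedule(standard, alpha, beta):
    """Relational logit model: map a standard schedule of proportions
    still breast-feeding at each duration into a fitted schedule via
    logit(p) = alpha + beta * logit(p_standard)."""
    return [expit(alpha + beta * logit(p)) for p in standard]
```

With alpha = 0 and beta = 1 the standard is returned unchanged; alpha shifts the overall level of breast-feeding duration while beta stretches or compresses the shape, which is how observed median and variability for each ethnic group/cohort subset can be translated into an expected discontinuation curve.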
The characteristics of foveal suppression (FS) in fixation disparity (FD) due to visual stress were investigated, along with their relationships with age and symptoms and the effect on the degree of FS of temporarily eliminating the FD with prisms. Forty-five presbyopic subjects (15 without FD and 30 with stress-related FD) participated in the study. The subjects underwent a comprehensive optometric examination prior to the study, and their FS and FD were measured. The FD was later corrected with ophthalmic prisms, the power of which was divided equally between the eyes, and the FS was then re-measured. Age and FS had no significant correlation for subjects without FD (Spearman's rs = 0.17, p = 0.55, NS) or for subjects with FD (rs = 2.49, p = 0.19, NS). The correlation between the degree of FS and FD was weak (rs = 0.38, p = 0.07); however, the magnitude of FD significantly increased with age (r = 0.27, p = 0.04). Subjects with FD had a significantly larger degree of FS than subjects without FD (Wilcoxon's Z = -0.25, p = 0.01). There was no significant difference in the magnitudes of FD (t = -0.38, p = 0.07) or in the degrees of FS (Mann-Whitney U = 1.5, p = 0.71) between subjects with and without symptoms. Correcting the FD with prisms generally reduced the degree of FS (Wilcoxon's Z = 1.96, p = 0.04); however, the change in FS was significant only in subjects with symptoms (Z = -1.97, p = 0.03) and not in subjects without symptoms (Z = -0.70, p = 0.48).
This qualitative study focuses on the musical coordination skill of singing and clapping a rhythm simultaneously in meter. The researcher used the Dalcroze Approach, a music teaching method that relates musical concepts with movement, as the intervention in this study. The research sample comprised Year 4 students aged 10 years, of different sexes and races. Data were collected through observation and interviews, and a comprehension exam was conducted as supplementary data collection. Findings show that the students achieved good results in music coordination skill after the implementation of the Dalcroze Approach. Observation revealed that all the students increased their coordination skill in singing and clapping the rhythm simultaneously. Interviews conducted with the students found that 60 percent of them were very confident in performing the skill. The comprehension exam showed that 73 percent of students scored an A, which can be described as excellent. Further study is suggested to develop music coordination skill by improving the intervention.
Rehabilitation aims to optimize the functioning of persons experiencing functioning limitations. As such, the comparative evaluation of rehabilitation interventions relies on the analysis of the differences between the change in patient functioning after a specific rehabilitation intervention versus the change following another intervention. A robust health information reference system that can facilitate the comparative evaluation of changes in functioning in rehabilitation studies and the standardized reporting of rehabilitation interventions is the International Classification of Functioning, Disability and Health (ICF). The objective of this paper is to present recommendations that Cochrane Rehabilitation could adopt for using the ICF in rehabilitation studies by: 1) defining the functioning categories to be included in a rehabilitation study; 2) specifying selected functioning categories and selecting suitable data collection instruments; 3) examining aspects of functioning that have been documented in a study; 4) reporting functioning data collected with various data collection instruments; and 5) communicating results in an accessible, meaningful and easily understandable way. The authors provide examples of concrete studies that underscore these recommendations, while also emphasizing the need for future research on the implementation of specific recommendations, e.g. in meta-analyses in systematic literature reviews. Furthermore, the paper outlines how the ICF can complement or be integrated into established Cochrane and rehabilitation research structures and methods, e.g. the use of the standard mean difference to compare cross-study data collected using different measures, the development of core outcome sets for rehabilitation, and the use of the PICO model.
Clustering is one of the primary tools of data mining. It helps researchers understand the natural grouping of attributes in datasets. Clustering is an unsupervised classification method whose major aim is partitioning, such that objects in the same cluster are similar while objects belonging to different clusters differ significantly with respect to their attributes. However, the classical standardized Euclidean distance, which uses the standard deviation to down-weight the maximum points of the ith feature on the distance clusters, has been criticized by many scholars: the method is affected by outliers, lacks robustness, and has a 0% breakdown point. It also has low efficiency under the normal distribution. To remedy this problem, we suggest two statistical estimators with 50% breakdown points, namely the Sn and Qn estimators, which have 58% and 82% efficiency, respectively. The proposed methods evidently outperformed the existing method in down-weighting the maximum points of the ith features in distance-based clustering.
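A naive sketch of the two estimators and the resulting robust standardized distance follows. The constants 1.1926 and 2.2191 are the usual consistency factors for normal data; the O(n^2) implementations below use plain medians rather than the high/low medians of the exact definitions, so they are illustrative, not optimized:

```python
import statistics

def sn(x, c=1.1926):
    """Rousseeuw-Croux Sn scale estimator, simplified:
    c * med_i med_j |x_i - x_j|."""
    meds = [statistics.median(abs(xi - xj) for xj in x) for xi in x]
    return c * statistics.median(meds)

def qn(x, d=2.2191):
    """Rousseeuw-Croux Qn scale estimator: d times the k-th smallest
    pairwise absolute difference, with k = C(h, 2) and h = n//2 + 1."""
    n = len(x)
    diffs = sorted(abs(x[i] - x[j]) for i in range(n) for j in range(i + 1, n))
    h = n // 2 + 1
    k = h * (h - 1) // 2
    return d * diffs[k - 1]

def robust_standardized_distance(u, v, scales):
    """Euclidean distance with each feature down-weighted by a robust
    scale estimate (Sn or Qn) instead of the standard deviation."""
    return sum(((a - b) / s) ** 2 for a, b, s in zip(u, v, scales)) ** 0.5
```

Unlike the standard deviation, both estimators stay small in the presence of a gross outlier, so a single contaminated observation cannot inflate a feature's scale and wash out its contribution to the distance.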
The main purpose of this article is to introduce the technique of panel data analysis in econometric modeling. The elasticities of labour and capital and the economies of scale for twenty-two food manufacturing firms, covering 1989 to 1993, are estimated using the Cobb-Douglas model. The three main techniques of panel data analysis discussed are least squares dummy variables (LSDV), analysis of covariance (ANCOVA) and generalized least squares (GLS). The ordinary least squares (OLS) method is included as the basis of comparison.
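As an illustration of the LSDV technique, the sketch below builds a design matrix with one intercept dummy per firm plus log labour and log capital (the Cobb-Douglas model in logs) and fits it by ordinary least squares. The firm identifiers and data are hypothetical, and the plain normal-equations solver is only adequate for small panels like this one:

```python
def ols(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y,
    solved by Gaussian elimination with partial pivoting."""
    k = len(X[0])
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    A = [row[:] + [b] for row, b in zip(XtX, Xty)]  # augmented matrix
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k + 1):
                A[r][c] -= f * A[col][c]
    b = [0.0] * k
    for r in range(k - 1, -1, -1):
        b[r] = (A[r][k] - sum(A[r][c] * b[c] for c in range(r + 1, k))) / A[r][r]
    return b

def lsdv_design(firm_ids, log_labour, log_capital):
    """LSDV design matrix: one intercept dummy per firm, followed by
    log labour and log capital (Cobb-Douglas in logs)."""
    firms = sorted(set(firm_ids))
    rows = []
    for f, l, c in zip(firm_ids, log_labour, log_capital):
        dummies = [1.0 if f == g else 0.0 for g in firms]
        rows.append(dummies + [l, c])
    return rows
```

The coefficients on log labour and log capital are the output elasticities, and their sum indicates returns to scale; the per-firm dummies absorb firm-specific effects, which is what distinguishes LSDV from pooled OLS.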
Widespread use of mobile devices has resulted in the creation of large amounts of data. An example of such data is the one obtained from the public (crowd) through open calls, known as crowdsourced data. More often than not, the collected data are later used for other purposes such as making predictions. Thus, it is important for crowdsourced data to be recent and accurate, and this means that frequent updating is necessary. One of the challenges in using crowdsourced data is the unpredictable incoming data rate. Therefore, manually updating the data at predetermined intervals is not practical. In this paper, the construction of an algorithm that automatically updates crowdsourced data based on the rate of incoming data is presented. The objective is to ensure that up-to-date and correct crowdsourced data are stored in the database at any point in time so that the information available is updated and accurate; hence, it is reliable. The algorithm was evaluated using a prototype development of a local price-watch information application, CrowdGrocr, in which the algorithm was embedded. The results showed that the algorithm was able to ensure up-to-date information with 94.9% accuracy.
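The paper's algorithm is not specified in detail here, so the following is only a hypothetical sketch of the idea: an updater that rescales its refresh interval so that roughly a target number of submissions accumulate per update, backing off when no data arrive. The class name, parameters and bounds are all assumptions, not the authors' design.

```python
class AdaptiveUpdater:
    """Hypothetical sketch: shorten the database-refresh interval when
    crowdsourced submissions arrive quickly, lengthen it when they slow
    down, clamped to [min_interval, max_interval] seconds."""

    def __init__(self, min_interval=5.0, max_interval=3600.0, target_batch=20):
        self.min_interval = min_interval
        self.max_interval = max_interval
        self.target_batch = target_batch  # submissions to process per update
        self.interval = max_interval      # start conservatively
        self.pending = 0

    def record_submission(self):
        """Called whenever a new crowdsourced submission arrives."""
        self.pending += 1

    def next_interval(self):
        """Rescale the interval so roughly target_batch submissions
        accumulate per update, then reset the pending counter."""
        if self.pending > 0:
            rate = self.pending / self.interval  # submissions per second
            self.interval = self.target_batch / rate
        else:
            self.interval = self.interval * 2    # back off when idle
        self.interval = min(max(self.interval, self.min_interval),
                            self.max_interval)
        self.pending = 0
        return self.interval
```

The point of such a scheme is the one made in the abstract: because the incoming data rate is unpredictable, a fixed manual update schedule either wastes work or lets the stored data go stale, whereas a rate-driven interval keeps the database current on its own.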