Missing Values
For example, the theory of linear models examines the random error in a model defined by the equation yi = α + βxi + εi for i = 1, 2, …, n, where n is the size of the dataset. The residual error is equal to εi, and the objective is to minimize the sum of the squared errors. If more X-variables are added to the model, this residual error must decrease. For example, if the equation is extended to yi = α + β1x1i + β2x2i + ε'i, where β1x1i = βxi from the previous equation, then εi = β2x2i + ε'i, so that ε'i must be smaller than εi. One way of over-fitting a model is to increase the number of X-variables until the error term becomes virtually zero. With an error term closer to zero, it becomes more likely that the overall p-value will be statistically significant, and the r2 value will increase, as r2 is defined by the value
r2 = 1 − SSerr / SStot,

where the error sum of squares (SSerr) is the sum of the squared residuals and the total sum of squares (SStot) is the sum of the squared differences between each value yi and the sample mean.
As the residuals decrease, the fraction decreases so that one minus the fraction increases.
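To see this nesting argument numerically, the sketch below fits ordinary least squares via the normal equations (plain Python, no libraries) on simulated data; the data and coefficients are invented for illustration. Adding a second X-variable that is pure noise still cannot lower the training r2.

```python
import random

def fit_ols(X, y):
    """Solve the normal equations (X'X) b = X'y by Gaussian elimination."""
    n, p = len(X), len(X[0])
    XtX = [[sum(X[i][a] * X[i][j] for i in range(n)) for j in range(p)]
           for a in range(p)]
    Xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(p)]
    A = [XtX[a][:] + [Xty[a]] for a in range(p)]  # augmented matrix
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]           # partial pivoting
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for c in range(col, p + 1):
                A[r][c] -= f * A[col][c]
    b = [0.0] * p
    for r in range(p - 1, -1, -1):                # back-substitution
        b[r] = (A[r][p] - sum(A[r][c] * b[c] for c in range(r + 1, p))) / A[r][r]
    return b

def r_squared(X, y, b):
    """r2 = 1 - SSerr / SStot, as defined above."""
    yhat = [sum(xi[j] * b[j] for j in range(len(b))) for xi in X]
    ybar = sum(y) / len(y)
    ss_err = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1 - ss_err / ss_tot

random.seed(0)
n = 50
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [random.gauss(0, 1) for _ in range(n)]        # noise, unrelated to y
y = [2 + 3 * x1[i] + random.gauss(0, 1) for i in range(n)]

X_one = [[1.0, x1[i]] for i in range(n)]           # intercept + x1
X_two = [[1.0, x1[i], x2[i]] for i in range(n)]    # intercept + x1 + x2
r2_one = r_squared(X_one, y, fit_ols(X_one, y))
r2_two = r_squared(X_two, y, fit_ols(X_two, y))
print(r2_one, r2_two)  # r2_two is never lower on the training data
```

Because the one-variable model is nested inside the two-variable model, the larger model can always reproduce the smaller model's fit by setting β2 = 0, so its training r2 cannot be smaller.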
However, in practice, the r2 value can decrease rather than increase as the number of X-variables increases. This occurs because, with each added X-variable, the likelihood that at least one X-value is missing for a given observation increases, resulting in fewer observations used in the model. In order to work with missing values, we first need to determine whether the missing values are random, or whether there is a pattern to them. If there is a trend in the missing values, the use of the variable containing those values can introduce a bias into the model that cannot be resolved. There are three major types of missing values: missing completely at random, missing at random, and missing not at random. Missing completely at random means that the probability that a value is missing is independent of both the observable values and the unobservable parameters of interest. Missing at random means that the reason the value is missing is independent of the value itself. Missing not at random means that there is an inherent bias in the data that cannot be resolved.
The first place to look is in the documentation. For example, the documentation for the National Inpatient Sample states that not all states report on race; therefore, missing values for the race variable reflect a systematic difference between states that report the variable and states that do not. Care needs to be taken when using race in any analysis for this reason. It is helpful to determine the patterns in the missing values.
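One simple way to look for such a pattern is to compare the missing-value rate across groups. The sketch below does this in plain Python; the states and race codes are invented for illustration, not actual National Inpatient Sample data.

```python
from collections import defaultdict

# hypothetical inpatient records as (state, race); None marks a missing race
records = [
    ("CA", "white"), ("CA", "black"), ("CA", "asian"),
    ("TX", None),    ("TX", None),    ("TX", None),
    ("NY", "white"), ("NY", None),    ("NY", "black"),
]

totals = defaultdict(int)
missing = defaultdict(int)
for state, race in records:
    totals[state] += 1
    if race is None:
        missing[state] += 1

# missing-value rate per state
rates = {s: missing[s] / totals[s] for s in totals}
print(rates)
# a rate of 1.0 for one state and 0.0 for another indicates a reporting
# pattern (the state does not report the variable), not random missingness
```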
For example, in surveys that ask for information concerning income, it is usually the respondents in the higher income brackets who are more likely not to respond. Therefore, any attempt at resolving the missing values can introduce a bias into the final result.
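The bias this produces can be simulated. In the sketch below (invented numbers, plain Python), higher incomes are more likely to be withheld, and the mean of the observed incomes consequently underestimates the true mean.

```python
import random

random.seed(1)

# simulate 10,000 incomes (in $1,000s) from a right-skewed distribution
incomes = [random.lognormvariate(4, 0.5) for _ in range(10_000)]
true_mean = sum(incomes) / len(incomes)

# missing not at random: the higher the income, the more likely the
# value is withheld (non-response probability capped at 0.9)
observed = [x for x in incomes if random.random() > min(0.9, x / 200)]
observed_mean = sum(observed) / len(observed)

print(f"true mean: {true_mean:.1f}, observed mean: {observed_mean:.1f}")
# the observed mean is biased downward because high earners drop out
```

No summary computed from the observed values alone can recover the true mean here, which is why missing-not-at-random values cannot simply be imputed away.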
If the values are missing completely at random, or just missing at random, it is possible to impute the missing values so that the observation can be used in the model. However, if there is a bias in the missing values, it must be accommodated in some way. Fortunately, a decision tree predictive model can accommodate missing values, including those that are missing not at random. We can compare the results of a regression model to those of a decision tree to see whether the missing values introduce a bias into the results.
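As a minimal sketch of imputation, assuming the values are missing completely at random, the snippet below fills each missing income with the mean of the observed values. The data are invented, and in practice more careful methods (regression imputation, multiple imputation) are usually preferred.

```python
# hypothetical incomes in $1,000s; None marks a missing value
income = [52.0, None, 61.5, None, 48.0, 55.0]

observed = [v for v in income if v is not None]
mean_income = sum(observed) / len(observed)  # mean of observed values only

# replace each missing value with the observed mean so the
# observation can remain in the model
imputed = [v if v is not None else mean_income for v in income]
print(imputed)
```

Mean imputation keeps every observation in the model but shrinks the variable's variance, which is one reason to compare the imputed-regression results against a model, such as a decision tree, that handles missing values directly.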