Term
perfect multicollinearity

Definition
the variation in one explanatory variable is completely explained by movements in another explanatory variable

Term
severe imperfect multicollinearity

Definition
a linear relationship between two or more independent variables that is strong enough to significantly affect the estimation of the coefficients of those variables

Term
dominant variable

Definition
a variable that is so highly correlated with the dependent variable that it completely masks the effects of all other independent variables in the equation

Term
consequences of multicollinearity

Definition
(1) Estimates will remain unbiased; (2) The variances, and thus the standard errors, of the estimates will increase; (3) The computed t-scores will fall; (4) Estimates become very sensitive to changes in specification; (5) The overall fit of the equation and the estimation of the non-multicollinear variables will be largely unaffected

Term
variance inflation factor (VIF)

Definition
method of detecting the severity of multicollinearity by looking at the extent to which a given explanatory variable can be explained by all the other explanatory variables in the equation (the higher the VIF, the more severe the multicollinearity)

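For reference, the VIF for the coefficient of an explanatory variable Xi comes from the auxiliary regression of Xi on all the other explanatory variables; writing R_i^2 for the R-squared of that auxiliary regression (notation assumed here, not from the card):

    VIF(\hat{\beta}_i) = \frac{1}{1 - R_i^2}
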
Term
redundant variable

Definition
an explanatory variable that represents the same effect on the dependent variable as another explanatory variable

Term
pure heteroskedasticity

Definition
occurs when Classical Assumption V is violated in a correctly specified equation (Classical Assumption V: the variance of the error term is constant)

Term
impure heteroskedasticity

Definition
heteroskedasticity caused by incorrect specification

Term
proportionality factor Z

Definition
the variance of the error term changes proportionally to the square of Z; the higher Zi is, the higher the variance of the error term for that observation

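In symbols, using the card's Z notation, this form of pure heteroskedasticity is usually written as:

    VAR(\epsilon_i) = \sigma^2 Z_i^2
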
Term
the consequences of heteroskedasticity

Definition
(1) Pure heteroskedasticity does not cause bias in the coefficient estimates; (2) Heteroskedasticity generally causes OLS to no longer be the best (minimum variance) estimator; (3) Heteroskedasticity causes the OLS estimates of the SE(Bhat)s to be biased, leading to unreliable t-scores and hypothesis tests

Term
the Park test

Definition
a test of the residuals of an equation to see whether there is heteroskedasticity in the error term of the equation

Term
the three steps of the Park test

Definition
(1) Obtain the residuals of the estimated regression equation; (2) Use these residuals to form the dependent variable in a second regression; (3) Test the significance of the coefficient of Z in the second regression with a t-test

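A minimal sketch of how these three steps might look with statsmodels; the data are simulated purely for illustration, and the second regression uses the common log form (ln of the squared residuals on ln of Z):

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 200
    x = rng.normal(size=n)
    z = rng.uniform(1.0, 10.0, size=n)        # suspected proportionality factor Z
    y = 2.0 + 3.0 * x + rng.normal(scale=z)   # error variance grows with Z

    # Step 1: estimate the original equation and save its residuals
    resid = sm.OLS(y, sm.add_constant(x)).fit().resid

    # Step 2: second regression, ln(residual^2) on ln(Z)
    park = sm.OLS(np.log(resid ** 2), sm.add_constant(np.log(z))).fit()

    # Step 3: t-test the coefficient of ln(Z)
    print(park.tvalues[1], park.pvalues[1])
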
Term
the White test

Definition
another test for heteroskedasticity; to be used when Z is unknown

Term
the three steps of the White test

Definition
(1) Obtain the residuals of the estimated regression equation; (2) Use the squares of these residuals to form the dependent variable in a second regression that includes each X from the original equation, the square of each X, and the product of each X times every other X; (3) Test the overall significance of that second regression with a chi-square test

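A minimal sketch using statsmodels' built-in routine, which carries out these steps internally; the data are simulated only for illustration:

    import numpy as np
    import statsmodels.api as sm
    from statsmodels.stats.diagnostic import het_white

    rng = np.random.default_rng(0)
    n = 200
    x1 = rng.normal(size=n)
    x2 = rng.normal(size=n)
    y = 1.0 + 2.0 * x1 - x2 + rng.normal(scale=1.0 + x1 ** 2)   # error variance depends on x1

    # Steps 1-2: het_white takes the residuals and regresses their squares on the X's,
    # the squared X's, and the cross products of the X's
    exog = sm.add_constant(np.column_stack([x1, x2]))
    resid = sm.OLS(y, exog).fit().resid
    lm_stat, lm_pvalue, f_stat, f_pvalue = het_white(resid, exog)

    # Step 3: the LM statistic is compared against a chi-square distribution
    print(lm_stat, lm_pvalue)
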
Term
heteroskedasticity-corrected (HC) standard errors

Definition
SE(Bhat)s that have been calculated specifically to avoid the consequences of heteroskedasticity

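Continuing with y and exog from the White-test sketch above, one way to request heteroskedasticity-corrected standard errors in statsmodels is to change the covariance type when fitting; the HC1 variant shown here is just one of several available corrections:

    hc_fit = sm.OLS(y, exog).fit(cov_type="HC1")
    print(hc_fit.params)   # the Bhats are identical to ordinary OLS
    print(hc_fit.bse)      # heteroskedasticity-corrected standard errors
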
Term
linear probability model

Definition
an equation that is linear in the coefficients, used to explain a dummy (binary) dependent variable

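In equation form (the notation here is illustrative), the linear probability model is simply OLS applied to a dummy dependent variable:

    D_i = \beta_0 + \beta_1 X_{1i} + \beta_2 X_{2i} + \epsilon_i, \quad D_i = 0 \text{ or } 1
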
Term


Definition
the average of the percentage of ones explained correctly and the percentage of zeros explained correctly

Term
the binomial logit model

Definition
an estimation technique that avoids the unboundedness problem of the linear probability model by using a variant of the cumulative logistic function

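The variant of the cumulative logistic function referred to here keeps the predicted probability between 0 and 1 (notation illustrative):

    P(D_i = 1) = \frac{1}{1 + e^{-(\beta_0 + \beta_1 X_{1i} + \beta_2 X_{2i})}}
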
Term
interpretation of an estimated logit coefficient

Definition
divide the estimated coefficient by 4 and interpret the result as if it were a linear probability model coefficient

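For example, under this rule of thumb a hypothetical estimated logit coefficient of 0.60 would be read as roughly a 0.60 / 4 = 0.15 increase in the probability that the dummy variable equals one for a one-unit increase in that X, holding the other explanatory variables constant.
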
Term
the multinomial logit model

Definition
an extension of the binomial logit model used to tackle situations with more than two options; n - 1 logit equations are estimated for n options

Term
impure serial correlation

Definition
occurs when Classical Assumption IV is violated by way of an incorrectly specified equation (Classical Assumption IV: observations of the error term are uncorrelated with each other)

Term
first-order serial correlation

Definition
the current value of the error term is a function of the previous value of the error term

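In symbols, with rho the first-order autocorrelation coefficient and u_t a classical (not serially correlated) error term:

    \epsilon_t = \rho \epsilon_{t-1} + u_t
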
Term
first-order autocorrelation coefficient

Definition
measures the functional relationship between the value of an observation of the error term and the value of the previous observation of the error term

Term
Durbin-Watson d statistic

Definition
used to determine whether first-order serial correlation exists in a given equation; d = 2 indicates no serial correlation, d = 0 indicates extreme positive serial correlation, and d = 4 indicates extreme negative serial correlation

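For reference, the statistic is computed from the OLS residuals e_t (t = 1, ..., T) and is approximately equal to 2(1 - \hat{\rho}):

    d = \frac{\sum_{t=2}^{T}(e_t - e_{t-1})^2}{\sum_{t=1}^{T} e_t^2}
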
Term
generalized least squares

Definition
method of ridding an equation of pure first-order serial correlation and, in the process, restoring the minimum-variance property to the estimation

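The GLS transformation quasi-differences the equation using the autocorrelation coefficient rho, leaving an error term u_t that satisfies the classical assumptions (shown here for a single explanatory variable; notation assumed):

    Y_t - \rho Y_{t-1} = \beta_0 (1 - \rho) + \beta_1 (X_{1t} - \rho X_{1,t-1}) + u_t
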
Term
positive serial correlation

Definition
error term tends to have the same sign from one period to the next

Term
Newey-West standard errors

Definition
SE(Bhat)s that take serial correlation into account without changing the estimated Bhats themselves

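A minimal sketch of Newey-West (HAC) standard errors in statsmodels; the time-series data are simulated only for illustration, and the lag length of 4 is an arbitrary choice here:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    T = 100
    x = rng.normal(size=T)
    eps = np.zeros(T)
    for t in range(1, T):                       # build a positively serially correlated error term
        eps[t] = 0.7 * eps[t - 1] + rng.normal()
    y = 1.0 + 2.0 * x + eps

    nw_fit = sm.OLS(y, sm.add_constant(x)).fit(cov_type="HAC", cov_kwds={"maxlags": 4})
    print(nw_fit.params)   # same Bhats as ordinary OLS
    print(nw_fit.bse)      # Newey-West (HAC) standard errors
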