Term
|
Definition
observations are independent: r_ij = 0 (for i ≠ j) |
|
|
Term
|
Definition
"Surely, God loves the .06 nearly as much as the .05": problems with "reject/fail to reject" decisions in null hypothesis testing |
|
|
Term
|
Definition
We can determine whether there is a relationship, how strong the relationship is (from -1 to +1), and in what direction that relationship goes (+/-) |
|
|
Term
|
Definition
a is the y-intercept and b is the slope; X is a predictor (regressor) and Y is the criterion variable. The line fits the data better than any other line. |
|
|
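The fit described on this card can be sketched in a few lines of Python; the helper name `fit_line` and the data are invented for illustration:

```python
def fit_line(xs, ys):
    """Least-squares fit of Y-hat = a + b*X; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # b = covariance(X, Y) / variance(X); a makes the line pass through the means.
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

a, b = fit_line([1, 2, 3, 4], [3, 5, 7, 9])   # data lie exactly on Y = 1 + 2X
```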
Term
|
Definition
can predict, explain, account for, or represent some of the variability in Y--but not all of it. |
|
|
Term
|
Definition
Line of best fit plus the residual (the unexplainable part of Y) |
|
|
Term
|
Definition
Model cannot predict this part of Y; minimize the residual |
|
|
Term
Coefficient of Determination |
|
Definition
The proportion of variability in Y that we can predict using X. r^2 = SSreg / SStotal |
|
|
Term
Coefficient of Alienation |
|
Definition
All the variability we cannot predict using X. The proportion of variance in Y that is not predictable: 1 - r^2 = SSresid / SStotal |
|
|
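The two proportions above partition SStotal, so r^2 and 1 - r^2 must sum to 1. A small Python sketch (the helper name `variance_partition` and the data are made up):

```python
def variance_partition(xs, ys):
    """Return (r2, alienation): SSreg/SStotal and SSresid/SStotal."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Least-squares slope and intercept for the one-predictor line.
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    yhat = [a + b * x for x in xs]
    ss_total = sum((y - my) ** 2 for y in ys)
    ss_reg = sum((p - my) ** 2 for p in yhat)                 # predictable part
    ss_resid = sum((y - p) ** 2 for y, p in zip(ys, yhat))    # unpredictable part
    return ss_reg / ss_total, ss_resid / ss_total

r2, alien = variance_partition([1, 2, 3, 4, 5], [1, 3, 2, 5, 4])
```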
Term
|
Definition
The magnitude of 'b' represents the expected change in Y given a one-unit change in X. |
|
|
Term
|
Definition
smallest possible sum of squared residuals; fits the data better than any other line |
|
|
Term
|
Definition
MSresidual: variance of the points around the prediction line |
|
|
Term
Standard Error of Estimate |
|
Definition
RMSE: standard deviation of the points around the regression line. Typical size of the prediction errors. |
|
|
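A minimal sketch of the standard error of estimate, assuming the usual n - k - 1 denominator for k predictors (the function name and toy data are invented):

```python
import math

def standard_error_of_estimate(ys, yhats, k=1):
    """sqrt(SSresid / (n - k - 1)): typical size of the prediction errors
    around the regression line, for k predictors."""
    ss_resid = sum((y - p) ** 2 for y, p in zip(ys, yhats))
    return math.sqrt(ss_resid / (len(ys) - k - 1))

# Toy data: observed Y and the fitted values from a one-predictor regression.
see = standard_error_of_estimate([1, 3, 2, 5, 4], [1.4, 2.2, 3.0, 3.8, 4.6])
```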
Term
Hypothesis Testing in Linear Regression |
|
Definition
Hypothesize about the population coefficient of determination. Hypothesize about the population value of the regression weight (slope). |
|
|
Term
Multiple Regression Models |
|
Definition
Better prediction; analyze variables in combination; control of extraneous variables |
|
|
Term
Partial Regression Coefficients |
|
Definition
Expected change in Y, given a one-unit change in X, while holding constant all of the other Xs. |
|
|
Term
Tolerance |
Definition
0 to 1, the extent to which this predictor is not predictable from the other predictors; unique variance. |
|
|
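Tolerance can be sketched as 1 minus the R^2 from regressing one predictor on the others; with a single other predictor this reduces to 1 - r^2, and VIF = 1/tolerance (names and data below are made up):

```python
def r2_simple(xs, ys):
    """Squared Pearson correlation between two variables."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy * sxy / (sxx * syy)

x1 = [1, 2, 3, 4, 5]
x2 = [2, 4, 5, 4, 5]                 # partly redundant with x1
tolerance = 1 - r2_simple(x1, x2)    # unique variance in x1
vif = 1 / tolerance                  # blows up as tolerance approaches 0
```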
Term
|
Definition
When tolerance approaches 0, 1/tolerance (the VIF) approaches infinity, inflating the standard error of the coefficient |
|
|
Term
|
Definition
Test whether the population R2 differs between the two nested models: do X2 and X3 add to the predictability of Y after adjusting for X1? If not, then there is no sense in collecting the other predictors. |
|
|
Term
Partial Correlation Coefficients |
|
Definition
The PPMC between two variables from which the effects of one (or more) other variables have been removed (partialled). Statistical control for confounding variables. Symmetric |
|
|
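The definition above can be made concrete by residualizing: regress each variable on the control variable and correlate the leftovers. A pure-Python sketch with invented data:

```python
def pearson(a, b):
    """Pearson product-moment correlation."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

def residuals(xs, zs):
    """The part of xs left over after regressing it on zs (zs 'partialled out')."""
    n = len(xs)
    mz, mx = sum(zs) / n, sum(xs) / n
    b = sum((z - mz) * (x - mx) for z, x in zip(zs, xs)) / sum((z - mz) ** 2 for z in zs)
    a = mx - b * mz
    return [x - (a + b * z) for x, z in zip(xs, zs)]

def partial_corr(x1, x2, x3):
    # Symmetric: X3 is removed from BOTH X1 and X2 before correlating.
    return pearson(residuals(x1, x3), residuals(x2, x3))

x1, x2, x3 = [1, 2, 3, 4, 5], [2, 1, 4, 3, 5], [1, 3, 2, 5, 4]
p12_3 = partial_corr(x1, x2, x3)
```

This matches the textbook formula (r12 - r13*r23) / sqrt((1 - r13^2)(1 - r23^2)).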
Term
Spurious correlation |
Definition
A conceptually false correlation. Appears because of the influence of another variable; once that variable is controlled, the correlation disappears. |
|
|
Term
r2 (squared correlation coefficient) |
|
Definition
The proportion of the variance in Y explained by knowing that it is related to X. |
|
|
Term
|
Definition
The residuals are completely uncorrelated with any of the regressors that gave rise to them: the correlation between the residuals and any predictor in that model is zero. |
|
|
Term
|
Definition
residuals from the 1st regression |
|
|
Term
|
Definition
residuals from the 2nd regression |
|
|
Term
|
Definition
Confounding variables distort relationships; to detect them, compute all possible partial correlations. |
|
|
Term
semi-partial correlation coefficient |
|
Definition
PPMC between two variables from which the effects of one (or more) other variables have been removed (partialed) from ONLY ONE of the two variables. The correlation between X1 and X2 with the effects of X3 removed from X2 only. Not symmetric |
|
|
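A sketch of the semi-partial using the standard first-order formula (r12 - r13*r23) / sqrt(1 - r23^2), with invented data; note the asymmetry when the roles of X1 and X2 are swapped (the small Pearson helper is redefined so the snippet stands alone):

```python
def pearson(a, b):
    """Pearson product-moment correlation."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

def semi_partial(x1, x2, x3):
    """Correlation of X1 with X2 after removing X3 from X2 only (not symmetric)."""
    r12, r13, r23 = pearson(x1, x2), pearson(x1, x3), pearson(x2, x3)
    return (r12 - r13 * r23) / (1 - r23 ** 2) ** 0.5

x1, x2, x3 = [1, 2, 3, 4, 5], [2, 1, 4, 3, 5], [1, 3, 2, 5, 4]
sp = semi_partial(x1, x2, x3)   # X3 partialled out of X2 only
```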
Term
|
Definition
Tempting but stupid! Comparing R2 and changes in R2 amounts to comparing squared semi-partial correlation coefficients of different orders. |
|
|
Term
Suppressor variable |
Definition
A variable that is uncorrelated with one of the original variables, yet when it is partialled out, the relationship between the two original variables increases. |
|
|
Term
|
Definition
No theory needed in Y-hat (prediction) models; focus on R2: obtain the largest value of R2 with the smallest number of predictors. |
|
|
Term
|
Definition
Intimately involved in theory. Focus on the slope, not necessarily for practical reasons. Estimate the magnitude of the processes related to an outcome. |
|
|
Term
Forward selection |
Definition
Select the predictor with the largest zero-order correlation. Then select the predictor with the largest squared 1st-order semi-partial correlation, then the largest squared 2nd-order semi-partial correlation, and so on. Continue to add variables until you fail to reject on the change-in-R2 test. |
|
|
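The selection loop above can be sketched as follows; the fixed `threshold` on the R^2 gain stands in for the change-in-R2 significance test, and the tiny OLS solver and data are only for illustration:

```python
def solve(A, b):
    # Tiny Gauss-Jordan solver for the normal equations (illustration only).
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def r2(cols, y):
    """R^2 of OLS regression of y on the given predictor columns (intercept added)."""
    n = len(y)
    X = [[1.0] + [c[i] for c in cols] for i in range(n)]
    p = len(X[0])
    XtX = [[sum(row[a] * row[b] for row in X) for b in range(p)] for a in range(p)]
    Xty = [sum(row[a] * yi for row, yi in zip(X, y)) for a in range(p)]
    beta = solve(XtX, Xty)
    yhat = [sum(bi * xi for bi, xi in zip(beta, row)) for row in X]
    my = sum(y) / n
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

def forward_select(cols, y, threshold=0.01):
    # Add the predictor with the biggest R^2 gain until no gain beats the threshold.
    chosen, best = [], 0.0
    remaining = list(range(len(cols)))
    while remaining:
        scored = [(r2([cols[k] for k in chosen + [j]], y), j) for j in remaining]
        new_r2, j = max(scored)
        if new_r2 - best <= threshold:
            break
        chosen.append(j)
        remaining.remove(j)
        best = new_r2
    return chosen, best

cols = [[1, 2, 3, 4, 5, 6], [1, 0, 1, 0, 1, 0]]  # X1 drives y; X2 is near-noise
y = [3, 5, 7, 9, 11, 13]                         # exactly 1 + 2*X1
chosen, best = forward_select(cols, y)
```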
Term
Backward elimination |
Definition
Start with ALL predictors. Remove the predictor with the smallest squared semi-partial correlation. Continue to remove until the change-in-R2 test is statistically significant, then put back the last removed variable and quit. |
|
|
Term
Stepwise regression |
Definition
Start with the single best predictor. Add the next best available predictor. Remove non-contributing regressors when the change in R2 shows no statistical significance. |
|
|
Term
|
Definition
Type II error: failing to reject a false null hypothesis (a wrong decision) |
|
|
Term
|
Definition
Type I error: rejecting a null hypothesis that is true. |
|
|
Term
|
Definition
Does adding the variable make the model more meaningful? |
|
|
Term
|
Definition
The difference in R2 between the value obtained in sample 1 and the value obtained in sample 2 when the least-squares slopes from sample 1 are applied. |
|
|
Term
|
Definition
Increase N; decrease k; increase R2 |
|
|
Term
|
Definition
Cohen's f^2 (effect size for the F test): f^2 = R^2 / (1 - R^2). Benchmarks: small = .02, medium = .15, large = .35 |
|
|
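A small sketch of the f^2 computation and the benchmark labels from the card (the `effect_label` helper name is invented):

```python
def cohens_f2(r2):
    """Effect size for the regression F test: f^2 = R^2 / (1 - R^2)."""
    return r2 / (1 - r2)

def effect_label(f2):
    # Benchmarks from the card: .02 small, .15 medium, .35 large.
    if f2 >= 0.35:
        return "large"
    if f2 >= 0.15:
        return "medium"
    if f2 >= 0.02:
        return "small"
    return "negligible"
```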
Term
|
Definition
Use R2 or shrunken R2; use the variance of estimate or the standard error; compute the standard error and confidence bands for individual predictions |
|
|
Term
|
Definition
unusual patterns of values on the variables |
|
|
Term
|
Definition
Extent to which the observation forces the equation to fit. |
|
|