Term
Explain why a t distribution is associated with n – 1 degrees of freedom and describe the information that is conveyed by the t statistic. |
|
Definition
The t distribution is associated with n-1 degrees of freedom because it uses the sample variance to estimate the standard error (how far a sample mean is expected to deviate from the population mean), and the sample variance is computed with n-1 in its denominator. As sample size increases, the sample variance becomes a closer estimate of the population variance and the tails of the t distribution approach the x-axis faster (the distribution approaches the normal distribution); thus each t distribution is associated with the same degrees of freedom as the sample variance: df = n-1. |
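A minimal Python sketch of this idea, using a small excerpt of commonly tabled two-tailed critical values at alpha = .05 (the values are standard table entries; the function name is just for illustration). As df grows, the critical values shrink toward the normal-distribution value of 1.96:

```python
# Excerpt of two-tailed critical values at alpha = .05, as commonly
# tabled (e.g., Table C.2). Keys are degrees of freedom (df = n - 1).
T_CRIT_05_TWO_TAILED = {5: 2.571, 10: 2.228, 30: 2.042, 120: 1.980}

def tails_shrink_toward_z(table, z=1.960):
    """Return True if critical values fall toward z as df increases,
    i.e., the t distribution's tails pull in toward the normal curve."""
    vals = [table[df] for df in sorted(table)]
    decreasing = all(a > b for a, b in zip(vals, vals[1:]))
    return decreasing and vals[-1] >= z
```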
|
|
Term
Calculate the degrees of freedom for a one-sample t test and locate critical values in the t table. |
|
Definition
1. Locate the sample size (n).
2. Compute the degrees of freedom: df = n-1.
3. Locate that df in Table C.2 in Appendix C and find the critical value at the intersection of the alpha level and df. |
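A sketch of the lookup in Python, using a hypothetical excerpt of Table C.2 (only a few alpha/df entries are hardcoded here for illustration):

```python
# Hypothetical excerpt of Table C.2: (alpha, df) -> two-tailed critical value.
T_TABLE = {(0.05, 9): 2.262, (0.05, 14): 2.145, (0.05, 29): 2.045}

def one_sample_df(n):
    """Degrees of freedom for a one-sample t test: df = n - 1."""
    return n - 1

def critical_value(alpha, n):
    """Look up the critical value at the given alpha level and df."""
    return T_TABLE[(alpha, one_sample_df(n))]
```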
|
|
Term
Identify the assumptions for the one-sample t test. |
|
Definition
1. Normality (we assume the population is normally distributed, meaning most scores fall close to the mean).
2. Random sampling (the sample was selected randomly, minimizing sampling error).
3. Independence (one outcome does not influence another; this holds because of the random sampling).
|
|
|
Term
Compute a one-sample t test and interpret the results. |
|
Definition
Step 1: State the hypotheses
- H0: μ equals the stated value
- H1: μ does not equal the stated value (two-tailed), or is greater/less than it (one-tailed)
Step 2: Set the criteria
- compute df = n-1
- locate df in Table C.2 in Appendix C
- if the test is two-tailed, the rejection region includes both the + and - critical values
- if the test is one-tailed, the rejection region includes either the + or the - critical value
Step 3: Compute the test statistic: tobt = (M-μ)/SM, where SM = s/√(n)
Step 4: Make a decision
- compare tobt to the critical value
- For a two-tailed test, if the tobt value exceeds the critical value, regardless of whether it is positive or negative, reject the null hypothesis.
- For a one-tailed test, if the mean is expected to be lower, the rejection region is in the negative tail; if it is expected to be higher, the rejection region is in the positive tail. |
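The four steps above can be sketched in Python with the standard library; the critical value is assumed to be looked up beforehand (e.g., from Table C.2), and the sample data are hypothetical:

```python
import math
from statistics import mean, stdev

def one_sample_t(sample, mu, crit):
    """Two-tailed one-sample t test: returns (t_obt, reject_null)."""
    n = len(sample)
    M = mean(sample)
    s = stdev(sample)            # sample standard deviation (n - 1 denominator)
    sm = s / math.sqrt(n)        # estimated standard error of the mean
    t_obt = (M - mu) / sm        # Step 3: compute the test statistic
    reject = abs(t_obt) > crit   # Step 4: compare against the critical value
    return t_obt, reject
```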
|
|
Term
Compute and interpret effect size and proportion of variance for a one-sample t test. |
|
Definition
When we decide to retain the null hypothesis, we conclude that an effect does not exist. When we decide to reject the null hypothesis, we conclude that an effect does exist; however, the test alone does not tell us how large the effect is.
- Cohen's d measures effect size: d = (M-μ)/s
- Eta squared measures proportion of variance: η² = t²/(t² + df)
- Omega squared is a less biased proportion-of-variance estimate: ω² = (t²-1)/(t² + df) |
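The three measures as a Python sketch, using the standard one-sample formulas listed above:

```python
def cohens_d(M, mu, s):
    """Effect size: mean difference measured in standard deviations."""
    return (M - mu) / s

def eta_squared(t, df):
    """Proportion of variance explained by the treatment."""
    return t ** 2 / (t ** 2 + df)

def omega_squared(t, df):
    """Less biased proportion-of-variance estimate."""
    return (t ** 2 - 1) / (t ** 2 + df)
```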
|
|
Term
Compute and interpret confidence intervals for the one-sample t test. |
|
Definition
Step 1: Compute the sample mean (M) and estimated standard error (SM).
Step 2: Choose the level of confidence and find the critical value at that level of confidence
- alpha = 1 minus the level of confidence (subtract the confidence % from 100%; e.g., 95% confidence gives alpha = .05)
- look up alpha and df = n-1 in Table C.2
Step 3: Compute the estimation formula, M ± t(SM), to find the confidence limits.
- Compute the upper confidence limit: M + t(SM)
- Compute the lower confidence limit: M - t(SM) |
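The estimation formula M ± t(SM) as a small Python helper; t_crit is assumed to come from Table C.2 at the chosen level of confidence:

```python
def confidence_interval(M, sm, t_crit):
    """Return (lower, upper) confidence limits: M -/+ t(SM)."""
    margin = t_crit * sm
    return M - margin, M + margin
```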
|
|
Term
Determine if a test is two-tailed or one tailed |
|
Definition
Step 1: Read the question.
Step 2: Rephrase the claim in the question with an equation.
- In sample question #1, Drop out rate = 25%
- In sample question #2, Drop out rate < 25%
- In sample question #3, Drop out rate > 25%.
Step 3: If step 2 has an equals sign in it, this is a two-tailed test. If it has > or < it is a one-tailed test. |
|
|
Term
Describe the between-subjects design, and identify two appropriate sampling methods used to select independent samples. |
|
Definition
Between-subjects design is a research design in which different participants are observed one time in each group or at each level of one factor. For example, we could observe learning outcomes during a study session held in a low-lit or well-lit room. In this example, lighting is the factor, and it has two levels: low lit and well lit. Each level of the factor constitutes a group.
Selecting independent samples includes:
1. The quasi-experimental method: a sample is selected from each of two preexisting populations (Population 1, Population 2).
2. The experimental method: a sample is selected from a single population, and participants are randomly assigned to Group 1 or Group 2. |
|
|
Term
Compute a two-independent-sample t test and interpret the results. |
|
Definition
Step 1: State the hypotheses
- H0: μ1 - μ2 = 0 (two-tailed), or μ1 - μ2 ≥ 0 / ≤ 0 (one-tailed)
- H1: μ1 - μ2 ≠ 0 (two-tailed), or μ1 - μ2 < 0 / > 0 (one-tailed)
Step 2: Set the criteria for a decision
- df for the two-independent-sample t test = df1 + df2 = (n1 - 1) + (n2 - 1)
- Look up the critical value in Table C.2
Step 3: Compute the test statistic
- tobt = ((M1-M2) - (μ1-μ2))/S(M1-M2)
Step 4: Make a decision
- compare tobt to the critical value; if tobt does not exceed the critical value, then the statistic does not fall in the rejection region and the null hypothesis is retained. |
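A Python sketch of the computation, using the pooled sample variance to estimate the standard error of the difference; the critical value is assumed to be supplied from Table C.2, and the data are hypothetical:

```python
import math
from statistics import mean, variance

def two_sample_t(x1, x2, crit, mu_diff=0.0):
    """Two-independent-sample t test: returns (t_obt, reject_null)."""
    n1, n2 = len(x1), len(x2)
    df1, df2 = n1 - 1, n2 - 1
    # Pooled variance weights each group's sample variance by its df.
    sp2 = (variance(x1) * df1 + variance(x2) * df2) / (df1 + df2)
    sm_diff = math.sqrt(sp2 / n1 + sp2 / n2)   # standard error of the difference
    t_obt = ((mean(x1) - mean(x2)) - mu_diff) / sm_diff
    return t_obt, abs(t_obt) > crit
```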
|
|
Term
Compute and interpret effect size and proportion of variance for a two-independent-sample t test. |
|
Definition
When we reject the null hypothesis, we conclude that an effect does exist in the population. When we retain the null hypothesis, we conclude that an effect does not exist in the population.
Effect size calculations:
Formulas provided. |
|
|
Term
Compute and interpret confidence intervals for a two-independent-sample t test. |
|
Definition
Step 1: Compute the mean difference (M1-M2) and the estimated standard error for the difference (S(M1-M2)).
Step 2: Choose the level of confidence.
- alpha = 1 minus the level of confidence (subtract the confidence % from 100%)
- look up the alpha value and df in Table C.2
- determine the critical value
Step 3: Compute the estimation formula.
-compute estimation formula
- compute upper confidence limit
- compute lower confidence limit. |
|
|
Term
Describe two types of research designs used when selecting related samples. |
|
Definition
Related-samples designs (a.k.a. dependent samples) are designs in which the participants are related.
1. Repeated-Measures Design -- participants are observed in more than one group, for example, the same participants are observed in different treatments. There are pre-post (dependent variable is measured before and after a treatment) and within-subject designs (researchers observe the same participants across many treatments but not necessarily before and after a treatment).
2. Matched Pairs Design (participants are matched based on common characteristics or traits). |
|
|
Term
State three advantages for selecting related samples. |
|
Definition
Note: most disadvantages of related samples pertain specifically to a repeated-measures design and not a matched-pairs design.
Advantages:
1. It is more practical. Example, it can be more practical to observe the behavior of the same participants before and after a treatment or to compare how well participants of similar ability master a task.
2. Selecting related samples reduces standard error. The value for the estimate of standard error for a related samples t-test will be smaller than that for a two-independent-sample t-test.
3. Selecting related samples increases power because of the reduction in standard error. |
|
|
Term
Calculate the degrees of freedom for a related-samples t test and locate critical values in the t table.
|
|
Definition
Step 1: Determine df = n-1
Step 2: Determine alpha value.
Step 3: Look up df and alpha value in table C.2
Step 4: Locate critical value
|
|
|
Term
Compute a related-samples t test and interpret the results. |
|
Definition
Related Sample t-test
Step 1: State the hypotheses
- H0: μD = 0
- H1: μD ≠ 0 (two-tailed), or μD < 0 or μD > 0 (one-tailed)
Step 2: Set the criteria
- df = nD - 1; look up df & alpha value in Table C.2.
- Determine the critical value
Step 3: Compute the test statistic
Formula provided.
- tobt = (MD - μD)/SMD
- MD = ΣD/nD (mean of the difference scores, where nD is the number of difference scores)
- μD = value stated by the null hypothesis = 0
- SMD = √(s²D/nD) (estimated standard error for difference scores)
- s²D = SSD/(nD - 1) (variance of the difference scores)
Step 4: Make a decision
Compare tobt to the critical value. We reject the null hypothesis if the obtained value exceeds the critical value. |
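A Python sketch of the related-samples computation on hypothetical pre/post scores; the critical value is assumed to come from Table C.2 at df = nD - 1:

```python
import math
from statistics import mean, stdev

def related_samples_t(pre, post, crit):
    """Related-samples t test on difference scores: returns (t_obt, reject_null)."""
    D = [b - a for a, b in zip(pre, post)]   # difference scores
    nD = len(D)
    MD = mean(D)                             # mean difference
    s_md = stdev(D) / math.sqrt(nD)          # estimated standard error for D
    t_obt = (MD - 0) / s_md                  # mu_D = 0 under the null hypothesis
    return t_obt, abs(t_obt) > crit
```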
|
|
Term
Identify the analysis of variance test and when it is used for tests of a single factor. |
|
Definition
The t tests are limited to testing up to two groups; testing more than two groups requires a new test statistic called analysis of variance, or ANOVA. For the one-factor ANOVA, the same participants can be tested multiple times (within-subjects) or different participants can be observed at each level (between-subjects). |
|
|
Term
Identify each source of variation in a one-way between-subjects ANOVA and a one-way within-subjects ANOVA. |
|
Definition
If the group means are equal, the variance of the group means is zero, meaning they do not vary. The larger the differences between group means, the larger the variance of the group means.
Sources of variation in a one-way between-subjects ANOVA:
-- between-groups variation: variance due to differences between group means (MSBG, the numerator of the F statistic)
-- within-groups (error) variation: variance that has nothing to do with having different groups but occurs by chance (MSE, the denominator)
Sources of variation in a one-way within-subjects ANOVA:
-- between-groups variation (MSBG, numerator), between-persons variation (which is removed from error), and within-groups (error) variation (MSE, denominator) |
|
|
Term
Calculate the degrees of freedom and locate critical values for a one-way between-subjects ANOVA and a one-way within-subjects ANOVA. |
|
Definition
Between-subjects ANOVA
1. Degrees of freedom
- numerator: dfBG = k-1
- denominator: dfE = N-k
2. Critical value
- locate dfBG and dfE in Table C.3
Within-subjects ANOVA
1. Degrees of freedom
- numerator: dfBG = k-1
- denominator: dfE = (k-1)(n-1)
2. Critical value
- locate dfBG and dfE in Table C.3 |
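The two df rules as small Python helpers (k = number of groups, N = total number of participants, n = participants per group):

```python
def between_subjects_df(k, N):
    """One-way between-subjects ANOVA: (numerator df, denominator df)."""
    return k - 1, N - k

def within_subjects_df(k, n):
    """One-way within-subjects ANOVA: (numerator df, error df)."""
    return k - 1, (k - 1) * (n - 1)
```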
|
|
Term
Identify the assumptions for a one-way between-subjects ANOVA and a one-way within-subjects ANOVA. |
|
Definition
One-way between subjects test Assumptions
1. Normality -- normal distribution
2. Random Sampling -- the data is obtained from a sample that was selected using a random sampling procedure
3. Independence -- the outcomes of the study are independent
4. Homogeneity -- we assume that the variance in each population is equal to that of the others.
One-way within-subjects test assumptions
1. Normality
2. Independence (within groups)
3. Homogeneity of variance -- the variance in each population is equal to that in the others.
4. Homogeneity of covariance -- participants' scores in each group are related |
|
|
Term
Follow the steps to compute a one-way between-subjects ANOVA and a one-way within-subjects ANOVA, and interpret the results. |
|
Definition
Steps for the Between-Subjects Test
Step 1: State the hypotheses
H0: σ²μ = 0 (all group means are equal)
H1: σ²μ > 0 (at least one group mean differs)
Step 2: Set the criteria for a decision
dfBG = k-1 -- numerator
dfE = N-k -- denominator
dfT = N-1
Locate dfBG & dfE in Table C.3 for the critical value.
Step 3: Compute the test statistic
Stage 1: preliminary calculations
k = number of groups or levels of the factor
n = number of participants per group
N = number of participants overall
ΣxT = sum of all scores in the study
Σx²T = sum of all scores individually squared in the study
Stage 2: intermediate values
[1] (ΣxT)²/N
[2] Σ((Σx)²/n), summing over groups
[3] Σx²T
Stage 3: sums of squares
SSBG = [2]-[1]
SST = [3]-[1]
SSE = SST-SSBG
Stage 4: mean squares and F (MSBG = SSBG/dfBG, MSE = SSE/dfE, Fobt = MSBG/MSE)
Step 4: Make a decision
- When Fobt < the critical value, retain the null hypothesis
- When Fobt > the critical value, reject the null hypothesis
Steps for the Within-Subjects Test
Step 1: State the hypotheses
H0: σ²μ = 0
H1: σ²μ > 0
Step 2: Set the criteria for a decision
dfBG = k-1 -- numerator
dfBP = n-1
dfE = (k-1)(n-1) -- denominator
dfT = kn-1
Locate dfBG & dfE in Table C.3 for the critical value.
Step 3: Compute the test statistic
Stage 1: preliminary calculations
k = number of groups or levels of the factor
n = number of participants per group
ΣxT = sum of all scores in the study
Σp = sum of scores for each person across groups
Σx²T = sum of all scores individually squared in the study
Stage 2: intermediate values
[1] (ΣxT)²/(kn)
[2] Σ((Σx)²/n), summing over groups
[3] Σx²T
[4] Σ((Σp)²/k), summing over persons
Stage 3: sums of squares
SSBG = [2]-[1]
SSBP = [4]-[1]
SST = [3]-[1]
SSE = SST-SSBG-SSBP
Stage 4: mean squares and F (MSBG = SSBG/dfBG, MSE = SSE/dfE, Fobt = MSBG/MSE)
Step 4: Make a decision
- When Fobt < the critical value, retain the null hypothesis
- When Fobt > the critical value, reject the null hypothesis |
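Stages 1-4 for the between-subjects F can be sketched in Python; this assumes equal group sizes, and the group data are hypothetical:

```python
def one_way_between_anova(groups):
    """One-way between-subjects ANOVA using the [1]-[3] intermediate terms.
    Returns F_obt. Assumes each group has the same number of participants."""
    k = len(groups)                    # number of groups
    n = len(groups[0])                 # participants per group (equal sizes)
    N = k * n                          # total participants
    all_scores = [x for g in groups for x in g]
    term1 = sum(all_scores) ** 2 / N                 # [1] correction factor
    term2 = sum(sum(g) ** 2 / n for g in groups)     # [2] group totals squared / n
    term3 = sum(x ** 2 for x in all_scores)          # [3] each score squared
    ss_bg = term2 - term1              # between-groups sum of squares
    ss_t = term3 - term1               # total sum of squares
    ss_e = ss_t - ss_bg                # error (within-groups) sum of squares
    ms_bg = ss_bg / (k - 1)            # numerator mean square
    ms_e = ss_e / (N - k)              # denominator mean square
    return ms_bg / ms_e
```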
|
|
Term
Compute and interpret Tukey’s HSD post hoc test and identify the most powerful post hoc test alternatives. |
|
Definition
Step 1: Compute the test statistic for each pairwise comparison
- formula provided.
Step 2: Compute the critical value for each pairwise comparison
- compute Tukey's HSD: qα√(MSE/n)
- q is the studentized range statistic -- found in Table C.4
- to find q we need to know dfE and the real range r (equal to the number of groups).
- dfE formula provided.
Step 3: Make a decision to retain or reject the null hypothesis for each pairwise comparison
- if the test statistic is larger than the critical value we computed, then the two groups are significantly different. |
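A Python sketch of the HSD decision rule stated above; q, MSE, and n are assumed to be already computed or looked up (q from Table C.4), and the function names are illustrative:

```python
import math

def tukey_hsd(q, ms_e, n):
    """Critical difference: pairs of means farther apart than this differ significantly."""
    return q * math.sqrt(ms_e / n)

def significant_pairs(means, q, ms_e, n):
    """Return index pairs (i, j) whose mean difference exceeds the HSD."""
    hsd = tukey_hsd(q, ms_e, n)
    return [(i, j)
            for i in range(len(means))
            for j in range(i + 1, len(means))
            if abs(means[i] - means[j]) > hsd]
```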
|
|
Term
Compute and interpret proportion of variance for the one-way between-subjects ANOVA and the one-way within-subjects ANOVA |
|
Definition
Proportion of variance means effect size.
Eta squared provided
Omega squared provided |
|
|
Term
Summarize the results of the one-way between-subjects ANOVA and the one-way within-subjects ANOVA in APA format. |
|
Definition
Summarize results of a one-way ANOVA test
Report the test statistic, the degrees of freedom, and the p value, and additionally the effect size for significant analyses. Summarize the means and the standard error or standard deviations measured in the study in a figure, a table, or the main text of the article. To report the results of a post hoc test, identify which post hoc test was computed and the p value for significant results. |
|
|
Term
Define and explain the following terms: cell, main effect, and interaction. |
|
Definition
Main effect -- a source of variation associated with mean differences across the levels of a single factor. In the two-way ANOVA, there are two factors and therefore two main effects: one for Factor A and one for Factor B.
Interaction -- is a source of variation associated with the variance of group means across the combination of levels of two factors. It is a measure of how cell means at each level of one-factor change across the levels of a second factor.
Cell -- is the combination of one level from each factor, as represented in a cross-tabulation. Each cell is a group in a research study. |
|
|
Term
Calculate the degrees of freedom for the two-way between-subjects ANOVA and locate critical values in the F table. |
|
Definition
Critical values two-way between-subjects ANOVA
1. Numerator: Factor A degrees of freedom, dfA = p-1 (p is the number of levels of Factor A)
2. Factor B degrees of freedom, dfB = q-1 (q is the number of levels of Factor B)
3. A x B degrees of freedom, dfAxB = (p-1)(q-1)
4. Denominator: degrees of freedom error, dfE = pq(n-1)
5. Total degrees of freedom, dfT = npq-1
Locate the critical value in Table C.3. |
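The five df formulas as one Python helper (p and q are the numbers of levels of Factors A and B; n is the number of participants per cell). Note that dfA + dfB + dfAxB + dfE always equals dfT:

```python
def two_way_df(p, q, n):
    """Degrees of freedom for a two-way between-subjects ANOVA."""
    return {
        "A": p - 1,                  # Factor A (numerator)
        "B": q - 1,                  # Factor B (numerator)
        "AxB": (p - 1) * (q - 1),    # interaction (numerator)
        "error": p * q * (n - 1),    # denominator
        "total": n * p * q - 1,
    }
```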
|
|
Term
Compute and interpret the Pearson correlation coefficient and the coefficient of determination, and test for significance. |
|
Definition
Pearson correlation coefficient = r (formula provided)
Coefficient of determination = r² = η²
Test for significance = hypothesis testing
Step 1: State the hypotheses
H0: ρ = 0
H1: ρ ≠ 0
Step 2: Set the criteria for a decision
- determine the alpha level
- df = n-2
- locate the critical value in Table C.5
Step 3: Compute the test statistic
- compute r (formula provided).
Step 4: Make a decision
- if |r| is greater than the critical value, reject the null hypothesis |
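A Python sketch of computing r from the definitional sums (SP, SSX, SSY) and then applying the Step 4 decision rule; the critical value is assumed to come from Table C.5 at df = n - 2:

```python
import math

def pearson_r(x, y):
    """Pearson r = SP / sqrt(SSx * SSy)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sp = sum((a - mx) * (b - my) for a, b in zip(x, y))   # sum of products
    ssx = sum((a - mx) ** 2 for a in x)
    ssy = sum((b - my) ** 2 for b in y)
    return sp / math.sqrt(ssx * ssy)

def r_significant(r, crit):
    """Reject the null hypothesis if |r| exceeds the critical value."""
    return abs(r) > crit
```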
|
|
Term
Identify and explain three assumptions and three limitations for evaluating a correlation coefficient. |
|
Definition
1. Homoscedasticity - constant variance among data points, assume equal variance of data points dispersed along the regression line.
2. Linearity -- assumption that the best way to describe a pattern of data is using a straight line.
3. Normality -- assumption that the data points are normally distributed.
4. Causality -- correlation does not equal causation; it merely shows the direction and the strength of the relationship between two factors. Other possibilities: changes could be due to a third variable, reverse causality, or a systematic cause and effect.
5. Outliers -- can obscure the relationship between two factors by altering the direction and the strength of an observed correlation.
6. Restriction of Range -- is a problem that arises when the range of data for one or both correlated factors in a sample is limited or restricted, compared to the range of data in the population from which the sample was selected. |
|
|
Term
Delineate the use of the Spearman, point-biserial, and phi correlation coefficients. |
|
Definition
1. Spearman -- rs, is a measure of the direction and strength of the linear relationship of two ranked factors on an ordinal scale of measurement.
2. Point-biserial -- rpb, is a measure of the direction and strength of the linear relationship of one factor that is continuous on an interval or ratio scale of measurement and a second factor that is dichotomous on a nominal scale of measurement.
3. Phi correlation coefficient -- rφ, is a measure of the direction and strength of the linear relationship of two dichotomous factors on a nominal scale of measurement. |
|
|
Term
Distinguish between a predictor variable and a criterion variable. |
|
Definition
Predictor variable (X) is the variable with values that are known and can be used to predict the values of another variable.
Criterion variable -- (Y) is the variable with unknown values that can be predicted or estimated, given known values of the predictor variable. |
|
|
Term
Compute and interpret the method of least squares. |
|
Definition
The method of least squares is a statistical procedure used to compute the slope (b) and y-intercept (a) of the best-fitting straight line to a set of data points.
Equation provided.
Step 1: Compute preliminary calculations
Step 2: Calculate the slope (b)
Step 3: Calculate the y-intercept (a). |
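The three steps as a Python sketch, using the standard definitional formulas (b = SP/SSX, a = MY - b·MX):

```python
def least_squares(x, y):
    """Return (slope b, intercept a) of the best-fitting line y = bx + a."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n                       # Step 1: means
    sp = sum((a - mx) * (b - my) for a, b in zip(x, y))   # Step 1: sum of products
    ssx = sum((a - mx) ** 2 for a in x)                   # Step 1: SS for x
    b = sp / ssx                                          # Step 2: slope
    a = my - b * mx                                       # Step 3: y-intercept
    return b, a
```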
|
|
Term
Identify each source of variation in an analysis of regression, and compute an analysis of regression and interpret the results. |
|
Definition
Sources of variation:
regression variation -- is the variance in Y that is related to or associated with changes in X. The closer data points fall to the regression line, the larger the value of regression variation.
residual variation -- is the variance in Y that is not related to changes in X. This is the variance in Y that is left over or remaining. The farther data points fall from the regression line, the larger the value of residual variation.
Analysis of Regression
Step 1: State the hypotheses:
H0: the variance in Y is not related to changes in X
H1: the variance in Y is related to changes in X.
Step 2: Set the criteria
- set the alpha level
- dfreg = number of predictor variables -- numerator
- dfres = n-2 -- denominator
- look up the critical value in Table C.3
Step 3: Compute test statistic
- formula provided
Step 4: Make a decision
- compare Fobt to the critical value; if Fobt exceeds the critical value, then reject the null hypothesis. |
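A Python sketch of the analysis of regression for one predictor (so dfreg = 1 and dfres = n - 2); the critical value is assumed to come from Table C.3, and the data are hypothetical:

```python
def regression_f(x, y, crit):
    """Analysis of regression: F = MS_regression / MS_residual.
    Returns (F_obt, reject_null)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sp = sum((a - mx) * (b - my) for a, b in zip(x, y))
    ssx = sum((a - mx) ** 2 for a in x)
    ssy = sum((b - my) ** 2 for b in y)
    ss_reg = sp ** 2 / ssx            # regression variation: Y related to X
    ss_res = ssy - ss_reg             # residual variation: leftover variance in Y
    f_obt = (ss_reg / 1) / (ss_res / (n - 2))
    return f_obt, f_obt > crit
```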
|
|