Term
One-sample t-test

Definition
Application: when a mean from one sample needs to be compared to a population mean obtained from literature
Assumptions:
-Dependent variable is normally distributed
-Data points are independent of each other
-Dependent variable must be ordinal, interval, or ratio level

Term
Calculating the test statistic

Definition
t = [mean (from sample) - population mean] / standard error of the mean
If the computed t value is greater than the critical t value, reject the null hypothesis

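A minimal sketch of this calculation on hypothetical data, checked against scipy.stats.ttest_1samp:

```python
import numpy as np
from scipy import stats

sample = np.array([12.1, 11.4, 13.2, 12.8, 10.9, 12.5])  # hypothetical data
pop_mean = 11.0                                           # from the literature

# t = (sample mean - population mean) / standard error of the mean
se = sample.std(ddof=1) / np.sqrt(sample.size)
t_manual = (sample.mean() - pop_mean) / se

t_scipy, p = stats.ttest_1samp(sample, pop_mean)
print(t_manual, t_scipy, p)  # the two t values should match
```
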
Term
Confidence interval calculation

Definition
Difference in means +/- (t score x SE)

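Continuing with hypothetical numbers, the critical t score comes from the t distribution with n - 1 degrees of freedom:

```python
import numpy as np
from scipy import stats

sample = np.array([12.1, 11.4, 13.2, 12.8, 10.9, 12.5])  # hypothetical data
pop_mean = 11.0

diff = sample.mean() - pop_mean
se = sample.std(ddof=1) / np.sqrt(sample.size)
t_crit = stats.t.ppf(0.975, df=sample.size - 1)  # two-sided 95% critical t

ci = (diff - t_crit * se, diff + t_crit * se)
print(ci)  # if the interval excludes 0, reject the null hypothesis
```
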
Term
Independent sample t-test

Definition
Applications: when a mean from one sample/group needs to be compared to a mean from a different sample/group
Assumptions:
-Dependent variable is normally distributed
-Data points are independent of each other
-Dependent variable must be ordinal, interval, or ratio level
-The variances of the two comparison groups must be equal (only if the sample sizes of the 2 groups are different)
-Not more than 2 groups/levels

Term
What if the assumption of equality of variances is violated?

Definition
If the variance in the two groups is equal, the denominator in the formula for the t-statistic uses the pooled variance from the 2 groups
If the variance in the 2 groups is not equal, the formula for the t-statistic does not pool variance from the 2 groups (use the modified formula, i.e., Welch's t-test)

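A minimal sketch of both cases on hypothetical data; scipy.stats.ttest_ind's equal_var flag switches between the pooled and unpooled (Welch) formulas:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_a = rng.normal(10, 2, size=30)  # hypothetical group with SD ~2
group_b = rng.normal(12, 5, size=45)  # hypothetical group with larger SD

# Pooled-variance t-test (assumes equal variances)
t_pooled, p_pooled = stats.ttest_ind(group_a, group_b, equal_var=True)

# Welch's t-test (does not pool the variances)
t_welch, p_welch = stats.ttest_ind(group_a, group_b, equal_var=False)

print(f"pooled: t={t_pooled:.3f}, p={p_pooled:.4f}")
print(f"Welch:  t={t_welch:.3f}, p={p_welch:.4f}")
```
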
Term
Paired sample t-test

Definition
Applications:
-Change in scores or clinical levels after an intervention (pre- & post-studies)
-Comparing individually matched samples
Assumptions:
-Dependent variable is normally distributed
-Data points are independent of each other
-Dependent variable must be ordinal, interval, or ratio level
-Not more than 2 groups or time points

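A minimal sketch with hypothetical pre/post measurements, using scipy.stats.ttest_rel:

```python
import numpy as np
from scipy import stats

# Hypothetical pre/post scores for the same 8 patients
pre  = np.array([140, 152, 138, 147, 160, 155, 149, 151])
post = np.array([132, 148, 135, 140, 151, 150, 145, 146])

t, p = stats.ttest_rel(pre, post)  # paired comparison of the two time points
print(t, p)
```
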
Term
Analysis of variance (ANOVA)

Definition
An extension of the t-test
A way to test for differences between means of:
-More than 2 groups
-More than 1 independent variable
-More than 2 time periods of follow-up

Term
One-way ANOVA

Definition
Used when there is a single independent variable and the dependent variable is continuous
Applications: comparison of means of two or more groups
Assumptions:
-Dependent variable must be ordinal, interval, or ratio level
-Dependent variable is normally distributed
-Independent variable is categorical
-Data points are independent of each other
-The variances of the comparison groups MUST BE EQUAL

Term
Factorial ANOVA

Definition
Designs involving more than one independent variable
-A single design becomes an efficient way of examining the effects of 2 independent variables
-Aids in the interpretation of both main and interaction effects

Term
Repeated-measures ANOVA

Definition
Used when you have to assess measurements over time with more than 2 time periods
-Can be considered as a factorial ANOVA where one independent variable is type of drug and the other independent variable is time

Term
Sources of variability in ANOVA

Definition
Total variability in the data = MS total
Variability between the different groups = MS between
-Represents the spread of group means around the grand mean
Variability within the groups = MS within
-Represents the spread of scores within each group around the group mean

Term
The F statistic

Definition
F = (MS between)/(MS within)
If the null hypothesis is true and there is no effect, F will fluctuate around 1
If the null hypothesis is not true and there is an effect, large positive values of F are expected

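A sketch with made-up groups showing how the MS terms from the previous two cards combine into F, checked against scipy.stats.f_oneway:

```python
import numpy as np
from scipy import stats

groups = [np.array([4.0, 5.0, 6.0]),
          np.array([7.0, 8.0, 9.0]),
          np.array([1.0, 2.0, 3.0])]  # hypothetical data

all_scores = np.concatenate(groups)
grand_mean = all_scores.mean()
k = len(groups)                       # number of groups
n_total = all_scores.size

# Between-groups: spread of group means around the grand mean
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ms_between = ss_between / (k - 1)

# Within-groups: spread of scores around their own group mean
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
ms_within = ss_within / (n_total - k)

f_manual = ms_between / ms_within
f_scipy, p = stats.f_oneway(*groups)
print(f_manual, f_scipy, p)           # the two F values should match
```
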
Term
What if the assumption of equality of variances is violated?

Definition
-If the variances in the comparison groups are equal, use the F test from the ANOVA table
-If the variances in the comparison groups are not equal, use a modified formula (Welch/Brown-Forsythe)

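scipy has no built-in Welch ANOVA, so this minimal sketch (hypothetical data) only shows the preliminary check with scipy.stats.levene, whose null hypothesis is that the group variances are equal:

```python
from scipy import stats

# Hypothetical groups
g1 = [23, 25, 28, 30, 26]
g2 = [31, 35, 29, 40, 22]
g3 = [18, 19, 21, 20, 22]

# Levene's test: H0 = the group variances are equal
stat, p = stats.levene(g1, g2, g3)
if p < 0.05:
    print("Variances differ; use a modified formula (e.g., Welch)")
else:
    print("Equal-variance assumption holds; use the standard F test")
```
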
Term
ANOVA as an omnibus test

Definition
-ANOVA is an omnibus test
-The rejection of the null hypothesis in a one-way ANOVA simply tells us that the population means are not all equal
-It doesn't say which ones are different

Term
Per Comparison Error (PC)

Definition
The level of significance used for a single comparison, normally set at 0.05

Term
Familywise Error (FWE)

Definition
Based on the number of post-hoc comparisons
FWE = 1 - (1 - a)^c
c = # of comparisons made
a = your Per Comparison error

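A quick sketch of how fast the familywise error grows with the number of comparisons:

```python
# Familywise error for c post-hoc comparisons at per-comparison alpha
def familywise_error(alpha: float, c: int) -> float:
    return 1 - (1 - alpha) ** c

# e.g., 3 pairwise comparisons at alpha = 0.05
print(familywise_error(0.05, 3))  # ~0.1426, well above 0.05
```
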
Term
Non-parametric tests

Definition
Used when the data are skewed
Typically for ordinal data (Likert scales)
Don't always use these because:
-They are less powerful/sensitive (higher chance of Type II error)
-Interpreting results is not as intuitive
-Cannot be used for complex analytical designs

Term
Diagnosing non-normal distribution

Definition
-Visual detection
-Statistical tests (Kolmogorov-Smirnov): if p is less than 0.05, the null hypothesis that "this distribution is not different from a normal distribution" is rejected, and we conclude that the distribution is not normal

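A minimal sketch on hypothetical skewed data; note that estimating the mean and SD from the same sample makes the standard K-S p-value only approximate:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
x = rng.exponential(scale=2.0, size=200)  # hypothetical skewed data

# K-S test against a normal distribution with the sample's mean/SD
stat, p = stats.kstest(x, "norm", args=(x.mean(), x.std()))
if p < 0.05:
    print("Reject H0: distribution differs from normal")
else:
    print("No evidence of non-normality")
```
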
Term
Mann-Whitney U Test/Wilcoxon Rank Sum Test

Definition
Applications: when a mean from one sample/group needs to be compared to a mean from a different sample/group
-Same as independent samples t-test
Assumptions:
-Data points are independent of each other
-Dependent variable must be ordinal, interval, or ratio level
-Not more than 2 groups/levels
If the computed test value is greater than the critical value, reject the null hypothesis
If the p value is less than 0.05, reject the null hypothesis

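A minimal sketch on hypothetical ordinal scores from two independent groups:

```python
from scipy import stats

# Hypothetical ordinal scores (e.g., Likert responses) from two groups
group_a = [3, 4, 2, 5, 4, 3, 4]
group_b = [2, 1, 3, 2, 2, 3, 1]

u, p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
print(u, p)  # if p < 0.05, reject the null hypothesis
```
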
Term
Kruskal-Wallis test

Definition
Applications: comparison of values across two or more groups
-Same as one-way ANOVA
Assumptions:
-Dependent variable must be ordinal
-Independent variable is categorical
-Data points are independent of each other
Post-hoc tests are not available in SPSS

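A minimal sketch on hypothetical ordinal scores from three groups; like ANOVA, it is an omnibus test:

```python
from scipy import stats

# Hypothetical ordinal scores from three groups
g1 = [3, 4, 2, 5, 4]
g2 = [2, 1, 3, 2, 2]
g3 = [5, 5, 4, 4, 5]

h, p = stats.kruskal(g1, g2, g3)
print(h, p)  # omnibus result only; pairwise follow-ups need a correction
```
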
Term
Bonferroni correction

Definition
Divide 0.05 by the number of comparisons
# of comparisons = k(k-1)/2

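A one-function sketch of the corrected threshold for all pairwise comparisons among k groups:

```python
# Bonferroni-corrected alpha for all pairwise comparisons among k groups
def bonferroni_alpha(alpha: float, k: int) -> float:
    n_comparisons = k * (k - 1) // 2  # k(k-1)/2 pairwise comparisons
    return alpha / n_comparisons

print(bonferroni_alpha(0.05, 4))  # 6 comparisons -> 0.00833...
```
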
Term
Wilcoxon Signed-Ranks test

Definition
Applications:
-Change in scores or clinical levels after an intervention (pre-post studies)
-Comparing individually matched samples
-Same as paired sample t-test
Assumptions:
-Data points are independent of each other
-Dependent variable must be ordinal, interval, or ratio level
-Not more than 2 groups or time points

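A minimal sketch with hypothetical pre/post ordinal scores for the same patients:

```python
from scipy import stats

# Hypothetical pre/post ordinal scores for the same 7 patients
pre  = [4, 5, 3, 4, 5, 4, 3]
post = [3, 3, 2, 2, 4, 1, 2]

w, p = stats.wilcoxon(pre, post)  # tests the paired differences
print(w, p)
```
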
Term
Summary of non-parametric tests

Definition
-When data violate the assumptions of parametric tests, we can sometimes find a non-parametric equivalent

Term
Analysis of categorical data

Definition
-When both the independent and dependent variable are measured at the nominal/categorical level
-The data consist of frequencies
-Typical statistical tests cannot be applied as we are dealing with percentages/proportions rather than means/medians
-Use a separate class of non-parametric tests: X^2 tests

Term
Types of Chi-squared tests (X^2)

Definition
-Binomial
-Test of association
-McNemar's Test

Term
Binomial test

Definition
Applications:
-When a proportion from one sample needs to be compared to a proportion from a population
Assumptions:
-Data points are independent of each other
-The data must be binomial (not more than 2 categories)
Steps:
1. Draw the cells
2. Fill in the observed frequencies based on your study sample
3. Fill in the expected frequencies. The expected frequencies are always based on your null hypothesis being true.

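A minimal sketch, assuming hypothetical counts (34 adherent out of 50 vs. a population proportion of 0.50) and scipy >= 1.7, where binomtest replaced the older binom_test:

```python
from scipy import stats

# Hypothetical example: 34 of 50 patients adherent vs. a
# literature/population adherence proportion of 0.50
result = stats.binomtest(k=34, n=50, p=0.50)
print(result.pvalue)  # if p < 0.05, the sample proportion differs from 0.50
```
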
Term
Calculation for Chi-squared

Definition
X^2 = sum of (O-E)^2/E
-Calculate (O-E)^2/E for both groups (adherent and non-adherent) and add them

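A minimal sketch with hypothetical adherent/non-adherent counts, checked against scipy.stats.chisquare:

```python
import numpy as np
from scipy import stats

observed = np.array([34, 16])  # hypothetical adherent / non-adherent counts
expected = np.array([25, 25])  # expected under H0 (e.g., a 50/50 split)

# Manual sum of (O - E)^2 / E across both groups
x2_manual = (((observed - expected) ** 2) / expected).sum()

# Same statistic from scipy
x2_scipy, p = stats.chisquare(f_obs=observed, f_exp=expected)
print(x2_manual, x2_scipy, p)
```
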
Term
Chi-squared test of association

Definition
Applications: to examine the association/lack of association between categorical variables
Assumptions:
-Data points are independent of each other
-Both the independent and dependent variables must be categorical and measured at the nominal/ordinal level
-The categories must be exhaustive and mutually exclusive

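A minimal sketch on a hypothetical 2x2 table using scipy.stats.chi2_contingency (which also returns the expected frequencies):

```python
import numpy as np
from scipy import stats

# Hypothetical 2x2 table: rows = exposure, columns = outcome
table = np.array([[30, 20],
                  [15, 35]])

chi2, p, dof, expected = stats.chi2_contingency(table)
print(chi2, p, dof)  # if p < 0.05, the variables are associated
```
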
Term
Coefficients of Association

Definition
Chi-squared does a statistical test to examine the presence or absence of association
The strength of association is not assessed
Relative risk & odds ratio are typically used for this

Term
McNemar's Test

Definition
Applications:
-Change in value/category after an intervention
-Comparing individually matched samples
Assumptions:
-Data points are independent of each other
-The dependent and the independent variables must be nominal/categorical
Degrees of freedom = (# of rows - 1)(# of columns - 1)

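A minimal sketch of the statistic on a hypothetical paired 2x2 table; the continuity-corrected formula used here is one common variant (statsmodels also offers an implementation):

```python
from scipy import stats

# Hypothetical paired 2x2 table of pre/post categories:
#               post: yes   post: no
# pre: yes          a=20       b=5
# pre: no           c=15       d=10
b, c = 5, 15  # the discordant pairs drive the test

# McNemar's statistic with continuity correction, df = 1
x2 = (abs(b - c) - 1) ** 2 / (b + c)
p = stats.chi2.sf(x2, df=1)
print(x2, p)
```
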
Term
Confidence interval calculations for RR & OR

Definition
95% CI = RR^[1 +/- (1.96/square root of Chi-squared)]
(the bracketed quantity is an exponent: the test-based method raises RR to the power 1 +/- 1.96/sqrt(X^2))
Same for OR, just substitute OR where RR is in the equation

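A quick numeric sketch of the test-based interval, assuming hypothetical values for RR and the X^2 statistic:

```python
import math

# Test-based 95% CI for a relative risk, using hypothetical values:
# RR = 2.0 with a chi-squared statistic of 6.48
rr, chi2 = 2.0, 6.48

exponent = 1.96 / math.sqrt(chi2)
lower = rr ** (1 - exponent)
upper = rr ** (1 + exponent)
print(lower, upper)  # substitute an OR for rr to get the OR interval
```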