Term
Philosophy and Theory in Research: Science |
|
Definition
A method of inquiry; a way of learning and knowing things about the world around us. |
|
|
Term
Non-scientific sources of knowledge: Tradition |
|
Definition
Inherited knowledge passed down through a specific culture; these are the things that “everybody knows.” |
|
|
Term
Non-scientific sources of knowledge: Authority |
Definition
Knowledge gained by listening to experts and social or professional figures with high status (e.g., doctors, teachers, parents). |
|
|
Term
Non-scientific sources of knowledge: Common sense |
Definition
Widely shared beliefs based on tradition, authority, or consensus within a society. |
|
|
Term
Non-scientific sources of knowledge: The media |
Definition
This includes information from newspapers, television, the Internet, etc. |
|
|
Term
The scientific method |
Definition
An approach to inquiry that attempts to safeguard against the errors commonly made in casual human inquiry. |
|
|
Term
Chief features of the scientific method include: |
|
Definition
1. Viewing all knowledge as provisional (i.e., serving for the time being) and subject to refutation. 2. Searching for evidence based on systematic and comprehensive observation. 3. Pursuing objectivity in observation. 4. Replicating studies. |
|
|
Term
Evidence-based practice |
Definition
The ultimate expression of scientific inquiry in social work research: using the best scientific evidence available in deciding how to intervene with individuals, families, groups, or communities. |
|
|
Term
Paradigm |
Definition
A fundamental model or frame of reference that shapes our observations and understandings. |
|
|
Term
Two paradigms at the center of the debate over how best to conduct research in social work and the social sciences are called |
|
Definition
positivism and interpretivism |
|
|
Term
Positivism |
Definition
A paradigm introduced by Auguste Comte that held that social behavior could be studied and understood in a rational, scientific manner – in contrast to explanations based in religion or superstition (i.e., a modern rather than pre-modern interpretation). Asserts that only verifiable claims based directly on experience could be considered genuine knowledge. |
|
|
Term
Positivists |
Definition
Emphasize quantitative research method characteristics, such as objectivity, precision, and generalizability, in research. |
|
|
Term
Interpretivism |
Definition
The interpretivist paradigm emphasizes gaining an empathic understanding of how people feel inside, how they interpret their everyday experiences, and what idiosyncratic reasons they may have for their behaviors |
|
|
Term
The characteristics of qualitative research methods are more in accord with |
|
Definition
the interpretivist paradigm. |
|
Term
|
Definition
Recognizes that discretionary judgment is unavoidable in science, that proving causality with certainty in explaining social phenomena is problematic, that knowledge is embedded in historically specific paradigms and is therefore relative rather than absolute, and that all methods are imperfect, so multiple methods, both qualitative and quantitative, are needed to generate and test theory, improve understanding over time of how the world operates, and support informed policymaking and social program decision making. |
|
|
Term
Theory |
Definition
A systematic set of interrelated statements intended to explain some aspect of social life. |
|
|
Term
Two types of scientific methods distinguished primarily by the ways in which they use theory in research |
|
Definition
The deductive method and the inductive method. |
|
Term
Deductive method |
Definition
The logical model in which specific expectations of hypotheses are developed on the basis of general principles. |
|
|
Term
Inductive method |
Definition
The logical model in which general principles are developed from specific observations. |
|
|
Term
Types of social welfare research: |
|
Definition
Descriptive, explanatory, evaluative, and exploratory. |
|
|
Term
Descriptive research |
Definition
Descriptive research involves measuring, counting, or describing a phenomenon. |
|
|
Term
Explanatory research |
Definition
Explanatory research attempts to find correlational or causal relationships – what effect does one factor have on another? |
|
|
Term
Evaluative research |
Definition
Evaluative research is often considered a subtype of explanatory research. Evaluative research attempts to study the effects of a specific intervention or program. |
|
|
Term
Exploratory research |
Definition
Exploratory research attempts to better understand a problem, population, or phenomenon when no or very little information exists. |
|
|
Term
|
Definition
are a type of exploratory research. |
|
|
Term
Conceptualization and Operationalization |
|
Definition
together represent the process of moving from vague ideas about what you want to study to being able to recognize and measure what you want to study. |
|
|
Term
Conceptualization |
Definition
The process of refining and specifying abstract concepts. The process through which we specify precisely what we will mean when we use particular terms. |
|
|
Term
Variable |
Definition
A specific concept or theoretical construct under investigation. A variable comprises more than one attribute, or value, and thus is capable of varying. If it is not capable of varying, it is a constant. A variable must also be translatable into observable terms. |
|
|
Term
|
Definition
The concepts that make up a broader concept or variable. |
|
|
Term
Independent variable |
Definition
A variable postulated to explain another variable. |
|
|
Term
Also known as a predictor or explanatory variable. |
|
Definition
Independent variable |
|
Term
Dependent variable |
Definition
The variable being explained |
|
|
Term
Also known as the outcome or response variable |
|
Definition
Dependent variable |
|
Term
Extraneous variable |
Definition
A third variable that affects or explains the relationship between the independent and dependent variables. Extraneous variables represent alternative explanations for relationships that are observed between independent and dependent variables. |
|
|
Term
Also known as a confounding variable. |
|
Definition
Extraneous variable |
|
Term
Control variable |
Definition
Extraneous variable that is controlled, or adjusted, for in our research design Controlling for a variable means taking each level of that control variable separately and looking at how the original relationship between the independent and dependent variable changes or remains the same within each level of the control variable |
|
|
Term
Hypothesis and types of relationships |
Definition
Hypothesis – A statement that postulates the relationship between the independent and dependent variables. Hypotheses are tentative statements that predict what we expect to find about the way our variables co-vary together. Positive – As the independent variable increases (or decreases), the dependent variable also increases (or decreases). Negative – The independent and dependent variables move in opposite directions. Curvilinear – The relationship changes at certain levels of the variables. Non-directional – The direction of the relationship is not declared. Null – There is no relationship between the variables. (A small correlation sketch follows this card.) |
|
|
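The direction of a hypothesized relationship can be illustrated with a quick calculation. The sketch below is not part of the original cards; all variable names and numbers are made up. It computes a Pearson correlation for two toy data sets: one where the dependent variable rises with the independent variable (positive relationship) and one where it falls (negative relationship).

```python
# Illustrative only: toy data showing positive vs. negative co-variation.
def pearson_r(x, y):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
    sd_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return cov / (sd_x * sd_y)

hours_of_service = [2, 4, 6, 8, 10]       # hypothetical independent variable
wellbeing_score = [50, 55, 61, 64, 70]    # rises with the IV -> positive relationship
symptom_count = [12, 10, 9, 7, 5]         # falls as the IV rises -> negative relationship

print(pearson_r(hours_of_service, wellbeing_score))  # positive, close to +1
print(pearson_r(hours_of_service, symptom_count))    # negative, close to -1
```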
Term
Hypothesis |
Definition
A statement that postulates the relationship between the independent and dependent variables. Hypotheses are tentative statements that predict what we expect to find about the way our variables co-vary together. |
|
|
Term
Positive relationship |
Definition
As the independent variable increases (or decreases), the dependent variable also increases (or decreases). |
|
|
Term
Negative relationship |
Definition
The independent and dependent variables move in opposite directions. |
|
|
Term
Curvilinear relationship |
Definition
The relationship changes at certain levels of the variables. |
|
|
Term
Non-directional relationship |
Definition
The direction of the relationship is not declared. |
|
|
Term
Null relationship |
Definition
There is no relationship between the variables. |
|
|
Term
Operationalization |
Definition
Developing specific research procedures that will result in empirical observations of constructs that represent our concepts in the real world. In operationalization, concrete empirical procedures that will result in measurements of variables are specified. |
|
|
Term
Operational definition |
Definition
Translating a variable into observable terms. The operations, or indicators, we will use to determine the attribute we observe about a particular concept. |
|
|
Term
Indicators |
Definition
Markers that indicate the presence or absence of the concept we are studying. |
|
|
Term
Scale |
Definition
A type of composite or cumulative measure composed of several items that have a logical or empirical structure among them. |
|
|
Term
Validity |
Definition
Refers to whether the observed scores accurately reflect the true scores of the construct or variable under investigation. Does the instrument measure what it intends to measure? |
|
|
Term
Two broad types of validity |
Definition
Non-empirically-based and empirically-based |
|
|
Term
Non-empirically-based validity |
|
Definition
Face validity and content validity. |
|
|
Term
Face validity |
|
Definition
Refers to whether an instrument appears to measure what it intends to measure. Think of “face value.” The researcher constructing the instrument and experts with knowledge of the construct are the best judges of face validity. |
|
|
Term
Content validity |
Definition
Refers to whether an instrument measures all possible dimensions of the construct or variable under investigation. The degree to which a measure covers the range of meanings included within a concept. Content validity is also assessed by professionals and experts with knowledge of the construct. |
|
|
Term
Empirically- or evidence-based validity: |
|
Definition
Criterion-based validity (concurrent validity and predictive validity) and construct validity (convergent validity and discriminant validity). |
|
|
Term
Criterion-based validity and construct validity |
Definition
Refers to whether an instrument can predict concurrent criterion (concurrent validity) or future criterion (predictive validity). Example: Concurrent validity: A written driving test that predicts how well a person will drive soon after passing the written test demonstrates concurrent validity. Example: Predictive validity: A GRE score that predicts how well a student will do in future graduate studies demonstrates predictive validity. Concurrent and predictive validity are different forms of criterion-based validity. Construct validity – Refers to whether an instrument has high correlation with existing instruments that measure the same construct (convergent validity) or has low correlation with instruments that measure other constructs (discriminant validity). |
|
|
Term
Concurrent validity (example) |
Definition
A written driving test that predicts how well a person will drive soon after passing the written test demonstrates concurrent validity. |
|
|
Term
Criterion-based validity |
Definition
Refers to whether an instrument can predict a concurrent criterion (concurrent validity) or a future criterion (predictive validity). This is a type of empirically based (evidence-based) validity. |
|
|
Term
Predictive validity (example) |
Definition
A GRE score that predicts how well a student will do in future graduate studies demonstrates predictive validity. |
|
|
Term
Construct validity |
|
Definition
Refers to whether an instrument has high correlation with existing instruments that measure the same construct (convergent validity) or has low correlation with instruments that measure other constructs (discriminant validity). |
|
|
Term
Convergent validity |
Definition
Assessed by examining the relationship between the construct measured by the new instrument and the same construct as measured by an existing instrument. |
|
|
Term
Discriminant validity |
Definition
Assessed by examining the relation between one construct and another as measured by a different instrument. |
|
|
Term
Reliability |
Definition
That quality of measurement method that suggests that the same data would have been collected each time in repeated observations of the same phenomenon. It refers to whether a particular measure or test applied repeatedly to the same subjects or objects would yield the same results each time. An instrument can be reliable, but not valid. |
|
|
Term
Types of reliability |
Definition
Interrater reliability – The extent of agreement, or consistency, between or among observers. Test-retest reliability – A method for testing reliability of written instruments that indicates how stable a measure is over time. Data collection methods: Test, wait, retest (same subjects, same test, two occasions). Parallel-forms reliability – Comparing the results from two instruments designed to measure the same construct. Data collection methods: Give test 1; give test 2 (same subjects, two tests, one occasion). Internal consistency reliability – A practical and commonly used approach to assessing reliability that examines the homogeneity of a measurement instrument by dividing the instrument into equivalent halves and then calculating the correlation of the scores of the two halves (Cronbach’s alpha). Data collection methods: Give one test (same subjects, one occasion). |
|
|
Term
Interrater reliability |
Definition
The extent of agreement, or consistency, between or among observers. |
|
|
Term
Test-retest reliability |
Definition
A method for testing reliability of written instruments that indicates how stable a measure is over time. |
|
|
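As a rough illustration of how test-retest reliability is usually quantified (the client scores below are hypothetical), the same instrument is given to the same subjects on two occasions and the two sets of scores are correlated; a correlation near 1.0 suggests the measure is stable over time.

```python
import numpy as np

# Hypothetical scores for six clients on the same scale, two weeks apart.
time1 = np.array([20, 25, 31, 18, 27, 22])
time2 = np.array([21, 24, 30, 19, 28, 21])

# Test-retest reliability: correlate the first and second administrations.
r = np.corrcoef(time1, time2)[0, 1]
print(f"test-retest r = {r:.2f}")  # values near 1.0 indicate a stable measure
```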
Term
Parallel-forms reliability |
|
Definition
Comparing the results from two instruments designed to measure the same construct. Data collection methods: Give test 1; give test 2 (same subjects, two tests, one occasion). |
|
|
Term
Internal consistency reliability |
|
Definition
A practical and commonly used approach to assessing reliability that examines the homogeneity of a measurement instrument by dividing the instrument into equivalent halves and then calculating the correlation of the scores of the two halves (Cronbach’s alpha). Data collection methods: Give one test (same subjects, one occasion). |
|
|
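A minimal sketch of the two calculations named on this card, using hypothetical item scores: a split-half correlation (summing and correlating the two halves of the instrument) and Cronbach's alpha, computed with the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of the total score).

```python
import numpy as np

# Hypothetical responses of five subjects to a four-item scale (rows = subjects).
items = np.array([
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [1, 2, 1, 2],
    [4, 4, 4, 3],
])

# Split-half approach: correlate the summed scores of the two halves of the instrument.
half1 = items[:, :2].sum(axis=1)
half2 = items[:, 2:].sum(axis=1)
split_half_r = np.corrcoef(half1, half2)[0, 1]

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score).
k = items.shape[1]
item_variances = items.var(axis=0, ddof=1)
total_variance = items.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

print(f"split-half r = {split_half_r:.2f}, Cronbach's alpha = {alpha:.2f}")
```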
Term
Systematic error |
Definition
-An error in measurement with a consistent pattern of effects. -This error occurs when the information we collect reflects a false picture of the concept we seek to measure, either because of the way we collect the data or the dynamics of those who are providing the data. -Systematic error is caused by any factors that systematically affect measurement of the variable across the sample. |
|
|
Term
Validity has to do with the amount of systematic error in a measurement; explain how such errors occur. |
|
Definition
-The more valid the measure, the less systematic error in it. -Systematic errors (unlike random errors) tend to be consistently either positive or negative -- because of this, systematic error is sometimes considered to be bias in measurement. |
|
|
Term
Bias |
Definition
That quality of a measurement device that tends to result in a misrepresentation of what is being measured in a particular direction |
|
|
Term
Acquiescent response set bias |
|
Definition
Agreeing or disagreeing with most or all statements regardless of content. |
|
|
Term
Social desirability bias |
Definition
The tendency of people to say or do things that will make them or their reference group look good |
|
|
Term
Cultural bias |
Definition
Bias resulting from cultural disparities (e.g., language, norms, and values) |
|
|
Term
Random error |
Definition
– A measurement error that has no consistent pattern and that reduces the reliability of measurement. -Random error is caused by any factors that randomly affect measurement of the variable across the sample. For instance, each person's mood can inflate or deflate their performance on any occasion. |
|
|
Term
Reliability has to do with the amount of random error in a measurement. |
|
Definition
The more reliable the measure, the less random error in it. -Through research methodology we try to minimize measurement errors that would destroy the credibility and utility of our findings |
|
|
Term
Sampling |
Definition
Sampling is the process of selecting observations. -If a sample of individuals from a population is to provide useful descriptions of the total population, then it must contain essentially the same variations that exist in the population. |
|
|
Term
Representativeness |
Definition
The quality of a sample having the same distribution of characteristics as the population from which it was selected. |
|
|
Term
Generalizability |
Definition
– The quality of a research finding that justifies the inference that it represents something more than the specific observations on which it was based. -To infer that the findings of a particular study represent causal processes or apply to settings or populations beyond the study conditions. -Usually, this involves the generalization of findings from a sample to a population. |
|
|
Term
Element |
Definition
That unit about which information is collected and that provides the basis for analysis. |
|
|
Term
Sampling frame |
Definition
– The list or quasi-list of units that compose a population from which a sample is selected. -The sampling frame is a list of the elements composing the study population. -Properly drawn samples provide information appropriate for describing the population of elements composing the sampling frame – nothing more. |
|
|
Term
Population |
Definition
The theoretically specified aggregation of study elements. |
|
|
Term
Two types of sampling methods: |
|
Definition
Probability sampling and nonprobability sampling. |
|
|
Term
Probability sampling |
Definition
Sampling that typically involves some random-selection mechanism based on probability theory |
|
|
Term
Random selection |
Definition
A probability sampling procedure in which each element has an equal chance of selection independent of any other event in the selection process. -Probability theory is a branch of mathematics that provides the tools researchers need to devise sampling techniques that produce representative samples. -When researchers generalize from a sample, they are using sample observations (i.e., statistics) to estimate population parameters. |
|
|
Term
Parameter |
Definition
A summary description of a given variable in a population. |
|
|
Term
Probability sampling techniques: |
|
Definition
A basic principle of probability sampling is that a sample will be representative of its population if all members of that population have an equal chance of being selected in the sample – an equal chance of selection. Probability samples, even if never perfectly representative, are typically more representative than other types of samples because of the avoidance of sampling biases |
|
|
Term
Simple random sampling |
Definition
– A type of probability sample in which units that compose a population are assigned numbers. -A set of random numbers is then generated, and the units having those numbers are included in the sample. -Once a sampling frame has been established, to use simple random sampling the researcher assigns a single number to each element in the list without skipping any number in the process. -A table of random numbers is then used to select elements for the sample. |
|
|
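In practice the table of random numbers is usually replaced by a random number generator. Below is a minimal sketch (the frame of 200 hypothetical clients and the sample size of 30 are made up) of drawing a simple random sample in which every element has an equal chance of selection.

```python
import random

# Hypothetical sampling frame: every element numbered, none skipped.
sampling_frame = [f"client_{i:03d}" for i in range(1, 201)]  # study population of 200

random.seed(42)                                  # fixed seed so the draw is reproducible
sample = random.sample(sampling_frame, k=30)     # each element has an equal chance
print(sample[:5])
```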
Term
Systematic sampling |
Definition
– A type of probability sample in which every kth unit in a list is selected for inclusion in the sample. -We compute k by dividing the size of the population by the desired sample size. -The result is called the sampling interval. |
|
|
Term
Sampling interval |
Definition
The standard distance between elements selected in the sample. There is one danger involved in systematic sampling – the arrangement of elements in the list. Systematic sampling is usually superior to SRS in convenience, if nothing else. (A sketch of the interval calculation follows this card.) |
|
|
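A small sketch of the interval calculation described above, with a made-up frame of 1,000 elements and a desired sample of 100. A random start within the first interval is used so the arrangement of the list is less likely to bias the sample.

```python
import random

frame = list(range(1, 1001))   # hypothetical sampling frame of 1,000 elements
desired_n = 100

k = len(frame) // desired_n    # sampling interval: population size / desired sample size
start = random.randrange(k)    # random start within the first interval
sample = frame[start::k]       # take every kth element thereafter

print(f"interval k = {k}, sample size = {len(sample)}")
```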
Term
Stratified sampling (and the ultimate function of stratification) |
|
Definition
A probability sampling procedure that uses stratification to ensure that appropriate numbers of elements are drawn from homogeneous subsets of that population. -The ultimate function of stratification is to organize the population into homogeneous subsets. -Stratified sampling is a method for obtaining a greater degree of representativeness. -Stratified sampling with a uniform proportion of cases drawn from each homogeneous grouping is also referred to as proportionate stratified sampling. (A proportionate stratified sampling sketch follows this card.) |
|
|
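A hedged sketch of proportionate stratified sampling, using an invented population of 700 "urban" and 300 "rural" cases: the same sampling fraction is applied within each homogeneous stratum, so the sample mirrors the population's 70/30 split.

```python
import random
from collections import Counter

random.seed(1)

# Hypothetical population with one stratifying characteristic.
population = [("urban", i) for i in range(700)] + [("rural", i) for i in range(300)]
sample_size = 100
fraction = sample_size / len(population)   # uniform sampling fraction for every stratum

sample = []
for stratum in ("urban", "rural"):
    members = [p for p in population if p[0] == stratum]
    sample += random.sample(members, k=round(len(members) * fraction))

print(Counter(s[0] for s in sample))   # about 70 urban and 30 rural, mirroring the population
```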
Term
Disproportionate stratified sampling |
|
Definition
– A sampling method aimed at ensuring that enough cases of certain minority groups are selected to allow for subgroup comparisons within each of these minority groups. |
|
|
Term
Weighting |
Definition
A procedure employed in connection with sampling whereby units selected with unequal probabilities are assigned weights in such a manner as to make the sample representative of the population from which it was selected. (A small weighting example follows this card.) |
|
|
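A tiny worked example of the weighting idea (the strata and counts are hypothetical): cases selected with unequal probabilities are given weights inversely proportional to their selection probabilities, so each sampled case "stands for" the right number of population members.

```python
# Hypothetical design: stratum B was deliberately oversampled.
# Stratum A: 1,000 people, 100 sampled (selection probability 0.10)
# Stratum B:   200 people, 100 sampled (selection probability 0.50)
selection_prob = {"A": 100 / 1000, "B": 100 / 200}

# Weight = 1 / selection probability, restoring representativeness in weighted analyses.
weights = {stratum: 1 / p for stratum, p in selection_prob.items()}
print(weights)   # {'A': 10.0, 'B': 2.0} -> each A case represents 10 people, each B case 2
```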
Term
Cluster sampling |
Definition
– A multi-stage sampling procedure in which natural groups (clusters) are sampled initially, with the members of each selected group being subsampled afterward. -Multi-stage cluster sampling involves the repetition of two basic steps: listing and sampling. -Cluster sampling can be conducted using simple random sampling, systematic, or stratified sampling techniques. |
|
|
Term
Probability proportionate to size (PPS) |
|
Definition
This refers to a type of multi-stage cluster sample in which clusters are selected not with equal probabilities, but with probabilities proportionate to their sizes – as measured by the number of units to be subsampled. -This technique guards against overrepresentation among cluster elements with large units and underrepresentation among cluster elements with small units. (A PPS selection sketch follows this card.) |
|
|
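A sketch of one common way PPS selection is carried out (cluster names and sizes are invented): clusters are laid out on a cumulative size scale and a systematic interval with a random start is walked along it, so larger clusters are proportionally more likely to be hit. An equal number of elements would then be subsampled from each selected cluster.

```python
import itertools
import random

random.seed(7)

# Hypothetical clusters (e.g., agencies) with unequal numbers of clients.
clusters = {"A": 120, "B": 480, "C": 60, "D": 240, "E": 300}
clusters_to_pick = 3

total = sum(clusters.values())
interval = total / clusters_to_pick
start = random.uniform(0, interval)                       # single random start
hits = [start + i * interval for i in range(clusters_to_pick)]

# Map each hit onto the cumulative size scale to find the selected cluster.
names = list(clusters)
bounds = list(itertools.accumulate(clusters[n] for n in names))
chosen = []
for hit in hits:
    for name, upper in zip(names, bounds):
        if hit < upper:
            chosen.append(name)
            break

print(chosen)   # large clusters such as "B" are the most likely to appear
```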
Term
Nonprobability sampling techniques: |
|
Definition
1. Availability (convenience) sampling. 2. Purposive sampling. 3. Snowball sampling. 4. Quota sampling. In practice, there is a greater likelihood that a probability sample will be representative of the population from which it is drawn than will a nonprobability sample. |
|
|
Term
Availability sampling (convenience sampling) |
Definition
Also referred to as convenience sampling. -A sampling method that selects elements simply because of their ready availability and convenience. -Very popular in social science research due to its ease of application and relatively low cost. |
|
|
Term
Purposive sampling (judgmental sampling) |
Definition
Also referred to as judgmental sampling. Selecting a sample of observations or elements that the researcher believes will yield the most comprehensive understanding of the subject of study, based on the researcher’s intuitive feel for the subject that comes from extended observation and reflection. |
|
|
Term
Snowball sampling |
Definition
Asking each person interviewed to suggest additional people for interviewing. -Snowball sampling is appropriate when the members of a special population are difficult to locate. |
|
|
Term
Quota sampling |
Definition
A type of sampling in which units are selected into the sample on the basis of prespecified characteristics so that the total sample will have the same distribution of characteristics as are assumed to exist in the population being studied. -Quota sampling is the nonprobabilistic equivalent to probability proportionate to size and disproportionate stratified sampling |
|
|
Term
Sample size: In determining sample size, you must take into consideration |
|
Definition
-the number of comparison or control groups, the anticipated response rate, and the anticipated attrition rate. -In lieu of a statistical power analysis, the general rule is a minimum of 30 or 10% of the study population, whichever is greater. -You must have a minimum of 30 or 10% of the study population, whichever is greater, for each comparison group or variable with the most categories. -For any type of survey study, adjust for anticipated response rate. -If you want 30 study participants and the expected response rate is 50%, you must send out 60 surveys. -If you want 30 per group, you must send out 120 surveys (60 to each group) |
|
|
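A worked version of the arithmetic on this card, wrapped in two small helper functions (the function names and the population of 500 are just for illustration):

```python
import math

def minimum_sample(population_size: int) -> int:
    """Rule from the card: at least 30 or 10% of the study population, whichever is greater."""
    return max(30, math.ceil(0.10 * population_size))

def surveys_to_send(desired_n: int, expected_response_rate: float, groups: int = 1) -> int:
    """Adjust the desired sample size for the anticipated response rate, per comparison group."""
    return math.ceil(desired_n / expected_response_rate) * groups

print(minimum_sample(500))                  # 10% of 500 = 50, which is greater than 30 -> 50
print(surveys_to_send(30, 0.50))            # 30 / 0.50 = 60 surveys for one group
print(surveys_to_send(30, 0.50, groups=2))  # 120 surveys total, 60 to each of two groups
```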
Term
Survey research: Four main methods of administering survey questionnaires |
|
Definition
-Today, survey research is perhaps the most frequently used mode of observation in the social sciences. -Survey research is the administration of questionnaires to a sample of respondents selected from some population. 1. Self-administered questionnaires 2. Staff-administered or interview surveys 3. Telephone surveys 4. Online surveys |
|
|
Term
Self-administered surveys: |
|
Definition
Mailed, delivered, or given at a set location. An acceptable response rate for a mailed survey is over 50%. |
|
|
Term
Staff-administered or interview surveys |
Definition
Better response rate than mail surveys. Interviewers can probe respondents for answers, as well as explain complicated or unclear questions. Interview surveys are much more expensive than any other type of survey format. |
|
|
Term
Telephone surveys |
Definition
Big advantages are safety for the interviewer, time, and money. Problems with telephone surveys can include unlisted telephone numbers, answering machines, caller ID, and losing respondents mid-interview. |
|
|
Term
Online surveys |
Definition
The biggest problem with online surveys is the representativeness of respondents. |
|
|
Term
Advantages of different types of administration: |
|
Definition
Advantages of a self-administered questionnaire (whether by mail or email) over an interview are: 1. Economy 2. Speed 3. Lack of interviewer bias 4. Safety of interviewers 5. The possibility of anonymity and privacy to encourage more candid responses on sensitive issues. The advantages of an interview survey over a self-administered questionnaire are: 1. Fewer incomplete questionnaires 2. Fewer misunderstood questions 3. Generally higher response rates 4. Greater flexibility in terms of sampling and special observations. |
|
|
Term
Strengths of Survey Research: |
|
Definition
1. Surveys can be good for descriptive studies of large populations. 2. Surveys allow large sample sizes and good generalizability. 3. Surveys can allow for multivariate analyses. 4. Survey research is generally strong on reliability. |
|
|
Term
Weaknesses of Survey Research |
|
Definition
1. Survey research can seldom deal with the context of social life; surveys can be superficial. 2. Surveys are subject to artificiality; they cannot measure social action or interaction. 3. Surveys are generally weak on validity. |
|
|
Term
Client satisfaction surveys:
-One big problem with client satisfaction surveys is |
|
Definition
-Most professionals in the human or social services, if told to design a program evaluation, would likely begin thinking about surveying clients to determine what they thought of the services received. -Such an approach is among the simplest and most frequently used measures of program evaluation and is known as conducting a client satisfaction study. -Asking for feedback about their experiences gives professional staff and management the opportunity to remove obstacles that prevent clients from having satisfying encounters with programs. -Client satisfaction surveys stem from the assumption and belief that clients are the best source of information on quality and delivery of services. -One big problem with client satisfaction surveys is that they are nearly always positive. In other words, they are inherently biased. |
|
|
Term
Reasons for positive responses in client satisfaction surveys |
|
Definition
1. Selection bias (more satisfied clients complete surveys) 2. Naïve clients would not know bad service from good service 3. Social desirability bias 4. Experimental expectancies 5. Psychological defense against failure in program 6. Invalid instrument -Quantitative assessments tend to yield high levels of satisfaction while qualitative reports tend to reveal lower satisfaction. -Client satisfaction surveys are sensitive to attrition (i.e., clients dropping out of programs prematurely will not have the opportunity to complete a client satisfaction survey). -Clients who stay in treatment longer are also more likely to give favorable responses. -Mandated clients tend to score lower on client satisfaction surveys. -Client satisfaction surveys do not prove program effectiveness. -They do not provide information on outcomes – whether clients actually improved from the services provided. -However, many administrators may use client satisfaction information for just such purposes. |
|
|
Term
Single Systems Research Designs (Single-Case Evaluation Designs) |
|
Definition
A time series design used to evaluate the impact of an intervention or a policy change on individual cases or systems. -In a single-case design, the sample size is one. -Generally, one individual; however, your sample element may also be a family or a community. -Single-case designs are traditionally utilized when a practitioner can reliably measure a behavior over a number of consecutive observation points or days. The logic of single-case designs requires taking enough repeated measures to make it unlikely that extraneous factors (such as changes in a client’s environment; i.e., history) would account for improvements that take place in the target problem with the onset of intervention. Key concepts here are multiple measurement points and unlikely coincidence (i.e., it is not likely that what you are observing is the result of chance). -Baseline (A) – The phase of repeated measures that occurs before intervention is introduced. -A baseline is a control phase. -Data patterns collected during baseline phases are compared to the data patterns collected during the intervention phases. -Intervention (B; C; D) – The phase of repeated measures that occurs following the introduction of an intervention. |
|
|
Term
Causal inference |
Definition
Inferring or attributing a change in your dependent variable as “caused” by a change in your independent variable. -Just because a dependent variable varies in a predictable fashion with variation in an independent variable, that is not a sufficient basis for inferring that the independent variable caused the dependent variable to change. |
|
|
Term
Requirements for inferring causality (Memorize this): |
|
Definition
1. The cause must precede the effect (i.e., the action of the independent variable must precede the change in the dependent variable). 2. Two variables must be empirically correlated with one another. 3. The observed empirical correlation between two variables cannot be explained away as the result of the influence of some third variable. |
|
|
Term
Types of single-case designs: |
|
Definition
Basic AB design, ABAB withdrawal/reversal design, and multiple-baseline designs. |
|
|
Term
Types of single-case designs: Basic AB design |
|
Definition
-The simplest single-case evaluation design that includes one baseline phase (A) and one intervention phase (B). -This is a popular design among practitioners and researchers because it involves only one baseline and therefore poses the least conflict with service delivery priorities. -It has less control for history, however, than most alternative single-case evaluation designs. -When the baseline trend signifies improvement, it may be advisable to continue collecting baseline measures until the improving trend levels off. -Baselines ideally should be extended until a stable trend in the data is evident. -How many measures or data points are adequate? The more measures the better. -You must have a minimum of three data points. Most authors recommend 5-10 data points to establish a baseline. |
|
|
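A minimal illustration of the AB logic with invented daily counts of a target behavior: a stable baseline (A) phase is compared with the intervention (B) phase. In practice the data points are plotted and inspected visually for changes in level and trend; the means below are just a shorthand summary.

```python
# Hypothetical repeated measures of a target behavior (e.g., daily outbursts).
baseline_A = [7, 8, 7, 9, 8, 8, 7]       # phase A: repeated measures before intervention
intervention_B = [6, 5, 4, 4, 3, 3, 2]   # phase B: repeated measures after intervention begins

def phase_mean(scores):
    return sum(scores) / len(scores)

print(f"baseline (A) mean     = {phase_mean(baseline_A):.1f}")
print(f"intervention (B) mean = {phase_mean(intervention_B):.1f}")
# A clear, stable shift in level from A to B across many data points makes it
# unlikely that the improvement is a coincidence; that is the logic of the AB design.
```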
Term
ABAB withdrawal/reversal design |
Definition
A single-case evaluation design that adds a second baseline phase (A) and a second intervention phase (B). This design assumes that if the intervention caused the improvement in the target problem during the first intervention period, then the target problem will reverse toward its original baseline level during the second baseline. When the intervention is reintroduced, the target problem should start improving again. Problems with ABAB design include ethics (i.e., withholding a working treatment) and irreversible effects caused by the first intervention period (e.g., newly learned social or coping skills would probably not be reversible). |
|
|
Term
Multiple-baseline designs |
|
Definition
A type of single-case evaluation design that attempts to control for extraneous variables by having more than one baseline and intervention phase (simultaneously). Although each baseline starts simultaneously, the intervention is introduced at a different point for each. |
|
|
Term
Problems with single-case design: |
|
Definition
-The prime weakness of single-case designs is their limited external validity. -With a sample of one, we are dealing with idiosyncratic conditions that cannot be generalized to other clients, practitioners, or settings. -However, replicating the experiment, if possible, can increase confidence in a single-case study’s external validity. |
|
|