Term
Project Selection Elements |
|
Definition
- Project Alignment - Project Payoff - Project Scope |
|
|
Term
Project Prioritization Components |
|
Definition
1. Importance to customer 2. Cost to implement 3. Feasibility (likelihood of success) 4. Cost reduction 5. Leverage (potential impact on other areas) |
|
|
Term
|
Definition
What is the goal (mission) of the project? Why are we pursuing this goal (problem statement / opportunity statement)? How does it relate to the business plan and to the bottom line (the business case)? What are all of the outputs of this project? (deliverables) Who are the players? - Team Leader, Team Members, Sponsor Who are the primary stakeholders? When is the team starting? When will the project be completed? How will we measure success? |
|
|
Term
|
Definition
Project Name, Team Leader Name, Champion, Mission Statement/Objective, Problem Statement/Project Description, Business Case, Project Scope, Deliverables, Goals/Metrics, Key Customers, Expectations, Milestones/Completion Dates, Team Members, Expected Resource Needs |
|
|
Term
|
Definition
Gives a timeline of the different activities that need to be completed and identifies the critical path. In addition, it sometimes lists the resources required and who is completing each task. |
|
|
Term
|
Definition
Forming: the stage in which a group forms and members learn how to deal with one another. |
|
|
Term
|
Definition
The Storming phase was named for the swirling change, individual emotion, and group conflict that characterize this step of the process. |
|
|
Term
|
Definition
As teams resolve issues generated in the Storming phase and reach consensus on the team's "future state" they enter the more productive Norming phase. |
|
|
Term
|
Definition
Performing: as teams mature, they may enter the most productive phase of team development. |
|
|
Term
|
Definition
A sampling strategy that focuses on portions of the population in order to understand more about subgroups and make sure they are represented, by collecting independent samples within each stratum, or group. |
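A minimal sketch of drawing an independent sample within each stratum, using pandas; the DataFrame, the "region" column, and the 10% fraction are illustrative assumptions, not part of any specific tool.

```python
import pandas as pd

def stratified_sample(df: pd.DataFrame, stratum_col: str, frac: float,
                      seed: int = 0) -> pd.DataFrame:
    """Independent simple random sample of `frac` within each stratum."""
    return (df.groupby(stratum_col, group_keys=False)
              .apply(lambda g: g.sample(frac=frac, random_state=seed)))

# Usage with assumed data: stratified_sample(orders, "region", frac=0.10)
```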
|
|
Term
|
Definition
Supplier, Input, Process, Output, Customer chart. Captures the Y = f(X) relationships; metrics can be added for each phase. |
|
|
Term
|
Definition
Critical-to-quality characteristics represent customer desires that must be met; they have a strong correlation to the buying decision and often form the basis for comparison. Start broad, then extend branches to reach internal, specific, measurable characteristics. Use surveys and collected data to map the specifics (the diagram starts with needs and then moves into drivers). The Tree Diagram moves from general WHATs to specific HOWs. |
|
|
Term
|
Definition
The HOWs of the first house become the WHATs of the next house (customer requirements are the WHATs of the first house, with technical features as the HOWs; those features then become the WHATs of the next house). |
|
|
Term
|
Definition
Include what, when, where, and to what extent. Do not include money savings in the project goal. |
|
|
Term
|
Definition
Satisfaction on the y axis and performance on the x axis. |
|
|
Term
Voice of the Customer Process |
|
Definition
1. Identify (segment and target) 2. Gather (active and passive information) 3. Sort (affinity diagram) 4. Develop (CTQC diagram) 5. Translate (develop specifications) |
|
|
Term
|
Definition
Groups voice-of-the-customer data into categories to move toward CTQCs. |
|
|
Term
|
Definition
For CTQ items, an operational definition provides a clear, unambiguous, and observable standard of acceptance. If at all possible, operational definitions should also be measurable. Three elements are used to apply it:
Criterion - the standard which establishes the objective for the key quality characteristic and against which the test results will be evaluated. Test - a specific procedure for measuring a key quality characteristic. Decision - a determination as to whether or not the test results indicate that the characteristic meets the criterion. |
|
|
Term
|
Definition
A matrix used to translate the Voice of the Customer into Product Features, then Product Specifications, and ultimately Process Parameters. It is most useful for process/product design or redesign, and may also be employed in either the Improve or Control phases of the DMAIC cycle. |
|
|
Term
|
Definition
Teams use QFD to identify, resolve, and prioritize issues involved in providing products, processes, and services that will more than satisfy their customers. The object is to delight customers and exceed their desires rather than just maintain or improve product and service performance. Market research is a prerequisite to QFD because you must have a very good understanding of what the customer wants in order to provide products that meet or exceed all requirements. Gathering this information is the process of understanding:
what the customer wants, the importance of those benefits, and how well customers perceive the performance of those who provide products that address these benefits. |
|
|
Term
|
Definition
Value Streams include the tangible flow of material as well as the intangible flow of information and services - stretching from product design and raw material procurement, through manufacturing/service origination, and past delivery to the customer to include after-sales service. |
|
|
Term
|
Definition
Upstream inputs provide the most leverage to reduce process variability, so improvement actions should be prioritized accordingly - at the point of greatest leverage. |
|
|
Term
|
Definition
Separates Value-Added from Non-Value-Added operations. Identifies waste of many types. Stratifies time. |
|
|
Term
|
Definition
Focused on systemic cause and effect - shows reinforcing and balancing forces along with unintended consequences. Examines behaviors behind process performance - non-linear. |
|
|
Term
|
Definition
Shows decision points and If/Then logic. Displays the procedures and logic of a process. |
|
|
Term
|
Definition
At the most detailed level, inputs can be classified as those that are Controllable (like settings on a machine), Noise (uncontrollable factors, like weather or normal person-to-person variability), or Common Sense Factors (procedures that can be expected to be performed to execute a process, like turning on a machine). A closely related activity is the Cause & Effect (Fishbone) diagram, which can be used to help identify the root cause of input variability. |
|
|
Term
|
Definition
A focus of the improvement effort would be to eliminate non-value-added operations, combine operations to improve flow and minimize the handoffs, and reduce the queue times. |
|
|
Term
|
Definition
A sample drawn such that every member of the population has an equal chance of being selected. |
|
|
Term
|
Definition
Descriptive statistics aim to describe and summarize the important features of a population or process. Graphs such as pie, bar, and line graphs, histograms, boxplots, and scatterplots, and numerical summaries such as the mean, variance, mode, range, standard deviation, and correlation are examples of descriptive statistics. |
|
|
Term
|
Definition
Inferential statistics use sample data to help make comparisons among, or draw inferences about, the effects of different solutions or treatments on the overall population. When the entire population cannot be measured, a smaller sample of data is used to infer, or estimate, the characteristics of the wider population. Regression analysis, hypothesis tests, confidence intervals, and experimental design fall into this category. |
|
|
Term
|
Definition
Representations of categories or attributes (e.g., good/bad, a scale of 1-5), or counts of items or objects that can only come in whole units. The number of blades of grass in a yard, for example, is discrete. |
|
|
Term
|
Definition
Derived from a scale that is infinitely divisible; weight, height, and voltage are examples. Gives the greatest possible information content because values are represented directly rather than grouped into categories. |
|
|
Term
|
Definition
A type of discrete data in which the groups are labels with no inherent order. |
|
|
Term
|
Definition
Discrete. Groups are in a logical order (e.g., compliant/non-compliant, or rating something 1-10). |
|
|
Term
|
Definition
Discrete: the number of items or events (e.g., the number of cars purchased per week). |
|
|
Term
|
Definition
Measurements are made along a continuum (e.g., the speed of a pitch, the gas mileage of a car). Continuous data has the highest cost and the highest information content; moving down the scale, nominal data is the cheapest and carries the least information. |
|
|
Term
|
Definition
The width and number of bins can skew the appearance of the data one way or another. |
|
|
Term
|
Definition
How much spread or variation there is in a process. |
|
|
Term
Variance/Standard Deviation |
|
Definition
The variance is the average squared distance between the mean and the individual observations. It is a better measure of dispersion than the range because it includes all the observations in the sample, rather than just the minimum and maximum. The standard deviation is the positive square root of the variance. Variance can only be positive or 0. |
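A minimal sketch of the variance and standard deviation calculations using Python's standard library; the observations are made-up illustrative values.

```python
import statistics

observations = [9.8, 10.1, 10.0, 9.7, 10.4]    # illustrative sample data

mean = statistics.mean(observations)
variance = statistics.variance(observations)    # average squared distance from the mean (n - 1 divisor)
std_dev = statistics.stdev(observations)        # positive square root of the variance

print(mean, variance, std_dev)
```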
|
|
Term
|
Definition
The curve is symmetric about its mean (half the values fall on either side of the mean) with the ends (tails) extending out to infinity. Its shape and location are fully defined by two parameters: the mean and standard deviation (or variance). The normal distribution is important for two reasons:
1. Many widely used statistical techniques are based on the assumption that the distribution underlying the sample data is normal. 2. The distribution of sample means from any distribution is well approximated by the normal distribution, due to the Central Limit Theorem (CLT), which states that as the sample size increases, the distribution of the sample mean tends toward the normal, irrespective of the shape of the original (parent) population. In addition, the distribution of the sample mean has a smaller spread, or variance, than the original distribution (due to the effects of averaging). |
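A minimal sketch of the Central Limit Theorem point above, assuming NumPy is available; the exponential parent population and subgroup size of 30 are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Skewed (non-normal) parent population.
parent = rng.exponential(scale=2.0, size=100_000)

# Means of many samples of size 30 drawn from the same parent population.
sample_means = rng.exponential(scale=2.0, size=(10_000, 30)).mean(axis=1)

print(parent.std())         # spread of the parent (about 2.0)
print(sample_means.std())   # roughly 2.0 / sqrt(30): averaging shrinks the spread
```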
|
|
Term
|
Definition
68% of the values under the normal curve fall within one standard deviation of the mean, 95% within two standard deviations, and 99.7% within three standard deviations of the mean. The empirical rule holds for all normal distributions (with any mean and standard deviation) and helps estimate normal probabilities. |
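A minimal check of the empirical rule using the standard normal CDF, assuming SciPy is available.

```python
from scipy.stats import norm

for k in (1, 2, 3):
    coverage = norm.cdf(k) - norm.cdf(-k)   # area within k standard deviations of the mean
    print(f"within {k} sigma: {coverage:.1%}")
# prints roughly 68.3%, 95.4%, 99.7%
```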
|
|
Term
|
Definition
1. Identify what to measure 2. Determine how to measure the characteristic 3. Develop sampling plan and reaction plan 4. Validate measurement system 5. Add to overall control plan |
|
|
Term
|
Definition
Used to prioritize the process inputs (causes, or X's) that have been identified as potentially contributing to an effect (Y). Provides a way to prioritize potential causes based on the number of CTQCs affected and the strength of the relationship between potential causes (X's) and effects (Y's). |
|
|
Term
Measurement system analysis components |
|
Definition
1. Stability - refers to the capacity of a measurement system to produce the same values over time when measuring the same sample. As with statistical process control charts, stability means the absence of "Special Cause Variation", leaving only "Common Cause Variation" (random variation). 2. Bias, also referred to as Accuracy - a measure of the distance between the average value of the measurements and the "True" or "Actual" value of the sample or part. 3. Linearity - a measure of the consistency of Bias over the range of the measurement device. 4. Repeatability - assesses whether the same appraiser can measure the same part/sample multiple times with the same measurement device and get the same value. 5. Reproducibility - assesses whether different appraisers can measure the same part/sample with the same measurement device and get the same value. |
|
|
Term
Requirements for a reliable measurement system |
|
Definition
Statistical stability over time. Variability small compared to the process variability. Variability small compared to the specification limits (tolerance). The resolution, or discrimination, of the measurement device must be small relative to the smaller of either the specification tolerance or the process spread (variation). As a rule of thumb, the measurement system should have resolution of at least 1/10th the smaller of either the specification tolerance or the process spread. If the resolution is not fine enough, process variability will not be recognized by the measurement system, thus blunting its effectiveness. |
|
|
Term
out of control conditions |
|
Definition
1. If one or more points fall outside of the upper control limit (UCL) or lower control limit (LCL); the UCL and LCL are three standard deviations on either side of the mean. 2. If two out of three successive points fall in the area beyond two standard deviations from the mean, either above or below. 3. If four out of five successive points fall in the area beyond one standard deviation from the mean, either above or below. 4. If there is a run of six or more points that are all either successively higher or successively lower. 5. If eight or more points in a row fall on the same side of the mean (some organizations use 7 points, some 9). 6. If 15 points in a row fall within the area on either side of the mean that is one standard deviation from the mean. |
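A minimal sketch of two of these tests (a point beyond the three-sigma limits, and a run of eight points on one side of the mean); the mean and sigma would normally come from the control chart itself and are assumed inputs here.

```python
def points_beyond_limits(points, mean, sigma):
    """Indices of points outside the UCL/LCL (three standard deviations from the mean)."""
    ucl, lcl = mean + 3 * sigma, mean - 3 * sigma
    return [i for i, x in enumerate(points) if x > ucl or x < lcl]

def has_run_on_one_side(points, mean, run_length=8):
    """True if `run_length` consecutive points fall on the same side of the mean."""
    run, prev_side = 0, 0
    for x in points:
        side = 1 if x > mean else (-1 if x < mean else 0)
        run = run + 1 if side != 0 and side == prev_side else (1 if side != 0 else 0)
        prev_side = side
        if run >= run_length:
            return True
    return False
```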
|
|
Term
|
Definition
The ability of a process to meet specifications (customer expectations) |
|
|
Term
|
Definition
The difference between the upper and lower specification is known as the tolerance. |
|
|
Term
Specifications vs control limits |
|
Definition
Specifications are not related to control limits - they are completely separate. Specifications reflect "what the customer wants", while control limits tell us "what the process can deliver". |
|
|
Term
|
Definition
If Cp < 1, the process does not fit: its six-sigma spread is greater than the tolerance. If Cp = 1, the six-sigma spread equals the tolerance. If Cp > 1, the process fits and is capable. Cp is often referred to as "Process Potential" because it describes how capable the process could be if it were centered precisely between the specifications. A process can have a Cp in excess of one but still fail to consistently meet customer expectations. |
|
|
Term
|
Definition
The measurement that assesses process centering in addition to spread, or variability. The calculation takes the lower of the two one-sided values (toward the nearer specification limit); if re-centered, the process has the ability to reach the higher one. |
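A minimal sketch of the Cp and Cpk formulas described in these two cards; the specification limits, mean, and standard deviation below are made-up numbers.

```python
def cp(usl, lsl, sigma):
    """Process potential: tolerance divided by the six-sigma spread."""
    return (usl - lsl) / (6 * sigma)

def cpk(usl, lsl, mean, sigma):
    """Process capability: takes the lower (worse) of the two one-sided values."""
    return min(usl - mean, mean - lsl) / (3 * sigma)

print(cp(10.5, 9.5, 0.15))          # about 1.11: potentially capable
print(cpk(10.5, 9.5, 10.2, 0.15))   # about 0.67: off-center, so actual capability is lower
```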
|
|
Term
Measurement system analysis steps |
|
Definition
1. Prepare 2. Stability 3. Bias 4. Linearity 5. Repeatability and reproducibility 6. Graph |
|
|
Term
Component of measurement error |
|
Definition
Reproducibility among appraisers. |
|
|
Term
|
Definition
Whether the same appraiser, using the same gauge, can produce the same result on multiple measurements. |
|
|
Term
Measurement system rule of thumb |
|
Definition
The measurement system is ideal if the measurement error (%GRR) is 10% or less, and acceptable if %GRR is between 10% and 30% of the smaller of the process tolerance or process variation. For more critical applications, the maximum allowable %GRR for an acceptable gauge may be lower than 30%. |
|
|
Term
|
Definition
If there are three or fewer different range values within the control limits, the measurement system lacks adequate resolution. If there are only four different range values within the control limits AND more than 25% of the observations are zero, the measurement system also lacks adequate resolution. |
|
|
Term
|
Definition
Used to assess the degree of measurement error (noise) relative to total system variation. On this type of chart, measurement error is expressed by the area inside the control limits, so we want to see plot points outside the limits. |
|
|
Term
Average chart rule of thumb |
|
Definition
The rule of thumb for acceptance of a measurement system based on the Averages Chart is one half or more of the averages falling outside of the control limits. |
|
|
Term
|
Definition
Operator Bias (Averages Chart): if all appraisers have at least 50% of values outside the control limits, no bias is present; if one or more appraisers have less than 50% of values outside the control limits, bias is present. Operator Consistency (Range Chart): if all appraisers' ranges are within the control limits, results are consistent across appraisers; if one or more appraisers' ranges are outside the control limits, results are not consistent across appraisers. |
|
|
Term
Attribute measurement systems |
|
Definition
Go/No-go gauges, rating scales. Will have confidence intervals on repeatability and reproducibility. If the level of agreement of the assessments (between appraisers as well as across trials for a single appraiser) is high, then the measurement system is acceptable. If not, then the measurement system must be improved - usually by clarifying the Operational Definition or providing better training to appraisers to reduce variation in application of the standard. |
|
|
Term
|
Definition
Calibration is the process of checking the measurement output of an instrument against a known standard, and adjusting the instrument as required to read out the proper value. |
|
|
Term
|
Definition
Can be due to any of the following: 1. Measurement bias 2. Operational bias 3. Interaction bias 4. Perception bias (the measurer sees what they want to see) 5. Sampling bias - data may be collected at times that do not represent the process. |
|
|
Term
Benefits of automation in measurement |
|
Definition
Lower variable cost. Consistently applied method. Potential to measure 100%. Automatic data entry into database. Closed-loop feedback system, including alarms or automatic paging of appropriate personnel. |
|
|
Term
|
Definition
Defective units (called "defectives"), where each unit is counted once regardless of how many defects it has, and defects per unit of production, where there may be multiple defects on any given unit. |
|
|
Term
|
Definition
(Units Processed - Actual Defects) / Units Processed |
|
|
Term
|
Definition
A process-wide metric used to express the probability that a given unit will make it through the whole system without a defect. Rolled Throughput Yield is calculated by multiplying together all of the individual Throughput Yields. |
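A minimal sketch of Throughput Yield per step and the Rolled Throughput Yield across a three-step process; the unit and defect counts are illustrative assumptions.

```python
# (units processed, defective units) for each process step - assumed data
steps = [(1000, 20), (1000, 50), (1000, 10)]

throughput_yields = [(units - defects) / units for units, defects in steps]

rolled_throughput_yield = 1.0
for ty in throughput_yields:
    rolled_throughput_yield *= ty   # multiply the individual yields together

print(throughput_yields)          # [0.98, 0.95, 0.99]
print(rolled_throughput_yield)    # about 0.92: chance a unit passes every step defect-free
```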
|
|
Term
|
Definition
(Number of Defects x 1,000,000) / ((Number of Defect Opportunities/Unit) x Number of Units) |
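A minimal sketch of the DPMO formula above; the counts are made-up values.

```python
def dpmo(defects, opportunities_per_unit, units):
    """Defects per million opportunities."""
    return (defects * 1_000_000) / (opportunities_per_unit * units)

print(dpmo(defects=38, opportunities_per_unit=5, units=1200))   # about 6333 DPMO
```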
|
|
Term
|
Definition
Probability, or random, samples ensure that the different population units have equal, non-zero probabilities of being chosen. In contrast, Volunteer, Judgment, and Convenience samples do not use probability and cannot be assumed to be free of bias. |
|
|
Term
|
Definition
used to show trends in data over time. All processes vary, so single point measurements can be misleading. Displaying data over time increases understanding of the real performance of a process, particularly with regard to an established target or goal. |
|
|
Term
|
Definition
XmR charts can be used with any type of data, and are particularly useful to chart the whole count during a given period. They are not as useful for processes with high volume output, where a subgroup of output is collected each period. In that case, an X-bar and R, or X-bar and S chart should be used, depending on the subgroup size. |
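A minimal sketch of the XmR (individuals and moving range) limit calculations, using the standard constants 2.66 and 3.267 for two-point moving ranges; the data are illustrative assumptions.

```python
data = [52, 55, 51, 57, 54, 53, 56, 58, 52, 54]   # assumed individual values

moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
x_bar = sum(data) / len(data)
mr_bar = sum(moving_ranges) / len(moving_ranges)

x_ucl, x_lcl = x_bar + 2.66 * mr_bar, x_bar - 2.66 * mr_bar   # individuals chart limits
mr_ucl = 3.267 * mr_bar                                       # moving range chart upper limit

print((x_lcl, x_bar, x_ucl), (mr_bar, mr_ucl))
```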
|
|
Term
|
Definition
Used to chart the fraction defective when the subgroup size is variable or constant (although usually greater than 50). This chart assumes that the process has a binomial distribution. |
|
|
Term
|
Definition
Used to chart the number of defective units when the subgroup size is constant (usually greater than 50). This chart assumes that the process has a binomial distribution. |
|
|
Term
|
Definition
Used to chart the number of defects when the subgroup size is constant (although usually greater than 50). This chart assumes that the process has a Poisson distribution. |
|
|
Term
|
Definition
Used to chart the number of defects per unit when the subgroup size is variable or constant. This chart assumes that the process has a Poisson distribution. |
|
|
Term
|
Definition
The random variable counts the number of events that take place in a given interval (usually of time or space). All events take place independently of all other events. The rate at which events take place is constant, usually denoted λ. |
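A minimal sketch of Poisson probabilities for an assumed rate of 2 defects per unit, with SciPy assumed available.

```python
from scipy.stats import poisson

lam = 2.0                        # assumed average defects per unit
print(poisson.pmf(0, lam))       # probability of a defect-free unit (about 0.135)
print(poisson.cdf(3, lam))       # probability of 3 or fewer defects (about 0.857)
```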
|
|
Term
|
Definition
A fixed number of repeated, identical, independent trials. n is usually the parameter chosen to label the number of trials. Every trial results in either a success, with probability p, or a failure, with probability 1-p. These must be the only two outcomes for a trial. The random variable of interest is the total number of trials that ended in a success. |
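A minimal sketch of binomial probabilities for 50 independent trials with a 2% chance of failure on each, with SciPy assumed available; n and p are illustrative assumptions.

```python
from scipy.stats import binom

n, p = 50, 0.02                    # assumed number of trials and per-trial failure probability
print(binom.pmf(0, n, p))          # probability of zero failures (about 0.364)
print(1 - binom.cdf(2, n, p))      # probability of more than two failures (about 0.08)
```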
|
|
Term
|
Definition
X-Bar and R charts are recommended for subgroup sizes of 10 or less. If the subgroup size exceeds 10, the range chart is replaced by a chart of the subgroup standard deviation, or S chart. |
|
|
Term
|
Definition
Cpk and Ppk are not appropriate capability indices for attribute or discrete data. If you have a direct measure of defect frequency, such as p-bar, c-bar, or X-bar of a count, then that direct measure is the best indication of capability. In any case, the process should be stable if you are making any representation of capability. Cpk and Ppk are alternate ways of expressing the Capability Level - they both reflect the probability of producing a defect. Cpk uses the range value from a control chart to estimate the standard deviation, while Ppk calculates the standard deviation directly from the data. |
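A minimal sketch contrasting the two standard deviation estimates described above: a within-subgroup estimate from the average range (used for Cpk) versus the overall sample standard deviation (used for Ppk). The subgroup data and the d2 constant for subgroups of five are illustrative assumptions.

```python
import statistics

subgroups = [[10.1, 9.9, 10.0, 10.2, 9.8],    # assumed subgroup measurements
             [10.3, 10.0, 9.9, 10.1, 10.2],
             [9.7, 10.0, 10.1, 9.9, 10.0]]
d2 = 2.326                                     # control chart constant for subgroup size 5

r_bar = sum(max(s) - min(s) for s in subgroups) / len(subgroups)
sigma_within = r_bar / d2                                              # basis for Cpk
sigma_overall = statistics.stdev([x for s in subgroups for x in s])    # basis for Ppk

print(sigma_within, sigma_overall)
```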
|
|
Term
|
Definition
Cp is often referred to as "Process Potential" because it describes how capable the process could be if it were centered precisely between the specifications. The calculation does not consider where the process is actually centered. |
|
|
Term
|
Definition
Think of Cpk as a Cp calculation that is handicapped by considering only the half of the distribution that is closest to the specification. |
|
|
Term
Conditions for something to be considered capable |
|
Definition
The process must be stable, or in control. The shape of the process distribution must be understood. The standard Capability calculations presented in this lesson assume the process is normally distributed, which must be verified through testing. |
|
|
Term
|
Definition
Sigma is a measure of standard deviation, whereas sigma level is a metric for the number of standard deviations from the mean to the nearest specification limit. |
|
|