
ISO 9000 and data-driven decision making


The title of principle No. 7 of the 2000 version of ISO 9000 sounds quite suggestive, although we prefer the wording "decision making based on data analysis." Why is this?

Our experience has repeatedly confirmed that most decisions are made on data that has not been properly analyzed, something you can easily verify by reviewing the indicators your company keeps:

  • Average monthly sales
  • Average salary
  • Average sales per employee

An example of misunderstanding the average is reprimanding a sales agent for being below it: in any data set, it is normal for some values to fall above the average and others below. Another example we have heard: "we are not doing so badly; at least 50% are above average."

This reminds us of the story of the man who drowned in a river with an average depth of one meter.

What other statistics are there besides the average? There are the range, the mode, the median, the variance, the standard deviation, and so on: values that in many cases are more significant than the average.
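As a rough sketch of why the average alone can mislead, the short Python snippet below (using made-up sales figures, purely for illustration) computes the mean alongside the median, range, and standard deviation for two hypothetical agents; both share the same average, yet they behave very differently:

```python
import statistics

# Hypothetical monthly sales figures for two agents (illustrative only)
agent_a = [100, 100, 100, 100, 100]
agent_b = [20, 60, 100, 140, 180]

for name, sales in [("A", agent_a), ("B", agent_b)]:
    print(name,
          "mean:", statistics.mean(sales),
          "median:", statistics.median(sales),
          "range:", max(sales) - min(sales),
          "stdev:", round(statistics.stdev(sales), 1))
```

Both agents average 100, but agent B's range of 160 and large standard deviation tell a very different story than agent A's perfectly stable sales.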

Consider a survey of employee satisfaction, scored from very dissatisfied (1) to very satisfied (5). The average for a given question was 2, which was taken as the level of satisfaction. With the same data, a satisfaction index of 22% was determined by taking the number of responses in the upper values (4 and 5) and dividing by the total number of responses. This yielded a better indicator, comparable across other variables. The standard deviation turned out to be 1.17, with a coefficient of variation of 50% (SD / average), which indicated little uniformity in the responses. Perception changed radically when the "satisfaction level" was replaced by the "satisfaction index": the first measures how satisfied employees are; the second, how many employees are satisfied.
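The calculations described above can be sketched in a few lines of Python. The response data below is invented for illustration, so the resulting figures differ from those reported in the article:

```python
import statistics

# Hypothetical 1-5 survey responses for one question (illustrative only)
responses = [1] * 10 + [2] * 4 + [4, 4, 5, 5]

mean = statistics.mean(responses)          # "level of satisfaction"
sd = statistics.stdev(responses)
cv = sd / mean                             # coefficient of variation (SD / average)

# "Satisfaction index": share of answers in the top two values (4 and 5)
index = sum(r >= 4 for r in responses) / len(responses)

print(f"level of satisfaction (mean): {mean:.1f}")
print(f"satisfaction index: {index:.0%}")
print(f"coefficient of variation: {cv:.0%}")
```

A high coefficient of variation signals that the responses are far from uniform, so the average by itself hides the spread of opinion.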

When only the average is used, one can fall into the mistake of saying that the answers to two questions are similar because they have the same average, which is not always true. The same applies when we calculate average weight, average height, average temperature, average salary, average production, or average rejection.

Acceptance of the data:

When more rigorous analysis of the data is performed, such as a correlation between the different questions or variables under study, only those results that fit the prevailing paradigm or the obvious are accepted (for example, employee satisfaction level versus salary level); when the correlation coefficient does not fit those paradigms, it is rejected as invalid, without any further analysis of why it takes that value.

While it is true that a correlation coefficient says there is a relationship between two variables or questions, it does not necessarily mean that one causes the other, as could be misinterpreted; still, it is worth analyzing the reason for such a relationship. When the correlation coefficient is squared, we obtain the coefficient of determination, which indicates the probability that a given answer to one question will be accompanied by a similar (or inverse) answer to another question.

Example: for question 1 of a survey, the following relationships were found:

  Question   Correlation   Coef. Determ.
  4              .78            61%
  6              .22             4%
  14            -.84            70%

Question 4 is directly related to question 1: the probability that a respondent who answers with a high value (for example, 5) in question 1 will also answer with a high value in question 4 is 61%. Question 6 is not related to question 1. Question 14 is strongly related to question 1, but in the opposite direction: the probability that a respondent who answers high (5) in question 1 will answer with a low value (for example, 1) in question 14 is 70%.
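A minimal sketch of these two measures, computing the Pearson correlation from its definition and squaring it to get the coefficient of determination; the answers for questions 1 and 14 below are hypothetical, chosen to show an inverse relationship like the one in the table:

```python
import statistics

def correlation(xs, ys):
    """Pearson correlation coefficient, computed from the definition."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical answers to questions 1 and 14 from the same ten respondents
q1  = [5, 4, 5, 3, 2, 5, 1, 4, 3, 5]
q14 = [1, 2, 1, 3, 4, 2, 5, 1, 3, 1]

r = correlation(q1, q14)
r2 = r ** 2        # coefficient of determination
print(f"correlation: {r:.2f}, determination: {r2:.0%}")
```

The strongly negative correlation and the high coefficient of determination together say: when question 1 is answered high, question 14 is very likely answered low.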

This type of analysis can be performed by adding a control question to the customer survey, for example: "In general terms, what is your level of satisfaction?" The correlation then makes it possible to distinguish which variables have the greatest impact on satisfaction and which are indifferent.

Correlation studies are very important when establishing cause-and-effect relationships among the indicators of the Balanced Scorecard, making it possible to define each indicator's weight with respect to the higher-level indicator. Additionally, they allow distinguishing outcome indicators from process drivers.

Use of graphs:

The graphs used in companies often show only the values, without indicating their trend, which in some cases is much more important; and when a trend is used, it is usually only the linear trend (the equation of a straight line). A very useful curve is the logarithmic curve. If you have a chart in Excel, just click on any of the points in the chart with the right mouse button and the "Add Trendline" menu appears, with the curves mentioned and many more.

Why is this type of analysis important? In a client company we were analyzing the monthly sales of a product, comparing two different years.

The two-year graph showed a steady increase in sales. Superimposing the sales of year 1 over those of year 2, the more recent year showed higher sales. When the trend curve was calculated for each year, a positive trend appeared for both periods; but when the trend was changed to a logarithmic curve, it could be seen that in the previous year the trend rose almost vertically, while in the new year the trend remained horizontal and even decreasing. This indicated that although sales were increasing each year, the rate of increase was falling. It is equivalent to plotting the percentage increase of each month's sales over the previous month. Far from being happy about the year-on-year increase in sales, there was concern that the rate of increase was shrinking significantly.

If the logarithmic curve points upward (an almost vertical trend), it may indicate real growth. A horizontal trend may indicate a slowdown, even though sales are increasing year by year: something similar to what is called diminishing returns.
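The comparison described above can be approximated numerically by fitting y = a·ln(x) + b to each year's monthly sales by least squares and comparing the slopes. The sales figures below are invented for illustration, with year 1 growing strongly and year 2 nearly flat:

```python
import math

def log_trend(values):
    """Least-squares fit of y = a*ln(x) + b over months 1..n; returns slope a.
    A large positive slope suggests real growth; a small slope suggests the
    rate of increase is tailing off even if the values still rise."""
    xs = [math.log(i + 1) for i in range(len(values))]
    n = len(values)
    mx = sum(xs) / n
    my = sum(values) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, values))
            / sum((x - mx) ** 2 for x in xs))

# Hypothetical monthly sales for two consecutive years (illustrative only)
year1 = [100, 130, 158, 183, 205, 224, 241, 256, 270, 283, 295, 306]
year2 = [310, 316, 320, 323, 325, 326, 327, 328, 328, 329, 329, 330]

print("year 1 log-trend slope:", round(log_trend(year1), 1))
print("year 2 log-trend slope:", round(log_trend(year2), 1))
```

Both years end higher than they started, yet the much smaller slope for year 2 exposes the flattening rate of increase that a simple "sales went up" reading would miss.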

Additionally, regarding graphs, we see many in which only a single value is plotted, whether individual or an average, without indicating the range, be it the range of the data from which the average was calculated or a moving range for individual values. Charts of percentages (P) are used, but not charts of defective units (NP) or of defects (C), much less of defects per unit (U); this is a topic we will expand on in "Statistical Control of Processes and Quality Improvement".

The enemy of Quality:

When a QMS (Quality Management System) is developed, it is established for the purpose of obtaining "standard" products and achieving repeatability and reproducibility, which is not always accomplished merely by writing procedures.

Why? Because a way to control the enemy of all processes, and therefore of quality, is not always established: variation. The variability of any process can and should be measured, which is done using the standard deviation; it can also be seen through the range when using a chart of individual values, another way to verify and measure variability.

What has been set out here might suggest strong statistical training on the author's part, which is not the case; experts in the field could therefore deepen the concepts expressed here.

Having clarified the above, we return to the standard deviation (SD), one of the best indicators, and its multiple uses. The SD allows us to know the variability of the process (plus or minus 3 SD), to calculate the capacity of that process to meet established specifications (Cpk) or the client's requirements (for example, the capacity to deliver an order within 8 days), and to estimate the probability of reaching a sales goal (P(z)). It also allows comparison of two completely different processes or areas, such as sales and delivery times, using the coefficient of variation or the coefficient of determination. It makes it feasible to set limits that distinguish when processes are affected by normal (common) causes of variation and to separate out special causes. The SD also helps in more advanced analyses such as skewness, kurtosis, or analysis of variance.
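As an illustrative sketch of one of these uses, the snippet below estimates, from hypothetical delivery times, the one-sided capability of meeting the 8-day delivery requirement mentioned above (a Cpk against an upper specification limit only; the data is invented for illustration):

```python
import statistics

# Hypothetical delivery times in days for recent orders (illustrative only)
times = [6.1, 7.0, 6.5, 7.4, 6.8, 7.1, 6.3, 6.9, 7.2, 6.7]

mean = statistics.mean(times)
sd = statistics.stdev(times)

usl = 8.0  # client requirement: deliver within 8 days (upper spec limit)

# One-sided capability index: distance from mean to the limit, in units of 3 SD
cpk = (usl - mean) / (3 * sd)

print(f"mean={mean:.2f} days  sd={sd:.2f}  Cpk={cpk:.2f}")
```

A Cpk below about 1 (as here) means the natural spread of the process, mean plus 3 SD, reaches past the 8-day limit, so some late deliveries are to be expected even though the average looks comfortable.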

By means of the SD it is possible to establish normal limits of variation for budgets, to identify truly outstanding salespeople, and to estimate whether an entire batch of products needs to be reviewed based on a sample. Motorola has bequeathed us the concept of Six Sigma (6 SD) and its application to both production and administrative processes, as well as the importance of pursuing ever greater challenges, leaving the traditional percentage of defects or errors for the more ambitious goal of PPM (parts per million; 1.5% equals 15,000 parts per million).

One last tip:

Do not think that everything we have discussed requires sophisticated computer programs; it is as simple as having Excel. It is likely that the Excel on your computer does not have data analysis activated. How is it activated? Simple: go to "Tools", select "Add-Ins" from the menu, then check the "Analysis ToolPak" options and press OK. When you return to "Tools", you will see the "Data Analysis" option in the menu, with a large number of statistical options, more than you are likely to use.

Conclusion:

The sound principle of "making decisions based on data" stems from the recommendations that Dr. W. E. Deming gave the Japanese when he told them "In God we trust; all others must bring data." Even then, the source of those data should be questioned, because decisions made on wrong data, or on wrong analysis, can be fatal.

There are numerous statistical tools for analyzing and interpreting the data that come out of processes, so that decision making can have a scientific component beyond simple "feeling", which, although always important, should not be the only input. Additionally, we must conclude that data is not reality, but it does represent reality in a way that is worth studying. We recall the research classes in our master's program, where the professor's maxim rang true: "first comes the problem to be investigated, and then come the data." It is therefore advisable not to go around collecting data everywhere without really knowing what problem we want to investigate.

________________________

By: Gilberto Quesada for Grupo Kaizen.SA
