Process Data Analysis: Filtering Out the Noise, and Acting On the Signals

This guest post is authored by Tim Gellner, a senior consultant with MAVERICK Technologies.

Prior to the ISA Automation Week 2013 conference, I wrote a blog post titled “Measuring the Value of Overall Equipment Effectiveness in Manufacturing Processes.” The post outlined several key elements needed to implement a successful overall equipment effectiveness (OEE) program. The last element discussed was the need to standardize methods for data analysis using process behavior charts. This article discusses the properties and uses of process behavior charts in analyzing process data, using the OEE example.

As automation professionals, we design and implement systems that gather data from every process in a manufacturing facility for use in aggregating performance metrics. Often the analysis of the data and the subsequent visualization of the metrics fall under IT and accounting rather than operations and engineering. The result is standard business-style charting (bar charts, pie charts, waterfalls, etc.) that describes the data but generally does not lend itself to statistical analysis for continuous improvement.

In the book Understanding Variation – The Key to Managing Chaos, Donald J. Wheeler writes:

We analyze numbers in order to know when a change has occurred in our processes or systems. We want to know about such changes in a timely manner so that we can respond appropriately. So, in our analysis of numbers, we need to have a way to distinguish those changes in the numbers that represent changes in our process from those that are essentially noise. To this end Walter Shewhart made a crucial distinction between two types of variation in the numbers. Some variation is routine, and is to be expected even when the process has not changed. Other variation is exceptional and therefore to be interpreted as a signal of a process change. In order to separate variation into these two components he created the control chart, which is now being called a process behavior chart.

As with the control chart, the process behavior chart comprises an individual value (I) chart and a moving range (mR) chart. In the context of the process behavior chart, the upper and lower control limits are referred to as the upper and lower natural process limits.

The individual OEE measurements, the average of the values, and the upper and lower natural process limits (UNPL and LNPL) are plotted on the I chart. The UNPL and LNPL are computed by adding to and subtracting from the average the product of a scaling factor (2.66) and the average moving range, which yields limits of approximately ±3 sigma. The mR chart displays the variation of each measurement from the previous measurement, the average of the moving ranges, and the upper range limit (URL). The URL is the product of a scaling factor (3.268) and the average moving range. The effect is to filter out the routine variation: by characterizing the extent of the inherent routine variation, the natural process limits on the process behavior chart differentiate routine from exceptional variation.
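
To make the arithmetic concrete, the following is a minimal Python sketch of these limit calculations, assuming a simple list of measurements. The function name and sample values are hypothetical; the scaling factors (2.66 and 3.268) are the standard XmR constants cited above.

    # Minimal sketch (hypothetical helper) of the I-chart and mR-chart
    # limit calculations described above.
    def xmr_limits(values):
        n = len(values)
        average = sum(values) / n
        # Moving range: absolute change from each measurement to the next
        moving_ranges = [abs(values[i] - values[i - 1]) for i in range(1, n)]
        avg_mr = sum(moving_ranges) / len(moving_ranges)
        return {
            "average": average,
            "UNPL": average + 2.66 * avg_mr,  # upper natural process limit (~ +3 sigma)
            "LNPL": average - 2.66 * avg_mr,  # lower natural process limit (~ -3 sigma)
            "avg_mR": avg_mr,
            "URL": 3.268 * avg_mr,            # upper range limit for the mR chart
        }

    # Hypothetical daily OEE percentages
    print(xmr_limits([62.1, 64.8, 61.5, 63.9, 60.7, 65.2, 62.6]))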

A predictable process is one that exhibits only routine variation, defined as measurements falling above and below the average in non-sustained patterns (fewer than seven consecutive values on the same side of the average) and inside the limits. This type of variation is expected.

Exceptional variation indicates that a change has occurred in the process and is characterized by one or more of the following (a code sketch of these checks follows the list):

  • One or more measurements which fall outside the natural process limits.
  • Seven or more consecutive measurements falling on the same side of the average.
  • Six or more consecutive measurements aligned in the same direction (upward or downward).
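
The three rules lend themselves to direct implementation. Below is a minimal sketch, assuming the limits and average come from the earlier calculation; the helper name and structure are hypothetical.

    # Minimal sketch (hypothetical helper) of the three detection rules above.
    def find_signals(values, unpl, lnpl, average):
        signals = set()

        # Rule 1: one or more measurements outside the natural process limits
        for i, v in enumerate(values):
            if v > unpl or v < lnpl:
                signals.add(i)

        # Rule 2: seven or more consecutive measurements on one side of the average
        run_side, run_len = 0, 0
        for i, v in enumerate(values):
            side = 1 if v > average else (-1 if v < average else 0)
            run_len = run_len + 1 if (side == run_side and side != 0) else 1
            run_side = side
            if side != 0 and run_len >= 7:
                signals.add(i)

        # Rule 3: six or more consecutive measurements moving in the same direction
        trend_dir, trend_len = 0, 1
        for i in range(1, len(values)):
            d = 1 if values[i] > values[i - 1] else (-1 if values[i] < values[i - 1] else 0)
            trend_len = trend_len + 1 if (d == trend_dir and d != 0) else 2
            trend_dir = d
            if d != 0 and trend_len >= 6:
                signals.add(i)

        return sorted(signals)  # indices of points showing exceptional variation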

Each component data set contributing to the overall OEE measurement (availability, performance, and quality) is analyzed using the same methods. This provides a natural drill-down to more granular information, giving a basis for root cause analysis when an investigation is warranted, or for determining the best candidate for improvement in a predictable process to increase overall performance.

Process Data Analysis Chart

The chart depicts a predictable process that underwent a change on or about 1/19, resulting in the 1/19 OEE data point falling outside the lower natural process limit on the I chart. All values on the I chart prior to 1/19 represent routine variation; the value on 1/19 represents exceptional variation and requires investigation.

The chart also contains signals indicating that an additional process change occurred as early as 1/22, and certainly by 1/26, when the sixth consecutive OEE value fell above the process average with all six points moving in the same direction. This signal indicates that a change in the process has occurred (the OEE percentage has increased and the variability has decreased), that the attributes of the change can be identified through more in-depth analysis of the component data sets, and that the natural process limits have also changed and must now be recalculated.
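
As a usage note, one way to act on such a signal, reusing the hypothetical xmr_limits sketch from earlier, is to recompute the limits from the post-shift measurements only. The data and split point below are invented for illustration.

    # Hypothetical series with a sustained upward shift; once the shift is
    # confirmed, recompute the natural process limits from the new process only.
    oee_values = [61.2, 63.0, 48.9, 66.4, 67.1, 68.0, 67.5, 68.8, 69.2]
    shift_index = 3  # assumed first point of the changed process
    new_limits = xmr_limits(oee_values[shift_index:])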

To promote and implement continuous improvement in manufacturing processes, the manner in which we analyze the data is as important as having the data to begin with. We must employ the appropriate analytical tools to filter out the noise and highlight the signals, and then act on the signals.

Tim Gellner

About the Author
Tim Gellner is a senior consultant in MAVERICK Technologies’ Operational Consulting group with more than 20 years of experience in discrete and continuous manufacturing processes, manufacturing intelligence, and process improvement. He earned his bachelor’s degree in systems and control engineering from the University of West Florida in Pensacola, Fla. Contact Tim at: tim.gellner@mavtechglobal.com.

Measuring the Value of Overall Equipment Effectiveness in Manufacturing Processes

This guest post is authored by Tim Gellner, a senior consultant with the manufacturing IT group at MAVERICK Technologies.

Is overall equipment effectiveness (OEE) a “magic metric” or just another flavor-of-the-month measurement? The answer is: it can be either, but not both. We have become quite good at producing production metrics boiled down to a value that is green, yellow, or red against an arbitrary goal or world-class standard, and that is as far as we go. The result is alternating periods of contentment, concern, and panic. When the metric is green we pay no attention to it; when it is yellow we hope it will be green again soon, so we tweak something and watch it periodically. When it slips to red we scramble and then write reports. Before long, the efforts to keep us out of the red compound the original problem, and we struggle to get back to where we were before we had the green, yellow, and red metric. At this point the program is relegated to “flavor of the month” and abandoned. For OEE to become the “magic metric,” there are some empirical truths that must be ingrained in the organization at the outset of the program definition.

OEE is a continuous improvement program and therefore requires a sustained commitment to produce results.  A successful implementation requires an overarching vision for the program and comprehensive support from management (top down) as well as understanding, knowledge and realization of benefits from the plant floor (bottom up).  The meeting ground is the data analysis and improvement methodology (we are all on the same team with a common goal).  It is imperative that those responsible for the OEE numbers see the program as both a method for analysis and a means for improvement, not another number to have to defend or explain.

Implementing an OEE program is as much about people and processes as it is about technology. Ignoring the contribution of the first two components will lead to inevitable failure regardless of the sophistication of the third. Conversely, if the technology is cumbersome and difficult to use, the knowledge and enthusiasm of the people will be lost in their attempts to interact with the technology (hardware and software), and the OEE program is doomed.

Standardize everything, get everyone to agree, and stick to it. When you stop laughing, consider this: even if the program will encompass only a single machine in a single facility, you must still define and document what you are measuring, how you will measure it, and how you will analyze the resulting data. Now consider the case where the program will be rolled out to multiple facilities, with multiple lines and machines. If the OEE metric is to be meaningful (i.e., facilitates “apples to apples” analysis), then the following items must be standardized across the enterprise:

  • OEE Calculation Model − The basic calculation is straightforward: OEE = Availability × Performance × Quality. The variance occurs in the lower levels of the calculation. The OEE calculation model diagram below represents the calculation model agreed upon by a company in the initial phase of a multi-line, multi-facility OEE program rollout (a minimal calculation sketch follows the diagram caption below).
  • OEE Factor Definitions − The terms in each block of the diagram must be fully defined as they relate to the production processes within the organization that are to be included in the program.
  • Downtime Reasons − While downtime reasons are not required for the calculation of OEE, it is advantageous to assign reasons that provide meaningful context to the event/duration data. Ideally, an automated system would assign the reasons for unplanned stops automatically, reliably, and in adherence to the standards, but such automation is generally not available, so we usually must rely on downtime reasons assigned manually by operators. To be effective, the reason assignment process must be quick and easy; to this end, we must present a list of reasons that is broad enough to provide assignable explanations for both planned and unplanned downtime events, and at the same time narrow enough not to overwhelm.
  • Methods for Data Analysis − The data derived from manufacturing processes contain both natural variability and signals. The key to meaningful data analysis is to filter out the natural variability and leave the signals. Understanding and acting on the signals in the data is the key to improvement, which is the goal of any OEE program. Far too often we rush to produce slick, glitzy charts that look great but do not support meaningful analysis, and that often become the basis for the alternating periods of contentment, concern, and panic because they encapsulate the variability and the signals into a description of the data, not an analysis of it. To avoid this pitfall, I turn to Shewhart’s process behavior charts. A discussion of the properties and uses of the process behavior chart will follow in an upcoming post.
OEE Calculation Model
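
Since the diagram itself is not reproduced here, the sketch below shows only the top-level arithmetic given in the text (OEE = Availability × Performance × Quality). The factor definitions are common conventions offered as assumptions; a real rollout would substitute the standardized definitions agreed upon by the organization.

    # Minimal sketch of the top-level OEE calculation. The factor
    # definitions are assumed conventions, not the article's standard.
    def oee(planned_time, run_time, ideal_cycle_time, total_count, good_count):
        availability = run_time / planned_time                     # run time vs. planned time
        performance = (ideal_cycle_time * total_count) / run_time  # actual vs. ideal rate
        quality = good_count / total_count                         # good units vs. total
        return availability * performance * quality

    # Hypothetical shift: 480 planned minutes, 400 run minutes,
    # 0.8-minute ideal cycle, 450 units produced, 430 good
    print(f"OEE = {oee(480, 400, 0.8, 450, 430):.1%}")  # OEE = 71.7%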

Measuring and analyzing OEE has proven to be a very powerful tool for improving manufacturing processes. Getting more out of existing equipment goes straight to the bottom line. R. Hansen’s book, Overall Equipment Effectiveness, notes: “A 10% improvement in OEE can result in a 50% improvement in ROA (return on assets), with OEE initiatives generally ten times more cost-effective than purchasing additional equipment.” Given this incentive, why not take advantage of the opportunity and employ thoughtful, standard methods to make OEE the magic metric?

Tim Gellner

About the Author
R. Tim Gellner has more than 20 years of experience in discrete and continuous manufacturing processes, manufacturing intelligence, and process improvement. Tim advocates continuous improvement through the use of actionable and timely information. He earned a bachelor’s degree in systems and control engineering from the University of West Florida in Pensacola, Fla. Contact Tim at: tim.gellner@mavtechglobal.com.

What Your DCS Knows But Won’t Tell You

This is an abstract that will be presented at ISA Automation Week 2012 in Orlando, Florida.
This session is in the Control Performance Track: Manufacturing Intelligence

Presented By:

Mr. George Buckbee, ExperTune, Inc.

Abstract:

Your DCS is keeping secrets from you. The distributed control system and the data historian hold a huge amount of data from your plant, but they are not telling you the most important secrets that lie within. Your control system is awash in data. Every second more data pours in from instruments and analyzers throughout the plant. The data historian dutifully stores it away, so that you have megabytes, gigabytes, or even terabytes of data. It is so much data that it can be overwhelming. You could easily spend days or weeks poring through it, looking for something significant.

The DCS and data historian are dutifully sharing it all, and yet it seems they are hiding the important information somewhere in this mountain of data. Specific techniques can be used to extract the meaningful information from the control system. Simple queries and mathematical techniques can isolate equipment, process, and control problems. These problems can have far-reaching effects and often go unnoticed.

This paper explains how to uncover these secrets, and use the answers to improve the bottom-line operation of your plant.  Learn about specific techniques to uncover process bottlenecks, improve efficiency, save energy, and resolve quality issues.

Estimated Standard Deviation

Because an exact number for standard deviation could only be obtained by taking an infinite number of measurements, we calculate the estimated standard deviation: s = √[ Σ(xᵢ − x̄)² / (n − 1) ], where the xᵢ are the n measured values and x̄ is their mean.

Let’s take a set of measurements from a temperature transmitter with a range of 0-100 degrees Celsius.

IDEAL VALUES     AS FOUND VALUES
0                1
10               12
20               21
30               28
40               42
50               51
60               62
70               68
80               85
90               91
100              102

The first step in calculating the estimated standard deviation is to find the mean of the deviations between the ideal and as-found values.

For our example the deviations are:

1,2,1,2,2,1,2,2,5,1,2

The average deviation would be 1.909 degrees Celsius.

The next step is to determine how far each deviation is from the average deviation (values rounded to one decimal place):

.9,.1,.9,.1,.1,.9,.1,.1,3.1,.9,.1

Then we square each of these values and sum them:

.81+.01+.81+.01+.01+.81+.01+.01+9.61+.81+.01=12.91

Then we divide by the number of measurements minus 1, which is 10, to arrive at 1.29

The square root of 1.29 is 1.136, which is our estimated standard deviation.

The estimated standard deviation, based upon the 11 checks made, is 1.136 degrees Celsius for the transmitter.

The estimated standard deviation is expressed in the same units as the data.
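
As a check on the worked example, here is a minimal Python sketch using only the standard library; the ideal and as-found values are taken from the table above.

    import math

    # Deviations between the ideal and as-found values from the table
    ideal = [0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
    as_found = [1, 12, 21, 28, 42, 51, 62, 68, 85, 91, 102]
    deviations = [abs(a - i) for i, a in zip(ideal, as_found)]  # 1, 2, 1, 2, ...

    n = len(deviations)
    mean_dev = sum(deviations) / n                          # 1.909 degrees Celsius
    sum_sq = sum((d - mean_dev) ** 2 for d in deviations)   # about 12.91
    est_std = math.sqrt(sum_sq / (n - 1))                   # about 1.136
    print(f"mean deviation = {mean_dev:.3f}, estimated standard deviation = {est_std:.3f}")

The same result comes from statistics.stdev(deviations) in the Python standard library, which also divides by n − 1.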

References: A Beginner’s Guide to Uncertainty of Measurement by Stephanie Bell.

http://www.easycalculation.com/statistics/standard-deviation.php

Improved Solid State Hydrogen-specific Analyzing Systems

Prabhu Soundarrajan is currently the director of the ChemPID division at ISA and the growth leader for the gas sensor business at LumaSense Technologies, where he leads the development and commercialization of infrared and photoacoustic gas analyzers for the chemical, petroleum, energy, cleantech, medical, and utility markets.

This presentation was delivered at the 55th Annual Symposium of the Analysis Division by Prabhu Soundarrajan. It explains why solid-state hydrogen sensors enable new monitoring applications through both enhanced capability and reduced cost.
