What are New Technologies and Approaches for Batch and Continuous Process Control?

The following technical discussion is part of an occasional series showcasing the ISA Mentor Program, authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc (now Eastman Chemical). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants.

 

Danaca Jordan’s Question

What are the technical basis and capabilities of technologies other than PID and model predictive control (MPC)? These technologies seem fascinating and I would like to know more, particularly as I study for the ISA Certified Automation Professional (CAP) exam.

Greg McMillan’s Answer

Michel Ruel has achieved considerable success in the use of fuzzy logic control (FLC) in mineral processing as documented in “Ruel’s Rules for Use of PID, MPC and FLC.” The process interrelationships and dynamics in the processing of ores are not defined due to the predominance of missing measurements and unknown effects. Mineral processing PID loops are often in manual, not only for the usual reasons of valve and measurement problems, but also because process dynamics between a controlled and manipulated variable radically change, including even the sign of the process action (reverse or direct) based on complex multivariable effects that can’t be quantified.

If the FLC configuration and interface is set up properly for visibility, understandability and adjustability of the rules, the plant can change the rules as needed, enabling sustainable benefits. In the application cited by Michel Ruel, every week metallurgists validate the rules and work with control engineers to make slight adjustments. A production record was achieved in the first week. The average use of energy per ton decreased by 8 percent, and the tonnage per day increased by 14 percent.

There have been successful applications of PID and MPC in the mining industry as detailed in the Control Talk columns “Process control challenges and solutions in mineral processing” and “Smart measurement and control in mineral processing.”

I have successfully used FLC on a waste treatment pH system to prevent RCRA violations at a Pensacola, Fla. plant because of my initial excitement about the technology. It did very well for decades but the plant was afraid to touch it. The Control magazine article “Virtual Control of Real pH” with Mark Sowell showed how you could replace the FLC with an MPC and PID strategy that could be better maintained, tuned and optimized.

We used FLC integrated into the software of a major supplier of expert systems in the 1980s and 1990s, but there were no real success stories for FLC. There was one successful application of an expert system for a smart level alarm, but it did not use FLC, and a simple material balance could have done as well. Several smart alarm applications were simply turned off. After nearly 100 man-years of effort, we had very little to show for these expert systems. You could add a lot of rules for FLC and logic based on the expertise of the application's developer, but how these rules played together and how you could tell which rule needed to be changed was a major problem. When the developer left the production unit, operators and process engineers were not able to make the changes that were inevitably needed.

The standalone field FLC advertised for better temperature setpoint response cannot do better than a well-tuned PID if you use all of the PID options summarized in the Control magazine article “The greatest source of process control knowledge,” including a PID structure such as two degrees of freedom (2DOF) or a setpoint lead-lag. You can also use gain scheduling in the PID if necessary. The problem with FLC is how you tune it and update it for changing process conditions. I wrote the original section on FLC in A Guide to the Automation Body of Knowledge, but by mutual agreement between me and ISA the next edition omits it, since making more room to help readers get the most out of the PID was judged more generally useful.

FLC has been used in pulp and paper. I remember instances of FLC for kiln control but since then we have developed much better PID and MPC strategies that eliminate interaction and tuning problems.

As for artificial neural networks (ANN), I have seen some successful applications in batch end point detection and prediction and in inferential dryer moisture control. For continuous operations, time delays must be inserted on the inputs to make them coincide with the measured output. For plug flow operations like dryers, this is readily done since the deadtime is simply the volume divided by the flow rate. For continuous vessels and columns, the insertion of very large lag times and possibly a small lead time is needed besides the dead time. No dynamic compensation is needed for batch operation end point prediction.
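
To make the plug flow case concrete, here is a minimal Python sketch of that input alignment. The file name, column names, scan interval, and dryer numbers are all assumptions for illustration, not from a real application:

```python
import pandas as pd

# Hypothetical historian export: one row per scan, 10 s scan interval assumed.
scan_s = 10.0
df = pd.read_csv("dryer_history.csv")  # assumed columns: feed_flow, inlet_temp, moisture

# Plug flow dead time = volume / flow rate (fixed design values assumed here).
volume_m3 = 2.5           # assumed dryer holdup volume
flow_m3_per_s = 0.005     # assumed nominal throughput
deadtime_s = volume_m3 / flow_m3_per_s        # 500 s
shift_rows = int(round(deadtime_s / scan_s))  # 50 scans

# Delay the inputs so each row pairs the inputs with the output they produced.
aligned = pd.DataFrame({
    "feed_flow": df["feed_flow"].shift(shift_rows),
    "inlet_temp": df["inlet_temp"].shift(shift_rows),
    "moisture": df["moisture"],
}).dropna()
```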

You have to be very careful not to go outside of the test data range because of bizarre nonlinear predictions. You can also get local reversals of the process gain sign, causing buzzing if the predicted variable is used for closed loop control. Finally, you need to eliminate correlations between inputs. I prefer multivariate statistical process control (MSPC), which eliminates cross correlation of inputs by virtue of principal component analysis and does not exhibit process gain sign reversals or bizarre nonlinearity upon extrapolation outside of the test data range. Also, MSPC can provide a piecewise linear fit to nonlinear batch profiles, a technique we commonly apply with signal characterizers for any nonlinearity. I think there is an opportunity for MSPC to provide more intelligent and linear variables for an MPC, as we do with signal characterizers.
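
A minimal sketch of how principal component analysis removes the cross correlation, assuming scikit-learn is available (the data here is synthetic, four deliberately correlated inputs):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# X: matrix of cross-correlated process inputs, one row per sample (synthetic).
rng = np.random.default_rng(0)
base = rng.normal(size=(500, 1))
X = np.hstack([base + 0.1 * rng.normal(size=(500, 1)) for _ in range(4)])

# Standardize, then project onto orthogonal principal components.
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))

# The scores are uncorrelated by construction and can be regressed against
# the process output in place of the raw correlated inputs.
print(np.corrcoef(scores[:, 0], scores[:, 1])[0, 1])  # ~0 up to numerical noise
```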

Join the ISA Mentor Program

The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career. Click this link to learn more about how you can join the ISA Mentor Program.

For any type of analysis or prediction, whether using ANN or MSPC, you need to have inputs that show the variability in the process. If a process variable is tightly controlled, the PID or MPC has transferred the variability to the manipulated variable. Ideally, flow measurements should be used, but if only position or a speed is available and the installed flow characteristic is nonlinear, signal characterization should be used to convert position or speed to a flow.
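
A signal characterizer of this kind is essentially a piecewise linear interpolation of the installed flow characteristic. Here is a small sketch; the test points are assumed for illustration and in practice would come from field tests or valve sizing data:

```python
import numpy as np

# Assumed installed flow characteristic (valve position %, flow in gpm).
position_pct = np.array([0.0, 10.0, 25.0, 50.0, 75.0, 100.0])
flow_gpm     = np.array([0.0,  8.0, 30.0, 90.0, 170.0, 220.0])

def position_to_flow(pos_pct: float) -> float:
    """Piecewise linear signal characterizer: valve position -> inferred flow."""
    return float(np.interp(pos_pct, position_pct, flow_gpm))

print(position_to_flow(60.0))  # interpolates between the 50% and 75% test points
```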

Hunter Vegas’ Answer:

I implemented a neural network some years ago on a distillation column level control. The column was notoriously difficult to control. The level would swing all over and anything would set it off, such as weather or feed changes. The operators had to run it in manual because automatic was a hopeless waste of time.

At the time (and this information might be dated) the neural network was created by bringing a stack of parameters into the calculation and “training” it on the data. Theoretically the calculation would strengthen the parameters that mattered, weaken the parameters that didn’t, and eventually configure itself to learn the system.

The process taught me much.  Here are my main learning points:

1) Choose the training data wisely. If you give it straight line data then it learns straight lines. You need to teach it using upset data so it learns what to do when things go wrong. (Then use new upset data to test it.)

2) Choose the input parameters wisely. I started by giving it everything. Over time I came to realize that the data it needed wasn't the obvious data. In this case it needed:

  • The level valve output (not a surprise).
  • The incoming flow (again, not a surprise).
  • The pressure control valve position (This was a surprise. I figured it wanted pressure, but the control valve kept the pressure very flat. However, as the control valve moved around to maintain the pressure, the level swung, so knowing the valve position helped the level controller.)
  • The temperature valve position (same idea as pressure).
  • Sometimes the derivative (rate of change) of a parameter is much more important than the parameter itself.

3) Ultimately the system worked very well – but honestly by the time I had gone through four iterations of training and building the system I KNEW the physics behind it. The calculation for controlling the level was fairly simple when all was said and done. I probably could have just fed it into a feedforward PID and accomplished the same thing.

The experience was interesting and fun, and I actually got an award from ISA for the work. However, when all was said and done, I realized it wasn't nearly as impressive a tool as all the marketing brochures suggested. (At the time it was all the rage – companies were selling neural network controller packages and magazine articles were predicting it would replace PID in a matter of years.)

Danaca Jordan’s Subsequent Question:

Thank you, this is a lot more practical insight than I have been able to glean from the books.

I imagine the batch data analytics program offered by a major supplier of control systems is an example of the MSPC you mentioned. I think I have some papers on it stashed somewhere, since we have considered using it for some of our batch systems. What is batch data analytics and what can it do?

Greg McMillan’s Answer:

Yes, batch data analytics uses MSPC technology with some additional features, such as dynamic time warping. The supplier of the control system software worked with Lubrizol’s technology manager Robert Wojewodka to develop and improve the product for batch processes as highlighted in the InTech magazine article “Data Analytics in Batch Operations.” Data analytics eliminates relationships between process inputs (cross correlations) and reduces the number of process inputs by using principal components that are constructed to be orthogonal and thus independent of each other in a plot of a process output versus principal components. For two principal components, this is readily seen as an X, Y and Z plot with each axis at a 90-degree angle to the other axes. The X and Y axes cover the range of values of the principal components and the Z axis is the process output. The user can drill down into each principal component to see the contribution of each process input. The use of graphics to show this can greatly increase operator understanding. Data analytics excels at identifying unsuspected relationships. For process conditions outside of the data range used in developing the empirical models, linear extrapolation helps prevent bizarre extraneous predictions. Also, the use of a piecewise linear fit means there are no humps or bumps that cause a local reversal of process gain and buzzing.

Batch data analytics (MSPC) does not need to identify the process dynamics because all of the process inputs are focused on a process output at a particular part of the batch cycle (e.g., endpoint). This is incredibly liberating. The piecewise linear fit to the batch profile enables batch data analytics to deal with the nonlinearity of the batch response. The results can be used to make mid-batch corrections.

There is an opportunity for ANN to be used with MSPC to deal with some of the nonlinearities of inputs, but the proponents of MSPC and ANN often think their technology is the total solution and don’t work together. Some even think their favorite technology can replace all types of controllers.

Getting laboratory information on a consistent basis is a challenge. I think for training the model, you could enter the batch results manually. When choosing batches, you want to include a variety of batches but all with normal operation (no outliers from failures of devices or equipment or improper operations). The applications noted in the Wojewodka article emphasize that what you want to have as a model is the average batch and not the best batch (not the “golden batch”). I think this is right for starting to detect abnormal batches, but process control seeks to find the best and reduce the variability from the best, so eventually you want a model that is representative of the best batches.

I like MSPC “worm plots” because they tell me from tail to head the past and future of batches, with the tightness of the coil adding insight. The worm plot is a series of batch end points expressed as a key process variable (PV1n) that is plotted as scores of principal component 1 (PC1) and principal component 2 (PC2).

If you want to do some automated correction of the prediction by taking a fraction of the difference between the predicted result and lab result, you would need to get the lab result into your DCS probably via OPC or some lab entry system interfaced to your DCS. Again the timing of the correction is not important for batch operations. Whenever the bias correction comes in, the prediction is improved for the next batch. The bias correction is similar to what is done in MPC and the trend of the bias is useful as a history of how the accuracy is changing and whether there is possibly noise in the lab result or model prediction.
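
A sketch of that fractional bias update is below. The fraction, names, and numbers are assumptions for illustration; this mirrors the filtered bias correction used in MPC rather than any particular supplier's algorithm:

```python
def update_bias(bias: float, lab_result: float, predicted: float,
                fraction: float = 0.3) -> float:
    """Move the model bias a fraction of the way toward the latest lab error.

    A fraction below 1 filters lab noise; a fraction of 1 trusts each lab
    result completely.
    """
    return bias + fraction * ((lab_result - predicted) - bias)

bias = 0.0
for lab, pred in [(95.2, 94.0), (95.0, 94.1), (95.4, 94.3)]:  # assumed end points
    bias = update_bias(bias, lab, pred)
    print(f"bias-corrected prediction for the next batch: {pred + bias:.2f}")
```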

The really big name in MSPC is John F. MacGregor at McMaster University in Ontario, Canada. McMaster University has expanded beyond MSPC to offer a process control degree. Another big name there is Tom Marlin, who I think came originally from the Monsanto Solutia Pensacola Nylon Intermediates plant. Tom gives his view in the InTech magazine article “Educating the engineer,” Part 2 of a two-part series. Part 1 of the series, “Student to engineer,” focused on engineering curriculum in universities.

For more on my view of why some technologies have been much more successful than others, see my Control Talk blog “Keys to Successful Control Technologies.”

See the ISA book 101 Tips for a Successful Automation Career that grew out of this Mentor Program to gain concise and practical advice. See the InTech magazine feature article “Enabling new automation engineers” for candid comments from some of the original program participants. See the Control Talk column “How to effectively get engineering knowledge” with the ISA Mentor Program protégée Keneisha Williams on the challenges faced by young engineers today, and the column “How to succeed at career and project migration” with protégé Bill Thomas on how to make the most out of yourself and your project. Providing discussion and answers besides Greg McMillan and co-founder of the program Hunter Vegas (project engineering manager at Wunderlich-Malec) are resources Brian Hrankowsky (consultant engineer at a major pharmaceutical company), Michel Ruel (executive director, engineering practice at BBA Inc.), Leah Ruder (process systems automation group manager at the Midwest Engineering Center of Emerson Automation Solutions), Nick Sands (ISA Fellow and Manufacturing Technology Fellow at DuPont) and Bart Propst (Process Control Leader for the Ascend Performance Materials Chocolate Bayou plant).
What Are Best Practices and Standards for Control Narratives?


Adrian Taylor’s Question

At the place I work we are typically good at documenting how we configure our controls in the form of DDS documents but not always as good at documenting why they have been configured that way in the form of rigorous control narratives.

We now have an initiative to start retrospectively producing detailed control narratives for all our existing controls and I am looking for best practice, standards and examples of what good looks like for control narratives.

I wondered if you had any good resources in this regard or you could point me in any direction. (I did look at ANSI/ISA-5.06.01-2007 but this seems more concerned with URS/DDS/FDS documents rather than narratives).

We are mainly DeltaV now.

Hunter Vegas’ Answer

We do a lot of DeltaV systems and we use three different ways to “document” the control system.  As a system integrator, “document” for me may mean something different than it does for you, so let me explain that these documents are my way to tell my programmers exactly how I want the system to be configured.  These documents fully define the system’s logic so they can program it and I can test against it.

As I said there are three parts:

  1. Tag List
  2. Logic Notes
  3. Batch Flowsheets

Obviously batch flowsheets do not apply if your system isn’t batch, but the same flowsheets can be used to define an involved sequence.

The tag list is simply a large Excel spreadsheet that includes all of the key parameters – module name, IO name, tuning constants, alarm constants, etc.  It also includes a “comment” cell that can include relatively simple logic like “Man only on/off FC valve with open/close limits and 30 sec stroke” or “analog input” or “Rev acting PID with man/auto modes and FO valve” etc.  Most of the modules can be defined on this spreadsheet.

The logic notes are usually a couple of paragraphs each and explain logic that is more complicated.  Maybe we have an involved set of interlocks or ratio or cascade logic.  If I have a logic note I’ll reference it in the tag list so the programmer knows to look for it.

The flow sheets are the last part.  I usually have a flow sheet for every phase which defines the phase parameters, logic paths, failures, etc.  (See Figure 1 for an example of an agitate phase.) Then I create a flow chart for every recipe which defines what phases I am using and what parameters are being passed.  (See Figure 2 for an example of a partial recipe.)

 

Figure 1: Control Narrative Best Practices Agitator Phase

 

Figure 2: Control Narrative Best Practices Recipe Sample

Hiten Dalal’s Pipeline Feed System Example

I find the American Petroleum Institute standard API RP 554 Part 1 (R2016), “Process Control Systems: Part 1-Process Control Systems Functions and Functional Specification Development,” and the ISA standard ANSI/ISA-5.06.01-2007, Functional Requirements Documentation for Control Software Applications, to be very useful. ANSI/ISA-95 also offers guidance on enterprise-control system integration. These types of documents, in my opinion, help capture the input of all stakeholders in the logic without the stakeholders having to be familiar with flow charting, logic diagrams, or specific control system engineering terminology. The functional specification, in my opinion, is a progressive elaboration of a simple process description written by the process engineer. Once finalized, the functional specification can be developed into a SCADA/DCS operations manual by listing the normal sequence of operations along with an analysis of the applicable responsibilities, such as operator action/responsibility, logic solver responsibility, and HMI display. You may download my example of a pipeline control system functional specification: Condensate Feed Pump & Alignment Motor Operated Valves (MOVs).

When and How to Use Derivative Action in a PID Controller

Introduction

Derivative action is the least frequently used mode in the PID controller. Some plants do not like to use derivative action at all because they see abrupt changes in the PID output and lack an understanding of the benefits and guidance on how to set the tuning parameter (rate time). Here we have a question from one of the original protégés of the ISA Mentor Program, answers by Michel Ruel, a key program resource on control, and my concluding view.

Hector Torres’ Initial Question

Is there a guideline in terms of when to enable the derivative term in a PID?

Michel Ruel’s Initial Answer

Derivative is more useful when the dead time is not pure dead time but instead a series of small time constants; using derivative “eliminates” one of those small time constants.

You should set the derivative time equal to the largest of those small time constants. Since we usually do not know the details, a good rule of thumb is to set the derivative time to half the dead time.

Adding derivative (D) will increase robustness (higher gain and phase margins) since D will reduce the apparent dead time of the closed loop.

A good example is the thermowell in a temperature loop: if the thermowell represents a time constant of 10 s, using a D of 10 seconds will eliminate the lag of the thermowell.

Hence, the apparent dead time of the closed loop is reduced and you can use more proportional action and a shorter integral time; the settling time will be shorter and stability better.

When you look at the formulas for rejecting a disturbance, you observe that in the presence of D, the proportional and integral actions can be stronger.

We recommend using derivative only if the derivative function contains a built-in filter to remove high frequency noise. Most DCSs and PLCs have this function, but some do not, or have a switch that must be set to activate the derivative filter.

Hector Torres’ Subsequent Question

Why does having a higher phase margin increase the robustness?

Michel Ruel’s Subsequent Answer

Robustness means that the control loop will remain stable even if the model changes. The phase and gain margins represent the amplitude of the change before the loop becomes unstable, i.e., before the phase reaches -180 degrees or the loop gain rises above one.

To analyze, we use the open loop frequency response, the product of the controller model and the process model. On a Bode plot, the gains are multiplied (or added if plotted in dB) and the total phase is the sum of the process phase and the controller phase.

Phase margin is the number of degrees required to reach -180 degrees when the open loop gain is 1 (0 dB). If this number is large (high phase margin), the system is robust, meaning that the apparent dead time can increase without reaching instability. If the phase margin is small, a slight change in apparent dead time will bring the control loop to instability.

Adding derivative adds positive phase and hence increases the phase margin (compared to adding a dead time or a time constant, which reduces the phase margin).
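
To make the phase margin idea concrete, here is a numerical sketch (not a tuning tool) that computes the open loop frequency response of an ideal ISA-form PID driving an assumed first-order-plus-deadtime process and reports the phase margin at the gain crossover. Running it with and without derivative shows the positive phase added by D:

```python
import numpy as np

def phase_margin_deg(kc, ti, td, kp, tau, theta):
    """Phase margin (degrees) of an ideal ISA-form PID on a first-order-plus-
    deadtime process, found numerically from the open loop response."""
    w = np.logspace(-4, 2, 20000)                       # frequency grid, rad/s
    pid = kc * (1 + 1/(1j*w*ti) + 1j*w*td)              # ideal PID, no filter
    proc = kp * np.exp(-1j*w*theta) / (1 + 1j*w*tau)    # FOPDT process
    open_loop = pid * proc
    mag = np.abs(open_loop)
    phase = np.unwrap(np.angle(open_loop))
    i = np.argmin(np.abs(mag - 1.0))                    # gain crossover, |L| = 1
    return 180.0 + np.degrees(phase[i])

# Assumed loop: Kp = 1, tau = 20 s, theta = 5 s, tuned with Kc = 2, Ti = 20 s.
print(phase_margin_deg(2.0, 20.0, 0.0, 1.0, 20.0, 5.0))  # no derivative
print(phase_margin_deg(2.0, 20.0, 2.5, 1.0, 20.0, 5.0))  # Td = theta/2, more margin
```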

Greg’s Concluding Remarks

The use of derivative is more important in lag dominant (near-integrating), true integrating, and runaway processes (highly exothermic reactions). The derivative action benefit declines as the primary time constant (largest lag) approaches the dead time because the process changes become too abrupt due to lack of a significant filtering action by a process time constant.

Temperature loops have a large secondary time constant courtesy of heat transfer lags in the thermowell or the process heat transfer areas. Setting the derivative time equal to the largest of the secondary lags can cancel out almost 90 percent of the lag assuming the derivative filter is about 1/8 to 1/10 the rate time setting. Highly exothermic reactors can have positive feedback that causes acceleration of the temperature. Some of these temperature loops have only proportional and derivative action because integral action is viewed as unsafe.

If a PID Series Form is used, increasing the rate time reduces the integral mode action (increases the effective reset time), reduces the proportional mode action (decreases the effective PID gain or increases the effective PID proportional band) and moderates the increase in derivative action. The interaction factor moderates all of the modes, preventing the resulting effective rate time from being greater than one-quarter the effective reset time. This helps prevent instability if the rate time setting approaches the reset time setting. There is no such inherent protection in the ISA Standard Form. It is critical that the user prevent the rate time from being larger than one-quarter the reset time in the ISA Standard Form. While in general it is best to identify multiple time constants, a general rule of thumb I use is that the rate time should be the larger of an identified secondary time constant and one-half the dead time, and never larger than one-quarter the reset time.
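
That rule of thumb can be stated compactly as code. This is just the heuristic above, expressed as a sketch in Python:

```python
def rate_time(secondary_tau: float, deadtime: float, reset_time: float) -> float:
    """Rule of thumb for the ISA Standard Form: the rate time is the larger of
    an identified secondary time constant and half the dead time, and never
    more than one-quarter of the reset time."""
    return min(max(secondary_tau, 0.5 * deadtime), 0.25 * reset_time)

print(rate_time(secondary_tau=10.0, deadtime=8.0, reset_time=60.0))  # -> 10.0
print(rate_time(secondary_tau=30.0, deadtime=8.0, reset_time=60.0))  # capped at 15.0
```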

It is critical to convert tuning based on the setting units and PID form used as you go from one vintage or supplier to another. It is best to verify the conversion with the supplier of the new system. The general rules for converting between different PID forms are given in the ISA Mentor Program Q&A blog post How Do You Convert Tuning Settings of an Independent PID, with the last series of equations K1 through K3 showing how to convert from a Series PID form to the ISA Standard Form.
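
As a sketch of what such a conversion looks like, the widely published interaction factor F = 1 + Td/Ti maps Series Form settings to the ISA Standard Form. Always verify against your supplier's documentation before using converted settings:

```python
def series_to_isa_standard(kc: float, ti: float, td: float):
    """Convert Series (interacting) PID settings to the ISA Standard Form
    using the interaction factor F = 1 + Td/Ti (verify with your supplier)."""
    f = 1.0 + td / ti
    return kc * f, ti * f, td / f   # standard gain, reset time, rate time

print(series_to_isa_standard(kc=2.0, ti=40.0, td=10.0))  # -> (2.5, 50.0, 8.0)
```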

In general, PID structures should have derivative action on the process variable and not on error, unless the resulting kick in the PID output upon a setpoint change is useful for getting to setpoint faster, particularly if there is a significant control valve or VFD deadband or resolution limit.

A small setpoint filter on the analog output or secondary loop setpoint, along with external reset feedback of the manipulated variable, can turn the kick into a bump. A setpoint lead-lag on the primary loop, where the lag time is the reset time and the lead is one-quarter of the lag, or a two degrees of freedom structure with beta set equal to 0.5 and gamma set equal to about 0.25, can provide a compromise where the kick is moderated while still getting to the primary setpoint faster.


 

How to Manage Pipeline Valve Positioner and PID Tuning

Hiten Dalal’s Question #1

I have been trying to get a handle on small ripples in one of the pipelines by using a rule of thumb of successively reducing the proportional action by 20 percent and the integral action by 50 percent. Using this rule, I was able to stabilize the ripples on Friday. On Sunday, the product in the pipeline changed, and with that the 4 percent ripples came back. There is one control valve that impacts line pressure. I could stretch the ripples a bit but could not eliminate them. The output going to zero is the natural scheduled shutdown of the pipeline. I know I am providing a lot of information, but perhaps you can glance through and pinpoint something that stands out. Since I started tuning the control valve, I have been learning that it is product sensitive as well.

Greg McMillan’s Answer #1

Since I don’t know if there is a trend of valve signal and valve flow, I am not sure what is happening. If the considerable decrease in gain does not help or makes it worse, I am wondering if there is some valve stiction or backlash, respectively. Is the valve the same for both products? Could a product be causing more stiction due to buildup or coating on valve seating or sealing surfaces or stem? Could the Sunday valve be closer to the shutoff where friction is greatest?

It sure looks like you have too much proportional (P) action for the new product. The integral action is already greatly reduced and most of the overcorrection is occurring very quickly due to proportional action. I would try decreasing the proportional mode action (proportional mode gain) by 50 percent (cut gain in half). If this helps, reduce the proportional gain again. Based on the very small integral (I) action, you may be able to increase integral action once you decrease proportional action. However, I reiterate that if decreasing the gain simply increases the period of the oscillation, you have backlash or stiction. If amplitude stays the same, you have stiction.

Please make sure there is no integral action in the digital valve controller.

Hiten Dalal’s Question #2

When you say no integral action, do you mean in valve positioner or in controller? I don’t think our positioner has any PID setup. Only PID action is in controller. Since it is liquid pressure and flow, we use P&I. Are you suggesting we use only P action in my controller?

Greg McMillan’s Answer #2

I meant no integral action in the valve positioner, which for Fisher is called a digital valve controller (DVC). You should use integral action in most process controllers (e.g., flow and pressure). Integral action in the process controllers is essential for the PID control of many processes. As far as tuning the process controller for pipeline control, the integral time, also known as reset time (seconds per repeat), should generally be greater than four times the deadtime for the ISA Standard Form. You must be careful about what PID form, structure and tuning setting units are being used. If the integral setting is an integral gain, such as what is used in the “parallel” PID form depicted in textbooks and used in some PLCs, the integral setting may not be just a simple factor of the deadtime (e.g., four times the deadtime) but will also depend upon other dynamics. Also, some integral settings are in repeats per minute instead of seconds per repeat.
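
A small sketch of the unit bookkeeping described above, assuming an integral setting in repeats per minute is being moved to a system that expects seconds per repeat:

```python
def repeats_per_min_to_sec_per_repeat(repeats_per_min: float) -> float:
    """Convert an integral setting in repeats/minute to seconds/repeat."""
    return 60.0 / repeats_per_min

def reset_ok_for_pipeline(reset_s: float, deadtime_s: float) -> bool:
    """Rule of thumb above for the ISA Standard Form: reset > 4 x deadtime."""
    return reset_s > 4.0 * deadtime_s

reset_s = repeats_per_min_to_sec_per_repeat(0.5)  # 0.5 repeats/min -> 120 s/repeat
print(reset_s, reset_ok_for_pipeline(reset_s, deadtime_s=20.0))  # 120.0 True
```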

Please make sure you extensively test any tuning settings by making small changes in the setpoint with the controller in automatic, or in the controller output by momentarily putting the controller in manual. There should be little to no oscillation. The tests should be done at different valve positions, particularly if the valve installed flow characteristic is nonlinear. Oscillations may be most prone near the shutoff position where stiction is greatest from seat/seal friction.

If there is interaction between loops, the least important loop must be made slower, or decoupling must be used by means of a feedforward signal. If you are going to do some optimization via a controller that seeks to minimize or maximize a valve position, the proportional gain divided by the reset time for the controller doing the optimization must be an order of magnitude smaller than that of the process controller to prevent interaction. These PID controllers used for optimizing a valve position are called “valve position controllers” (VPC). I hesitated to mention this to avoid confusion because these are not valve positioners and are only used for optimization. Also, nonlinear or notch gains and directional move suppression via external reset feedback are used to keep the VPC from responding too much or too little so the process controller does not oscillate or run out of valve.

Many newer smart positioners have added integral action to positioners in the last two decades. In some cases, integral action is enabled as the default. This prompted me to write the Control Talk blog post “Getting the Most Out of Positioners.” This blog does not address setting integral action in process controllers (e.g., flow and pressure controllers).

Hiten Dalal’s Question #3

Do you teach a control valve tuning class? Is there a specific method you recommend for a pipeline control valve?

Greg McMillan’s Answer #3

I do not offer a class on tuning positioners. Supplier courses on tuning positioners are good, but you will need to insist on turning off integral action. You can have them talk to me if they disagree. In general, you should make sure you do not use integral action and that you use the highest valve positioner gain that does not cause oscillation, since for pipeline flow and pressure control, oscillations are not filtered. If you have an Emerson Digital Valve Controller (DVC), I recommend “travel control” with no integral action and with the highest gain that still gives an overdamped response. The valve must be a true throttling valve and not an on-off valve posing as a throttling valve, as discussed in the Control Talk blog “Getting the Most out of Valve Positioners”. Note that in this blog we are going for a more aggressive response than what you need. Because of the lack of a significant process time constant in a pipeline, you need a smooth valve response. In the blog, the valve positioner gain is described as being set high enough to cause a slight overshoot and oscillation that quickly settles out. Oscillations in the valve response are useful to get a faster response for vessels and columns since there is a large process time constant to filter out oscillations. You still want to use a high gain and no integral action in the positioner, but seek an overdamped (non-oscillatory) response of the valve position.

Hiten Dalal’s Follow-up Reply

I have bought Tuning and Control Loop Performance Fourth Edition. I reference tables from there for suggested PID values. I have removed derivative action from several pressure and flow loops and observed them to be equally efficient. In the process of tuning I have learned that operations and installation details have an impact on loop tuning. I have made the following types of corrections:

(1) As installed, the logic initiated the PID as soon as block valve #1 was fully opened, but block valve #2 was commanded to open after #1, causing the PID output to ramp off to its high output limit since the control valve was not seeing full flow. We solved this by setting a temporary upper clamp on the PID output at a safe limit to avoid overshoot until block valve #2 was fully opened.

(2) The transmitter range was too high and the margin of error was not acceptable to operations. Re-ranging the transmitter to a suitable range brought the error within the acceptable margin.

(3) EIM Controls electric and REXA electrohydraulic actuators have a limit on the number of actuations. I added an acceptable deadband to reduce the number of actuations.


How to Overcome Challenges of PID Control and Analyzer Applications via Wireless Measurements

This article was authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical).

Wireless measurements offer significant life-cycle cost savings by eliminating the installation, troubleshooting, and modification of wiring systems for new and relocated measurements. Some of the less recognized benefits are the eradication of EMI spikes from pump and agitator variable speed drives, the optimization of sensor location, and the demonstration of process control improvements. However, loss of transmission can result in process conditions outside of the normal operating range. Large periodic and exception reporting settings to increase battery life can cause loop instability and limit cycles when using a traditional PID (proportional-integral-derivative) for control. Analyzers offer composition measurements key to a higher level of process control but often have a less-than-ideal reliability record, sample system, cycle time, and resolution or sensitivity limit. A modification of the integral and derivative mode calculations can inherently prevent PID response problems, simplify tuning requirements, and improve loop performance for wireless measurements and sampled analyzers.

Wireless measurements

The combination of periodic and exception reporting by wireless measurements can be quite effective. The use of a refresh time (maximum time between communications) enables the use of a larger exception setting (minimum change for communication). Correspondingly, the use of an exception setting enables a larger refresh time setting. The time delay between the communicated and actual change in process variable depends upon when the change occurs in the time interval between updates (sample time). Since the time interval between a measured and communicated value (latency) is normally negligible, on the average, the true change can be considered to have occurred in the middle of the sample time. This delay limits how quickly control action is taken to correct changes introduced by process disturbances.

Analytical measurements

Since ultimately what you often want to control is composition in a process stream, online analyzers can raise process performance to a new level. However, analyzers, such as chromatographs, have large sample transportation and processing time delays that contribute to the total loop deadtime and are generally not as reliable or as sensitive as the pressure, level, and temperature measurements.

The sample transportation delay from the process to the analyzer is the sample system volume divided by the sample flow rate. This delay can be five or more minutes when analyzers are grouped in an analyzer house. Once the sample arrives, the processing and analysis cycle time normally ranges from 10 to 30 minutes. The analysis result is available at the end of the cycle time. If you consider the change in the sample composition occurs in the middle of the cycle time and is not reported until the end of the next cycle time, the analysis delay is 1½ times the cycle time. This cycle time delay is added to the sample transportation delay, process deadtime, and final control element delay to get the total loop deadtime. The sum of the 1½ analyzer cycle time plus the sample transportation delay will be referred to as the sample time.
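
Here is that deadtime arithmetic as a worked example; all of the numbers are assumed for illustration:

```python
# Assumed sample system and process numbers.
sample_volume_l    = 2.0    # sample system volume
sample_flow_l_min  = 0.4    # sample flow rate
cycle_time_min     = 15.0   # analyzer processing and analysis cycle
other_deadtime_min = 3.0    # process deadtime plus final control element delay

transport_delay = sample_volume_l / sample_flow_l_min   # 5 min
analysis_delay  = 1.5 * cycle_time_min                  # 22.5 min
total_loop_deadtime = transport_delay + analysis_delay + other_deadtime_min
print(total_loop_deadtime)   # 30.5 min of total loop deadtime
```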

Smart PID

Most of the undesirable reaction to discontinuous measurement communication is the result of integral and derivative action in a traditional PID. Integral action will continue to drive the output to eliminate the last known offset from the setpoint even if the measurement information is old. Since the measurement is rarely exactly at the setpoint within the A/D and microprocessor resolution, the output is continually ramped by reset. The problem is particularly onerous if the current error is erroneous.

Derivative action will see any sudden change in a communicated measurement value as occurring entirely within the PID execution time. Thus, a change in the measurement causes a spike in the controller output. The spike is especially large upon restoration of the signal after a loss in communication, and it can hit the output limit opposite the one driven to by integral action. A large refresh time can also cause a significant spike, because the rate of change calculation uses the PID execution time.

A smart PID has been developed that makes an integral mode calculation only when there is a measurement update. The change in controller output from the proportional mode reaction to a measurement update is fed back through an exponential response calculation with a time constant equal to the reset time setting to provide an integral calculation via the external reset method. For applications where there is an output signal selection (e.g., override control) or where there is a slowly responding secondary loop or final control element, the change in an external reset signal can be used instead of the change in PID output for the input to exponential response calculation. The feedback of actual valve position as the external reset signal can prevent integral action from driving the PID output in response to a stuck valve. The use of a smart positioner provides the readback of actual position and drives the pneumatic output to the actuator to correct for the wrong position without the help of the process controller.

For a reset time set equal to the process time constant so the closed loop time constant is equal to the open loop time constant, the response of the integral mode of the smart PID matches the response of the process. This inherent compensation of process response simplifies controller tuning and stabilizes the loop. For single loops dominated by a large time in between updates (large sample time), whether due to wireless measurements or analyzers, the controller gain can be the inverse of the process gain.

In the smart PID, the time interval used for the derivative mode calculation is the elapsed time from the last measurement update. Upon the restoration of communication, derivative action considers the change to have occurred over the time duration of the communication failure. Similarly, the derivative response to a large sample time or exception setting spreads the measurement change over the entire elapsed time. The reaction to measurement noise is also attenuated. This smarter derivative calculation combined with the derivative mode filter eliminates spikes in the controller output.

The proportional mode is active during each execution of the PID module to provide an immediate response to setpoint changes. The module execution time is kept fast so the delay is negligible for a corrective change in the setpoint of a secondary loop or signal to a final control element. With a controller gain approximately equal to the inverse of the process gain, the step change in PID output puts the actual value of the process variable extremely close to the final value needed to match the setpoint. The delay in the correction is only the final control element delay and process deadtime. After the process variable changes, the change in the measured value is delayed by a factor of the measurement sample time. Consequently, the observed speed of response is not as fast as the true speed of process response, a common deception from measurements with large signal delay or lag times.
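
A heavily simplified sketch of this logic is shown below. The class and parameter names are mine and many practical details are omitted; it is meant only to show where each mode acts, not to reproduce any supplier's implementation:

```python
import math

class SmartPID:
    """Sketch of a smart PID: proportional acts every execution, while the
    integral and derivative calculations run only on measurement updates."""

    def __init__(self, kc: float, reset_s: float, rate_s: float = 0.0):
        self.kc, self.reset_s, self.rate_s = kc, reset_s, rate_s
        self.out = 0.0           # last controller output
        self.reset_fb = 0.0      # external-reset exponential response
        self.deriv = 0.0
        self.last_pv = None
        self.last_update_s = None

    def execute(self, sp: float, pv: float, now_s: float, updated: bool) -> float:
        if self.last_pv is None:            # first execution: just initialize
            self.last_pv, self.last_update_s = pv, now_s
        elif updated:
            elapsed = max(now_s - self.last_update_s, 1e-9)
            # Integral: exponential response of the fed-back output with a
            # time constant equal to the reset time, computed only on updates.
            self.reset_fb += (self.out - self.reset_fb) * (
                1.0 - math.exp(-elapsed / self.reset_s))
            # Derivative: spread the change over the full elapsed time rather
            # than the module execution time, so restored signals do not spike.
            self.deriv = self.kc * self.rate_s * (pv - self.last_pv) / elapsed
            self.last_pv, self.last_update_s = pv, now_s
        # Proportional: every execution, for an immediate setpoint response.
        self.out = self.reset_fb + self.kc * (sp - pv) - self.deriv
        return self.out
```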

Communication failure

Communication failure is not just a concern for wireless measurements. Any measurement device can fail to sense or transmit a new value. For pH measurements, a broken glass electrode or broken wire will result in a 7 pH reading, the most common setpoint. The response of coated or aged electrodes and large air gaps in thermowells can be so slow as to show no appreciable change. Plugged impulse lines and sample lines can result in no new information from pressure transmitters and analyzers. Digitally communicated measurements can fail to update due to bus or transmitter problems.

If a load upset occurs and is reported just before the last communication, integral action in the traditional controller drives the PID output to its low limit. The smart PID can make an output change that almost exactly corrects for the last reported load upset, since the controller gain is the inverse of the process gain.

Sample time

The wireless measurement sample time and transport delay associated with sample analyzers must be taken into account when using these measurements in control. A minimum wireless refresh time of 16 seconds is significant compared to the process response for flow, liquid pressure, desuperheater temperature, and static mixer composition and pH control. The sample time of chromatographs makes nearly all composition loops deadtime dominant except for industrial distillation columns and extremely large vessels. To eliminate excessive oscillations and valve travel caused by sample time and transport delay, a traditional PID controller is tuned for nearly an integral-only type of response by reducing the controller gain by a factor of 5. Increasing the reset time instead of reducing the gain could also provide stability, but the offset is often unacceptable, especially for flow feedforward and ratio control.

The smart PID can be aggressively tuned by setting the gain equal to the inverse of the process gain for deadtime dominant loops. The result is a dramatic reduction in integrated absolute error and rise time (time to reach setpoint). The immediate response of the smart PID is particularly advantageous for ratio control of feeds to wild flows and for cascade and model predictive control by higher level loops. The advantage may not be visible in the wireless or analyzer reported value because of the large measurement delay. The improvement in performance is observed in the speed and degree of correction by the controller output and reduced variability in upper level measurements and process quality. A similar deception also occurs for measurements with a large lag time relative to the true process response due to large signal filters and transmitter damping settings, and slow sensor response times. An understanding of these relationships and the temporary use of fast measurements can help realize and justify process control improvement. The ability to temporarily set a fast wakeup time and tight exception reporting for a portable wireless transmitter could lead to automation system upgrades.

Level loops on large volumes can use the largest refresh time of 60 seconds without any adverse effect because the integrating process gain is so slow (the ramp rate is less than 1 percent per minute). Temperature loops on large vessels and columns can use an intermediate refresh time (30 seconds) and the maximum refresh time (60 seconds), respectively, because the process time constant is so large. However, gas and steam pressure control of volumes and headers will be adversely affected by a refresh time of 16 seconds because the integrating response ramp is so fast that the pressure can move outside of the control band (allowable control error) within the refresh time. Furnace draft pressure can ramp off scale in seconds. Highly exothermic reactors (polymerization reactors) can possibly run away if the largest refresh time of 60 seconds is used. To mitigate the effect of a large refresh time, the exception reporting setting is lowered to provide more frequent updates.
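
A quick sketch of the check implied above: for an integrating response, the largest tolerable refresh time is roughly the allowable control error divided by the ramp rate. The numbers here are assumed:

```python
def max_refresh_time_s(ramp_pct_per_min: float, control_band_pct: float) -> float:
    """Largest refresh time for which an integrating process cannot drift
    outside the allowable control error between updates."""
    return 60.0 * control_band_pct / ramp_pct_per_min

print(max_refresh_time_s(ramp_pct_per_min=0.8, control_band_pct=2.0))   # slow level: 150 s
print(max_refresh_time_s(ramp_pct_per_min=20.0, control_band_pct=2.0))  # gas pressure: 6 s
```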

Measurement sensitivity

Measurements have a limit to the smallest detectable or reportable change in the process variable. If the entire change beyond threshold for detection is communicated, the limit is termed sensitivity. If a quantized or stepped change beyond the threshold is reported, the limit is termed resolution. Ideally, the resolution limit is less than the sensitivity limit.  Often, these terms are used indiscriminately.

Wireless measurements have a sensitivity setting called deadband that is the minimum change in the measurement from the last value communicated that will trigger a communication when the sensor is awake. In the near future, the wakeup time in most wireless transmitters of 8 seconds is expected to be reduced. pH transmitters already have a wakeup time of only 1 second enabling a more effective use on static mixers.

A traditional PID will develop, from integral action, a limit cycle whose amplitude is the sensitivity or resolution limit, whichever is larger. The period of the limit cycle will increase as the gain setting is reduced and the reset time is increased. A smart PID will inherently prevent the limit cycle.

Bottom line

Wireless and composition measurements offer a significant opportunity for optimizing process operation. A smart PID can dramatically improve the stability, reliability, and speed of response for wireless measurements and analyzers. The result is tighter control of the true process variables and longer battery and valve packing life.

 

A version of this article originally was published at InTech magazine.
