What are New Technologies and Approaches for Batch and Continuous Process Control?

The following technical discussion is part of an occasional series showcasing the ISA Mentor Program, authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc (now Eastman Chemical). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants.

Danaca Jordan’s Question

What is the technical basis and capability of control technologies other than PID and model predictive control (MPC)? These technologies seem fascinating, and I would like to know more, particularly as I study for the ISA Certified Automation Professional (CAP) exam.

Greg McMillan’s Answer

Michel Ruel has achieved considerable success in the use of fuzzy logic control (FLC) in mineral processing as documented in “Ruel’s Rules for Use of PID, MPC and FLC.” The process interrelationships and dynamics in the processing of ores are not defined due to the predominance of missing measurements and unknown effects. Mineral processing PID loops are often in manual, not only for the usual reasons of valve and measurement problems, but also because process dynamics between a controlled and manipulated variable radically change, including even the sign of the process action (reverse or direct) based on complex multivariable effects that can’t be quantified.

If the FLC configuration and interface are set up properly for visibility, understandability and adjustability of the rules, the plant can change the rules as needed, enabling sustainable benefits. In the application cited by Michel Ruel, metallurgists validate the rules every week and work with control engineers to make slight adjustments. A production record was achieved in the first week. The average use of energy per ton decreased by 8 percent, and the tonnage per day increased by 14 percent.

There have been successful applications of PID and MPC in the mining industry as detailed in the Control Talk columns “Process control challenges and solutions in mineral processing” and “Smart measurement and control in mineral processing.”

Because of my initial excitement about the technology, I successfully used FLC on a waste treatment pH system at a Pensacola, Fla. plant to prevent RCRA violations. It did very well for decades, but the plant was afraid to touch it. The Control magazine article “Virtual Control of Real pH” with Mark Sowell showed how the FLC could be replaced with an MPC and PID strategy that could be better maintained, tuned and optimized.

In the 1980s and 1990s, we used FLC integrated into the software of a major supplier of expert systems, but there were no real success stories for FLC. There was one successful application of an expert system for a smart level alarm, but it did not use FLC, and a simple material balance could have done as well. There were several applications for smart alarms that were turned off. After nearly 100 man-years, we had little to show for these expert systems. You could add a lot of rules for FLC and logic based on the expertise of the developer of the application, but how these rules played together and how you could tell which rule needed to be changed was a major problem. When the developer left the production unit, operators and process engineers were not able to make the changes that were inevitably needed.

The standalone field FLC advertised for better temperature setpoint response cannot do better than a well-tuned PID if you use all of the PID options summarized in the Control magazine article “The greatest source of process control knowledge,” including a PID structure such as 2 Degrees of Freedom (2DOF) or a setpoint lead-lag. You can also use gain scheduling in the PID if necessary. The problem with FLC is how you tune it and update it for changing process conditions. I wrote the original section on FLC in A Guide to the Automation Body of Knowledge, but the next edition will omit it; ISA and I agreed that making more room to help readers get the most out of the PID was more generally useful.
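
As a concrete reference, here is a minimal sketch of a 2DOF PID in Python, assuming the ISA standard form with setpoint weights; the class and parameter names are illustrative, not any vendor's implementation, and the derivative is left unfiltered for brevity.

```python
# Minimal 2DOF PID sketch (ISA standard form with setpoint weights).
# Illustrative only: names, form, and the unfiltered derivative are
# simplifications, not a vendor implementation.
class TwoDOFPID:
    def __init__(self, kc, ti, td, b=0.5, c=0.0, dt=1.0):
        self.kc, self.ti, self.td = kc, ti, td  # gain, reset time, rate time
        self.b, self.c = b, c                   # setpoint weights for P and D
        self.dt = dt                            # execution interval
        self.integral = 0.0
        self.prev_d_err = None

    def update(self, sp, pv):
        p_err = self.b * sp - pv    # weighted proportional error
        i_err = sp - pv             # integral always acts on the full error
        d_err = self.c * sp - pv    # weighted derivative error
        if self.prev_d_err is None:
            self.prev_d_err = d_err
        self.integral += self.kc * i_err * self.dt / self.ti
        deriv = self.kc * self.td * (d_err - self.prev_d_err) / self.dt
        self.prev_d_err = d_err
        return self.kc * p_err + self.integral + deriv

# b < 1 softens the proportional kick on a setpoint change; c = 0
# removes derivative kick entirely, much like a setpoint lead-lag.
pid = TwoDOFPID(kc=2.0, ti=30.0, td=5.0, b=0.5, c=0.0)
```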

FLC has been used in pulp and paper. I remember instances of FLC for kiln control but since then we have developed much better PID and MPC strategies that eliminate interaction and tuning problems.

As for artificial neural networks (ANN), I have seen some successful applications in batch end point detection and prediction and in inferential dryer moisture control. For continuous operations, time delays must be inserted on the inputs to make them coincide with the measured output. For plug flow operations like dryers, this can be readily done since the deadtime is simply the volume divided by the flow rate. For continuous vessels and columns, the insertion of very large lag times and possibly a small lead time is needed besides the deadtime. No dynamic compensation is needed for batch operation end point prediction.
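
A minimal sketch of that dynamic compensation, assuming uniformly sampled historical data: the delay function shifts a plug-flow input by its transport deadtime (volume divided by flow rate), and the exponential filter approximates the large lag needed for vessels and columns. Function names and data are illustrative.

```python
import numpy as np

def delay(x, dead_time, dt):
    """Shift a sampled input back by its transport deadtime."""
    n = int(round(dead_time / dt))
    return np.concatenate([np.full(n, x[0]), x[:-n]]) if n > 0 else x.copy()

def first_order_lag(x, tau, dt):
    """Exponential filter approximating a large process lag (vessels, columns)."""
    a = dt / (tau + dt)
    y = np.empty_like(x, dtype=float)
    y[0] = x[0]
    for k in range(1, len(x)):
        y[k] = y[k - 1] + a * (x[k] - y[k - 1])
    return y

# Example: align a dryer feed rate with moisture measured downstream.
dt = 1.0                    # sample interval, seconds
volume, flow = 120.0, 1.0   # plug flow deadtime = volume / flow = 120 s
feed = np.random.rand(600)
feed_aligned = delay(feed, volume / flow, dt)   # use as the ANN input
```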

You have to be very careful not to go outside of the test data range because of bizarre nonlinear predictions. You can also get local reversals of the process gain sign, causing buzzing if the predicted variable is used for closed loop control. Finally, you need to eliminate correlations between inputs. I prefer multivariate statistical process control (MSPC), which eliminates cross correlation of inputs by virtue of principal component analysis and does not exhibit process gain sign reversals or bizarre nonlinearity upon extrapolation outside of the test data range. Also, MSPC can provide a piecewise linear fit to nonlinear batch profiles, a technique we commonly use with signal characterizers for any nonlinearity. I think there is an opportunity for MSPC to provide more intelligent and linear variables for an MPC, as we do with signal characterizers.
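
A small sketch of the decorrelation idea using scikit-learn's PCA (assuming that library is available); the two simulated flows are deliberately collinear so the orthogonality of the resulting scores is visible.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
f1 = rng.normal(size=500)                    # a process flow
f2 = 0.9 * f1 + 0.1 * rng.normal(size=500)   # nearly collinear second flow
temp = rng.normal(size=500)                  # an independent temperature
X = np.column_stack([f1, f2, temp])

pca = PCA(n_components=2)
scores = pca.fit_transform(X)            # orthogonal principal component scores
print(np.corrcoef(scores.T))             # off-diagonal terms ~ 0: decorrelated
print(pca.explained_variance_ratio_)     # variance captured per component
```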

Join the ISA Mentor Program

The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career. Click this link to learn more about how you can join the ISA Mentor Program.

For any type of analysis or prediction, whether using ANN or MSPC, you need inputs that show the variability in the process. If a process variable is tightly controlled, the PID or MPC has transferred the variability to the manipulated variable. Ideally, flow measurements should be used, but if only a valve position or speed is available and the installed flow characteristic is nonlinear, signal characterization should be used to convert the position or speed to a flow.
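
A signal characterizer is often just a piecewise linear lookup. Here is a minimal sketch using numpy.interp with a made-up installed characteristic; the breakpoint values are hypothetical and would come from valve test or design data.

```python
import numpy as np

# Hypothetical installed characteristic: valve position (%) -> flow (%).
position_pts = [0, 10, 25, 50, 75, 100]
flow_pts     = [0, 30, 55, 78, 92, 100]

def position_to_flow(pos_pct):
    """Piecewise linear signal characterizer (linearizes the model input)."""
    return np.interp(pos_pct, position_pts, flow_pts)

print(position_to_flow(40.0))   # use this inferred flow, not the raw position
```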

Hunter Vegas’ Answer

I implemented a neural network some years ago on a distillation column level control. The column was notoriously difficult to control. The level would swing all over and anything would set it off, such as weather or feed changes. The operators had to run it in manual because automatic was a hopeless waste of time.

At the time (and this information might be dated) the neural network was created by bringing a stack of parameters into the calculation and “training” it on the data. Theoretically the calculation would strengthen the parameters that mattered, weaken the parameters that didn’t, and eventually configure itself to learn the system.

The process taught me much. Here are my main learning points:

1) Choose the training data wisely. If you give it straight line data, then it learns straight lines. You need to teach it using upset data so it learns what to do when things go wrong. (Then use new upset data to test it.)

2) Choose the input parameters wisely. I started by giving it everything. Over time I came to realize that the data it needed wasn’t the obvious data. In this case it needed:

  • The level valve output (not a surprise).
  • The incoming flow (again, not a surprise).
  • The pressure control valve position (this was a surprise; I figured it wanted pressure, but the control valve kept the pressure very flat. However, as the control valve moved around to maintain the pressure, the level swung, so knowing the valve position helped the level controller).
  • The temperature valve position (same idea as pressure).
  • Sometimes the derivative (rate of change) of a parameter is much more important than the parameter itself.

3) Ultimately the system worked very well – but honestly, by the time I had gone through four iterations of training and building the system, I KNEW the physics behind it. The calculation for controlling the level was fairly simple when all was said and done. I probably could have just fed it into a feedforward PID and accomplished the same thing, as the sketch below suggests.
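
For illustration only, here is a sketch of the kind of simple feedforward-plus-PID calculation Hunter describes, using the inputs he found mattered; the function, inputs, and gains are hypothetical, not the actual column's calculation.

```python
def level_output(pid_out, feed_flow, press_valve, temp_valve,
                 kf=0.8, kp=0.3, kt=0.2):
    """Hypothetical static feedforward added to the level PID output.
    Gains kf, kp, kt would come from plant tests, not these values."""
    feedforward = kf * feed_flow + kp * press_valve + kt * temp_valve
    return pid_out + feedforward   # the PID trims what the feedforward misses

def rate_of_change(x_now, x_prev, dt):
    """Sometimes the derivative of an input matters more than its value."""
    return (x_now - x_prev) / dt
```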

The experience was interesting and fun, and I actually got an award from ISA for the work. However, when all was said and done, I realized it wasn’t nearly as impressive a tool as all the marketing brochures suggested. (At the time it was all the rage – companies were selling neural network controller packages and magazine articles were predicting it would replace PID in a matter of years.)

Danaca Jordan’s Subsequent Question

Thank you, this is a lot more practical insight than I have been able to glean from the books.

I imagine the batch data analytics program offered by a major supplier of control systems is an example of the MSPC you mentioned. I think I have some papers on it stashed somewhere, since we have considered using it for some of our batch systems. What is batch data analytics and what can it do?

Greg McMillan’s Answer

Yes, batch data analytics uses MSPC technology with some additional features, such as dynamic time warping. The supplier of the control system software worked with Lubrizol’s technology manager Robert Wojewodka to develop and improve the product for batch processes, as highlighted in the InTech magazine article “Data Analytics in Batch Operations.” Data analytics eliminates relationships between process inputs (cross correlations) and reduces the number of process inputs by constructing principal components that are orthogonal and thus independent of each other in a plot of a process output versus the principal components. For two principal components, this is readily seen as an X, Y and Z plot with each axis at a 90-degree angle to the others. The X and Y axes cover the ranges of values of the principal components, and the Z axis is the process output. The user can drill down into each principal component to see the contribution of each process input. The use of graphics to show this can greatly increase operator understanding. Data analytics excels at identifying unsuspected relationships. For process conditions outside of the data range used in developing the empirical models, linear extrapolation helps prevent bizarre extraneous predictions. Also, the use of a piecewise linear fit means there are no humps or bumps that cause a local reversal of process gain and buzzing.
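
A sketch of the drill-down idea: the loadings of each principal component rank the contribution of each process input. The input names and data here are made up, and a commercial package would present this graphically rather than as a printout.

```python
import numpy as np
from sklearn.decomposition import PCA

inputs = ["feed_flow", "jacket_temp", "agitator_amps", "reagent_ratio"]
X = np.random.default_rng(1).normal(size=(200, 4))   # stand-in batch data

pca = PCA(n_components=2).fit(X)
for pc, loading in enumerate(pca.components_, start=1):
    # Rank inputs by the magnitude of their contribution to this component.
    ranked = sorted(zip(inputs, loading), key=lambda t: -abs(t[1]))
    print(f"PC{pc}:", [(name, round(w, 2)) for name, w in ranked])
```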

Batch data analytics (MSPC) does not need to identify the process dynamics because all of the process inputs are focused on a process output at a particular part of the batch cycle (e.g., endpoint). This is incredibly liberating. The piecewise linear fit to the batch profile enables batch data analytics to deal with the nonlinearity of the batch response. The results can be used to make mid-batch corrections.

There is an opportunity for ANN to be used with MSPC to deal with some of the nonlinearities of inputs, but the proponents of MSPC and ANN often think their technology is the total solution and don’t work together. Some even think their favorite technology can replace all types of controllers.

Getting laboratory information on a consistent basis is a challenge. I think for training the model, you could enter the batch results manually. When choosing batches, you want to include a variety of batches, but all with normal operation (no outliers from failures of devices or equipment or from improper operations). The applications noted in the Wojewodka article emphasize that the model should represent the average batch and not the best batch (not the “golden batch”). I think this is right for starting to detect abnormal batches, but process control seeks to find the best and reduce the variability from the best, so eventually you want a model that is representative of the best batches.

I like MSPC “worm plots” because from tail to head they show the past and future of batches, with the tightness of the coil adding insight. The worm plot is a series of batch end points expressed as a key process variable (PV1n) that is plotted as scores of principal component 1 (PC1) and principal component 2 (PC2).
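
A minimal sketch of a worm plot, assuming the PC1 and PC2 scores of successive batch end points have already been computed by the MSPC package; the scores here are synthetic, with a deliberate drift so the worm visibly crawls.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
pc1 = np.cumsum(rng.normal(0.0, 0.3, 20))   # drifting scores over 20 batches
pc2 = rng.normal(0.0, 0.5, 20)

plt.plot(pc1, pc2, "-o")                    # the connected "worm"
plt.annotate("head (latest batch)", (pc1[-1], pc2[-1]))
plt.xlabel("PC1 score")
plt.ylabel("PC2 score")
plt.title("Batch end points in principal component space")
plt.show()
```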

If you want to do some automated correction of the prediction by taking a fraction of the difference between the predicted result and the lab result, you would need to get the lab result into your DCS, probably via OPC or some lab entry system interfaced to your DCS. Again, the timing of the correction is not critical for batch operations. Whenever the bias correction comes in, the prediction is improved for the next batch. The bias correction is similar to what is done in MPC, and the trend of the bias is useful as a history of how the accuracy is changing and whether there is possibly noise in the lab result or model prediction.
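
A sketch of that fractional bias update, mirroring the MPC-style correction just described; the fraction of 0.3 is an arbitrary illustrative choice that trades correction speed against lab and model noise.

```python
def update_bias(bias, predicted, lab_result, fraction=0.3):
    """Move the model bias a fraction of the way toward the lab result.
    fraction < 1 filters noise in the lab result and model prediction."""
    return bias + fraction * (lab_result - (predicted + bias))

bias = 0.0
bias = update_bias(bias, predicted=95.2, lab_result=96.0)
# The next batch's prediction becomes: model output + bias.
```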

The really big name in MSPC is John F. MacGregor at McMaster University in Ontario, Canada. McMaster University has expanded beyond MSPC to offer a process control degree. Another big name there is Tom Marlin, who I think came originally from the Monsanto Solutia Pensacola Nylon Intermediates plant. Tom gives his view in the InTech magazine article “Educating the engineer,” Part 2 of a two-part series. Part 1 of the series, “Student to engineer,” focused on engineering curriculum in universities.

For more on my view of why some technologies have been much more successful than others, see my Control Talk blog “Keys to Successful Control Technologies.”

See the ISA book 101 Tips for a Successful Automation Career that grew out of this Mentor Program to gain concise and practical advice. See the InTech magazine feature article “Enabling new automation engineers” for candid comments from some of the original program participants. See the Control Talk column “How to effectively get engineering knowledge” with the ISA Mentor Program protégée Keneisha Williams on the challenges faced by young engineers today, and the column “How to succeed at career and project migration” with protégé Bill Thomas on how to make the most out of yourself and your project. Providing discussion and answers besides Greg McMillan and co-founder of the program Hunter Vegas (project engineering manager at Wunderlich-Malec) are resources Brian Hrankowsky (consultant engineer at a major pharmaceutical company), Michel Ruel (executive director, engineering practice at BBA Inc.), Leah Ruder (process systems automation group manager at the Midwest Engineering Center of Emerson Automation Solutions), Nick Sands (ISA Fellow and Manufacturing Technology Fellow at DuPont) and Bart Propst (Process Control Leader for the Ascend Performance Materials Chocolate Bayou plant).

Use Field Analyzers to Measure Key Component Concentrations

The following tip is from the ISA book by Greg McMillan and Hunter Vegas titled 101 Tips for a Successful Automation Career, inspired by the ISA Mentor Program. This is Tip #63.

When we did process control improvements in the 1980s and 1990s, the major limitation was the lack of a reliable field analyzer. None of the plants had field analyzers on raw materials. The specialty chemicals production units had very few field analyzers and were flying blind. Plants for chemical intermediate products had field analyzers on most key product streams that were supported by an excellent, extensive plant analyzer group. Unfortunately, the technology was old, dating back to the 1970s. Many of the analyzers, analog circuits, and sampling systems were literally homemade and were dependent upon the expertise of the analyzer group. Most of these analyzer specialists have retired. Fortunately, the technology developed was adopted by new analyzer companies. Subsequently, the technology has steadily advanced and the electronics have become small computers providing diagnostics, intelligent interfaces, and standardized communication. Sample valves and sampling system design have also improved. Analyzer systems are more reliable today but still require maintenance and special expertise. Unfortunately, onsite analyzer specialists are becoming extinct.

Quite a bit of effort was devoted in the 1990s to developing artificial neural networks (ANN) to predict compositions in streams instead of installing analyzers. Unfortunately, the technology was oversold by big neural network suppliers, who claimed, “No hardware, no engineering, and no maintenance. Just dump all of your plant data into the ANN.” PID setpoints, process variables, and outputs were used as inputs, ignoring the fact that the PID algorithm was in play. The necessary Design of Experiments (DOE) was not done. Steady-state data without changes to process inputs led to relationships that violated first principles and to wild extrapolations when the plant deviated from the test conditions. Principal components and drill-down into contributions were not available. Process engineers did not review the relationship of each input to the predicted output. Automated feedback correction from lab analysis was often not used. As a result of all this, the achievements of ANNs were temporary at best. Today, ANNs integrated into a DCS are easier to use and may offer benefits for focused applications.

There are some examples of a brighter future for online and at-line analysis. Coriolis meters offer an incredibly accurate, reliable, and nearly maintenance-free density measurement for online analysis where two components have significantly different densities. The capability has been extended with more sophisticated digital computations to include percent solids and bubbles. Conductivity and pH offer online analysis for high and low concentrations, respectively, of acids and bases. The Nova BioProfile Flex at-line analyzer with an automated sampling system is becoming the standard for bioreactors in the biopharmaceutical industry. Within minutes, the Nova analyzer can provide cell size, count, and health along with the concentration of nutrients and inhibiting byproducts from 1 ml samples.
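
As an illustration of the binary density calculation, here is a sketch assuming ideal volume-additive mixing (real mixtures such as ethanol and water deviate somewhat, so an empirical fit is often used instead):

```python
def mass_fraction_from_density(rho_mix, rho1, rho2):
    """Mass fraction of component 1 from measured mixture density,
    assuming ideal mixing: 1/rho = w1/rho1 + (1 - w1)/rho2."""
    return rho1 * (rho2 - rho_mix) / (rho_mix * (rho2 - rho1))

# Example: ethanol (789 kg/m3) in water (998 kg/m3), measured 950 kg/m3.
print(mass_fraction_from_density(950.0, 789.0, 998.0))  # ~0.19 mass fraction
```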

Concept: Process efficiency and capacity often depend upon the composition of input and output process streams. Automation systems commonly measure temperature, pressure, level, pH, and flow but rarely composition. Sometimes pH, pressure, and temperature can be an indicator of composition in reactions and separations for a given set of input stream compositions. At present, lab measurements are relied upon for confirmation of the estimated values. Large continuous plants tend to have analyzers on product streams. What is primarily missing despite advances in technology are on-line and at-line analyzers for raw materials and batch operations.

Details: A history of lab results can show the variability of key components in key process streams. Simulations can show the effects of changes in composition on the process. Field analyzers on streams where the key components show variability or need to be optimized can be used for a higher level of control. The higher level can be as simple as cascade control or as sophisticated as model predictive control. Coriolis meters should be installed on all reactant and product streams (Tip #73). Online analyzers require less maintenance than at-line analyzers because there is no sampling system but they often provide an inferential measurement that does not reflect the effects of changes in process conditions. The most effective analyzer for the process industry is the gas chromatograph, as discussed in the December 2011 Control Talk column “Analyze This!” and the January 2012 Control Talk column “Gas Chromatographs Rule.”

Watch-outs: Droplets or solids remaining in process samples after sample conditioning will adversely affect the results from a gas chromatograph (GC). The sample used in a GC must be completely vaporized. The sample point for any at-line analyzer may not be representative of the process composition because of the separation of phases and non-ideal mixing at the point of sample extraction. Near Infrared (NIR) analyzers are only as good as the set of samples used to develop the Projection to Latent Structures (PLS) statistical models. NIR models (NIR calibrations) must be updated as raw materials and operating conditions change. Special mathematical expertise is needed for understanding and improving NIR models. The maintenance costs of analyzers (except for Coriolis meters) usually exceed the hardware cost. Analyzers should not be used in closed loop control until they have proven to be sufficiently accurate and reliable in actual plant operation. At-line analyzer sample time, cycle time, and multiplex time will increase the total loop deadtime, destabilizing the loop, unless the controller is retuned or an enhanced PID developed for wireless is used. Online first principle models and experimental models (e.g., linear dynamic estimators) periodically corrected by at-line analyzers can provide an immediate predicted composition, eliminating the additional deadtime.
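
As a rough sketch of the deadtime penalty, a common rule of thumb (an estimate, not a vendor specification) credits a discrete analyzer with the sample transport time plus about 1.5 cycle times per multiplexed stream, since a result can arrive anywhere in the cycle:

```python
def analyzer_added_deadtime(transport, cycle, n_streams=1):
    """Rough added loop deadtime from an at-line analyzer (rule of thumb)."""
    return transport + 1.5 * cycle * n_streams

# Example: 60 s sample transport, 300 s cycle, 2 multiplexed streams.
print(analyzer_added_deadtime(transport=60.0, cycle=300.0, n_streams=2))
```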

Exceptions: Analyzers should not be installed if there is no onsite support with the required expertise or if the Return on Investment (ROI) is insufficient based on actual analyzer downtime and life cycle cost, which includes maintenance cost.

Insight: Field analyzers enable a higher level of control, such as model predictive control, to improve product quality and process efficiency and capacity.

Rule of Thumb: Install a field analyzer on key continuous streams and batch unit operations if there is an adequate ROI and onsite technical support.
