With all the hype in the press about the “new” Internet of Things (IoT) and what it offers industry, it is challenging to decide which pieces are best for an organization and how to get started. The fact is, what is best for your organization is probably at least a bit different from what is best for another organization.
Systems and processes no longer exist in and of themselves. With connectivity and visibility throughout the enterprise, the interrelationship between all parts of an organization is becoming obvious. This provides new opportunities for improvement and for leveraging technology to achieve more efficient operations.
Production processes, maintenance protocols, safety initiatives, training content (or lack of it), staffing, and schedules affect each other in ways we could never have foreseen even 10 years ago. Yes, we understood that interconnected components were potentially valuable, but now we have technology in place to better define, measure, and act on these interrelationships.
After the business challenges of the 2008 global financial crisis, most of us have harvested the low-hanging fruit that keeps the doors open and allows us to continue improving. The next step is to take advantage of the tools of IoT. IoT has been part of our lives for quite a while, whether we know it or not. Our cars, assets, and even our Amazon accounts use huge numbers of data streams to protect us, improve performance, and drive our buying habits. Perhaps IoT should be called “things we can communicate with,” but TWCCW does not quite roll off the tongue.
What is the difference between now and 10 years ago? Analytic methods and software have been refined. They are easier to use and available on demand with cloud services. In the past, we could see most of the obvious correlations. Those that were less obvious were only identified by really smart subject-matter experts using spreadsheets, really geeky math, and big mainframe computers such as the IBM 360.
Analytic engines give us the capability to work many of these concepts much more quickly. Just as the PC distributed processing from the mainframe to the desktop, analytic engines drive the processing the same direction. We can process thousands of data streams, and discover—or let the machine discover—hidden relationships and correlations. Then, we can use these relationships to validate changes to systems or processes.
For example, weather conditions, such as temperature, humidity, and pending storms, affect how assets perform. Understanding what is “normal” is predicated on the kind of process running, the quality of the raw materials, and the quality of the energy delivered by the provider. Seeking correlations between these influencers is not optional anymore; this capability is considered a base requirement for operations and maintenance.
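As an illustration of what "seeking correlations between influencers" can mean at its simplest, the sketch below screens one external influencer against one asset signal using a Pearson coefficient. Everything here is hypothetical — the temperature and vibration readings and the 0.7 screening threshold are invented for illustration, not taken from any particular system:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length data streams."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical hourly readings from one asset: ambient temperature (deg C)
# and motor vibration (mm/s).
temperature = [18, 21, 25, 28, 31, 33, 30, 26]
vibration = [2.1, 2.3, 2.6, 2.9, 3.3, 3.5, 3.1, 2.7]

r = pearson(temperature, vibration)
if abs(r) > 0.7:  # arbitrary screening threshold, chosen for illustration
    print(f"strong correlation (r = {r:.2f}) - worth investigating")
```

In practice an analytic engine runs this kind of screen across thousands of stream pairs at once; the point is only that the underlying question — do these two signals move together? — is simple enough to automate at scale.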
How do you get started? Pick one area, one piece of equipment, or a high-tech process to focus on. Cloud computing offers easy access to business analytic models (see IBM.com/IoT for more information) that you can experiment with. Connecting a data stream is as easy as deploying an app. Let existing models show you correlations you did not know about—explore and expand on these as signposts to early successes.
Recognizing that effective use of data is dependent on an understanding of what you already have, determine where decision-support data lives and bring it all into a single framework. Only then can you move toward more forward-looking possibilities.
Where do you start? Unless you are living under a wet rock, it really does not matter what your role in the enterprise is; we can all see an opportunity. Grab a small piece, and get started.
Reactive to predictive
Technology development is expanding the tools available to increase the effectiveness of maintenance, dramatically improving uptime and equipment availability. Reactive maintenance, which waits for machines and other equipment to break down and then fixes them, is a costly method, affecting production efficiency and manufacturing quality. This practice also increases life-cycle costs, often shortening the useful life of equipment.
Preventive maintenance based on calendar time improves equipment effectiveness. However, lacking a link between equipment use and wear, it has not proven reliable, and it requires a significant commitment of labor resources; much of the work and material is overkill. Condition-based maintenance, which uses real-time monitoring to continuously assess the condition of assets, can dramatically improve availability and limit downtime.

The next big step in maintenance is enabled by IoT technology and cloud computing. Companies identify and correlate patterns in variables that, taken together, affect equipment performance, and from those patterns determine actions that can prevent failures. Applying predictive methods can significantly improve maintenance strategy and the ability to anticipate performance issues and mitigate them before they affect operations and cause unscheduled downtime.
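A minimal sketch of the step from condition-based to predictive maintenance: instead of alarming only when a reading crosses a threshold, fit a trend to recent condition readings and estimate when the threshold will be crossed. The bearing-temperature log and the 70 °C alarm level below are hypothetical, and real predictive models correlate many variables rather than extrapolating one:

```python
def hours_to_threshold(readings, threshold):
    """Fit a least-squares line to recent readings (one per hour) and
    estimate hours until the trend crosses the alarm threshold.
    Returns None if the trend is flat or improving."""
    n = len(readings)
    xs = range(n)
    mx = sum(xs) / n
    my = sum(readings) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, readings)) / \
            sum((x - mx) ** 2 for x in xs)
    if slope <= 0:
        return None  # no upward trend: nothing to predict
    intercept = my - slope * mx
    # Solve intercept + slope * t = threshold, relative to now (t = n - 1).
    t_cross = (threshold - intercept) / slope
    return max(0.0, t_cross - (n - 1))

# Hypothetical bearing-temperature log, one reading per hour.
log = [61.0, 61.4, 62.1, 62.6, 63.2, 63.9]
eta = hours_to_threshold(log, threshold=70.0)
```

Even this toy version changes the maintenance conversation from "the bearing is hot" to "the bearing will reach its limit in roughly ten hours," which is what makes scheduling the intervention before failure possible.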
Exploiting asset data
More and more intelligence is built into sensors on equipment every day. Automation systems linked to these intelligent sensors deliver insights into real-time performance data. With the application of Internet of Things technology, these terabytes of data turn into actionable information. The opportunity is a much clearer, fact-based understanding of asset performance and efficiency: lower maintenance costs, improved production uptime, better product quality and yield, less unplanned downtime, and optimized maintenance labor resources. This data can also be used to justify replacement of existing equipment and to verify the performance of new production processes and recently installed equipment.
Newer, easier-to-use analytic modeling software is becoming available, driven by customers whose appetites have been whetted by compelling results and who demand ever more insight into their business operations. Analytic models are bringing high-hanging fruit within reach; maintenance and operational improvement directly affects the bottom line, which is why large enterprises are so interested in leveraging these technologies.
Exploring potential worth
Data from automation and monitoring systems, combined with analytics and reporting, creates the basis for a real-time maintenance program. The potential impact of employing predictive maintenance is significant, as illustrated by a Nucleus Research analysis of potential improvements:
- Reduction of annual unplanned downtime: 60–90 percent
- Reduction of excess capacity required to compensate for unplanned downtime: up to 90 percent
- Scrap or rework reduction: up to 50 percent
- Asset life extension improving lifetime return on assets: 5–15 percent
Identify and prioritize needs
A valuable exercise is to assess and prioritize analytic use cases for your situation against three factors:
- Operational and organizational readiness: Are you ready, or do people need more information and training?
- Business and strategy alignment: Is this in line with your company’s goals and objectives?
- Risk and return value: For your operations, what is the economic potential?
How real-time maintenance and analytics affect an operation depends on organizational characteristics:

| Areas of improvement | Organizational characteristics with highest value return |
| --- | --- |
| Asset quality yield improvement from predictive analytics impacting production and manufacturing processes | Complex discrete and process manufacturing |
| Asset quality yield improvement for higher levels of quality of finished goods and services | Complex discrete and process manufacturing |
| Process-driven root-cause detection, diagnosis, and prognosis for quick resolution of complex problems | High-risk industry, multisite operations |
| Process-driven predictive tool calibration for improved throughput, uptime, and accuracy to maintain tolerances | Precision manufacturing |
| Reducing recalls and warranty exposure based on predictive, early-alert, field-asset problem determination | Competitive markets, costly product development cycles |
| Asset track and trace to detect and predict asset movement and location (supply-chain management) | Collaborative partners for subcomponents, expensive assets, outsourced maintenance |
| Reducing scrap through improvements in production process analytics and root-cause analysis | High cost of raw materials, fixed cost for processed goods |
| Early-warning and predictive parameter modeling for early and precise problem determination | Precision manufacturing |
| Product service improvements from defect detection and prevention, resulting in customer loyalty | Competitive markets, expensive recalls, risk to company reputation |
| Asset monitoring and analytics for regulatory compliance, warranty, or recall | Highly regulated industry, high cost of noncompliance |
Mary Bunzel is the general manager of the Manufacturing Industry Solutions Group at Intel Corporation. Previously she was with IBM and brings more than 30 years of experience in best practices for manufacturing industries, with a special focus on the automotive, industrial products, and food and pharmaceutical industries. Before joining IBM, Bunzel spent 10 years working for MRO Software (PSDI), where her role was Maximo's strategic accounts manager for General Mills, J&J, Cargill, ADM, Ford, and General Motors. Bunzel serves as IBM's voice to the market, to customers, and to analyst groups on the state of the manufacturing market as it relates to Maximo asset management offerings.
A version of this article originally was published at InTech magazine.