Best Practices for Process Alarm Management

The purpose of process control alarms is to use automation to assist human operators as they monitor and control processes, and alert them to abnormal situations. Incoming process signals are continuously monitored, and if the value of a given signal moves into an abnormal range, a visual and/or audio alarm notifies the operator of that condition.

This seems like a simple concept, almost not worthy of a second thought, yet the configuration of alarms in a control system often doesn’t get the attention it deserves. Configuring and maintaining alarms properly requires careful planning and has a significant impact on the overall effectiveness of a control system.

Early Alarm Systems

Before digital process control, each alarm indicator required a dedicated lamp and some physical wiring. This meant that:

  1. Due to the effort required, the need for a given alarm was carefully scrutinized, somewhat limiting the total number of alarms
  2. Once the alarm was in place, it had a permanent “home” where an operator could become comfortable with its location and meaning

The Introduction of Digital Alarms

As control systems became digital, the creation and presentation of alarms changed significantly. First, where a “traditional” control panel was many square feet in size, digital control system human machine interfaces (HMIs) consisted of a few computer monitors which displayed a representation of the process in an area more appropriately measured in square inches than square feet.

Second, creating an alarm event was a simple matter of reconfiguring some software. Multiple levels of alarms (hi & hi-hi, lo & lo-lo) could easily be assigned to a single process value. This led to an increase in the number of possible alarm notifications. Finally, when an alarm was activated, it was presented as an icon, or as flashing text on a process schematic screen, and then logged in a dedicated alarm list somewhere within the large collection of display screens. However, when the alarm was presented, it lacked the consistency of location and intuitive meaning that the traditional physical lamp had.

The Dilemma With Digital Alarms

The digital alarm systems worked acceptably well for single alarms and minor upsets. But for major upsets the limited visual real estate and the need to read and mentally place each alarm created bottlenecks to acknowledging and properly responding to large numbers of alarms in a short interval of time.

If a critical component in a process fails, for example a lubrication pump on a large induced draft (ID) fan, the result can be a “flood” of alarms occurring over a short time period. The first wave of alarms is associated with the immediate failure: low lube oil pressure, low lube oil flow, and high bearing temperatures. The second wave is associated with interlocks shutting down the fan: high inlet pressure, low air flow and low downstream pressure. With no ID fan, the upstream boiler will soon start to shut down and generate numerous alarms, followed most likely by problems from the process or processes which are served by the boiler.

The ASM Consortium

Analyses of a number of serious industrial accidents have shown that a major contributor to the severity of the accidents was an overwhelming number of alarms that operators were not capable of understanding and properly responding to in a timely manner. As a result of these findings, in 1992 a consortium of companies including Honeywell and several petroleum and chemical manufacturers was established to study the issue of alarm management, or more generally, abnormal situation management.

The ASM Consortium, with funding from the National Institute of Standards and Technology, researched and developed a series of documents on operator situation awareness, operator effectiveness and alarm management. Since then, a number of other industry groups and professional organizations, such as the Engineering Equipment and Materials Users Association in the UK and the Instrument Society of America, have also examined the issue of alarm management and issued best practices papers.

Alarm Management Best Practices

The central message of these alarm management best practices documents is that the alarm portion of a digital control system should be put together with as much care and design as the rest of the control system. It is not adequate to simply assign a high and low limit to each incoming process variable and call it good. There are a number of practices which can improve the usability and effectiveness of an alarm system. Some techniques are rather simple to implement; others are more complex and require more effort.

1. Planning

When designing a new alarm system or evaluating an existing one, start by looking at each alarm. Evaluate whether it is really needed and whether it is set correctly. For example, a pump motor may have an alarm which sounds if the motor trips out. However, if there is also a flow sensor downstream of the pump which has an alarm on it, then two alarms will register if the pump stops. Since the real effect on the process is a loss of flow, it makes sense to keep that alarm and eliminate the motor-trip alarm.

2. Prioritization

Alarms should be prioritized. Some alarms are safety related and should be presented to the operator in a manner that emphasizes their importance. High priority alarms should be presented in a fixed location on a dedicated alarm display. This allows operators to immediately recognize them and react in critical situations. It is very difficult to read, understand and quickly react to an alarm which is presented only in a scrolling list of alarms which will be continuously growing during a process upset.

3. Grouping & Suppression

Correctly identifying the required alarms and prioritizing them is a help, but these techniques alone will not stop a surge of alarms during a crisis. In order to significantly reduce the number of presented crisis alarms, methods like alarm grouping and alarm suppression are needed. As mentioned in the ID fan example above, a single point of failure can lead to several abnormal process conditions and thus several alarms.

It is possible to anticipate these patterns and create control logic which handles the situation more elegantly. In the case of the ID fan, if the inlet pressure to the fan goes high and the outlet flow drops, it makes sense to present the operator with a single virtual alarm of “Fan down” rather than a dozen individual alarms, all presented within seconds of each other, that he or she has to deal with. While the operator is trying to comprehend a cluster of individual alarms to deduce that the fan is down, the upstream boiler may trip out.

Hopefully, with a single concise alarm of a lost fan, the operator can take action at the boiler and perhaps keep that unit running at reduced rate until the fan can be restored. All alarms are still registered by the system for diagnosis and troubleshooting, but only condensed, pertinent information is presented to the operator. This type of grouping and suppression can be done manually as well. If there is a process unit that is sometimes taken offline or bypassed, it makes sense to group and suppress all of the alarms associated with that unit’s operation. An operator shouldn’t have to continuously acknowledge a low flow alarm on a line that he knows has no flow in it.
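To make the grouping and suppression idea concrete, the sketch below shows, in simplified Python, how a “Fan down” virtual alarm might be derived from a handful of underlying conditions while the individual alarms are still logged. The tag names and limits are assumptions for illustration; a real implementation would live in the DCS or alarm management package’s own configuration environment.

```python
# Simplified sketch of alarm grouping and suppression (illustrative only).
# Tag names and limits are hypothetical; real logic would be configured in
# the control system or alarm management package itself.

def evaluate_fan_alarms(signals: dict) -> list[str]:
    """Return the alarm messages to present to the operator."""
    raw_alarms = []
    if signals["lube_oil_pressure"] < 15.0:   # psi, assumed limit
        raw_alarms.append("LOW LUBE OIL PRESSURE")
    if signals["lube_oil_flow"] < 2.0:        # gpm, assumed limit
        raw_alarms.append("LOW LUBE OIL FLOW")
    if signals["bearing_temp"] > 180.0:       # deg F, assumed limit
        raw_alarms.append("HIGH BEARING TEMPERATURE")

    # The pattern that indicates the fan itself is down.
    fan_down = signals["inlet_pressure"] > 5.0 and signals["outlet_flow"] < 10.0

    if fan_down:
        # Suppress the individual consequence alarms for the operator, but
        # keep them in the log for later diagnosis and troubleshooting.
        log_for_diagnostics(raw_alarms)
        return ["FAN DOWN"]
    return raw_alarms


def log_for_diagnostics(alarms: list[str]) -> None:
    print("logged:", alarms)   # placeholder for the historian/alarm log
```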

4. Human Administration

Perhaps the most important part of alarm management is the actual human administration of the system. However a system is designed, its intent and use need to be clearly communicated to the operators who use it. Training operators on how to use and respond to alarms is as important as good original system design. Alarm management is a dynamic endeavor, and as operators use the system they will have feedback which will lead to design improvements. The system should be periodically audited to look for points of failure and areas of improvement. As processes change, the alarm configuration will also need to be changed. This ongoing attention to the alarm system will make it more robust and yield a system which will avert serious process-related incidents.

Easily Integrating Maintenance Information with a Computerized Maintenance Management System (CMMS)

One of the challenges facing industrial process manufacturing is the growing number of data sources.

Examples of these data sources could be shift reports, process data historians, laboratory information management systems, or manufacturing execution systems. Being able to easily connect disparate data sources for decision-making is a key challenge in the age of IIoT. The lack of connection between these data sources and the resulting data silos is one version of the “big data” problem we hear about.

In industrial plants, instruments have typically been connected via a Distributed Control System (DCS): a computerized control system for a process or plant, usually with a large number of control loops, in which autonomous controllers are distributed throughout the system under centralized operator supervisory control. For decades, process data historians have been collecting that data and storing it for long periods of time. The collected data previously served as the main source for plant and operations data analysis and enabled plant personnel and management to understand what was happening in the plant and, through that information and analysis, make decisions to improve plant operations.

An important source of information that we don’t often hear about is the Computerized Maintenance Management System, otherwise known as a CMMS. The CMMS is a software package that maintains a computer database of information about an organization’s maintenance operations, information intended to help maintenance workers do their jobs more effectively. An example could be determining which machines require maintenance and which storerooms contain the spare parts they need.

Over time, other data sources became common in plants. One example is a Laboratory Information Management System (LIMS), a software-based information management tool for laboratories. The LIMS data assisted plant personnel by delivering information about the quality of the produced product, adding an additional layer of data to what was already in the historian. By creating effective interfaces between the LIMS and the historian, decision-making became easier and more accessible.

Historically, the LIMS and DCS have provided the two main sources of plant information. Going forward, we see the CMMS as providing a third. A common scenario could be plant personnel asking about the maintenance record for a physical asset such as a boiler. A subject matter expert (SME) uses the available plant information and determines that the root cause is a particular physical asset that needs repairs, and that a deeper dive into the issue is required. If available information within the CMMS states that the repair has already been scheduled, the SME can focus attention elsewhere. Knowledge that the repair has been performed frequently in the past would also draw attention to a larger problem that needs to be addressed. Most of the time, this valuable repair history is either inaccessible or not user-friendly to the SME. Easily and accurately providing this information from the same decision support application already linked to the historian and LIMS provides immense value to the plant by connecting data and thus making decision-making more informed.
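As a rough illustration of what that connection could look like, the sketch below pulls recent completed work orders for an asset from a CMMS-style table so they can be viewed alongside historian trends. The table, column, and asset names are hypothetical, not an actual CMMS schema.

```python
# Illustrative only: table, column, and asset identifiers are hypothetical.
# The idea is to surface CMMS repair history next to historian data for the
# same asset within one decision-support view.
import sqlite3

def repair_history(conn: sqlite3.Connection, asset_id: str) -> list[tuple]:
    """Return the most recent completed work orders for one asset."""
    cur = conn.execute(
        """
        SELECT work_order_id, date_completed, description
        FROM work_orders
        WHERE asset_id = ? AND status = 'COMPLETED'
        ORDER BY date_completed DESC
        LIMIT 20
        """,
        (asset_id,),
    )
    return cur.fetchall()

# Usage idea: show these rows beside a trend of the boiler's process tags for
# the same date range, so repeated repairs stand out to the SME immediately.
```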

A host of automated solutions available today are trying to solve “big data” problems, problems that often result from large amounts of data that need to be analyzed and acted upon, through artificial intelligence (AI) and machine learning (ML) techniques. While AI and ML provide some effective solutions, human problem solving and judgment are still necessary in most scenarios. The key to using AI and ML tools in a meaningful way is having people with the knowledge to seek out the right solution and understand the results. A key to successfully leveraging an SME’s skills and solving problems is effectively delivering the right data to answer the questions they are asking. SMEs are empowered when information from various sources throughout a plant is all available within a single, user-friendly interface of a powerful and effective CMMS.

Tech Terms Part II

As mentioned in our previous article, technical terms continue to change and evolve. Below are more terms we found useful to know and understand and think you will, too.

Artificial Intelligence (AI), Machine Learning (ML) & Deep Learning – Not only a mouthful, but also confusing word spaghetti; these terms seem to get used interchangeably in a lot of scenarios. Nvidia uses an interesting infographic to provide some clarification.

The Nvidia definition uses the date of introduction to organize the hierarchy, and since AI was used as a common term first, they represent it as the most general term. On the other hand, it seems that intuitively AI would have to include characteristics associated with intelligence – adaptation and reason, to name a few – whereas Machine Learning could deliver value from data mining without necessarily having adaptive characteristics or applying reason. For our purposes, we use Machine Learning as a broader term to describe solutions that address Predictive and Prescriptive Analytics, and AI as a narrower subset, more analogous to Deep Learning. But don’t be surprised to see these terms continue to be used synonymously; we look forward to a clearer definition taking shape.

Digital Twin – An emerging term, commonly used in the AI and ML space, used to describe a software model of a physical object. There is broad meaning to the term; in fact, someday we might end up with more granularity, like we did with Analytics. On one extreme are the complicated models that represent an entire plant using first-principles techniques, similar to a flight simulator for planes. Plants have used these digital twins for decades to help commission and optimize plants. More recently, the term has been used to describe applications like visualization and data models used for remote monitoring, or software sensors – statistical models that estimate the result of a physical instrument in a plant. In these cases, we have digital representations of the physical object used for a variety of purposes.

Cloud – This is an interesting one. For a long time, the term sounded sophisticated, but in reality it just meant that your application/OS/etc. was hosted on a server somewhere else – no different from when many corporations created data centers years ago to centrally host and maintain software. But applications and technology have evolved, and the term Cloud has now fulfilled its original promise. Cloud solutions are now purpose-built not only to run on a server somewhere else, but also with the scalability, ease of installation, support and security that make them unique.

Edge – Another example of expanding technology that’s been used in some industries for a long time. It’s simply the idea of placing applications (data collection, processing, etc.) near the source or end user. In Telecom, Edge computing is a significant advancement and a key differentiation in the 5G rollout. In plants, instruments and pieces of equipment can now be considered part of the Edge. However, real-time historians have utilized Edge computing for decades: remote data collection nodes have been placed as close as possible to the data source (DCS or PLC) for years to deliver Store & Forward data collection and pre-processing of data.

In many use cases, Edge computing is just a product of the continued evolution of Cloud-based computing and the realization that a hybrid strategy is required. Some pieces will live in the Cloud and others will continue to live On Premise, at the Edge. If you’re peering down from the Cloud, the plant probably looks like the Edge. To many of us, it’s where we’ve been since the 1980s.

A good article discussing Cloud vs. Edge can be found here – https://www.arcweb.com/blog/edge-cloud-analytics

Block Chain – Probably our favorite term these days. Right now it feels like a solution looking for a problem, but a list of buzzwords would not be complete without it. So, until someone develops a proven real-time historian or DCS block chain, we’ll let others work on this one and you won’t hear much from us. (As a side note, as of June 25th, 2018, the average Bitcoin transaction confirmation time over the previous 60 days was ~20 minutes – https://blockchain.info/charts/avg-confirmation-time?timespan=60days.)

Have questions or need clarification? Feel free to reach out to us.

Five Reasons You Should Attend the 2018 dataPARC User Conference


It is that time of year again, time to gather with your peers and talk about some of the great benefits of dataPARC software. This year’s dataPARC user conference will be held at the Sentinel Hotel in Portland, Oregon, from October 15–18, 2018. Besides getting to learn in a beautiful setting (Portland, OR in the fall – gorgeous!), the following are five reasons why you should attend:

1. You Will Learn New Ways of Utilizing dataPARC

At the dataPARC User Conference, you will find yourself in a room with over a hundred fellow dataPARC users from diverse industries such as oil & gas, food and beverage, chemicals, and pulp and paper. Through dynamic presentations, you will witness creative new ways that the robust software is used to streamline processes, improve reporting and access data quickly, ultimately benefitting your facility and profitability. The last day of the conference is devoted entirely to interactive training, so bring your laptop and get ready to learn. With a demo station set up with PARCview running, you will surely leave with innovative methods to utilize. Round table discussions at each table on relevant topics open up new possibilities and generate ideas.

2. You Will Meet Colleagues and Build a Valuable Network

The support of a knowledgeable network is invaluable. At the dataPARC user conference, you will have the opportunity to expand your professional network. Users from all over the world will gather to share ideas, methods and strategies regarding what has worked for their business using dataPARC. Breaks and lunch allow time to get to know colleagues in a comfortable setting. Wednesday evening at the Punch Bowl Social includes games, karaoke, great food and a chance to relax and have fun.

3. You Will Be Inspired by our Keynote Speaker, Patrick Galvin

We picked a speaker this year who is both dynamic and relevant to your life as a busy professional. Patrick Galvin, author of The Connector’s Way, will teach you important points regarding building business relationships and the value of a connection. Patrick will kick off the conference, and you will have a chance to meet him at the welcome reception. You will also be able to ask follow-up questions at lunch following his keynote presentation. We will send you home with a copy of his book so that you may continue to learn about the principles and key points of his keynote speech.

4. You Will Stay Informed Regarding Upcoming Versions of dataPARC including 7.0

During the conference, you will learn about any recent changes and updates to the dataPARC software suite. We are gearing up for 7.0 and will be ready to share the new features with you. Capstone personnel will include the changes in 7.0 during training sessions. Rather than just getting updates in an e-mail newsletter and a corresponding updates video, you will get a chance to see PARCview in action. The demo station set up at the back of the conference room will also allow you to try out the new version.

5. You Will Meet and Interface with the Capstone Technology Team

All of us at Capstone Technology are eager to see you and get to know you. Whether you have communicated with us regarding support, engineering or something else, we look forward to having a conversation with you. Feel free to share some of your ideas and needs for features on future versions. Get to know the salesperson, engineer or support person you have spoken with on the phone.  Try PARCview at the demo station and grab one of us if you have a question.  Need something clarified or have a concern? We are here for you at the conference. We look forward to seeing you there!

Click here to learn more about the conference

Soft Sensors in the Process Industry – How Are They Helpful?

The process industry is constantly evolving as processes are refined and innovative technologies continue to transform the processing landscape. As complex systems are installed, upgraded and monitored, expectations for profitability and smooth delivery of product remain high. Soft sensors with predictive models provide scenarios in which estimations can drive decision-making and improve the reliability of current systems, often working hand-in-hand with their hard-sensor counterparts, creating comprehensive monitoring networks.

Soft sensors are virtual sensors that are utilized heavily in the process industry; their accurate predictions can alleviate the need for more expensive hardware sensors. Their real-time predictions can ease constraints brought on by limited budgets, person hours or current operating equipment. Engineers at a chemical plant or a food processing facility may have federal environmental regulations they must contend with. They need an accurate way to measure lab data and temperatures, all while staying within budget and controlling capital expenditures. They may also have frequent challenges with a part of the process, requiring swift data to troubleshoot and identify the bottleneck and/or challenge. Soft sensors can provide an economical and effective alternative to costly hard sensors, which demand an expensive investment, require constant servicing and maintenance, and often fail. Through soft sensors, approximated calculations can provide in theory what raw data provides in reality. Using the last year of data, for example, a soft sensor can build a data model, which a process engineer can then use for a variety of calculations and decision-making.

Many physical properties and manual tests performed offline are related to properties measured online with sensors in the general manufacturing process. For example, the strength of a final product is often related to the temperature of the process or the amount of certain chemicals that are added. The strength may only be tested once per hour, but the temperature and chemical usage are measured every second. This relationship allows soft sensors to estimate the strength in real time. Another manual lab test may occur once per hour or a few times a day, whereas the soft sensor provides feedback minute-by-minute or second-by-second; soft sensors model “off-line” tests. The soft sensor uses a combination of historical process data recorded from online sensors and laboratory measurements to predict KPIs, replacing manual testing. The greatest benefit of soft sensors in the lab application is faster feedback on changes to physical properties. Simulated testing frees up operators and supervision to work on other high-priority tasks.
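As a simple illustration of this idea, the sketch below fits a linear model that estimates the hourly lab strength test from continuously measured process values. The file and column names are hypothetical, and production tools such as PARCmodel would typically use first-principles or PLS/PCA models rather than this bare-bones regression.

```python
# Minimal soft-sensor sketch (illustrative): estimate an hourly lab strength
# test from continuously measured process values.  File and column names are
# hypothetical.
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.read_csv("history.csv", parse_dates=["timestamp"])
hourly = df.set_index("timestamp").resample("1h").mean().dropna()

X = hourly[["process_temp", "chemical_flow"]]   # measured every second, averaged hourly
y = hourly["lab_strength"]                      # measured once per hour in the lab

model = LinearRegression().fit(X, y)

# Online use: apply the model to live values to get a strength estimate every
# minute or second instead of waiting for the next lab test.
live = pd.DataFrame({"process_temp": [182.5], "chemical_flow": [14.2]})
print("estimated strength:", model.predict(live)[0])
```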

Soft sensors make the most of the data and signals you have already collected in your process. Rather than consistently replacing hardware and spending valuable budget dollars, the alternative soft-sensor solution works in tandem with your existing hardware sensors. By utilizing what is already there, both complications and downtime can be avoided. Process engineers can use soft sensors connected to your existing hard sensors, and facilities will benefit from real-time analysis, monitoring and control, providing reliable calculation of parameters where no hardware sensor is available and reducing purchase and maintenance costs.

Through development of its PARCview visualization software, dataPARC has its own version of soft sensors. Why would dataPARC’s soft sensors be advantageous for you in your plant or facility? The PARCmodel component of dataPARC’s product group predicts plant quality variables in real time, allowing for estimation of properties that are impractical or impossible to measure online.

PARCmodel also reads live data, such as temperatures and pressures from the plant and uses them to calculate estimated quality values from user-entered models. PARCmodel builds models based on first principles or empirical models developed through techniques such as PCA and PLS.

PARCview soft sensors also do the following to ensure smooth operations:

  • Utilize Data From Any Source
  • Feature Closed-Loop Automation Control
  • Are Intuitive and User-Friendly

In addition:

PARCview provides a familiar user interface for model creation and optimization through model building driven by the Trend control.

PARCmodel can read in data from any source available to PARCview, including OPC DA, OPC HDA, plant historian systems, SQL and Excel.

Leverage your Soft Sensor models for use in advanced-control applications. The Soft Sensor output can be used like any other process value in the control solution.

Do you think that dataPARC’s soft sensors could be helpful in your facility? Contact us today to learn more and get customized product suggestions for your company.

Tech Terms Today – What Buzzword Was That?

Buzzwords have always been a part of technology, but recently it seems their usage has exploded. At the same time usage is growing, the terms themselves have changed and evolved. Many contemporary terms now include a wide spectrum of meaning in their definitions, applying to new applications and solutions brought to market. From our perspective here at Capstone Technology, the more we can all speak a common language and realize that many terms are replacing old concepts, the better off we will be.

With that in mind, off we go. The following thoughts come from our vantage point here at Capstone; others may have different points of view. One thing we think is clear – if you want to see the future, go visit your nearest modern process industry plant. Please let us know what you think!

Industry 4.0 – Originating in Germany, Industry 4.0 is simply the idea that we’re now in the Fourth Industrial Revolution. Many articles have laid out the various revolutions, Wikipedia being the most comprehensive: (https://en.wikipedia.org/wiki/Industry_4.0) Industry 4.0 is also one of many terms used to describe the continued technology advancements in manufacturing. The terminology below defines the elements that comprise Industry 4.0.

Internet of Things (IoT) – Probably the most ubiquitous term we’re currently seeing to describe the latest technology advancements to connect all things. In many contexts, IoT encompasses IIoT (see below), but for the sake of this article and in future discussions, we separate the two. We define IoT as focused on the consumer market, where there are distinct differences. The advancements can be thought of as:

  1. Revolutionary – a few years ago my thermostat was totally isolated and my watch told time
  2. Important, Not Mission Critical – They make our lives more convenient, but if my thermostat goes offline my house can still stay warm. Given that, it’s fine to rely on our home WiFi system.
  3. Connected Devices – we primarily see devices that used to be isolated become connected to the internet

Industrial Internet of Things (IIoT) – So, what does the extra “I” mean? Well, in our view, a lot, as it relates to technology advancements that focus on industrial processes (i.e. plants). In the IIoT space the advancements can be thought of as:

  1. Evolutionary – Industrial plants became connected over 30 years ago; the first “Distributed Control System” is credited to Texaco back in 1959. Sure, new technology has made it easier for remote and disparate processes to get connected, but the technology and concepts have been around a long time.
  2. Mission Critical – These systems have to work, so the network and infrastructure built around them are robust and secure.
  3. Connected Data – Especially for vendors like us, it’s about connecting data that has always been there, thereby making it faster and easier to make decisions.

Analytics – This one gets used frequently in the world of data analysis and certainly means different things to different people. Broadly speaking, we see three types of analytics. To illustrate these distinct variations we will use the weather as an analogy.

  1. Descriptive Analytics – Analysis of data from the past (even if it is from a few seconds ago). With descriptive analytics, individuals are still evaluating the data and making the decisions; the analytics gather and synthesize readings from multiple instruments to tell you, in simple language, the current and historical weather conditions: today it has been cloudy, and there was a chance of rain.
  2. Predictive Analytics – Predictive analytics uses more complex analysis, including a time-predictive model. Predictive analytics for weather would state the following: a model predicts scattered showers this afternoon; winds will be from the north at 10 miles an hour; temperatures in the 40s.
  3. Prescriptive Analytics – Includes everything from the predictive model, but also needs more context. Prescriptive analytics for weather would state the following: first, it needs to know a few things about you, such as where you work and what you do for work. Bring a light coat and umbrella with you to work this morning. The wet roads will require more time to travel. You will need to leave for work by 7:52 to make your 8:30 meeting.

All in all, our conclusion is that tech industry buzzwords are necessary to navigate today’s technology landscape, especially in the process industry where systems and technologies are constantly evolving. Stay tuned for more buzzword definitions coming in future editions of PARCfocus newsletter!

Rolling Up Your Data For Easy Access

The first step toward understanding and optimizing a manufacturing process is to collect and archive data about the process. Ideally, the system used to accomplish this is a “plant-wide” information system, or PIMS, which collects not just process data, but also quality information and laboratory results, and operations information such as upcoming orders and inventory.

The real value of a PIMS is determined by how that collected data is organized, how it is retrieved, and what options are available to help you garner meaningful conclusions and results from the data.

Consider a situation where you have been asked to determine the average flow of steam supplied to a heat exchanger that is used to heat a product stream. No problem. You call up a trend of the last six months of steam flow data with the intention of using the averaging function that is built into the reporting package to generate your answer. Unfortunately, once the trend is up, you see that there are a few hours every day during which there is no product flow and so there is no steam flow. Since you are looking for the average steam flow during operation of the heater, your job just became more difficult. You have no choice but to export the data to a spreadsheet, manually eliminate the zero readings, and then calculate the average of the remaining valid values.
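A sketch of that spreadsheet step, assuming a hypothetical data export with a product-flow column, might look like this:

```python
# Average steam flow only over periods when the heater was actually running.
# File and column names are hypothetical.
import pandas as pd

data = pd.read_csv("steam_flow_6_months.csv", parse_dates=["timestamp"])

# Keep only readings taken while product was flowing (threshold is assumed).
running = data[data["product_flow"] > 1.0]

avg_operating_steam = running["steam_flow"].mean()
print(f"Average steam flow during operation: {avg_operating_steam:.1f}")
```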

Most manufacturing facilities produce multiple variations of their product on each line. A toothpaste maker may periodically change the flavor; a papermaker may change the formulation of the furnish to make one type of paper stronger than another. These intentional process modifications give rise to the idea of “product runs” or “grade runs” and present significant challenges to data analysts.

Consider that you have been given another assignment. It has been noticed that the last few times the high strength grade A346_SS has been run, an increasing number of reels have been rejected because the MD strength test was too low. Your job is to determine the cause of this problem. There are a number of steps required to address this issue. Depending on the PIMS analysis tools available to you, they may be easy or they may be tedious.

There are probably several dozen critical process variables that you will want to examine. If this grade is not frequently run, you may be required to pull all the data (for all grades) for those critical variables for a period of many months. Returning that amount of data will probably take a great deal of time.

In the worst case scenario, you may have to make a separate query to determine the time periods when grade A346_SS was run, and then go through the data and manually extract those time periods for each process value. Alternatively, your analysis software may allow you to apply a filter to your query to return only the data related to grade A346_SS. This will reduce the work that needs to be done to the data that is returned, but the extra filtering may very well further increase the query duration. Query durations are network and hardware dependent, but returning three or four dozen tags of high frequency data for a 6 month time period could easily take more than 15 minutes.

A system which requires too much work to condense the data, or takes too long to retrieve it, is only marginally useful to the people who need it. dataPARC has developed a method and a tool to address this issue. PARCpde (PARC performance data engine) is a flexible real-time data aggregator which can work with any historian to provide fast access to large ranges of historical data in seconds. PARCpde is used to aggregate, or “roll up,” data as it is created. Aggregates can be based on predefined time periods (hours, days, weeks, months) or custom periods, such as shifts or production months. To address the issue of grades, the aggregation period can be a flexible time period which is specified based on a production parameter like grade number or production run ID.

For each aggregated period, a number of statistics are automatically calculated and stored, including averages, durations, minimums, maximums and standard deviations. Filter criteria can be further applied to the aggregated data. For example, a “downtime tag” could be identified and used as a filter, so that only the process values during active production would be aggregated into the statistics. Condensing process values into statistics for predefined periods on an ongoing basis avoids the time-consuming task of having to manually sort values and calculate statistics every time a question comes up. The aggregated statistics become properties of the base tag and do not require creation of a new tag. Finally, if you want statistics for a tag which had not been previously configured to be aggregated, it is possible to easily add that tag and backfill statistics for a specified period of time.
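The sketch below illustrates the rollup idea in generic terms (it is not PARCpde itself): statistics are pre-computed per grade run, with a downtime tag used as a filter, so later questions about a grade become a quick lookup. Column and tag names are hypothetical.

```python
# Generic illustration of "rolling up" data, not PARCpde itself.
# Column and tag names are hypothetical.
import pandas as pd

raw = pd.read_csv("history.csv", parse_dates=["timestamp"])

# Filter: keep only values recorded during active production.
active = raw[raw["downtime_tag"] == 0]

# Aggregate each grade run into a handful of statistics.
rollup = active.groupby(["grade", "run_id"]).agg(
    start=("timestamp", "min"),
    end=("timestamp", "max"),
    md_strength_avg=("md_strength", "mean"),
    md_strength_min=("md_strength", "min"),
    md_strength_max=("md_strength", "max"),
    md_strength_std=("md_strength", "std"),
)

# "How has grade A346_SS behaved over its last few runs?" is now a fast lookup
# over a few rows instead of a query over months of high-frequency data.
print(rollup.loc["A346_SS"])
```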

Simply possessing large amounts of stored data does not solve problems or increase productivity. Unless the proper tools are in place to use and interpret that data, the data will not be useful. High frequency data, or readings taken every few seconds, can be valuable. However, if the goal of a particular analysis task is to compare conditions over a long period of time, having to recall and process thousands of data points per hour becomes an impediment rather than an advantage. The best solution is to use a PIMS which has quick access to both aggregated historical datasets and high frequency detail data, and is equipped with the tools to seamlessly move between the two data types.

Advantages of a Plant Wide Information System

When it comes to operating a continuous manufacturing process of any kind, it is beneficial to have the maximum amount of data. Simply collecting and storing data does not, by itself, yield measurable benefits. In order to take full advantage of the data, it needs to be organized, archived and then made available in a variety of formats throughout a facility. This is the function of a versatile and robust plant-wide information system like the dataPARC software suite.

The term “plant-wide” applies to an information system in two ways. The first function is that of collecting information from various data sources throughout the actual manufacturing process, including the administrative infrastructure around the plant. The information system should then condition and archive the data.

“Data conditioning” refers to a variety of techniques that include, but are not limited to, averaging, filtering, correlating time stamps, creating combined calculated values, and aggregating raw data. The second application of the “plant-wide” term refers to the re-presentation of the conditioned data throughout the mill. While using a system like this may seem like an obvious idea, there are many plants that do not utilize one.

The History of Data Management in Plants

Historically speaking, as manufacturing facilities have transitioned from analog mechanical and pneumatic control systems and paper based recordkeeping into the digital age, the changes have not been at all uniform. Production processes were largely re-instrumented and put under the control of computer based Distributed Control Systems (DCSs) or Programmable Logic Controllers (PLCs). Some of these systems had the ability to archive data, some did not. Initially these systems were offered by a variety of vendors and the exchange of data was either not considered, or discouraged. It is not unusual to see incompatible DCS systems from different vendors within a single facility. As computing technology advanced, the vendor offering the best blend of features and price constantly changed, leading to a diversification of systems as different areas of a facility were modernized.

The Problem with “Data Islands”

Quality control labs also took advantage of advancing technology, and invested in database programs and communication interfaces tailored to archiving both manual data entries and automated input from certain instruments. Raw material ordering and inventory were tracked in database programs optimized for those purposes, as were warehousing and shipping information. Each department in a facility did indeed move forward: digitizing and storing data reduced costs and increased efficiency at the departmental level. While this computerization increased the ability to share data between departments in some ways, a facility which relies on these marginally connected “data islands” is missing out on many of the benefits that can be realized with a plant-wide information system capable of integrating data from all those sources.

Troubleshooting with Quality and Process Data

Consider an example of troubleshooting a quality problem in an integrated pulp and paper mill, where product paper reels are produced every 20 to 60 minutes. Several quality tests are run on samples taken from each reel. Suppose that the machine direction (MD) tensile strength was measured as being below the lower acceptable limit for a particular grade on a couple of consecutive reels. With only “data islands” in place, this information would probably be made available to the paper machine operators through an electronic report, and they would be left on their own to figure out the cause and solution for this problem.

With a plant-wide information system in place, the MD strength data could be easily trended next to any number of upstream process variables. A good information system would have the ability to “time shift” the quality data, so that the drop in the strength number for the reels could be visually matched with changes in other process variables. Doing this, the machine operators would see that the drop in strength had started before any refining change was made on the machine, indicating that the cause lay elsewhere.
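A minimal sketch of the time-shift idea, assuming hypothetical tag names and a rough three-hour transport delay between the digester and the reel:

```python
# Sketch of "time shifting": lag an upstream variable by the approximate
# transport delay so it lines up with the downstream quality result.
# Tag names and the 3-hour delay are assumptions for illustration.
import pandas as pd

hist = pd.read_csv("history.csv", parse_dates=["timestamp"], index_col="timestamp")

# Pulp leaving the digester takes roughly three hours to reach the reel, so
# shift the digester variable forward in time before comparing it with the
# MD strength measured at the reel.
shifted = hist["digester_kappa"].shift(freq="3h")

aligned = pd.concat(
    {"md_strength": hist["md_strength"], "digester_kappa_shifted": shifted},
    axis=1,
)
print(aligned.corr())   # the digester upset now lines up with the strength drop
```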

A good plant-wide information system such as dataPARC would give the paper machine operators access to variables from outside their area. By casting their troubleshooting net a little farther, the PM operators could see that the time at which the drop in paper strength occurred at the reel closely matched an earlier upset in the digester, which led to the production of 3 hours of over-cooked, low strength pulp.

The Benefits of Process Information and Corresponding Insight

Having this insight would lead to two positive outcomes. Not only would the source of the low strength paper be discovered, but by knowing that it came from outside the paper machine, those operators would not create additional, possibly off-spec product by “chasing their tail” and further changing refiner settings. With the knowledge that the 3 hours of low strength pulp had largely already passed through the machine, they would also know that the strength number would in all likelihood recover without the operators making any changes to the stock prep and machine settings. In this case the enhanced data access would lead to good decision making and more efficient operation.

Combining Raw Cost and Process Information

In a manufacturing facility, electrical, fuel and raw material costs originate in Enterprise Resource Planning (ERP) software. These costs are sometimes dynamic, and the ability to access those numbers is an important capability for a plant-wide information system. Some facilities generate and sell electrical power as well as consume it. Having accurate real time cost data helps engineers and operators optimize fuel types, steam generation and electrical power flows to maximize profits.

Additionally, showing actual costs in process trends is a technique used to further operator involvement in optimizing a process. A steam vent of 10,000 lbs per hour may provide a convenient way to operate a given process for a period of time, but it comes at a cost. If the vent flow is displayed as a loss of $100 per hour based on the flow and value of steam, it is easier to communicate to operators the importance of eliminating that method of operation.
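The arithmetic behind that cost trend is straightforward; the sketch below assumes steam is valued at $10 per 1,000 lb, which is an illustrative number rather than an actual price.

```python
# Convert a vent flow into a cost figure for display on an operator trend.
# The steam value of $10 per 1,000 lb is an assumed, illustrative number.
vent_flow_lb_per_hr = 10_000
steam_value_per_1000_lb = 10.00

loss_per_hour = vent_flow_lb_per_hr / 1_000 * steam_value_per_1000_lb
print(f"Venting cost: ${loss_per_hour:.2f} per hour")   # -> $100.00 per hour
```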

Pushing Process Information

While the previous two examples apply to enhancing the operation of an actual production process, it is equally important that the vital metrics of the process be seamlessly returned to an ERP or other administrative software for ordering and shipping reasons. Modern manufacturing philosophy says that minimizing inventory is one way to reduce costs. As the period of time between the production and shipping of goods or product is reduced, it becomes increasingly important for the shipping planners to have real time information about manufacturing problems which might lead to the inability to meet an order. It is the role of a plant-wide information system to make this interchange of data happen.

Goals of Plant-Wide Information Sharing

As stated above, a plant-wide information system should fulfill two important goals. One is to collect and archive as much data as is needed to operate the plant and allow for effective troubleshooting. Just as importantly, it should present, in various formats, the same conditioned and calculated values to everyone throughout the mill. By using a single set of values, all the decision makers, from the planners and engineers to the process operators, are working with the same up-to-date data.

Contact us to learn more about dataPARC for your plant wide data integration needs.

A Guide To Reporting and Notifications with a Data Historian

Historian packages were originally intended to be a support tool for operating personnel. Current and historical data was constantly displayed on a dedicated screen next to the primary control screens, and users were intended to interact with it at that location more or less continuously. As the historian became a one-stop source for all types of data throughout a facility, it became a tool that could benefit supervisory and management personnel as well. This led to the development of a variety of remote notification and reporting tools to meet the somewhat different needs of these individuals.

dataPARC Reporting
dataPARC is one of the leading historian and data analysis software packages available to the process industries. It has a variety of mechanisms for relaying information to remote users of the system in order to keep them in contact with the process. At the most basic level, the system can be configured to email one or more people based on a single tag going beyond a set limit. A separate notification can also be sent at the time an operator enters a reason for the excursion, and again when the variable returns to a value within the limit.

At the next level of complexity, the system can populate and send an entire report, based on an event or a preset time schedule. Reports can be as simple as a snapshot showing the current values of a few KPIs, or as complex as a multipage report containing tables, process graphics, charts and trends. dataPARC has a built-in, flexible and easy-to-use application for developing report templates. It also offers an add-in which allows data to be shown within Excel. For people who are proficient with the tools within Excel, this is another avenue for creating reports. Reports created in Excel can be viewed natively in Excel or exported as .pdf or .html files for viewing on a wide range of platforms. Production, raw material consumption and environmental compliance can all be easily tracked by periodic reporting, and any deviations can be quickly spotted and rectified. Receiving a daily report just before a morning meeting provides a quick way to avoid unpleasant surprises at the meeting.

PARCmobile is the most flexible remote-user experience. PARCmobile gives you continuously updated data and access to most of the features and all of the data within dataPARC, all delivered on a mobile device. Live trends and graphics make it possible to take the next step, beyond a single number or notification, and perform a wide-ranging investigation of any process irregularities.

Generate the Best Reports Possible Using These Guidelines:
Different people have different methods of working, and not all reporting needs are the same. A process engineer troubleshooting a particular problem will want more granular, higher-frequency reports focused on a particular area, at least for the duration of the issue, than an area manager who is monitoring multiple processes to make sure that they are generally on track. Nonetheless, here are some guidelines that will apply to most remote users most of the time:

Minimize the number of notifications that you receive, and choose them wisely. If you receive an email for every minor process excursion, their importance will diminish and you are liable to not notice or respond to an important notification. Focus on watching only crucial KPIs.

Reports should be simple. The primary purpose of mobile notification is to be alerted to new or potential problems, not to find causes or solve those problems based on the report.

Export reports in PDF format. This is a standard format which offers easy scalability and works well on virtually all software and hardware platforms.

Use the group function to notify everyone who might be affected by a process excursion. For example, if high torque in a clarifier is detected due to high solids coming in from a process sewer, all areas which are serviced by that sewer should be notified. Doing this will hopefully result in the problem being solved more quickly, as each area checks on their contribution simultaneously, rather than each area looking in sequence, only after each downstream contributor reports their results.

Incorporate dead banding and/or delay into your notifications. Again, this depends on your job role, but for most remote users of data, unless an excursion presents a safety hazard or compliance issue, you don’t need to know about it immediately. Minor excursions can resolve themselves or be handled by frontline operators. Delaying notifications helps to minimize their numbers by filtering out the minor issues from the major ones.
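A minimal sketch of deadband-plus-delay filtering, with hypothetical limit, deadband, and delay values:

```python
# Minimal sketch of deadband-plus-delay notification filtering.
# The limit, deadband, and delay values are assumptions for illustration.
import time

LIMIT = 150.0      # notify only above this value
DEADBAND = 5.0     # must fall below LIMIT - DEADBAND before re-arming
DELAY_S = 300      # excursion must persist 5 minutes before notifying

_excursion_start = None
_notified = False

def check_and_notify(value: float, now: float | None = None) -> None:
    """Call periodically with the latest tag value."""
    global _excursion_start, _notified
    now = time.time() if now is None else now

    if value > LIMIT:
        if _excursion_start is None:
            _excursion_start = now                      # excursion begins
        elif not _notified and now - _excursion_start >= DELAY_S:
            send_notification(f"Value above {LIMIT} for 5+ minutes: {value}")
            _notified = True
    elif value < LIMIT - DEADBAND:
        _excursion_start = None                         # re-arm only below the deadband
        _notified = False

def send_notification(message: str) -> None:
    print("NOTIFY:", message)                           # placeholder for email/SMS
```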

Whichever historian you use, using the built-in notification and reporting functions will increase its effectiveness by engaging a wider range of users. Having more eyes and brains monitoring a process will hopefully lead to problems being addressed more effectively and keep the process running more profitably.

Benefits of Calculated Variables

As an engineer in a manufacturing facility, you are excited that management has purchased and implemented a plant-wide information management system, or PIM. This gives you the ability to collect and store process data, and to display both real-time and historical process graphs which allow you and the operators to better understand the process. You can finally trend important process variables next to each other in order to visualize relationships that you suspect exist, and use historical data for accurate diagnosis of problems. For example, was it a lube oil pump failure, or a loss of cooling water, that led to the recent shutdown of a compressor?

Not long after you start doing your time based analysis of data, you develop the desire to trend not just raw process data, but modified versions of that data. In the simplest example of calculated data, a single trend might be modified by a constant. A chemical addition flow may be reported as gallons per minute, but you want to discuss and track that value as pounds per hour.

Another common scenario involves combining two or more tags. Perhaps you have an inlet and an outlet pressure on a scrubber. As the flow through the scrubber changes, both values change, and it would be better to monitor a single differential pressure rather than comparing two changing trends.

A second example of combining tags would be multiplying the total flow of a stream by the concentration of a component, perhaps the consistency of solids in the stream, to create a flow rate of just the solids. Even if the consistency value comes from a lab test, PARCview will pull the value in, and properly time-synchronize and combine it with the flow value. The ability to observe and trend these created variables vastly increases the usefulness of the presentation system. The more you become involved with data analysis, the more you see the need to manipulate the time-based raw data to display the information you and others need to monitor. dataPARC has three techniques of somewhat increasing complexity which give users the ability to manipulate raw data. All involve creating a new “calculated variable tag.”
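Before looking at how dataPARC implements these, here is a generic illustration of the three calculated variables just described. The tag names, density, and units are assumptions, and this is not PARCview’s actual expression syntax.

```python
# Generic illustration of calculated variables (not PARCview syntax).
# Tag names, density, and units are assumptions.
import pandas as pd

hist = pd.read_csv("history.csv", parse_dates=["timestamp"], index_col="timestamp")

# 1. Unit conversion: gallons per minute -> pounds per hour
#    (60 min/hr, assuming a water-like density of 8.34 lb/gal).
chem_lb_per_hr = hist["chem_flow_gpm"] * 60 * 8.34

# 2. Combining two tags: scrubber differential pressure.
scrubber_dp = hist["scrubber_inlet_psi"] - hist["scrubber_outlet_psi"]

# 3. Total flow times consistency -> flow of just the solids.
solids_flow = hist["total_flow"] * hist["consistency_pct"] / 100.0
```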

The first technique is available to all users and is very easily implemented. Using the example of combining a total flow and a concentration to create a component flow tag, the procedure starts by dragging the total flow tag onto a trend. Simply clicking on the variable name within the header block at the top of the trend activates it for editing. The tag can then be modified by appending text, in this case an arithmetic expression that multiplies it by the concentration tag. Once that text is correctly entered, all the points for the current time span are calculated and the trend displays the new component flow. The minimum and maximum values of the tag may need to be modified to properly display the trend. This new tag is called an “Expression” and can be dragged or copied to other trends.

The second technique for creating a calculated tag, a “simple formula,” involves a few more keystrokes but offers a number of key advantages. To create a simple formula, the Script Editor window is opened. Instead of being followed by an arithmetic expression, the tag is followed by a name. This name is associated with programming code which is entered in a workspace in the Script Editor window. This code acts like a programming subroutine, accepting the tag name as an argument and returning the evaluated value of the tag as an output.

The formula creation environment offers more flexibility in terms of logic than an Expression, since it gives access to all the functionality of the VB.NET programming environment. Another advantage of this approach is that formulas are saved by name and can be reused by others. A “standard” routine, such as the conversion of Celsius temperature to Fahrenheit temperature, can be created once, by one person, and then applied by anyone else in the future. Simply associating a different input tag with the formula name will create a new output tag. If the new tag is saved, it is placed in the master tag browser and becomes available to everyone.
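PARCview’s own formulas are written in VB.NET; as a language-neutral sketch of the reusable, named-formula idea, the Python below defines the Celsius-to-Fahrenheit conversion once and applies it to a hypothetical temperature tag.

```python
# The reusable named-formula idea, sketched in Python (PARCview formulas are
# actually written in VB.NET).  Define the conversion once, then apply it to
# any temperature tag to produce a new calculated tag.
def celsius_to_fahrenheit(values):
    """Standard C -> F conversion, reusable for any input tag."""
    return [v * 9.0 / 5.0 + 32.0 for v in values]

digester_temp_c = [152.0, 153.5, 151.8]      # hypothetical input tag values
digester_temp_f = celsius_to_fahrenheit(digester_temp_c)
print(digester_temp_f)                        # approximately [305.6, 308.3, 305.2]
```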

The third technique for creating calculated tags is to create an “advanced formula.” There is very little difference in the creation of a simple vs. advanced calculation tag. The primary difference is in how the data is handled within the procedure. In a simple formula, if the timing of the data from the different tags used in the calculation is not exact, the output points are automatically associated with the input times by PARCview. In an advanced formula, the user has the opportunity, and the responsibility, to correctly associate input and output data. For example, pulp consistency data may be available only once an hour, because it is a lab test. If this data were being combined with a continuous total flow to find a dry fiber flow, it would be more accurate to multiply each flow value in the past hour by an average of the one-hour-old consistency and the most recent consistency, as opposed to using the one-hour-old consistency for the whole past hour. This level of control is also desirable when creating some statistical functions.
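As a sketch of that time alignment (generic Python with hypothetical values, not PARCview’s advanced-formula syntax):

```python
# Sketch of the time alignment described above: blend the hour-old and most
# recent lab consistency values before applying them to each continuous flow
# reading from the past hour.  Values are hypothetical.
def dry_fiber_flow(flow_values, consistency_prev_pct, consistency_now_pct):
    """flow_values: continuous flow readings over the past hour.
    consistency_*: lab consistency tests taken one hour apart, in percent."""
    blended = (consistency_prev_pct + consistency_now_pct) / 2.0
    return [f * blended / 100.0 for f in flow_values]

flows = [1200.0, 1185.0, 1210.0]             # hypothetical flow readings
print(dry_fiber_flow(flows, 3.1, 3.3))       # applies the blended 3.2 % consistency
```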

In addition to providing users the capability to easily combine and customize tags, the formula creation functionality of dataPARC has been used to build a number of named advanced formulas which can be applied directly to tags with no programming at all. For example, there are routines which allow the user to introduce a fixed time lag to an incoming signal, perhaps to simulate flow through extended pipe runs. There are routines to totalize values over specified periods of time. A more sophisticated routine will totalize, average, and even create a standard deviation value for an input tag, but only when a trigger tag, such as a grade or product code, is equal to a specified value.

Whether you use pre-built functions or program your own, the ability to easily configure calculated tags considerably expands your ability to analyze process data and to display the actual information which will help you and others operate and optimize the process.