The Digital Transformation Continues

The digital transformation: everyone and everything is a part of it in some way. In the 20th century, breakthroughs in technology allowed for the ever-evolving computing machines that we now depend upon so totally, we rarely give them a second thought. Even before the advent of microprocessors and supercomputers, certain notable scientists and inventors helped lay the groundwork for the technology that has since drastically reshaped every facet of modern life.

For many of us, it started with the very first personal computers like the Commodore 64, as well as e-mail, then our first mobile phones, and has progressed to Amazon's Alexa and Apple's Siri. We use our smartphones, which are essentially portable mini-computers, to navigate so many areas of our lives. The digital transformation has also affected our work lives: how operations run, how data and communications are managed, and how issues are resolved within processes.

In the past few years, our customers have become more aware of digital technology and we are hearing about their digital journeys more now than ever. With this awareness has come confusion around terms like Industry 4.0 and IIoT. Two issues in particular create that confusion.

The first is a perception that these technologies are new, when in fact many of them have been around for decades, and the fundamental problem of data accessibility is not new. There are new enablers or catalysts, like reduced cost and wireless technology, but the fundamental concept of collecting data to make better decisions has been around for decades. The second issue is the association of technology (i.e. products) with a broad theme like IIoT or Industry 4.0. This leads to the classic trap of solutions looking for problems to solve, and of customers starting with technology instead of starting with their challenges and problems.

The reality is that there are huge opportunities and we are all trying to figure out the right solutions. Engaging terminology is one way to convey messages in a way customers will notice and remember. IIOT and Industry 4.0 are examples of this terminology.

One term we heard often at a 2019 industry event was Digital Transformation. It is a term that resonates. The term implies that the journey of becoming digital is not binary: there are no purely digital or non-digital companies; we are all on a digital spectrum.

At Capstone, we like to take this one step further and break customers into five categories of transformation. Assigning transformation categories provides a framework to define the broad challenges each customer faces and to present the best solution for that customer. It also gives customers a big-picture view of what is possible. Perhaps you can see yourself and your plant in one of the stages.

Stage 1:  Stage one is an early stage. If you are in stage one, you likely have lots of manual operations and limited or no PLC or DCS, and you are still looking to modernize your control systems from analog to digital. There are opportunities at this end, but the primary focus should be identifying the right solutions (PLC upgrades, additional sensors/instruments, etc.); topics like AR (Augmented Reality) and ML (Machine Learning) should be a longer-term vision.

Stage 2:  If you are in stage two, you are in a toddler stage. Your plant has a PLC or DCS, but does not have a historian. You also likely have many functions that require manual data collection.

Stage 3:  In stage three things are starting to move and you are starting to walk.  Your plant may be ready from a cultural standpoint but lacks the software tools.  You may have a small historian that is not heavily used or other homegrown tools, but you are able to see the vision of what is needed to go forward.

Stage 4:  Beginning to run describes stage four. At stage four, your plant is fairly mature in its use of data. You have a historian with most of the data coming in. You likely have a variety of systems for LIMS, manual data, etc. The culture is there to embrace data for decision-making, but the site lacks the right software to bring the data together.

Stage 5:  Marathon training.  If you are in stage five, you are in a very sophisticated plant that has a well-developed strategy for data and decision support.  You have settled on one major historian and have spent a lot of time and effort to transfer all data from other sources (LIMS, MES, ERP, etc.) into the historian and have a number of business systems pulling from the historian. Customers at Stage 5 are ready, from both an infrastructure and cultural perspective, for advanced topics like AR & ML.

At all stages there are opportunities; one just needs to match the right solution to the right situation. The other thing to keep in mind is that these stages are not like mile markers on a highway, where once you pass a marker it is in your rearview mirror and no longer relevant. Running a plant is like paddling a boat upstream: there are always forces trying to pull you back. End-users need to keep an eye on the fundamentals and not lose sight of the fact that even improvements to fundamentals can deliver a lot of value.

Delivering value to the plant does not have to be the next big thing; it could be an improvement to something you are already doing. What are some of the small changes you could make to improve your plant’s operations?

We are here to help. Contact us for expertise and guidance in your digital transformation.



2018 dataPARC User Conference – A Recap

The 2018 dataPARC User Conference was a success! Held in Portland, Oregon from October 15-18, the conference drew presenters and attendees from six countries: the United States, Canada, South Korea, Taiwan, Thailand, and China. This year we had twenty Capstone presentations and eight customer or partner presentations.

Ron gave his much-anticipated update and set the tone for a great three days of learning, sharing and networking.

Patrick Galvin energized the group with his dynamic keynote speech focusing on what it takes to build better business relationships. Patrick gave some great tips on time management and work life balance, too.

In addition to our usual customer and staff presentations, we featured break-out practical application sessions where attendees could learn about specific features and scenarios in PARCview. Topics included:

  • Examples and best practices for high-performance process graphics
  • Ways dataPARC can be deployed
  • 5 Whys: a practical methodology for root cause analysis
  • Finding What's Changed: anomaly detection for the process
  • Asset Management: leveraging control loops
  • Asset Management: building asset models
  • How to Use Alarm Server: a practical guide to leveraging the latest technology for notifications and incident tracking
  • Types of Calcs: the types of calculations we offer, when to use each, and the opportunity of historizing scripted calcs

On Tuesday night, fun was had by all at the dinner and social: attendees walked a few blocks to the Punch Bowl at Pioneer Place Mall and enjoyed a taco truck, beverages, ping pong, cornhole, bowling and even karaoke!

All of the dataPARC User Conference presentations, along with the PowerPoint slides, are available on the new dataPARC community forum. If you were not able to attend (or even if you did and you'd like to review a presentation), please feel free to sign up for an account on our community forum.

The next user conference is planned for fall of 2020. Mark your calendars, get thinking about what you’d like to see or present, and plan to join us!

Best Practices for Process Alarm Management

The purpose of process control alarms is to use automation to assist human operators as they monitor and control processes, and alert them to abnormal situations. Incoming process signals are continuously monitored, and if the value of a given signal moves into an abnormal range, a visual and/or audio alarm notifies the operator of that condition.
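The monitoring logic itself is simple. As a minimal sketch (the limit values and tag semantics here are hypothetical, not from any particular control system), the check behind a basic alarm looks something like this:

```python
def check_alarm(value, lo_limit, hi_limit):
    """Classify a process signal against its configured alarm limits."""
    if value > hi_limit:
        return "HI"
    if value < lo_limit:
        return "LO"
    return "OK"

# Hypothetical example: a tank level reading of 82.5% against 20-80% limits
print(check_alarm(82.5, lo_limit=20.0, hi_limit=80.0))  # HI
```

Everything that follows in this article is about what happens around this check: how the result is presented, prioritized, and managed.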

This seems like a simple concept, almost not worthy of a second thought, and unfortunately, sometimes the configuration of alarms in a control system doesn’t get the attention it deserves. Configuring and maintaining alarms properly requires careful planning and has a significant impact on the overall effectiveness of a control system.

Early Alarm Systems

Before digital process control, each alarm indicator required a dedicated lamp and some physical wiring. This meant that:

  1. Due to the effort required, the need for a given alarm was carefully scrutinized, somewhat limiting the total number of alarms
  2. Once the alarm was in place, it had a permanent “home” where an operator could become comfortable with its location and meaning

The Introduction of Digital Alarms

As control systems became digital, the creation and presentation of alarms changed significantly. First, where a “traditional” control panel was many square feet in size, digital control system human machine interfaces (HMIs) consisted of a few computer monitors which displayed a representation of the process in an area more appropriately measured in square inches than square feet.

Second, creating an alarm event was a simple matter of reconfiguring some software. Multiple levels of alarms (hi & hi-hi, lo & lo-lo) could easily be assigned to a single process value. This led to an increase in the number of possible alarm notifications. Finally, when an alarm was activated, it was presented as an icon or as flashing text on a process schematic screen, and then logged in a dedicated alarm list somewhere within the large collection of display screens. However, when the alarm was presented, it lacked the consistency of location and intuitive meaning that the traditional physical lamp had.
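To illustrate how cheaply alarm levels multiplied in software, here is a sketch of a four-limit classification (the limit values are invented for illustration):

```python
def alarm_level(value, lo_lo, lo, hi, hi_hi):
    """Return the active alarm level for a single process value,
    assuming lo_lo < lo < hi < hi_hi."""
    if value >= hi_hi:
        return "HI-HI"
    if value >= hi:
        return "HI"
    if value <= lo_lo:
        return "LO-LO"
    if value <= lo:
        return "LO"
    return "NORMAL"

# One process value, four possible alarm notifications
print(alarm_level(98.0, lo_lo=10.0, lo=20.0, hi=80.0, hi_hi=95.0))  # HI-HI
```

A few lines of configuration per tag, multiplied across thousands of tags, is how alarm counts exploded compared to hard-wired panels.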

The Dilemma With Digital Alarms

Digital alarm systems worked acceptably well for single alarms and minor upsets. But for major upsets, the limited visual real estate and the need to read and mentally place each alarm created bottlenecks to acknowledging and properly responding to large numbers of alarms in a short interval of time.

If a critical component in a process fails, for example a lubrication pump on a large induced draft (ID) fan, the result can be a "flood" of alarms occurring over a short time period. The first wave of alarms is associated with the immediate failure: low lube oil pressure, low lube oil flow, and high bearing temperatures. The second wave is associated with interlocks shutting down the fan: high inlet pressure, low air flow and low downstream pressure. With no ID fan, the upstream boiler will soon start to shut down and generate numerous alarms, followed most likely by problems from the process or processes served by the boiler.

The ASM Consortium

Analyses of a number of serious industrial accidents have shown that a major contributor to the severity of the accidents was an overwhelming number of alarms that operators were not capable of understanding and properly responding to in a timely manner. As a result of these findings, in 1992 a consortium of companies including Honeywell and several petroleum and chemical manufacturers was established to study the issue of alarm management, or more generally, abnormal situation management.

The ASM Consortium, with funding from the National Institute of Standards and Technology, researched and developed a series of documents on operator situation awareness, operator effectiveness and alarm management. Since then a number of other industry groups and professional organizations, such as the Engineering Equipment and Materials Users Association in the UK and the Instrument Society of America, have also examined the issue of alarm management and issued best practices papers.

Alarm Management Best Practices

The central message of these alarm management best practices documents is that the alarm portion of a digital control system should be put together with as much care and design as the rest of the control system. It is not adequate to simply assign a high and low limit to each incoming process variable and call it good. There are a number of practices which can improve the usability and effectiveness of an alarm system. Some techniques are rather simple to implement; others are more complex and require more effort.

1. Planning

When designing a new system or evaluating an existing one, start by looking at each alarm. Evaluate whether it is really needed and whether it is set correctly. For example, a pump motor may have an alarm which sounds if the motor trips out. However, if there is also a flow sensor downstream of the pump which has an alarm on it, two alarms will register if the pump stops. Since the real effect on the process is a loss of flow, it makes sense to keep that alarm and eliminate the motor-trip alarm.

2. Prioritization

Alarms should be prioritized. Some alarms are safety related and should be presented to the operator in a manner that emphasizes their importance. High-priority alarms should be presented in a fixed location on a dedicated alarm display. This allows operators to immediately recognize them and react in critical situations. It is very difficult to read, understand and quickly react to an alarm which is presented only in a scrolling list that grows continuously during a process upset.

3. Grouping & Suppression

Correctly identifying the required alarms and prioritizing them is a help, but these techniques alone will not stop a surge of alarms during a crisis. In order to significantly reduce the number of presented crisis alarms, methods like alarm grouping and alarm suppression are needed. As mentioned in the ID fan example above, a single point of failure can lead to several abnormal process conditions and thus several alarms.

It is possible to anticipate these patterns and create control logic which handles the situation more elegantly. In the case of the ID fan, if the inlet pressure to the fan goes high and the outlet flow drops, it makes sense to present the operator with a single virtual alarm of "Fan down" rather than a dozen individual alarms, all presented within seconds of each other, that he or she has to deal with. While the operator is trying to comprehend a cluster of individual alarms to deduce that the fan is down, the upstream boiler may trip out.

Hopefully, with a single concise alarm of a lost fan, the operator can take action at the boiler and perhaps keep that unit running at reduced rate until the fan can be restored. All alarms are still registered by the system for diagnosis and troubleshooting, but only condensed, pertinent information is presented to the operator. This type of grouping and suppression can be done manually as well. If there is a process unit that is sometimes taken offline or bypassed, it makes sense to group and suppress all of the alarms associated with that unit's operation. An operator shouldn't have to continuously acknowledge a low flow alarm on a line that he or she knows has no flow in it.
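A rough sketch of this kind of grouping logic (the tag names are hypothetical, and a real system would implement this in the control or alarm layer rather than application code):

```python
# Raw alarms that all trace back to a lost ID fan (hypothetical tag names)
FAN_DOWN_GROUP = {
    "LUBE_OIL_PRESS_LO", "LUBE_OIL_FLOW_LO", "BEARING_TEMP_HI",
    "FAN_INLET_PRESS_HI", "FAN_OUTLET_FLOW_LO",
}

def present_alarms(active):
    """Return the alarm list shown to the operator.

    When the pattern indicating a lost fan is present, the grouped
    alarms are suppressed and replaced by one summary alarm. All raw
    alarms remain logged for diagnosis and troubleshooting.
    """
    if {"FAN_INLET_PRESS_HI", "FAN_OUTLET_FLOW_LO"} <= active:
        return ["FAN DOWN"] + sorted(active - FAN_DOWN_GROUP)
    return sorted(active)

raw = {"FAN_INLET_PRESS_HI", "FAN_OUTLET_FLOW_LO",
       "LUBE_OIL_PRESS_LO", "BOILER_DRUM_LVL_LO"}
print(present_alarms(raw))  # ['FAN DOWN', 'BOILER_DRUM_LVL_LO']
```

The operator sees one actionable message plus anything unrelated, while the full alarm record stays intact behind the scenes.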

4. Human Administration

Perhaps the most important part of alarm management is the actual human administration of the system. However a system is designed, its intent and use need to be clearly communicated to the operators who use it. Training operators on how to use and respond to alarms is as important as good original system design. Alarm management is a dynamic endeavor, and as operators use the system they will have feedback which will lead to design improvements. The system should be periodically audited to look for points of failure and areas of improvement. As processes change, the alarm configuration will also need to change. This ongoing attention to the alarm system will make it more robust and yield a system which will avert serious process-related incidents.

Easily Integrating Maintenance Information with a Computerized Maintenance Management System (CMMS)

One of the challenges facing industrial process manufacturing is the growing number of data sources.

Examples of these data sources could be shift reports, process data historians, laboratory information management systems, or manufacturing execution systems. Being able to easily connect disparate data sources for decision-making is a key challenge in the age of IIOT. The lack of connection of these data sources and the creation of data silos is one version of the “big data” problem we hear about.

In industrial plants, instruments have typically been connected via a Distributed Control System (DCS): a computerized control system for a process or plant, usually with a large number of control loops, in which autonomous controllers are distributed throughout the system with centralized operator supervisory control. For decades, process data historians have been collecting that data and storing it for long periods of time. The collected data previously served as the main source for plant and operations data analysis and enabled plant personnel and management to understand what was happening in the plant and, through that information and analysis, make decisions to improve plant operations.

An important source of information that we don't often hear about is the Computerized Maintenance Management System, otherwise known as CMMS. The CMMS is a software package that maintains a computer database of information about an organization's maintenance operations, information intended to help maintenance workers do their jobs more effectively. An example could be determining which machines require maintenance and which storerooms contain the spare parts they need.

Over time, other data sources became common in plants. One example is a LIMS, Laboratory Information Management System, a software-based information management tool for laboratories. The LIMS data assisted plant personnel by delivering information about the quality of the produced product. The LIMS data added an additional layer of data to the data already in the historian. By creating effective interfaces between LIMS and the historian, decision-making became easier and more accessible.

Historically, the LIMS and DCS data sources have provided the two main sources of plant information. Going forward, we see the CMMS providing a third. A common scenario could be plant personnel asking about the maintenance record for a physical asset such as a boiler. The subject matter expert (SME) uses the available plant information, determines that the root cause is a particular physical asset needing repairs, and decides a deeper dive into the issue is required. If information within the CMMS shows that the repair is already scheduled, the SME can focus attention elsewhere. Knowledge that the repair has been performed frequently in the past would instead draw attention to a larger problem that needs to be addressed. Most of the time this valuable repair history is either not accessible or not user-friendly to the SME. Easily and accurately providing this information from the same decision support application already linked to the historian and LIMS will provide immense value to the plant by connecting data and thus making decision-making more informed.
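The lookup the SME needs is straightforward once the CMMS data is connected. A minimal sketch (the asset IDs and work-order records are invented for illustration; a real CMMS would be queried through its own database or API):

```python
from datetime import date

# Hypothetical CMMS work-order records
work_orders = [
    {"asset": "BOILER-01", "date": date(2019, 3, 2), "work": "Tube leak repair"},
    {"asset": "BOILER-01", "date": date(2019, 7, 15), "work": "Tube leak repair"},
    {"asset": "FAN-03", "date": date(2019, 5, 9), "work": "Bearing replacement"},
]

def repair_history(asset_id):
    """Repair history for one physical asset, newest first."""
    records = [w for w in work_orders if w["asset"] == asset_id]
    return sorted(records, key=lambda w: w["date"], reverse=True)

def is_repeat_issue(asset_id, description):
    """Flag the larger problem: the same repair performed more than once."""
    return sum(1 for w in repair_history(asset_id)
               if w["work"] == description) > 1

print(is_repeat_issue("BOILER-01", "Tube leak repair"))  # True
```

A repeat tube leak on the same boiler is exactly the signal that should redirect the SME from a routine repair to a root-cause investigation.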

A host of automated solutions available today are trying to solve "big data" problems, problems that often result from large amounts of data that may need to be analyzed and acted upon, through artificial intelligence (AI) and machine learning (ML) techniques. While AI and ML provide some effective solutions, human problem solving and judgment are necessary in most scenarios. The key to using AI and ML tools in a meaningful way is having people with the knowledge to seek out the right solution and understand the results. A key to successfully leveraging the SME's skills and solving problems is effectively delivering the right data to answer the questions they are asking. SMEs are empowered when information from various sources throughout a plant, including the CMMS, is all available within a single, user-friendly interface.

Tech Terms Part II

As mentioned in our previous article, technical terms continue to change and evolve. Below are more terms we found useful to know and understand and think you will, too.

Artificial Intelligence (AI), Machine Learning (ML) & Deep Learning – Not only a mouthful but also confusing word spaghetti; the terms seem to get used interchangeably in a lot of scenarios. Nvidia uses an interesting infographic to provide some clarification.

The Nvidia definition uses the date of introduction to organize the hierarchy, and since AI was used as a common term first, they represent it as the most general term. On the other hand, it seems that intuitively AI would have to include characteristics associated with intelligence (adaptation and reason, to name a few), whereas Machine Learning could deliver value from data mining without necessarily having adaptive characteristics or applying reason. For our purposes, we use Machine Learning as a broader term to describe solutions that address Predictive and Prescriptive Analytics, and AI as a narrower subset, more analogous to Deep Learning. But don't be surprised to see these terms continue to be used synonymously; we look forward to a clearer definition taking shape.

Digital Twin – An emerging term, commonly used in the AI and ML space, used to describe a software model of a physical object. The term has broad meaning; in fact, someday we might end up with more granularity, as we did with Analytics. On one extreme are the complicated models that represent an entire plant using first-principles techniques, similar to a flight simulator for planes. Plants have used these digital twins for decades to help commission and optimize plants. More recently, the term has been used to describe applications like visualization and data models used for remote monitoring, or software sensors: statistical models that estimate the result of a physical instrument in a plant. In these cases, we have digital representations of the physical object used for a variety of purposes.

Cloud – This is an interesting one. For a long time, the term sounded sophisticated, but in reality it just meant that your application/OS/etc. was hosted on a server somewhere else, no different from the data centers many corporations created years ago to centrally host and maintain software. But applications and technology have evolved, and the term Cloud has fulfilled the original promise. Now Cloud solutions are purpose-built not only to run on a server somewhere else but with the scalability, ease of installation, support and security that make them unique.

Edge – Another example of expanding technology that's been used in some industries for a long time. It's simply the idea of placing applications (data collection, processing, etc.) near the source or end user. In telecom, Edge computing is a significant advancement and a key differentiation in the 5G rollout. In plants, instruments and pieces of equipment can now be considered part of the Edge. However, real-time historians have utilized Edge computing for decades: remote data collection nodes have been placed as close as possible to the data source (DCS or PLC) for years to deliver Store & Forward data collection and pre-processing of data.
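The Store & Forward pattern mentioned above can be sketched in a few lines (the sample format and link-status check are simplifications of what a real collection node does):

```python
from collections import deque

class EdgeCollector:
    """Sketch of store-and-forward collection at the Edge: samples are
    buffered locally while the link to the central historian is down,
    then flushed in order once it returns."""

    def __init__(self, send):
        self.send = send        # callable that ships one sample upstream
        self.buffer = deque()   # local store while the link is down

    def collect(self, sample, link_up):
        self.buffer.append(sample)
        if link_up:
            while self.buffer:  # flush the backlog in arrival order
                self.send(self.buffer.popleft())
```

The point of the pattern: no data is lost during an outage; the historian simply receives the backlog late, in order.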

In many use cases, Edge computing is just a product of the continued evolution of Cloud-based computing and the realization that a hybrid strategy is required. Some pieces will live in the Cloud and others will continue to live On Premise, at the Edge. If you're peering down from the Cloud, the plant probably looks like the Edge. To many of us, it's where we've been since the 1980s.

A good article discussing Cloud vs. Edge can be found here –

Block Chain – Probably our favorite term these days. Right now it feels like a solution looking for a problem, but a list of buzzwords would not be complete without it. So, until someone develops a proven real-time historian or DCS block chain, we'll let others work on this one and you won't hear much from us. (As a side note, as of June 25th, 2018, the average Bitcoin transaction confirmation time over the previous 60 days was ~20 minutes.)

Have questions or need clarification? Feel free to reach out to us.

Five Reasons You Should Attend the 2018 dataPARC User Conference


It is that time of year again: time to gather with your peers and talk about some of the great benefits of dataPARC software. This year's dataPARC user conference will be held at the Sentinel Hotel in Portland, Oregon from October 15-18, 2018. Besides getting to learn in a beautiful setting (Portland, OR in the fall – gorgeous!), here are five reasons why you should attend:

1. You Will Learn New Ways of Utilizing dataPARC

At the dataPARC User Conference, you will find yourself in a room with over a hundred fellow dataPARC users from diverse industries such as oil & gas, food and beverage, chemicals, and pulp and paper. Through dynamic presentations, you will witness creative new ways that the robust software is used to streamline processes, improve reporting and access data quickly, ultimately benefitting your facility and profitability. The last day of the conference is devoted entirely to interactive training, so bring your laptop and get ready to learn. With a demo station set up with PARCview running, you will surely leave with innovative methods to utilize. Round table discussions on relevant topics open up new possibilities and generate ideas.

2. You Will Meet Colleagues and Build a Valuable Network

The support of a knowledgeable network is invaluable. At the dataPARC user conference, you will have the opportunity to expand your professional network. Users from all over the world will gather to share ideas, methods and strategies regarding what has worked for their business using dataPARC. Breaks and lunch allow time to get to know colleagues in a comfortable setting. Wednesday evening at the Punch Bowl social includes games, karaoke, great food and a chance to relax and have fun.

3. You Will Be Inspired by our Keynote Speaker, Patrick Galvin

We picked a speaker this year who is both dynamic and relevant to your life as a busy professional. Patrick Galvin, author of The Connector's Way, will teach you important points regarding building business relationships and the value of a connection. Patrick will kick off the conference, and you will have a chance to meet him at the welcome reception. You will also be able to ask follow-up questions at lunch following his keynote presentation. We will send you home with a copy of his book so that you may continue to learn about the principles and key points of his keynote speech.

4. You Will Stay Informed Regarding Upcoming Versions of dataPARC including 7.0

During the conference, you will learn about recent changes and updates to the dataPARC software suite. We are gearing up for 7.0 and will be ready to share the new features with you. Capstone personnel will include changes in 7.0 during training sessions. Rather than just getting updates in an e-mail newsletter and a corresponding update video, you will get a chance to see PARCview in action. The demo station set up at the back of the conference room will also allow you to try out the new version.

5. You Will Meet and Interface with the Capstone Technology Team

All of us at Capstone Technology are eager to see you and get to know you. Whether you have communicated with us regarding support, engineering or something else, we look forward to having a conversation with you. Feel free to share some of your ideas and needs for features on future versions. Get to know the salesperson, engineer or support person you have spoken with on the phone.  Try PARCview at the demo station and grab one of us if you have a question.  Need something clarified or have a concern? We are here for you at the conference. We look forward to seeing you there!

Click here to learn more about the conference

Soft Sensors in the Process Industry – How Are They Helpful?

The process industry is constantly evolving as processes are refined and innovative technologies continue to transform the processing landscape. As complex systems are installed, upgraded and monitored, expectations for profitability and smooth delivery of product remain high. Soft sensors with predictive models provide scenarios in which estimations can drive decision-making and improve the reliability of current systems, often working hand-in-hand with their hard-sensor counterparts, creating comprehensive monitoring networks.

Soft sensors are virtual sensors that are utilized heavily in the process industry; through accurate predictions, they can alleviate the need for more expensive hardware sensors. Their real-time predictions can ease constraints on budget, staff hours or current operating equipment. Engineers at a chemical plant or a food processing facility may have federal environmental regulations they must contend with. They need an accurate way to measure lab data and temperatures, all while staying within budget and controlling capital expenditures. They may also have frequent challenges with a part of the process, requiring fast access to data to troubleshoot and identify the bottleneck or challenge. Soft sensors can provide an economical and effective alternative to costly hard sensors, which require an expensive investment, need constant servicing and maintenance, and often fail. Through soft sensors, approximated calculations can provide in theory what raw data provides in reality. Using the last year of data, for example, a soft sensor can build a data model, which a process engineer can then use for a variety of calculations and decision-making.

Many physical properties and manual tests performed offline are related to properties measured online by sensors in the general manufacturing process. For example, the strength of a final product is often related to the temperature of the process or the amount of certain chemicals that are added. The strength may only be tested once per hour, but the temperature and chemical usage are measured every second. This relationship allows soft sensors to estimate the strength in real time. Another manual lab test may occur once per hour or a few times a day, whereas the soft sensor provides feedback minute-by-minute or second-by-second; soft sensors model "off-line" tests. The soft sensor uses a combination of historical process data recorded from online sensors and laboratory measurements to predict KPIs, replacing manual testing. The greatest benefit of soft sensors in the lab application is faster feedback on changes to physical properties. Simulated testing frees up operators and supervision to work on other high-priority tasks.
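As a minimal sketch of the idea, here is a one-variable model fit to an invented, perfectly linear history of temperature and strength (a real soft sensor would typically use multivariate techniques such as PLS across many inputs):

```python
from statistics import mean

def fit_line(xs, ys):
    """Ordinary least squares for a single predictor."""
    mx, my = mean(xs), mean(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical history: temperature measured continuously,
# strength tested in the lab only once per hour
temps = [150.0, 155.0, 160.0, 165.0, 170.0]
strengths = [40.0, 42.5, 45.0, 47.5, 50.0]
slope, intercept = fit_line(temps, strengths)

def predict_strength(temp_now):
    """Soft-sensor estimate of strength from the live temperature."""
    return slope * temp_now + intercept

print(predict_strength(162.0))  # 46.0
```

Between hourly lab tests, the model supplies a second-by-second strength estimate from the continuously measured temperature.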

Soft sensors maximize the data and signals you have already collected in your process. Rather than consistently replacing hardware and spending valuable budget dollars, the alternative soft-sensor solution works in tandem with your existing hardware sensors. By utilizing what is already there, both complications and downtime can be avoided. Process engineers can connect soft sensors to your existing hard sensors, and facilities will benefit from real-time analysis, monitoring and control, providing reliable calculation of parameters where no hardware sensor is available and reducing purchase and maintenance costs.

Through development of the PARCview visualization software, dataPARC has its own version of soft sensors. Why would dataPARC's soft sensors be advantageous for you in your plant or facility? The PARCmodel component of dataPARC's product group works in tandem with a soft sensor to predict plant quality variables in real-time, allowing for estimation of properties that are impractical or impossible to measure online.

PARCmodel also reads live data, such as temperatures and pressures, from the plant and uses them to calculate estimated quality values from user-entered models. These models can be based on first principles or developed empirically through techniques such as PCA and PLS.

PARCview soft sensors also do the following to ensure smooth operations:

  • Utilize Data From Any Source
  • Feature Closed-Loop Automation Control
  • Are Intuitive and User-Friendly

In addition:

PARCview provides a familiar user interface for model creation and optimization, with model building driven through the Trend control.

PARCmodel can read in data from any source available to PARCview, including OPC DA, OPC HDA, plant historian systems, SQL and Excel.

Leverage your soft sensor models in advanced-control applications; the soft sensor output can be used like any other process value in the control solution.

Do you think that dataPARC’s soft sensors could be helpful in your facility? Contact us today to learn more and get customized product suggestions for your company.

Tech Terms Today – What Buzzword Was That?

Buzzwords have always been a part of technology, but recently their usage seems to have exploded. As usage grows, the terms themselves have changed and evolved, and many contemporary terms now carry a wide spectrum of meanings, applied to new applications and solutions brought to market. From our perspective here at Capstone Technology, the more we can all speak a common language, and the more we realize many new terms are replacing old concepts, the better off we will be.

With that in mind, off we go. The following thoughts come from our vantage point here at Capstone; others may have different points of view. One thing we think is clear: if you want to see the future, go visit your nearest modern process industry plant. Please let us know what you think!

Industry 4.0 – Originating in Germany, Industry 4.0 is simply the idea that we’re now in the Fourth Industrial Revolution. Many articles have laid out the various revolutions, with Wikipedia’s overview among the most comprehensive. Industry 4.0 is also one of many terms used to describe the continued technology advancements in manufacturing. The terminology below defines the elements that comprise Industry 4.0.

Internet of Things (IoT) – Probably the most ubiquitous term currently used to describe the latest technology advancements to connect all things. In many contexts, IoT encompasses IIoT (see below), but for the sake of this article and future discussions we separate the two, defining IoT as focused on the consumer market, where there are distinct differences. The advancements can be thought of as:

  1. Revolutionary – a few years ago my thermostat was totally isolated and my watch told time
  2. Important, Not Mission Critical – they make our lives more convenient, but if my thermostat goes offline my house can still stay warm. Given that, it’s fine to rely on our home Wi-Fi.
  3. Connected Devices – we primarily see devices that used to be isolated become connected to the internet

Industrial Internet of Things (IIoT) – So, what does the extra “I” mean? Well, in our view, a lot. It relates to technology advancements that focus on industrial processes (i.e. plants). In the IIoT space the advancements can be thought of as:

  1. Evolutionary – Industrial plants became connected over 30 years ago; the first ‘Distributed Control System’ is credited to Texaco back in 1959. Sure, new technology has made it easier for remote and disparate processes to get connected, but the technology and concepts have been around a long time.
  2. Mission Critical – These systems have to work, so the network and infrastructure built around them are robust and secure.
  3. Connected Data – Especially for vendors like us, it’s about connecting data that has always been there, thereby making it faster and easier to make decisions.

Analytics – This one gets used frequently in the world of data analysis and certainly means different things to different people. Broadly speaking, we see three types of analytics. To illustrate these distinct variations, we will use the weather as an analogy.

  1. Descriptive Analytics – Analysis of data from the past (even if only from a few seconds ago). With descriptive analytics, individuals are still evaluating data and making decisions; the system gathers and synthesizes readings from multiple instruments to tell you, in simple language, the current and historical conditions: today it has been cloudy, and there was a chance of rain.
  2. Predictive Analytics – Predictive analytics uses more complex analysis, including a time-predictive model. Predictive analytics for weather would state the following: a model predicts scattered showers this afternoon, winds from the north at 10 miles an hour, and temperatures in the 40s.
  3. Prescriptive Analytics – Includes everything from the predictive model, but also needs more context. Prescriptive analytics for weather would state the following: first, it needs to know a few things about you, such as where you work and what you do. Bring a light coat and umbrella with you to work this morning. The wet roads will require more time to travel. You will need to leave for work by 7:52 to make your 8:30 meeting.

All in all, our conclusion is that tech industry buzzwords are necessary to navigate today’s technology landscape, especially in the process industry where systems and technologies are constantly evolving. Stay tuned for more buzzword definitions coming in future editions of PARCfocus newsletter!

Rolling Up Your Data For Easy Access

The first step toward understanding and optimizing a manufacturing process is to collect and archive data about the process. Ideally, the system used to accomplish this is a “plant-wide” information system, or PIMS, which collects not just process data, but also quality information and laboratory results, and operations information such as upcoming orders and inventory.

The real value of a PIMS is determined by how that collected data is organized, how it is retrieved, and what options are available to help you garner meaningful conclusions and results from the data.

Consider a situation where you have been asked to determine the average flow of steam supplied to a heat exchanger that is used to heat a product stream. No problem. You call up a trend of the last six months of steam flow data with the intention of using the averaging function that is built into the reporting package to generate your answer. Unfortunately, once the trend is up, you see that there are a few hours every day during which there is no product flow and so there is no steam flow. Since you are looking for the average steam flow during operation of the heater, your job just became more difficult. You have no choice but to export the data to a spreadsheet, manually eliminate the zero readings, and then calculate the average of the remaining valid values.
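That spreadsheet exercise boils down to a conditional average: exclude the zero-flow periods, then average what remains. A minimal stdlib-only sketch with made-up readings:

```python
# Synthetic hourly steam-flow readings (kg/h); zeros mark the hours
# with no product flow, and therefore no steam flow.
steam_flow = [1200.0, 1180.0, 0.0, 0.0, 1210.0, 1195.0, 0.0, 1205.0]

# A naive average is dragged down by the shutdown hours.
naive_avg = sum(steam_flow) / len(steam_flow)

# The average during operation filters out the zero readings first.
running = [f for f in steam_flow if f > 0.0]
running_avg = sum(running) / len(running)
```

With these numbers the naive average is 748.75 kg/h while the operating average is 1198.0 kg/h, which is why the unfiltered built-in averaging function gives a misleading answer.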

Most manufacturing facilities produce multiple variations of their product on each line. A toothpaste maker may periodically change the flavor; a papermaker may change the formulation of the furnish to make one type of paper stronger than another. These intentional process modifications give rise to the idea of “product runs” or “grade runs” and present significant challenges to data analysts.

Consider that you have been given another assignment. It has been noticed that the last few times the high strength grade A346_SS has been run, an increasing number of reels have been rejected because the MD strength test was too low. Your job is to determine the cause of this problem. There are a number of steps required to address this issue. Depending on the PIMS analysis tools available to you, they may be easy or they may be tedious.

There are probably several dozen critical process variables that you will want to examine. If this grade is not frequently run, you may be required to pull all the data (for all grades) for those critical variables for a period of many months. Returning that amount of data will probably take a great deal of time.

In the worst case scenario, you may have to make a separate query to determine the time periods when grade A346_SS was run, and then go through the data and manually extract those time periods for each process value. Alternatively, your analysis software may allow you to apply a filter to your query to return only the data related to grade A346_SS. This reduces the work that must be done on the returned data, but the extra filtering may well further increase the query duration. Query durations are network and hardware dependent, but returning three or four dozen tags of high frequency data for a 6 month time period could easily take more than 15 minutes.

A system which requires too much work to condense the data, or takes too long to retrieve it, is only marginally useful to the people who need it. dataPARC has developed a method and a tool to address this issue. PARCpde (PARC performance data engine) is a flexible real-time data aggregator which can work with any historian to provide fast access to large ranges of historical data in seconds. PARCpde is used to aggregate, or “roll up,” data as it is created. Aggregates can be based on predefined time periods (hours, days, weeks, months) or custom periods, such as shifts or production months. To address the issue of grades, the aggregation period can be a flexible time period specified by a production parameter like grade number or production run ID.

For each aggregated period, a number of statistics are automatically calculated and stored, including averages, durations, minimums, maximums and standard deviations. Filter criteria can be further applied to the aggregated data. For example, a “downtime tag” could be identified and used as a filter, so that only the process values during active production would be aggregated into the statistics. Condensing process values into statistics for predefined periods in an ongoing manner avoids the time consuming task of having to manually sort values and calculate statistics every time a question comes up. The aggregated statistics become properties of the base tag and do not require creation of a new tag. Finally, if you want statistics for a tag which had not been previously configured to be aggregated, it is possible to easily add that tag and backfill statistics for a specified period of time.
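To make the idea concrete, here is a hypothetical, stdlib-only sketch of a grade-based rollup with a downtime filter. The grade names and values are invented, and this is only the shape of the computation, not PARCpde's internals:

```python
from collections import defaultdict
from statistics import mean, stdev

# Synthetic historian rows: (grade ID, running flag, MD strength).
# The running flag plays the role of a "downtime tag" filter.
samples = [
    ("A346_SS", 1, 41.0), ("A346_SS", 1, 42.5), ("A346_SS", 0, 0.0),
    ("B200",    1, 35.0), ("B200",    1, 36.0), ("A346_SS", 1, 40.5),
]

# Group values by grade, keeping only rows from active production.
by_grade = defaultdict(list)
for grade, running, value in samples:
    if running:  # downtime-tag filter: skip values recorded while down
        by_grade[grade].append(value)

# Pre-computed statistics per grade run, ready for instant retrieval.
rollup = {
    g: {
        "avg": mean(v),
        "min": min(v),
        "max": max(v),
        "std": stdev(v) if len(v) > 1 else 0.0,
        "count": len(v),
    }
    for g, v in by_grade.items()
}
```

Doing this incrementally as data arrives, rather than on demand, is what turns a fifteen-minute query into a sub-second lookup.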

Simply possessing large amounts of stored data does not solve problems or increase productivity. Unless the proper tools are in place to use and interpret that data, the data will not be useful. High frequency data, or readings taken every few seconds, can be valuable. However, if the goal of a particular analysis task is to compare conditions over a long period of time, having to recall and process thousands of data points per hour becomes an impediment rather than an advantage. The best solution is to use a PIMS which has quick access to both aggregated historical datasets and high frequency detail data, and is equipped with the tools to seamlessly move between the two data types.




A Guide To Reporting and Notifications with a Data Historian

Historian packages were originally intended to be a support tool for operating personnel. Current and historical data was constantly displayed on a dedicated screen next to the primary control screens, and users were intended to interact with it at that location more or less continuously. As the historian became a one-stop source for all types of data throughout a facility, it became a tool that could benefit supervisory and management personnel as well. This led to the development of a variety of remote notification and reporting tools to meet the somewhat different needs of these individuals.

DataPARC reporting
DataPARC is one of the leading historian and data analysis software packages available to process industries. DataPARC has a variety of mechanisms for relaying information to remote users of the system in order to keep them in contact with the process. At the most basic level, the system can be configured to email one or more people based on a single tag going beyond a set limit. A separate notification can also be sent at the time an operator enters a reason for the excursion, and also when the variable returns to a value within the limit.

At the next level of complexity, the system can populate and send an entire report, based on an event or a preset time schedule. Reports can be as simple as a snapshot showing the current values of a few KPIs, or as complex as a multipage report containing tables, process graphics, charts and trends. DataPARC has a built-in, flexible and easy to use application for developing report templates. DataPARC also offers an add-in which allows data to be shown within Excel. For people who are proficient with the tools within Excel, this is another avenue for creating reports. Reports created in Excel can be viewed natively in Excel or exported as .pdf or .html files for viewing on a wide range of platforms. Production, raw material consumption and environmental compliance can all be easily tracked by periodic reporting, and any deviations can be quickly spotted and rectified. Receiving a daily report just before a morning meeting provides a quick way to avoid unpleasant surprises at the meeting.

PARCmobile is the most flexible remote-user experience. PARCmobile gives you continuously updated data and access to most of the features and all of the data within dataPARC, delivered on a mobile device. Live trends and graphics make it possible to take the next step beyond a single number or notification and perform a wide-ranging investigation of any process irregularities.

Generate the Best Reports Possible Using these Guidelines:
Different people have different methods of working. Not all reporting needs are the same. A process engineer troubleshooting a particular problem will want more granular, higher frequency reports focused on a particular area, at least for the duration of the issue, than an area manager who is monitoring multiple processes to make sure that they are generally on track. Nonetheless, here are some guidelines that will apply to most remote users most of the time:

Minimize the number of notifications that you receive, and choose them wisely. If you receive an email for every minor process excursion, their importance will diminish and you are liable to not notice or respond to an important notification. Focus on watching only crucial KPIs.

Reports should be simple. The primary purpose of mobile notification is to be alerted to new or potential problems, not to find causes or solve those problems based on the report.

Export reports in PDF format. This is a standard format which offers easy scalability and works well on virtually all software and hardware platforms.

Use the group function to notify everyone who might be affected by a process excursion. For example, if high torque in a clarifier is detected due to high solids coming in from a process sewer, all areas which are serviced by that sewer should be notified. Doing this will hopefully result in the problem being solved more quickly, as each area checks on their contribution simultaneously, rather than each area looking in sequence, only after each downstream contributor reports their results.

Incorporate dead banding and/or delay into your notifications. Again, this depends on your job role, but for most remote users of data, unless an excursion presents a safety hazard or compliance issue, you don’t need to know about it immediately. Minor excursions can resolve themselves or be handled by frontline operators. Delaying notifications helps to minimize their numbers by filtering out the minor issues from the major ones.
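As a sketch of the idea (the limits, dead band and sample counts below are invented, and this is not a dataPARC API), a notification rule combining a delay with a dead band might look like this:

```python
HIGH_LIMIT = 100.0     # hypothetical alarm limit
DEADBAND = 5.0         # must drop below 95.0 to re-arm
DELAY_SAMPLES = 3      # e.g. three consecutive one-minute samples

def notifications(values):
    """Return the sample indices at which a notification is sent.

    A notification fires only after the value has stayed above the
    limit for DELAY_SAMPLES consecutive samples, and will not repeat
    until the value falls back below HIGH_LIMIT - DEADBAND.
    """
    over = 0          # consecutive samples above the limit
    armed = True      # ready to send a (new) notification
    sent = []
    for i, v in enumerate(values):
        if v > HIGH_LIMIT:
            over += 1
            if armed and over >= DELAY_SAMPLES:
                sent.append(i)
                armed = False  # don't repeat during the same excursion
        else:
            over = 0
            if v < HIGH_LIMIT - DEADBAND:
                armed = True   # re-arm only after clearing the dead band
    return sent
```

The delay filters out brief spikes that resolve themselves, and the dead band prevents a value hovering around the limit from generating a stream of repeat emails.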

Whichever historian you use, using the built-in notification and reporting functions will increase its effectiveness by engaging a wider range of users. Having more eyes and brains monitoring a process will hopefully lead to problems being addressed more effectively and keep the process running more profitably.