Advantages of a Plant-Wide Information System

When operating a continuous manufacturing process of any kind, it is beneficial to have as much data as possible. Simply collecting and storing data does not, by itself, yield measurable benefits. To take full advantage of the data, it needs to be organized, archived, and then made available in a variety of formats throughout a facility. This is the function of a versatile and robust plant-wide information system like the dataPARC software suite.

The term “plant-wide” applies to an information system in two ways. The first function is that of collecting information from various data sources throughout the actual manufacturing process, including the administrative infrastructure around the plant. The information system should then condition and archive the data.

“Data conditioning” refers to a variety of techniques that include, but are not limited to, averaging, filtering, correlating time stamps, creating combined calculated values, and aggregating raw data. The second application of the “plant-wide” term refers to the presentation of that conditioned data back to users throughout the mill. While using a system like this may seem like an obvious idea, many plants still do not utilize one.

The History of Data Management in Plants

Historically speaking, as manufacturing facilities have transitioned from analog mechanical and pneumatic control systems and paper based recordkeeping into the digital age, the changes have not been at all uniform. Production processes were largely re-instrumented and put under the control of computer based Distributed Control Systems (DCSs) or Programmable Logic Controllers (PLCs). Some of these systems had the ability to archive data, some did not. Initially these systems were offered by a variety of vendors and the exchange of data was either not considered, or discouraged. It is not unusual to see incompatible DCS systems from different vendors within a single facility. As computing technology advanced, the vendor offering the best blend of features and price constantly changed, leading to a diversification of systems as different areas of a facility were modernized.

The Problem with “Data Islands”

Quality control labs also took advantage of advancing technology, and invested in database programs and communication interfaces tailored to archiving both manual data entries and automated input from certain instruments. Raw material ordering and inventory was tracked in database programs optimized for those purposes, as was warehousing and shipping information. Each department in a facility did indeed move forward. Digitizing and storing data reduced costs and increased efficiency at the departmental level. While this computerization increased the ability to share data between departments in some ways, a facility which relies on these marginally connected “data islands” is missing out on many of the benefits that can be realized with a plant-wide information system capable of integrating data from all those sources.

Troubleshooting with quality and process data

Consider an example of troubleshooting a quality problem in an integrated pulp and paper mill, where product paper reels are produced every 20 to 60 minutes. Several quality tests are run on samples taken from each reel. Suppose that the machine direction (MD) tensile strength was measured as being below the lower acceptable limit for a particular grade on a couple of consecutive reels. With only “data islands” in place, this information would probably be made available to the paper machine operators through an electronic report, and they would be left on their own to figure out the cause and solution for this problem.

With a plant-wide information system in place, the MD strength data could be easily trended next to any number of upstream process variables. A good information system would have the ability to “time shift” the quality data, so that the drop in the strength number for the reels could be visually matched with changes in other process variables. Doing this, the machine operators would see that the drop in strength had started before any refining change was made on the machine.
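The time-shift idea described above can be sketched in a few lines. This is a minimal Python illustration of the concept, not dataPARC's implementation; the 90-minute transport lag and the tensile values are hypothetical.

```python
from datetime import datetime, timedelta

def time_shift(series, lag_minutes):
    """Shift each (timestamp, value) sample earlier by an estimated
    transport lag, so reel quality data lines up visually with
    upstream process variables on the same trend."""
    lag = timedelta(minutes=lag_minutes)
    return [(t - lag, v) for t, v in series]

# Hypothetical MD tensile results, assumed to lag the upstream
# process by roughly 90 minutes of stock transport time.
reel_md_tensile = [(datetime(2017, 5, 1, 10, 0), 28.1),
                   (datetime(2017, 5, 1, 10, 30), 24.6)]
aligned = time_shift(reel_md_tensile, 90)
```

With the series shifted, a strength drop at the reel can be laid directly over the digester trend from an hour and a half earlier.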

A good plant-wide information system such as dataPARC would give the paper machine operators access to variables from outside their area. By casting their troubleshooting net a little farther, the PM operators could see that the time at which the drop in paper strength occurred at the reel closely matched an earlier upset in the digester, which led to the production of 3 hours of over-cooked, low strength pulp.

The Benefits of Process Information and Corresponding Insight

Having this insight would lead to two positive outcomes. Not only would the source of the low strength paper be discovered, but by knowing that it came from outside the paper machine, those operators would not create additional, possibly off-spec product by “chasing their tail” and further changing refiner settings. With the knowledge that the 3 hours of low strength pulp had largely already passed through the machine, they would also know that the strength number would in all likelihood return without the operators making any changes to the stock prep and machine settings. In this case the enhanced data access would lead to good decision making and more efficient operation.

Combining Raw Cost and Process Information

In a manufacturing facility, electrical, fuel and raw material costs originate in Enterprise Resource Planning (ERP) software. These costs are sometimes dynamic, and the ability to access those numbers is an important capability for a plant-wide information system. Some facilities generate and sell electrical power as well as consume it. Having accurate real time cost data helps engineers and operators optimize fuel types, steam generation and electrical power flows to maximize profits.

Additionally, showing actual costs in process trends is a technique used to further operator involvement in optimizing a process. A steam vent of 10,000 lbs per hour may provide a convenient way to operate a given process for a period of time, but it comes at a cost. If the vent flow is displayed as a loss of $100 per hour based on the flow and value of steam, it is easier to communicate to operators the importance of eliminating that method of operation.
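Translating the vent flow into dollars is simple arithmetic. A quick Python sketch, assuming a steam value of $10 per 1,000 lb; in practice the value would come from the ERP system:

```python
def vent_cost_per_hour(flow_lb_per_hr, dollars_per_1000_lb):
    """Express a steam vent flow as a dollar loss per hour."""
    return flow_lb_per_hr / 1000.0 * dollars_per_1000_lb

# 10,000 lb/hr vented at an assumed $10 per 1,000 lb of steam
loss = vent_cost_per_hour(10000, 10.00)  # 100.0 dollars per hour
```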

Pushing Process Information

While the previous two examples apply to enhancing the operation of an actual production process, it is equally important that the vital metrics of the process be seamlessly returned to an ERP or other administrative software for ordering and shipping reasons. Modern manufacturing philosophy says that minimizing inventory is one way to reduce costs. As the period of time between the production and shipping of goods or product is reduced, it becomes increasingly important for the shipping planners to have real time information about manufacturing problems which might lead to the inability to meet an order. It is the role of a plant-wide information system to make this interchange of data happen.

Goals of Plant-Wide Information Sharing

As stated above, a plant-wide information system should fulfill two important goals. One is to collect and archive as much data as is needed to operate the plant and allow for effective troubleshooting. Just as importantly, it should present, in various formats, the same conditioned and calculated values to everyone throughout the mill. By using a single set of values, all the decision makers, from the planners and engineers to the process operators, are working with the same up-to-date data.

Contact us to learn more about dataPARC for your plant-wide data integration needs.

A Guide To Reporting and Notifications with a Data Historian

Historian packages were originally intended to be a support tool for operating personnel. Current and historical data was constantly displayed on a dedicated screen next to the primary control screens, and users were intended to interact with it at that location more or less continuously. As the historian became a one-stop source for all types of data throughout a facility, it became a tool that could benefit supervisory and management personnel as well. This led to the development of a variety of remote notification and reporting tools to meet the somewhat different needs of these individuals.

DataPARC reporting
DataPARC is one of the leading historian and data analysis software packages available to process industries. DataPARC has a variety of mechanisms for relaying information to remote users of the system in order to keep them in contact with the process. At the most basic level, the system can be configured to email one or more people based on a single tag going beyond a set limit. A separate notification can also be sent at the time an operator enters a reason for the excursion, and also when the variable returns to a value within the limit.
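The logic behind such a limit notification can be sketched as a small state machine. This Python sketch is illustrative only; in dataPARC the trigger is configured rather than coded, and the tag values here are made up.

```python
def limit_events(samples, high_limit):
    """Walk (time, value) samples and report when a tag first crosses
    above a high limit and when it returns within the limit."""
    events = []
    above = False
    for t, v in samples:
        if v > high_limit and not above:
            events.append((t, "exceeded"))
            above = True
        elif v <= high_limit and above:
            events.append((t, "returned"))
            above = False
    return events

# Hypothetical samples: the tag exceeds a limit of 10 and later recovers.
events = limit_events([(0, 5.0), (1, 12.3), (2, 13.1), (3, 8.4)], 10.0)
```

Each event would map to one email: one when the excursion starts, and one when the variable returns within the limit.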

At the next level of complexity, the system can populate and send an entire report, based on an event or a preset time schedule. Reports can be as simple as a snapshot showing the current values of a few KPIs, or as complex as a multipage report containing tables, process graphics, charts and trends. DataPARC has a built-in, flexible and easy to use application for developing report templates. DataPARC also offers an add-in which allows data to be shown within Excel. For people who are proficient with the tools within Excel, this is another avenue for creating reports. Reports created in Excel can be viewed natively in Excel or exported as .pdf or .html files for viewing on a wide range of platforms. Production, raw material consumption and environmental compliance can all be easily tracked by periodic reporting, and any deviations can be quickly spotted and rectified. Receiving a daily report just before a morning meeting provides a quick way to avoid unpleasant surprises at the meeting.

PARCmobile is the most flexible remote-user experience. It gives you continuously updated access to most of the features and all of the data within dataPARC, delivered on a mobile device. Live trends and graphics make it possible to take the next step beyond a single number or notification and perform a wide-ranging investigation of any process irregularities.

Generate the Best Reports Possible Using these Guidelines:
Different people have different methods of working, and not all reporting needs are the same. A process engineer troubleshooting a particular problem will want more granular, higher-frequency reports focused on a particular area, at least for the duration of the issue, while an area manager monitoring multiple processes simply wants to confirm they are generally on track. Nonetheless, here are some guidelines that will apply to most remote users most of the time:

Minimize the number of notifications that you receive, and choose them wisely. If you receive an email for every minor process excursion, their importance will diminish and you are liable to not notice or respond to an important notification. Focus on watching only crucial KPIs.

Reports should be simple. The primary purpose of mobile notification is to be alerted to new or potential problems, not to find causes or solve those problems based on the report.

Export reports in PDF format. This is a standard format which offers easy scalability and works well on virtually all software and hardware platforms.

Use the group function to notify everyone who might be affected by a process excursion. For example, if high torque in a clarifier is detected due to high solids coming in from a process sewer, all areas which are serviced by that sewer should be notified. Doing this will hopefully result in the problem being solved more quickly, as each area checks on their contribution simultaneously, rather than each area looking in sequence, only after each downstream contributor reports their results.

Incorporate dead banding and/or delay into your notifications. Again, this depends on your job role, but for most remote users of data, unless an excursion presents a safety hazard or compliance issue, you don’t need to know about it immediately. Minor excursions can resolve themselves or be handled by frontline operators. Delaying notifications helps reduce their number by filtering out the minor issues.
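The combined effect of a deadband and a delay can be illustrated with a short sketch. The limit, deadband, and sample counts below are hypothetical, and this is a conceptual model rather than a dataPARC configuration:

```python
def should_notify(values, limit, deadband, delay_samples):
    """Notify only if the value stays above limit + deadband for
    delay_samples consecutive samples, filtering brief excursions."""
    run = 0
    for v in values:
        if v > limit + deadband:
            run += 1
            if run >= delay_samples:
                return True
        else:
            run = 0
    return False

# A one-sample blip above the limit is ignored...
blip = should_notify([101.5, 99.0, 102.0, 99.5], 100.0, 1.0, 2)
# ...but a sustained excursion triggers a notification.
sustained = should_notify([103.0, 104.0, 105.0], 100.0, 1.0, 2)
```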

Whichever historian you use, using the built-in notification and reporting functions will increase its effectiveness by engaging a wider range of users. Having more eyes and brains monitoring a process will hopefully lead to problems being addressed more effectively and keep the process running more profitably.

Benefits of Calculated Variables

As an engineer in a manufacturing facility, you are excited that management has purchased and implemented a plant-wide Information Management system, or PIM. This gives you the ability to collect and store process data, and to display both real-time and historical process graphs which allow you and the operators to better understand the process. You can finally trend important process variables next to each other to visualize relationships that you suspect exist, and use historical data to accurately diagnose problems. For example, was it a lube oil pump failure or a loss of cooling water that led to the recent shutdown of a compressor?

Not long after you start doing your time based analysis of data, you develop the desire to trend not just raw process data, but modified versions of that data. In the simplest example of calculated data, a single trend might be modified by a constant. A chemical addition flow may be reported as gallons per minute, but you want to discuss and track that value as pounds per hour.
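That gallons-per-minute to pounds-per-hour conversion is just a constant multiplier. A sketch, assuming a density near that of water (8.34 lb/gal); the real factor would depend on the chemical being dosed:

```python
def gpm_to_lb_per_hr(gpm, lb_per_gal=8.34):
    """Scale a flow tag by a constant: gal/min -> lb/hr."""
    return gpm * 60 * lb_per_gal

rate = gpm_to_lb_per_hr(10)  # 10 gal/min of a water-like liquid
```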

Another common scenario involves combining two or more tags. Perhaps you have an inlet and outlet pressure on a scrubber. As the flow through the scrubber changes, both values change, and it would be better to monitor a single differential pressure rather than comparing two changing trends.

A second example of combining tags would be multiplying the total flow of a stream by the concentration of a component, perhaps the consistency of solids in the stream, to create a flow rate of just the solids. Even if the consistency value comes from a lab test, PARCview will pull the value in, and properly time synchronize and combine it with the flow value. The ability to observe and trend these created variables vastly increases the usefulness of the presentation system. The more you become involved with data analysis, the more you see the need to manipulate the time-based raw data to display the information you and others need to monitor. DataPARC has three techniques of somewhat increasing complexity which give users the ability to manipulate raw data. All involve creating a new “calculated variable tag.”
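Both combinations reduce to one-line arithmetic on paired samples. A minimal sketch with hypothetical values; in practice PARCview time-synchronizes the tags before combining them:

```python
def differential_pressure(inlet, outlet):
    """Single DP value from two pressure tags."""
    return inlet - outlet

def solids_flow(total_flow, consistency_pct):
    """Component flow: total flow times the solids fraction."""
    return total_flow * consistency_pct / 100.0

dp = differential_pressure(14.2, 12.7)   # scrubber inlet minus outlet
fiber = solids_flow(1000.0, 3.5)         # 1000 units of stock at 3.5% solids
```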

The first technique is available to all users and is very easily implemented. Using the example of combining a total flow and a concentration to create a component flow tag, the procedure starts by dragging the total flow tag onto a trend. Simply clicking on the variable name within the header block at the top of the trend activates it for editing. The tag can be modified by appending text to it. Once that text is correctly entered, all the points for the current time span are calculated and the new component flow trend is displayed. The minimum and maximum values of the tag may need to be modified to properly display the trend. This new tag is called an “Expression” and can be dragged or copied to other trends.

The second technique for creating a calculated tag, a “simple formula”, involves a few more keystrokes but offers a number of key advantages. To create a simple formula, the Script Editor window is opened. Note that instead of an arithmetic expression the tag is followed by a name. This name is associated with programming code which is entered in a workspace on the Script Editor window. This code acts like a programming subroutine, accepting the tag name as an argument, and returning the evaluated value of the tag as an output.

The formula creation environment offers more flexibility in terms of logic than an Expression: it gives access to all the functionality of the VB.NET programming environment. Another advantage of this approach is that formulas are saved by name and can be reused by others. A “standard” routine such as the conversion of Celsius temperature to Fahrenheit temperature can be created once, by one person, and then applied by anyone else in the future. Simply associating a different input tag with the formula name will create a new output tag. If the new tag is saved, it is placed in the master tag browser and becomes available to everyone.

The third technique for creating calculated tags is to create an “advanced formula”. There is very little difference in the creation of a simple vs. advanced calculation tag. The primary difference is in how the data is handled within the procedure. In a simple formula, if the timing of the data of different tags used in the calculation is not exact, the output points are automatically associated to the input times by PARCview. In an advanced formula, the user has both the opportunity and the responsibility to correctly associate input and output data. For example, pulp consistency data may be available only once an hour, because it is a lab test. If this data were being combined with a continuous total flow to find a dry fiber flow, it would be more accurate to multiply each flow value in the past hour by an average of the one-hour-old consistency and the most recent consistency, as opposed to using the one-hour-old consistency for the whole past hour. This level of control is also desirable when creating some statistical functions.
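The averaging described above can be sketched as follows. This is an illustration of the data-handling idea, not dataPARC's VB.NET formula code; the flows and consistencies are hypothetical:

```python
def dry_fiber_flow(flows, consistency_old_pct, consistency_new_pct):
    """Combine each flow sample from the past hour with the average of
    the hour-old and most recent lab consistencies, rather than holding
    the hour-old value flat for the whole hour."""
    c_avg = (consistency_old_pct + consistency_new_pct) / 2.0
    return [f * c_avg / 100.0 for f in flows]

# Two flow samples, with the lab consistency moving from 3.0% to 4.0%
fiber = dry_fiber_flow([1000.0, 1200.0], 3.0, 4.0)
```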

In addition to providing users the capability to easily combine and customize tags, the formula creation functionality of dataPARC has been used to build a number of named advanced formulas which can be applied directly to tags with no programming at all. For example, there are routines which allow the user to introduce a fixed time lag to an incoming signal, perhaps to simulate flow through extended pipe runs. There are routines to totalize values over specified periods of time. A more sophisticated routine will totalize, average, and even create a standard deviation value for an input tag, but only when a trigger tag, such as a grade or product, is equal to a specified value.
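The trigger-gated statistics routine can be sketched like this, with hypothetical grade codes and values standing in for real tags:

```python
def stats_when(values, triggers, target):
    """Totalize and average an input tag only for samples where the
    trigger tag (e.g. a grade code) equals the target value."""
    selected = [v for v, g in zip(values, triggers) if g == target]
    if not selected:
        return 0.0, 0.0
    return sum(selected), sum(selected) / len(selected)

# Only the samples produced on grade "A" contribute to the statistics.
total, average = stats_when([10.0, 20.0, 30.0], ["A", "B", "A"], "A")
```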

Whether you use pre-built functions or program your own, the ability to easily configure calculated tags considerably expands your ability to analyze process data, and to display the actual information which will help you and others operate and optimize the process.


What Makes a Great User Interface?

We all experience user interfaces on a daily basis whether in our cars, on our mobile phones or on our personal or work computers. A user interface is a gateway; it is a visual path to an experience as well as information or functionality. A user interface is also a language of its own that allows one to navigate a program or application.

When a user-friendly, well-designed user interface is effective, it makes tasks so much easier and makes better use of limited time. With effective design, a person does not have to spin their wheels trying to get a task done or access necessary information.

Remember the last time you tried to complete a task on your phone, computer or other digital interface, only to find it did not work, or you could not get the information that you wanted? We all know that feeling of complete frustration when what we tried to do did not work!

This brings us to the million-dollar question – what does make a great user interface? What specific features make using a program or application a breeze to use?

Our research found the following to be the predominant features of a great user interface:

Simple Navigation

Getting around in a program is very important. Programs and applications with simple, user-friendly navigation scored high. Words like concise and succinct came up frequently. Concise navigation lets the user interact with the interface without the extraneous imagery or information that could make an action confusing.

Intuitive Features

Designing interfaces with intuitive next steps is very important in this busy day and age when time is limited. Nothing is better than trying to do something in a program and having it seamlessly cooperate the way that makes the most sense. An effective user interface is designed in a way that is intuitive, feels familiar, and is natural and instinctively understood.

Effective Graphics

Effective, relevant graphics represent functionality that is easily accessed by visual representation and recognition. In a matter of seconds, a user knows what the graphic represents and how it can help them accomplish a task, or locate information.

At Capstone, we are happy to report that our dataPARC user interface scored high in all three categories and here are some of the reasons why:

  • dataPARC offers drag and drop features: Users can add tags to almost any display and immediately get live feedback. Using the drag and drop feature is very intuitive.
  • dataPARC offers a multitude of visual ways to access the same information. Whether it is a visual display, a number display, charts, bar graphs or customizable reports, we have the information for you in the way you want it.
  • dataPARC’s customization possibilities are endless. The data format may be different for specific roles in the process industry. While a plant manager may need specific overview information, an operator may need very detailed data.  dataPARC has you covered with customizable reports in the way that you want them.
  • dataPARC is YOUR tool. Unlike the competition, you can change and customize what you see without the use of a third party application.

Want to know more about the dataPARC software suite and how its intuitive interface can benefit your business? Contact us and someone will be in touch with you shortly.

The 2017 dataPARC User Conference Was a Networking and Learning Success Story

More than 120 people gathered for the 2017 dataPARC User Conference in beautiful Portland, Oregon from May 15 through the 18th.
Attendees traveled from Canada, South Korea, Taiwan, Thailand, China and Lebanon to experience the presentations, networking and training. 
Keynote speaker and emcee Rennie Crabtree helped facilitate nine internal presentations and eight client presentations on software integration, functionality and features.
Also included were six round table discussions on key topics which allowed attendees to share their experiences with dataPARC with fellow users. Assigned seating ensured a mix of industries and a chance to get to know new people.
Among the favorite sessions were the KapStone presentation and the Capstone training session “Tips and Tricks in PARCview.”
Social events included a welcome reception on the first night and a fun dinner at the nearby Punchbowl Social, with great food and games including bowling, karaoke, foosball and cornhole.
85% of people surveyed rated the conference an 8 or higher on a scale of 1 to 10. Attendees surveyed said they wanted more training and more hands-on experiences at the next conference, along with more use cases and live examples of dataPARC in action.
Stay tuned for the dates of our next user conference — coming fall of 2018!

Version – Get the Scoop on This Minor Release


This version is a minor release improving on the features in the 5.5.2 series.




  • Added ability in Centerline Config to select a tag and then share its process to all tags in the Centerline.


  • DMODXN/HT2N contribution tags will now be created for each of the input tags, allowing tag contributions to be viewed over time.

PARCgraphics Designer

  • Deadband support added to comparison operators < , <= , > , >= . Deadband can be set as either a constant or a percent.
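The behavior of such a deadband comparison can be illustrated with a small sketch. This is a plausible reading of the semantics (hysteresis around the setpoint), not the documented implementation; the names and values are made up:

```python
def gt_deadband(value, setpoint, deadband, prev_state, percent=False):
    """'>' with a deadband: once true, the comparison stays true until
    the value falls below the setpoint minus the deadband. The deadband
    is either a constant or a percent of the setpoint."""
    db = setpoint * deadband / 100.0 if percent else deadband
    if prev_state:
        return value > setpoint - db
    return value > setpoint

state = gt_deadband(101.0, 100.0, 2.0, False)  # crosses the setpoint
state = gt_deadband(99.0, 100.0, 2.0, state)   # still true within the deadband
```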

PARCview Manual

  • New tutorials added to PARCview Manual.

System Configuration

  • New setting added to System Defaults (SystemConfig>System Defaults>General) allowing uncertain quality values to be treated as good quality. This flag impacts values in Trend and PARCgraphics.



  • PARCValue now provides a ValObj property to more easily see if the value is null; the Value property will continue to return a DBNull value as a legacy capability.

PARCcalc Support

  • Added an “IncludeEndBound” parameter overload to “NormalizeToStep” that allows it to include the interval that starts on the end time. When normalizing IV tags, the final interval will only be a one-second interval. Previously the function included only intervals >= start and < end. If IncludeEndBound is true, it will include all intervals that are >= start and <= end.
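The interval-bound change can be sketched numerically. This is an illustration of the described semantics, not PARCcalc code; times are plain numbers here:

```python
def interval_starts(start, end, step, include_end_bound=False):
    """Interval start times in [start, end); with include_end_bound,
    the interval beginning exactly at `end` is included as well."""
    points = []
    t = start
    while t < end or (include_end_bound and t == end):
        points.append(t)
        t += step
    return points

default = interval_starts(0, 10, 5)                           # [0, 5]
extended = interval_starts(0, 10, 5, include_end_bound=True)  # [0, 5, 10]
```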

PARCview Localization

  • Added ability to check for and load current culture
  • Updated Chinese localization files


  • Refined bulk process update in Centerline Config to only update tags that have an undefined process.


  • Improved functionality and performance when performing backfill.


  • Add dead time feature to OPC DA sources, to avoid the artificial zero values emitted when an OPC DA server first connects.
  • Add options to wait between HDA tag registrations and sync reads.
  • Add flat file invariant culture parsing option.
  • Add option to wait between OPC DA tag and tag group registrations.
  • Add ability for OPC HDA sources to periodically check for new tags.
  • Update PARCIO logging to include informational type messages in the log files.
  • Update max bad tag exceeded logic to only re-establish the connection up to the maximum retry amount.
  • Add ability to periodically check for changed or deactivated OPC DA tags.
  • Add browsing for OPC DA servers from OPC DA source configuration window.


  • Add additional logging to log files.

Excel Add-In

  • Enable PARCxla SQL Calc sources to work with Unicode characters in tag and source names.

PARCgraphics Designer

  • Modify “Symbol Compare Logic” to allow for more natural two-input connections and reverse inputs.


Static, Live or Dynamic – Which Report is Best?

In the digital age there are many options for how best to share information. Whether it is flashing across your screen or filling your inbox, reports of all kinds are normal in the workplace. How data is reported, whether with pictures, video or even streaming, determines the influence or relationship others have with that data. Reports can be categorized into different types: static and live, as well as a lesser-known type, dynamic. Each type has its own inherent advantages and disadvantages. Knowing when to use each type of report is key to presenting the relevant information and improving performance and ease of use.


Welcome to 5.5

Coming off nearly another year of sleep deprivation, our developers are pleased to announce the completion of PARCview’s next major version, 5.5!

Similar to the WPF expansion in our last major release, 5.5 is packed with new tools and treats that have our engineers salivating! From graphic logic controls to the new PARCview Configuration Manager we will take a closer look at some of the great new features in this release.


Why We’re Implementing the OPC UA Spec. (and how it will benefit our customers)

Going back 15 years now, dataPARC had the notion of a “Process Area” that allowed tags from multiple systems to be organized by Asset, providing filters (like Grade or Product) for all tags assigned to an Asset and for other useful associations to be applied globally. Building on this experience, the next major version of PARCview takes the next step in Asset Management and includes an adoption of the ISA 95 companion specification to OPC UA. The implementation will allow end-users a familiar, standards-based architecture for organizing their plant data.

What is an Energy Management System?

All forms of commerce require energy. Industrial processing and manufacturing facilities tend to be the largest consumers, but even service industries such as insurance and banking require large buildings which must be heated, cooled and lit. The newest large energy consuming enterprises are data centers, which are large clusters of computers which store and serve up the data which flows through the internet. Regardless of the end use or the industry, companies strive to minimize production costs by minimizing energy consumption.

In addition to economic incentives, it is increasingly accepted that every BTU that is generated by burning fossil fuels (the source of the majority of our energy) leads to an increase in atmospheric carbon dioxide. This, in turn, appears to be causing some undesirable changes in the global climate. There are now more reasons than ever for companies to seriously strive to reduce energy consumption.
