Tech Terms Part II

As mentioned in our previous article, technical terms continue to change and evolve. Below are more terms we found useful to know and understand, and we think you will, too.

Artificial Intelligence (AI), Machine Learning (ML) & Deep Learning – Not only a mouthful, but also confusing word spaghetti that gets used interchangeably in a lot of scenarios. Nvidia uses an interesting infographic to provide some clarification.

The Nvidia definition uses the date of introduction to organize the hierarchy. And since AI was used as a common term first, they represent it as the most general term. On the other hand, it seems that intuitively AI would have to include characteristics associated with intelligence – adaptation and reasoning, to name a few – whereas Machine Learning could deliver value from data mining without necessarily having adaptive characteristics or applying reason. For our purposes, we use Machine Learning as a broader term to describe solutions that address Predictive and Prescriptive Analytics, and AI as a narrower subset, more analogous to Deep Learning. But don't be surprised to see continued use of these terms synonymously; we look forward to a clearer definition taking shape.

Digital Twin – An emerging term, commonly used in the AI and ML space, to describe a software model of a physical object. The term has broad meaning; in fact, someday we might end up with more granularity, like we did with Analytics. On one extreme are the complicated models that represent an entire plant using first-principle techniques, similar to a flight simulator for planes. Plants have used these digital twins for decades to help with commissioning and optimization. More recently, the term has been used to describe applications like visualization and data models used for remote monitoring, or software sensors – statistical models that estimate the result of a physical instrument in a plant. In all of these cases, we have digital representations of a physical object used for a variety of purposes.

Cloud – This is an interesting one. For a long time, the term sounded sophisticated, but in reality it just meant that your application/OS/etc. was hosted on a server somewhere else – no different than when many corporations created data centers years ago to centrally host and maintain software. But applications and technology have evolved, and the term Cloud has come to fulfill the original promise. Cloud solutions are now purpose-built to not only run on a server somewhere else but are built with the scalability, ease of installation, support and security that make them unique.

Edge – Another example of expanding technology that's been used in some industries for a long time. It's simply the idea of placing applications (data collection, processing, etc.) near the source or end user. In Telecom, Edge computing is a significant advancement and a key differentiator in the 5G rollout. In plants, instruments and pieces of equipment can now be considered part of the Edge. However, real-time Historians have utilized Edge computing for decades. Remote data collection nodes have been placed as close as possible to the data source (DCS or PLC) for years to deliver Store & Forward data collection and pre-processing of data.

In many use cases, Edge computing is just a product of the continued evolution of Cloud-based computing and the realization that a hybrid strategy is required. Some pieces will live in the Cloud and others will continue to live On Premise, at the Edge. If you're peering down from the Cloud, the plant probably looks like the Edge. To many of us, it's where we've been since the 1980s.

A good article discussing Cloud vs. Edge can be found here – https://www.arcweb.com/blog/edge-cloud-analytics

Blockchain – Probably our favorite term these days. Right now it feels like a solution looking for a problem, but a list of buzzwords would not be complete without it. So, until someone develops a proven Real-Time Historian or DCS blockchain, we'll let others work on this one and you won't hear much from us (as a side note, as of June 25th, 2018, the average Bitcoin transaction confirmation time over the previous 60 days was ~20 minutes – https://blockchain.info/charts/avg-confirmation-time?timespan=60days).

Have questions or need clarification? Feel free to reach out to us.

Five Reasons You Should Attend the 2018 dataPARC User Conference

 

It is that time of year again, time to gather with your peers and talk about some of the great benefits of dataPARC software. This year's dataPARC user conference will be held at the Sentinel Hotel in Portland, Oregon from October 15–18, 2018. Besides getting to learn in a beautiful setting (Portland, OR in the fall – gorgeous!), here are five reasons why you should attend:

1. You Will Learn New Ways of Utilizing dataPARC

At the dataPARC User Conference, you will find yourself in a room with over a hundred fellow dataPARC users from diverse industries such as oil & gas, food and beverage, chemicals, and pulp and paper. Through dynamic presentations, you will witness creative new ways that the robust software is used to streamline processes, improve reporting and access data quickly, ultimately benefitting your facility and profitability. The last day of the conference is devoted entirely to interactive training, so bring your laptop and get ready to learn. With a demo station set up with PARCview running, you will surely leave with innovative methods to utilize. Round table discussions on relevant topics open up new possibilities and generate ideas.

2. You Will Meet Colleagues and Build a Valuable Network

The support of a knowledgeable network is invaluable. At the dataPARC user conference, you will have the opportunity to expand your professional network. Users from all over the world will gather to share ideas, methods and strategies regarding what has worked for their business using dataPARC. Breaks and lunch allow time to get to know colleagues in a comfortable setting. Wednesday evening at the Punchbowl Social includes games, karaoke, great food and a chance to relax and have fun.

3. You Will Be Inspired by our Keynote Speaker, Patrick Galvin

We picked a speaker this year who is both dynamic and relevant to your life as a busy professional. Patrick Galvin, author of The Connector's Way, will teach you important points regarding building business relationships and the value of a connection. Patrick will kick off the conference, and you will have a chance to meet him at the welcome reception. You will also be able to ask follow-up questions at lunch following his keynote presentation. We will send you home with a copy of his book so that you may continue to learn about the principles and key points of his keynote speech.

4. You Will Stay Informed Regarding Upcoming Versions of dataPARC including 7.0

During the conference, you will learn about any recent changes and updates to the dataPARC software suite. We are gearing up for 7.0 and will be ready to share the new features with you. Capstone personnel will include changes in 7.0 during training sessions. Rather than just getting updates in an e-mail newsletter and corresponding updates video, you will get a chance to see PARCview in action. The demo station set up at the back of the conference room will also allow you to try out the new version.

5. You Will Meet and Interface with the Capstone Technology Team

All of us at Capstone Technology are eager to see you and get to know you. Whether you have communicated with us regarding support, engineering or something else, we look forward to having a conversation with you. Feel free to share some of your ideas and needs for features on future versions. Get to know the salesperson, engineer or support person you have spoken with on the phone.  Try PARCview at the demo station and grab one of us if you have a question.  Need something clarified or have a concern? We are here for you at the conference. We look forward to seeing you there!

Click here to learn more about the conference

Soft Sensors in the Process Industry – How Are They Helpful?

The process industry is constantly evolving as processes are refined and innovative technologies continue to transform the processing landscape. As complex systems are installed, upgraded and monitored, expectations for profitability and smooth delivery of product remain high. Soft sensors with predictive models provide scenarios in which estimations can drive decision-making and improve the reliability of current systems, often working hand-in-hand with their hard-sensor counterparts, creating comprehensive monitoring networks.

Soft sensors are virtual sensors that are utilized heavily in the process industry and, through accurate predictions, can alleviate the need for more expensive hardware sensors. Their real-time predictions can ease constraints brought on by budget, person-hours or current operating equipment. Engineers in the process industry, at a chemical plant or a food processing facility, may have federal environmental regulations they must contend with. They need an accurate way to measure lab values and temperatures, all while staying within budget and controlling capital expenditures. They may also have frequent challenges with a part of the process, requiring swift data to troubleshoot and identify the bottleneck and/or challenge. Soft sensors can provide an economical and effective alternative to costly hard sensors, which demand an expensive investment, require constant servicing and maintenance, and often fail. Through soft sensors, approximated calculations can provide in theory what raw data provides in reality. Using the last year of data, for example, a soft sensor can build a data model, which a process engineer can then use for a variety of calculations and decision-making.

Many physical properties and manual tests performed offline are related to properties measured online by sensors in the general manufacturing process. For example, the strength of a final product is often related to the temperature of the process or the amount of certain chemicals that are added. The strength may only be tested once per hour, but the temperature and chemical usage are measured every second. This relationship allows soft sensors to estimate the strength in real time. Another manual lab test may occur once per hour or a few times a day, whereas the soft sensor provides feedback minute-by-minute or second-by-second; in effect, soft sensors model the “off-line” tests. The soft sensor uses a combination of historical process data recorded from online sensors and laboratory measurements to predict KPIs, replacing manual testing. The greatest benefit of soft sensors in the lab application is faster feedback on changes to physical properties. Simulated testing frees up operators and supervision to work on other high-priority tasks.
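To make the idea concrete, here is a minimal sketch of that kind of empirical soft sensor, assuming hypothetical tag names, file names and a simple linear model; it illustrates the general approach, not dataPARC's implementation.

```python
# Minimal soft-sensor sketch (illustration only, not dataPARC's implementation).
# Hypothetical files/tags: hourly lab strength tests and 1-second process data.
import pandas as pd
from sklearn.linear_model import LinearRegression

process = pd.read_csv("process_history.csv", parse_dates=["time"], index_col="time")
lab = pd.read_csv("lab_strength.csv", parse_dates=["time"], index_col="time")

# Average the fast process tags over each hour so they line up with the hourly lab test.
hourly = process[["reactor_temp", "chemical_flow"]].resample("1h").mean()
training = hourly.join(lab["md_strength"], how="inner").dropna()

model = LinearRegression()
model.fit(training[["reactor_temp", "chemical_flow"]], training["md_strength"])

# Applied to live temperature and chemical-flow readings, the fitted model
# returns a strength estimate without waiting for the next lab test.
latest = hourly.tail(1)
print("Estimated MD strength:", round(float(model.predict(latest)[0]), 1))
```

The essential step is aligning the slow lab results with the fast process data; the model itself could just as well be a first-principles equation or a PLS fit.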

Soft sensors maximize the data and signals that you have already collected in your process. Rather than constantly replacing hardware and spending valuable budget dollars, the soft-sensor alternative works in tandem with your existing hardware sensors. By utilizing what is already there, both complications and downtime can be avoided. Process engineers can connect soft sensors to your existing hard sensors, and facilities will benefit from real-time analysis, monitoring and control, providing reliable calculation of parameters where no hardware sensor is available and reducing purchase and maintenance costs.

Through development of its PARCview visualization software, dataPARC has its own version of soft sensors. Why would dataPARC's soft sensors be advantageous for you in your plant or facility? Working in tandem with a soft sensor, the PARCmodel component of dataPARC's product group predicts plant quality variables in real time, allowing for estimation of properties that are impractical or impossible to measure online.

PARCmodel also reads live data, such as temperatures and pressures, from the plant and uses it to calculate estimated quality values from user-entered models. PARCmodel builds models based on first principles or on empirical models developed through techniques such as PCA and PLS.
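For the empirical route, a generic PLS regression looks something like the sketch below; this uses scikit-learn with made-up data purely to show the shape of the technique, and is not PARCmodel's internal code.

```python
# Generic PLS soft-sensor sketch with synthetic data (not PARCmodel code).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))                  # 200 historical samples, 6 process tags
true_weights = np.array([0.8, -0.3, 0.5, 0.0, 0.0, 0.1])
y = X @ true_weights + rng.normal(scale=0.1, size=200)   # quality value from the lab

# PLS compresses the correlated process tags into a few latent factors.
pls = PLSRegression(n_components=2)
pls.fit(X, y)

x_live = rng.normal(size=(1, 6))               # one new row of live process data
print("Predicted quality:", round(float(np.ravel(pls.predict(x_live))[0]), 3))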

PARCview soft sensors also do the following to ensure smooth operations:

  • Utilize Data From Any Source
  • Feature Closed-Loop Automation Control
  • Are Intuitive and User-Friendly

In addition:

PARCview provides a familiar user interface for model creation and optimization, with model building driven by the Trend control.

PARCmodel can read in data from any source available to PARCview, including OPC DA, OPC HDA, plant historian systems, SQL and Excel.

Leverage your Soft Sensor models for use in advanced-control applications. The Soft Sensor output can be used like any other process value in the control solution.

Do you think that dataPARC’s soft sensors could be helpful in your facility? Contact us today to learn more and get customized product suggestions for your company.

Tech Terms Today – What Buzzword Was That?

Buzzwords have always been a part of technology, but recently it seems their usage has exploded. Even as usage grows, the terms themselves have changed and evolved. Many contemporary terms now include a wide spectrum of meaning in their definitions, applying to new applications and solutions brought to market. From our perspective here at Capstone Technology, the more we can all speak a common language, and the more we realize many terms are replacing old concepts, the better off we will be.

With that in mind, off we go. The following thoughts come from our vantage point here at Capstone; others may have different points of view. One thing we think is clear – if you want to see the future, go visit your nearest modern process industry plant. Please let us know what you think!

Industry 4.0 – Originating in Germany, Industry 4.0 is simply the idea that we're now in the Fourth Industrial Revolution. Many articles have laid out the various revolutions, with Wikipedia being among the most comprehensive (https://en.wikipedia.org/wiki/Industry_4.0). Industry 4.0 is also one of many terms used to describe the continued technology advancements in manufacturing. The terminology below defines the elements that comprise Industry 4.0.

Internet of Things (IoT) – Probably the most ubiquitous term we're currently seeing to describe the latest technology advancements to connect all things. In many contexts, IoT encompasses IIoT (see below), but for the sake of this article and in future discussions, we separate the two. We're defining IoT as a focus on the consumer market, where there are distinct differences. The advancements can be thought of as:

  1. Revolutionary – a few years ago my thermostat was totally isolated and my watch told time
  2. Important, Not Mission Critical – They make our lives more convenient, but if my thermostat goes offline my house can still stay warm. Given that, it's fine to rely on our home WiFi system.
  3. Connected Devices – we primarily see devices that used to be isolated become connected to the internet

Industrial Internet of Things (IIoT) – So, what does the extra "I" mean? Well, in our view, a lot. It relates to technology advancements that focus on industrial processes (i.e., plants). In the IIoT space the advancements can be thought of as:

  1. Evolutionary – Industrial plants became connected over 30 years ago; computer control of a plant is credited to Texaco back in 1959. Sure, new technology has made it easier for remote and disparate processes to get connected, but the technology and concepts have been around a long time.
  2. Mission Critical – These systems have to work, so the network and infrastructure built around them are robust and secure.
  3. Connected Data – Especially for vendors like us, it's about connecting data that has always been there, thereby making it faster and easier to make decisions.

Analytics – This one gets used frequently in the world of data analysis and certainly means different things to different people. Broadly speaking, we see three types of analytics. To illustrate these distinct variations we will use the weather as an analogy (a short code sketch after the list makes the first two concrete).

  1. Descriptive Analytics – Analysis of data from the past (even if it is from a few seconds ago). When using descriptive analytics, individuals are still evaluating data and making decisions; the analysis gathers and synthesizes readings from multiple instruments to tell you in simple language the current weather and historical conditions. Today it has been cloudy. There was a chance of rain.
  2. Predictive Analytics – Predictive analytics uses more complex analysis, including a time-predictive model. Predictive analytics for weather would state the following: a model predicts scattered showers this afternoon. Winds will be from the north at 10 miles an hour. Temperatures in the 40s.
  3. Prescriptive Analytics – Includes everything from the predictive model, but also needs more context. Prescriptive analytics for weather would state the following: first, it needs to know a few things about you, such as where you work and what you do. Bring a light coat and umbrella with you to work this morning. The wet roads will require more time to travel. You will need to leave for work by 7:52 to make your 8:30 meeting.
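Outside the weather analogy, the difference between the first two categories can be shown in a few lines of generic code (a rough sketch with made-up numbers, not a dataPARC feature):

```python
# Descriptive vs. predictive analytics on a made-up hourly temperature series.
import numpy as np

hours = np.arange(24)
temps = 40 + 0.3 * hours + np.random.default_rng(1).normal(scale=0.5, size=24)

# Descriptive: summarize what has already happened.
print(f"Average so far: {temps.mean():.1f} F, max so far: {temps.max():.1f} F")

# Predictive: fit a simple trend and project it one hour ahead.
slope, intercept = np.polyfit(hours, temps, deg=1)
print(f"Model predicts next hour: {slope * 24 + intercept:.1f} F")

# Prescriptive goes further: it would combine the prediction with context
# about you (meeting time, commute) to recommend when to leave.
```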

All in all, our conclusion is that tech industry buzzwords are necessary to navigate today's technology landscape, especially in the process industry where systems and technologies are constantly evolving. Stay tuned for more buzzword definitions coming in future editions of the PARCfocus newsletter!

Rolling Up Your Data For Easy Access

The first step toward understanding and optimizing a manufacturing process is to collect and archive data about the process. Ideally, the system used to accomplish this is a “plant-wide” information system, or PIMS, which collects not just process data, but also quality information, laboratory results, and operations information such as upcoming orders and inventory.

The real value of a PIMS is determined by how that collected data is organized, how it is retrieved, and what options are available to help you garner meaningful conclusions and results from the data.

Consider a situation where you have been asked to determine the average flow of steam supplied to a heat exchanger that is used to heat a product stream. No problem. You call up a trend of the last six months of steam flow data with the intention of using the averaging function that is built into the reporting package to generate your answer. Unfortunately, once the trend is up, you see that there are a few hours every day during which there is no product flow and so there is no steam flow. Since you are looking for the average steam flow during operation of the heater, your job just became more difficult. You have no choice but to export the data to a spreadsheet, manually eliminate the zero readings, and then calculate the average of the remaining valid values.
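Done by hand, that workaround looks something like the sketch below (the export file and column names are hypothetical); the rest of this article is about why a good PIMS should spare you from it.

```python
# Average steam flow during operation only, after exporting the trend data.
# Hypothetical export file and column names.
import pandas as pd

df = pd.read_csv("steam_flow_export.csv", parse_dates=["time"])

running = df[df["steam_flow"] > 0]          # drop the zero-flow (no production) readings
print("Average flow while operating:", round(running["steam_flow"].mean(), 1))
print("Average including idle periods:", round(df["steam_flow"].mean(), 1))
```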

Most manufacturing facilities produce multiple variations of their product on each line. A toothpaste maker may periodically change the flavor; a papermaker may change the formulation of the furnish to make one type of paper stronger than another. These intentional process modifications give rise to the idea of “product runs” or “grade runs” and present significant challenges to data analysts.

Consider that you have been given another assignment. It has been noticed that the last few times the high strength grade A346_SS has been run, an increasing number of reels have been rejected because the MD strength test was too low. Your job is to determine the cause of this problem. There are a number of steps required to address this issue. Depending on the PIMS analysis tools available to you, they may be easy or they may be tedious.

There are probably several dozen critical process variables that you will want to examine. If this grade is not frequently run, you may be required to pull all the data (for all grades) for those critical variables for a period of many months. Returning that amount of data will probably take a great deal of time.

In the worst case scenario, you may have to make a separate query to determine the time periods when grade A346_SS was run, and then go through the data and manually extract those time periods for each process value. Alternatively, your analysis software may allow you to apply a filter to your query to return only the data related to grade A346_SS. This will reduce the work that needs to be done to the data that is returned, but the extra filtering may very well further increase the query duration. Query durations are network and hardware dependent, but returning three or four dozen tags of high frequency data for a 6 month time period could easily take more than 15 minutes.

A system which requires too much work to condense the data, or takes too long to retrieve it, is only marginally useful to the people who need it. PARCview has developed a method and a tool to address this issue.  PARCpde (PARC performance data engine) is a flexible real-time data aggregator which can work with any historian to provide fast access to large ranges of historical data in seconds. PARCpde is used to aggregate, or “rollup” data as it is created. Aggregates can be based on predefined time periods (hours, days, weeks, months) or custom periods, such as shifts or production months. In order to address the issue of grades, the aggregation period can be a flexible time period which is specified based on a production parameter like grade number or production run ID.

For each aggregated period, a number of statistics are automatically calculated and stored, including averages, durations, minimums, maximums and standard deviations. Filter criteria can be further applied to the aggregated data. For example, a “downtime tag” could be identified and used as a filter, so that only the process values during active production would be aggregated into the statistics. Condensing process values into statistics for predefined periods in an ongoing manner avoids the time-consuming task of having to manually sort values and calculate statistics every time a question comes up. The aggregated statistics become properties of the base tag and do not require creation of a new tag. Finally, if you want statistics for a tag which had not been previously configured to be aggregated, it is possible to easily add that tag and backfill statistics for a specified period of time.
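Conceptually, the rollup resembles the kind of grouped, filtered aggregation sketched below; this is a generic pandas illustration with invented tag names, not PARCpde itself.

```python
# Conceptual rollup: statistics per grade run, filtered by a downtime tag.
# Generic pandas sketch with invented column names, not PARCpde itself.
import pandas as pd

df = pd.read_csv("history_export.csv", parse_dates=["time"])

active = df[df["downtime"] == 0]            # keep only values during active production

# One row of pre-computed statistics per grade, instead of raw high-frequency data.
rollup = active.groupby("grade")["md_strength"].agg(["mean", "min", "max", "std", "count"])
print(rollup.loc["A346_SS"])
```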

Simply possessing large amounts of stored data does not solve problems or increase productivity. Unless the proper tools are in place to use and interpret that data, the data will not be useful. High frequency data, or readings taken every few seconds, can be valuable. However, if the goal of a particular analysis task is to compare conditions over a long period of time, having to recall and process thousands of data points per hour becomes an impediment rather than an advantage. The best solution is to use a PIMS which has quick access to both aggregated historical datasets and high frequency detail data, and is equipped with the tools to seamlessly move between the two data types.

A Guide To Reporting and Notifications with a Data Historian

Historian packages were originally intended to be a support tool for operating personnel. Current and historical data was constantly displayed on a dedicated screen next to the primary control screens, and users were intended to interact with it at that location more or less continuously. As the historian became a one-stop source for all types of data throughout a facility, it became a tool that could benefit supervisory and management personnel as well. This led to the development of a variety of remote notification and reporting tools to meet the somewhat different needs of these individuals.

DataPARC reporting
DataPARC is one of the leading historian and data analysis software packages available to process industries. DataPARC has a variety of mechanisms for relaying information to remote users of the system in order to keep them in contact with the process. At the most basic level, the system can be configured to email one or more people based on a single tag going beyond a set limit. A separate notification can also be sent at the time an operator enters a reason for the excursion, and also when the variable returns to a value within the limit.

At the next level of complexity, the system can populate and send an entire report, based on an event or a preset time schedule. Reports can be as simple as a snapshot showing the current values of a few KPIs, or as complex as a multipage report containing tables, process graphics, charts and trends. DataPARC has a built-in, flexible and easy to use application for developing report templates. DataPARC also offers an add-in which allows data to be shown within Excel. For people who are proficient with the tools within Excel, this is another avenue for creating reports. Reports created in Excel can be viewed natively in Excel or exported as .pdf or .html files for viewing on a wide range of platforms. Production, raw material consumption and environmental compliance can all be easily tracked by periodic reporting, and any deviations can be quickly spotted and rectified. Receiving a daily report just before a morning meeting provides a quick way to avoid unpleasant surprises at the meeting.

PARCmobile is the most flexible remote-user experience. PARCmobile  gives you continuously updated data and access to most of the features and all of the data within dataPARC, all delivered on a mobile device. Live trends and graphics make it possible to take the next step, beyond a single number or notification, and perform a wide ranging investigation of any process irregularities.

Generate the Best Reports Possible Using these Guidelines:
Different people have different methods of working. Not all reporting needs are the same. A process engineer troubleshooting a particular problem will want more granular, higher frequency reports focused on a particular area, at least for the duration of the issue, than an area manager who is monitoring multiple processes to make sure that they are generally on track. Nonetheless, here are some guidelines that will apply to most remote users most of the time:

Minimize the number of notifications that you receive, and choose them wisely. If you receive an email for every minor process excursion, their importance will diminish and you are liable to not notice or respond to an important notification. Focus on watching only crucial KPIs.

Reports should be simple. The primary purpose of mobile notification is to be alerted to new or potential problems, not to find causes or solve those problems based on the report.

Export reports in pdf format. This is a standard format which offers easy scalability and works well on virtually all software and hardware platforms.

Use the group function to notify everyone who might be affected by a process excursion. For example, if high torque in a clarifier is detected due to high solids coming in from a process sewer, all areas which are serviced by that sewer should be notified. Doing this will hopefully result in the problem being solved more quickly, as each area checks on their contribution simultaneously, rather than each area looking in sequence, only after each downstream contributor reports their results.

Incorporate dead banding and/or delay into your notifications. Again, this depends on your job role, but for most remote users of data, unless an excursion presents a safety hazard or compliance issue, you don’t need to know about it immediately. Minor excursions can resolve themselves or be handled by frontline operators. Delaying notifications helps to minimize their numbers by filtering out the minor issues from the major ones.
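As a rough sketch of what deadband plus delay means in practice (generic logic, not dataPARC's notification configuration):

```python
# Deadband + delay filter for notifications (illustrative logic only).
# Notify only after the value has been beyond limit + deadband for
# `delay_samples` readings without first dropping back below limit - deadband.
def should_notify(values, limit, deadband, delay_samples):
    over = 0
    for v in values:
        if v > limit + deadband:
            over += 1
            if over >= delay_samples:
                return True
        elif v < limit - deadband:      # excursion cleared, reset the counter
            over = 0
    return False

# A brief spike stays quiet; a sustained excursion triggers the notification.
print(should_notify([98, 103, 99, 98], limit=100, deadband=2, delay_samples=3))      # False
print(should_notify([104, 105, 106, 107], limit=100, deadband=2, delay_samples=3))   # True
```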

Whichever historian you use, using the built-in notification and reporting functions will increase its effectiveness by engaging a wider range of users. Having more eyes and brains monitoring a process will hopefully lead to problems being addressed more effectively and keep the process running more profitably.

Benefits of Calculated Variables

As an engineer in a manufacturing facility, you are excited that management has purchased and implemented a plant-wide Information Management system, or PIM. This gives you the ability to collect and store process data, and to display both real-time and historical process graphs which allow you and the operators to better understand the process. You can finally trend important process variables next to each other in order to visualize relationships that you suspect exist, and use historical data for accurate diagnosis of problems – for example, was it a lube oil pump failure or a loss of cooling water that led to the recent shutdown of a compressor?

Not long after you start doing your time based analysis of data, you develop the desire to trend not just raw process data, but modified versions of that data. In the simplest example of calculated data, a single trend might be modified by a constant. A chemical addition flow may be reported as gallons per minute, but you want to discuss and track that value as pounds per hour.

Another common scenario involves combining two or more tags. Perhaps you have an inlet and an outlet pressure on a scrubber. As the flow through the scrubber changes, both values change, and it would be better to monitor a single differential pressure rather than comparing two changing trends.

A second example of combining tags would be multiplying the total flow of a stream by the concentration of a component, perhaps the consistency of solids in the stream, to create a flow rate of just the solids. Even if the consistency value comes from a lab test, PARCview will pull the value in, and properly time-synchronize and combine it with the flow value. The ability to observe and trend these created variables vastly increases the usefulness of the presentation system. The more you become involved with data analysis, the more you see the need to manipulate the time-based raw data to display the information you and others need to monitor. DataPARC has three techniques of somewhat increasing complexity which give users the ability to manipulate raw data. All involve creating a new “calculated variable tag.”
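The arithmetic behind these examples is simple; the hypothetical snippet below shows a unit conversion, a differential pressure and a component flow as plain calculations, independent of how PARCview expresses them.

```python
# The arithmetic behind common calculated variables
# (plain Python with made-up values, not PARCview expression syntax).

# 1. Unit conversion: a chemical addition flow in gal/min tracked as lb/hr.
gpm = 12.5
density_lb_per_gal = 8.6                   # hypothetical chemical density
print(f"Chemical flow: {gpm * density_lb_per_gal * 60:.0f} lb/hr")

# 2. Combining two tags: scrubber differential pressure.
inlet_psi, outlet_psi = 14.2, 11.7
print(f"Scrubber dP: {inlet_psi - outlet_psi:.1f} psi")

# 3. Flow times concentration: solids flow from total flow and lab consistency.
total_flow_gpm = 850.0
consistency = 0.035                        # 3.5% solids from the lab test
print(f"Solids flow: {total_flow_gpm * consistency:.1f} gpm of solids")
```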

The first technique is available to all users and is very easily implemented. Using the example of combining a total flow and a concentration to create a component flow tag, the procedure starts by dragging the total flow tag onto a trend. Simply clicking on the variable name within the header block at the top of the trend activates it for editing. The tag can be modified by appending text to it. Once that text is correctly entered, all the points for the current time span are calculated and the new calculated trend displays the component flow. The minimum and maximum values of the tag may need to be modified to properly display the trend. This new tag is called an “Expression” and can be dragged or copied to other trends.

The second technique for creating a calculated tag, a “simple formula”, involves a few more keystrokes but offers a number of key advantages. To create a simple formula, the Script Editor window is opened. Note that instead of an arithmetic expression the tag is followed by a name. This name is associated with programming code which is entered in a workspace on the Script Editor window. This code acts like a programming subroutine, accepting the tag name as an argument, and returning the evaluated value of the tag as an output.

The formula creation environment offers more flexibility in terms of logic than an Expression; it gives access to all the functionality of the VB.NET programming environment. Another advantage of this approach is that formulas are saved by name and can be reused by others. A “standard” routine, such as the conversion of Celsius temperature to Fahrenheit temperature, can be created once, by one person, and then applied by anyone else in the future. Simply associating a different input tag with the formula name will create a new output tag. If the new tag is saved, it is placed in the master tag browser and becomes available to everyone.

The third technique for creating calculated tags is to create an “advanced formula”. There is very little difference in the creation of a simple vs. an advanced calculation tag. The primary difference is in how the data is handled within the procedure. In the simple formula, if the timing of the data of the different tags used in the calculation is not exact, the output points are automatically associated with the input times by PARCview. In an advanced formula, the user has the opportunity, and the responsibility, to correctly associate input and output data. For example, pulp consistency data may be available only once an hour, because it is a lab test. If this data were being combined with a continuous total flow to find a dry fiber flow, it would be more accurate to multiply each flow value in the past hour by an average of the one-hour-old consistency and the most recent consistency, as opposed to using the one-hour-old consistency for the whole past hour. This level of control is also desirable when creating some statistical functions.
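To illustrate that time-alignment choice with made-up numbers (plain arithmetic, not PARCview formula code):

```python
# Time-alignment choice in an advanced formula, with made-up numbers
# (plain Python, not PARCview formula code).

flows_gpm = [820, 835, 810, 845, 830, 825]   # flow samples over the past hour
consistency_old = 0.034                      # lab test from an hour ago
consistency_new = 0.038                      # lab test just received

# Simple approach: apply the hour-old consistency to the whole hour.
simple = [f * consistency_old for f in flows_gpm]

# Advanced approach: use the average of the two lab tests, which better
# represents an hour during which consistency was drifting upward.
blended = 0.5 * (consistency_old + consistency_new)
advanced = [f * blended for f in flows_gpm]

print(f"Average solids flow, simple:   {sum(simple) / len(simple):.1f} gpm")
print(f"Average solids flow, advanced: {sum(advanced) / len(advanced):.1f} gpm")
```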

In addition to providing users the capability to easily combine and customize tags, the formula creation functionality of dataPARC has been used to build a number of named advanced formulas which can be applied directly to tags with no programming at all. For example, there are routines which allow the user to introduce a fixed time lag to an incoming signal, perhaps to simulate flow through extended pipe runs. There are routines to totalize values over specified periods of time. A more sophisticated routine will totalize, average, and even create a standard deviation value for an input tag, but only when a trigger tag, such as a grade or product, is equal to a specified value.

Whether you use pre-built functions or program your own, the ability to easily configure calculated tags considerably expands your ability to analyze process data and to display the actual information which will help you and others operate and optimize the process.

The 2017 dataPARC User Conference Was a Networking and Learning Success Story

More than 120 people gathered for the 2017 dataPARC User Conference in beautiful Portland, Oregon from May 15 through the 18th.
Attendees traveled from Canada, South Korea, Taiwan, Thailand, China and Lebanon to experience the presentations, networking and training. 
Keynote speaker and emcee Rennie Crabtree helped facilitate nine internal presentations and eight client presentations on software integration, functionality and features.
Also included were six round table discussions on key topics which allowed attendees to share their experiences with dataPARC with fellow users.  Assigned seating ensured a mix of industries and a chance to get to know new people.
Among the favorite sessions were the KapStone presentation and the Capstone training session “Tips and Tricks in PARCview.”
Social events included a welcome reception on the first night and a fun dinner at the nearby Punchbowl Social, with great food and games, including bowling, karaoke, foosball and cornhole.
85% of people surveyed rated the conference an 8 or higher on a scale of 1 to 10. Attendees surveyed said that at the next conference they wanted more training and more hands-on experiences. Attendees also wanted to see more use cases and live examples of dataPARC in action.
 Stay tuned for the dates of our next user conference — coming fall of 2018!

Version 5.5.2.1 – Get the scoop on this minor release

5.5.2.1 Release

 

This version is a minor release improving on the features in the 5.5.2 series.

 

NEW FEATURES

Centerline

  • Added ability in Centerline Config to select a tag and then share its process with all tags in the Centerline.

PARCmodel

  • DMODXN/HT2N contribution tags will now be created for each of the input tags, allowing tag contributions to be viewed over time.

PARCgraphics Designer

  • Deadband support added to comparison operators <, <=, >, >=. Deadband can be set as either a constant or a percent.

PARCview Manual

  • New tutorials added to PARCview Manual.

System Configuration

  • New setting added to System Defaults (SystemConfig>System Defaults>General) allowing uncertain quality values to be treated as good quality. This flag impacts values in Trend and PARCgraphics.

IMPROVEMENTS

PARCvalue

  • PARCValue now provides a ValObj property to more easily see if the value is null; the Value property will continue to return a DBNull value, which is more appropriate as a legacy capability.

PARCcalc Support

  • Added an “IncludeEndBound” parameter overload to “NormalizeToStep” that allows it to include the interval that starts on the end time. When normalizing IV tags, the final interval will only be a one-second interval. Previously the function only included intervals >= start and < end. If IncludeEndBound is true, it will include all intervals that are >= start and <= end.

PARCview Localization

  • Added ability to check for and load the current culture.
  • Updated Chinese localization files.

Centerline

  • Refined bulk process update in Centerline Config to only update tags that have an undefined process.

PARChistory

  • Improved functionality and performance when performing backfill.

PARCIO

  • Add dead time feature to OPC DA sources, to avoid artificial zero values put out when the OPC DA server first connects.
  • Add options to wait between HDA tag registrations and sync reads.
  • Add flat file invariant culture parsing option.
  • Add option to wait between OPC DA tag and tag group registrations.
  • Add ability for OPC HDA sources to periodically check for new tags.
  • Update PARCIO logging to include informational type messages in the log files.
  • Update max bad tag exceeded logic to only re-establish the connection up to the maximum retry amount.
  • Add ability to periodically check for changed or deactivated OPC DA tags.
  • Add browsing for OPC DA servers from OPC DA source configuration window.

PARCtagSync

  • Add additional logging to log files.

Excel Add-In

  • Enable PARCxla SQL Calc sources to work with Unicode characters in tag and source names.

PARCgraphics Designer

  • Modify “Symbol Compare Logic” to allow for more natural two-input connections and reverse inputs.

 

Static, Live or Dynamic – Which Report is Best?

In the digital age there are many options for how to best share information. Whether it is flashing across your screen or filling your inbox, reports of all kinds are normal in the workplace. Whether shared with pictures, video or even streaming, how you share information can determine the type of influence it has. How data is reported also determines the influence or relationship others have with that data. Reports can be categorized into different types: live and static, as well as a lesser-known type, dynamic. Static and live reports each have their own inherent advantages and disadvantages. Knowing when to use each type of report is key to presenting the relevant information and improving performance and ease of use.

Read More